While these smaller language models are a fraction of the size of trillion-parameter models like GPT, they still take up a lot of storage space. Playing around with Mistral 7B Instruct v0.2, the safetensors files containing the weights take up roughly 15 GB. I'm thinking of using blobfuse to mount a storage container onto my local file system, so that at any given time I'm only downloading and caching the models I'm actually working with.
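
If I go that route, here's a minimal sketch of what the setup might look like with blobfuse2 (the current version of blobfuse) and its YAML config. The storage account name, container name, and paths below are all placeholders; the file cache settings are what would give the "only cache what I touch" behavior:

```bash
# Sketch: mount an Azure Blob Storage container with blobfuse2, using a
# local file cache so weights are only pulled down when actually read.
# "mystorageacct", "models", and the paths are placeholder names.

mkdir -p ~/mnt/models /tmp/blobfuse2-cache

cat > blobfuse2-config.yaml <<'EOF'
components:
  - libfuse
  - file_cache
  - attr_cache
  - azstorage

file_cache:
  path: /tmp/blobfuse2-cache   # blobs land here on first read
  timeout-sec: 120             # evict cached files after 2 minutes idle
  max-size-mb: 32768           # cap the local cache at ~32 GB

azstorage:
  type: block
  account-name: mystorageacct  # placeholder storage account
  container: models            # placeholder container holding the weights
  mode: azcli                  # reuse an existing `az login` session
EOF

blobfuse2 mount ~/mnt/models --config-file=./blobfuse2-config.yaml

# When done: blobfuse2 unmount ~/mnt/models
```

With that in place, something like `~/mnt/models/mistral-7b-instruct-v0.2/` would look like a local directory, but only the safetensors files I actually open get streamed down and cached.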

