2024-05-31 09:10:20

Pablo Xannybar on Nostr:

The Mistral LLM running on my local machine is absolutely crazy good. I don't even have a GPU; I'm just running the quantized model on my CPU, but damn, it is doing some impressive shit.

Local LLMs are easily exceeding GPT-3.5 at this point. If you have a modern CPU and at least 32GB of RAM, check out Ollama (command line, open source) or LM Studio (GUI); both run the same quantized models.
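If you want to hit it from a script instead of the CLI, Ollama also serves a local REST API on port 11434 by default. A minimal sketch, assuming you've already installed Ollama and run `ollama pull mistral`:

```python
import json
import urllib.request

# Ollama's REST API listens on localhost:11434 by default.
# Assumes the mistral model has already been pulled.
req = urllib.request.Request(
    "http://localhost:11434/api/generate",
    data=json.dumps({
        "model": "mistral",
        "prompt": "Explain quantization in one sentence.",
        "stream": False,  # return one JSON object instead of a token stream
    }).encode(),
    headers={"Content-Type": "application/json"},
)
with urllib.request.urlopen(req) as resp:
    print(json.loads(resp.read())["response"])
```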

You can run it on 16GB of RAM, but it'll lag; ideally you want at least 32GB, since quantized models still use a lot of memory when running on a CPU.
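A rough back-of-envelope for why (weights only; the KV cache, context, and the rest of your system eat the remainder, which is why 16GB is tight):

```python
def approx_weight_ram_gb(params_billions: float, bits_per_weight: int) -> float:
    """Rough weight-only estimate; real usage adds KV cache and runtime overhead."""
    return params_billions * 1e9 * bits_per_weight / 8 / 1e9

for bits in (4, 8, 16):
    print(f"7B model at {bits}-bit: ~{approx_weight_ram_gb(7, bits):.1f} GB for weights alone")
```

So a 4-bit 7B model is ~3.5GB of weights, an 8-bit one ~7GB, and unquantized fp16 ~14GB before anything else gets loaded.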
Author Public Key
npub17rlc0emedw5xljztfqrmykjaacsx6ujvdas64zznjadrnhhwlavq4jjtgg