Pablo Xannybar on Nostr:
The Mistral LLM running on my local machine is absolutely crazy good. I don't even have a GPU; I'm just running the quantized model on my CPU, but damn, it is doing some impressive shit.

Local LLMs are easily exceeding GPT-3.5 at this point. If you have a modern CPU and at least 32GB of RAM, check out Ollama (command line) or LLM Studio (GUI); both are open source and use the same models.

You can run it on 16GB of RAM, but it'll lag. Ideally you do want at least 32GB, since the quantized models still use a lot of memory when running on a CPU.
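If you want to try this yourself, here's a minimal sketch of what the Ollama setup can look like, using its CLI plus the separate ollama Python client package. The model tag "mistral" and the prompt are just illustrative examples, and the Python package is assumed to be installed (pip install ollama):

    # Pull and run a quantized Mistral model from the terminal (assumes Ollama is installed):
    #   ollama pull mistral
    #   ollama run mistral

    # Or drive the local Ollama server from Python via the ollama client library:
    import ollama

    response = ollama.chat(
        model="mistral",  # any model tag you've already pulled locally works here
        messages=[{"role": "user", "content": "Explain what model quantization does."}],
    )
    print(response["message"]["content"])  # the generated reply text

Everything runs against the local Ollama instance, so no GPU or API key is involved; the Python call is just a convenience wrapper around the same models the CLI uses.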
Published at 2024-05-31 09:10:20

Event JSON:
{
  "id": "d234c95eed681d74f4fe8991f228b3fd4f1ed88fc5c7e3de6b254ed5ab6ad2f5",
  "pubkey": "f0ff87e7796ba86fc84b4807b25a5dee206d724c6f61aa8853975a39deeeff58",
  "created_at": 1717146620,
  "kind": 1,
  "tags": [],
  "content": "The Mistral LLM running on my local machine is absolutely crazy good. I don't even have a GPU just using the quantized model on my CPU but damn it is doing some impressive shit.\n\nLocal LLMs are easily exceeding GPT 3.5 at this point. If you have a modern CPU and at least 32GB RAM, check out Ollama (command line) or LLM Studio (GUI) both are open source and use the same models.\n\nYou can run it on 16GB RAM but it'll lag, ideally you do want at least 32GB, the quantized models use a lot of memory to run on a CPU. ",
  "sig": "e7b17c6ba275c54e297b293f0353abb91736c7515d91196fd60d1c4f51e39708d2b8b98ef4cea325ac1486969ae206f50ecf88beda00f50478ef96392598d6bb"
}