2025-05-23 18:00:09

ChipTuner on Nostr:

Has anyone run a large-parameter model locally? Like a 60B+ parameter model? How does it compare for your work? I know Qwen 2.5 Coder has a 32B model, but that's still out of my hardware range right now.
#asknostr
Yeah, lmfao. I don't have the hardware to run anything bigger than a ~14B model, so they can't code well, but they can really help with thought streams, summarizing docs, and teaching you new things. Everything except writing/refactoring code, IMO.
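For context, the hardware question above mostly comes down to memory. A rough sketch of the usual back-of-envelope estimate (the 20% overhead factor for KV cache and activations is an assumption, a common rule of thumb rather than a measured figure):

```python
# Rough VRAM estimate for loading a local LLM.
# Weights: parameters * bytes_per_weight, plus an assumed ~20% overhead
# for KV cache and activations (rule of thumb, not a measured figure).

def estimate_vram_gb(params_billion: float, bits_per_weight: int,
                     overhead: float = 0.2) -> float:
    """Approximate memory (GB) needed to run a model at a given quantization."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * (1 + overhead) / 1e9

# Under these assumptions, 4-bit quantization gives roughly:
for size in (14, 32, 60):
    print(f"{size}B @ 4-bit: ~{estimate_vram_gb(size, 4):.1f} GB")
```

By this estimate a 14B model at 4-bit fits in roughly 8-9 GB, while a 60B model needs on the order of 36 GB, which is why the jump from a 14B to a 60B+ model usually means multi-GPU or high-end workstation hardware.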
Author Public Key
npub1qdjn8j4gwgmkj3k5un775nq6q3q7mguv5tvajstmkdsqdja2havq03fqm7