ChipTuner on Nostr: Has anyone run a large parameter model locally? Like a 60+b parameter model? How does ...
Has anyone run a large parameter model locally? Like a 60+b parameter model? How does it compare for your work? I know qwen 2.5 coder has a 32b model but that's still out of my hardware range for right now.
#asknostr
yeah lmfao. I don't have the hardware to run anything more than a ~14b model, so they can't code well, but they really help with thought streams, summarizing docs, and teaching you new things: everything but writing/refactoring new code IMO.
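For a rough sense of why 32b+ models fall out of consumer hardware range, here is a back-of-the-envelope sketch of weight memory at common quantization levels. This is an assumption-laden estimate, not from the thread: it counts weights only and ignores KV cache, activations, and runtime overhead, which add several more GB in practice.

```python
# Rough VRAM/RAM needed just to hold dense LLM weights (sketch only;
# real usage is higher due to KV cache, activations, and overhead).
def weight_gb(params_b: float, bits_per_weight: float) -> float:
    """Gigabytes of weight storage for params_b billion parameters."""
    bytes_total = params_b * 1e9 * bits_per_weight / 8
    return bytes_total / 1e9

for params in (14, 32, 70):          # ~14b fits; 32b/70b are the pain points
    for bits in (16, 8, 4):          # fp16, int8, 4-bit quantization
        print(f"{params}b @ {bits}-bit: ~{weight_gb(params, bits):.0f} GB")
```

Even at 4-bit, a 70b-class model needs ~35 GB for weights alone, which is why "60+b locally" usually means multi-GPU or a high-RAM Mac running CPU/GPU-shared inference.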
Published at 2025-05-23 18:00:09

Event JSON
{
  "id": "d090889dfc19e313ca6d93f90197e1095105ea76fa4c8235e7cb8597f9953694",
  "pubkey": "036533caa872376946d4e4fdea4c1a0441eda38ca2d9d9417bb36006cbaabf58",
  "created_at": 1748023209,
  "kind": 1,
  "tags": [
    [
      "q",
      "6c5497c816bbc1cc1d25d8f544959e6420ef70c53a878ab7af9c1facf73be2ae",
      "wss://relay.nostr.band/",
      "036533caa872376946d4e4fdea4c1a0441eda38ca2d9d9417bb36006cbaabf58"
    ],
    [
      "t",
      "asknostr"
    ]
  ],
  "content": "Has anyone run a large parameter model locally? Like a 60+b parameter model? How does it compare for your work? I know qwen 2.5 coder has a 32b model but that's still out of my hardware range for right now. \n#asknostr\n\nnostr:nevent1qvzqqqqqqypzqqm9x092su3hd9rdfe8aafxp5pzpak3cegkem9qhhvmqqm96406cqythwumn8ghj7un9d3shjtnwdaehgu3wvfskuep0qythwumn8ghj7un9d3shjtnswf5k6ctv9ehx2ap0qqsxc4yheqtthswvr5ja3a2yjk0xgg80wrzn4pu2k7hec8av7ua79tssywfph",
  "sig": "f5a643eb63ae420d216717dc44531615beeb0170b6fcb7f841027c3bf185898e37594a2cb6823d01c453b102e3c79b00b5486a5e4182952622f7a41b724b61b1"
}
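The `tags` array above follows the generic NIP-01 layout: each tag is a list whose first element is the tag name (`"q"` for the quoted event, `"t"` for a hashtag) and whose remaining elements are values. As a minimal sketch (the tag values are copied from the event above; the parsing logic is generic, not specific to this client):

```python
import json

# A trimmed copy of the event above, keeping only the fields this sketch uses.
event_json = '''
{
  "kind": 1,
  "tags": [
    ["q",
     "6c5497c816bbc1cc1d25d8f544959e6420ef70c53a878ab7af9c1facf73be2ae",
     "wss://relay.nostr.band/",
     "036533caa872376946d4e4fdea4c1a0441eda38ca2d9d9417bb36006cbaabf58"],
    ["t", "asknostr"]
  ]
}
'''
event = json.loads(event_json)

# Hashtags are "t" tags; quoted events are "q" tags (id, relay hint, pubkey).
hashtags = [t[1] for t in event["tags"] if t[0] == "t"]
quoted = [t[1] for t in event["tags"] if t[0] == "q"]

print(hashtags)  # ['asknostr']
```

This is how a client knows to render the `#asknostr` link and embed the quoted note referenced by the `nostr:nevent1...` string in `content`.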