TheGuySwann on Nostr:
GPT4All is a great one. And despite a bunch of models requiring something like an A100 or V100 card to run, there are also a number of decent models available through Prem AI, plus a variety of non-LLM stuff.
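If you'd rather drive GPT4All from code than the desktop app, here's a minimal sketch using its Python bindings (pip install gpt4all). The model filename is just an example pick from the GPT4All catalog, not a specific recommendation:

# Minimal sketch of the gpt4all Python bindings.
# The model name is an example; any model from the GPT4All catalog works,
# and it gets downloaded automatically on first run.
from gpt4all import GPT4All

model = GPT4All("orca-mini-3b-gguf2-q4_0.gguf")
with model.chat_session():
    reply = model.generate("Explain LoRA fine-tuning in one sentence.",
                           max_tokens=128)
    print(reply)

The small quantized models in that catalog are exactly the ones that dodge the A100/V100 problem, since they run on CPU or a modest GPU.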
For image generation, my favorites have been Automatic1111 and ComfyUI, both using Stable Diffusion. Great places to find models, LoRAs, embeddings, etc. are Civitai and huggingface.co. I know there is another aggregator I used while I was focused on Stable Diffusion, but I can’t remember it right now.
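As a rough illustration of that workflow outside the UIs, here's a sketch assuming Hugging Face's diffusers library; the checkpoint ID, LoRA filename, and prompt are placeholders (e.g. a LoRA downloaded from Civitai):

# Sketch: a Stable Diffusion checkpoint plus a LoRA via diffusers.
# The checkpoint ID and LoRA file below are placeholders, not recommendations.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", torch_dtype=torch.float16
).to("cuda")
# weight_name points at a .safetensors LoRA file inside the given folder
pipe.load_lora_weights("loras", weight_name="my_style_lora.safetensors")
image = pipe("a watercolor lighthouse at dusk",
             num_inference_steps=30).images[0]
image.save("out.png")

Automatic1111 and ComfyUI do essentially this under the hood, just with a UI for swapping checkpoints, LoRAs, and embeddings.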
Unfortunately, locally run LLMs are not the best; the GPU power needed is still just outside “prosumer” capacity, so you might need a hosted option (Google Colab or something) and/or to lean on some of the bigger tools like ChatGPT before the wave of GPU-sharing networks finally lands.
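For a sense of why, here's some back-of-envelope VRAM math (the ~20% overhead factor for activations and KV cache is my own rough assumption):

# Rough VRAM estimate: weights = params * bytes-per-param, plus ~20%
# overhead (assumed) for activations and KV cache.
# Consumer cards currently top out around 24 GB.
def vram_gb(params_billion, bytes_per_param):
    return params_billion * bytes_per_param * 1.2

for name, params in [("7B", 7), ("13B", 13), ("70B", 70)]:
    print(f"{name}: ~{vram_gb(params, 2.0):.0f} GB at fp16, "
          f"~{vram_gb(params, 0.5):.0f} GB at 4-bit")

So a 13B model at fp16 already blows past a 24 GB card, which is the “just outside prosumer capacity” problem, though 4-bit quantization pulls the smaller models back into range.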
Still looking for the best locally run LLM stuff, and I’ll be sure to discuss it if I find some secret sauce.
Published at 2023-09-09 17:23:15