Event JSON
{
  "id": "85f6751d57a48ca6b5daec7638c1082146856543718cb5d7dea4f8d1e8840d4f",
  "pubkey": "c6f7077f1699d50cf92a9652bfebffac05fc6842b9ee391089d959b8ad5d48fd",
  "created_at": 1726416215,
  "kind": 1,
  "tags": [
    [
      "e",
      "ac534a46b9129bc26074eb7f22b4dc4044e77b7e6df276e0e34785c2eaebf9e7",
      "",
      "root"
    ],
    [
      "e",
      "12bb81cda0c145a54bdfd0bdd864d1a005e571c069fbca1056a8a769b57733d5",
      "",
      "reply"
    ],
    [
      "p",
      "c6f7077f1699d50cf92a9652bfebffac05fc6842b9ee391089d959b8ad5d48fd"
    ],
    [
      "p",
      "7d4417d5df435a97b8f55c8f2e7e2ef533e2371ce5e1cffd595c179a3eaf36d4"
    ],
    [
      "r",
      "https://dev.to/grahamthedev/windowai-running-ai-locally-from-devtools-202j"
    ],
    [
      "r",
      "https://nostr-local-ai.vercel.app/"
    ]
  ],
  "content": "Honestly speaking, we don't really have any good solutions right now. It is super centralized, with virtually no privacy.\n\nThere is hope that Chromium browsers might start supporting local LLM models.\n\nChrome AI docs: https://dev.to/grahamthedev/windowai-running-ai-locally-from-devtools-202j\n\nMeanwhile, I have built a web app using WebLLM and WebGPU, where you can run any LLM model locally in your browser without any performance compromises. (Use a PC for larger models.)\n\nhttps://nostr-local-ai.vercel.app/\n\nRight now, I am just waiting for the right technology.",
  "sig": "aa8c081e559270f329751b825816b2b24e901eb519040c58dc78ed5677340963e1bc61962f13950e9472772599efcee8f352fbc4d079becebdaaa2a12a02dec8"
}