Event JSON
{
  "id": "8255024d7071f246adbcf01b50e1cede3d6015c6be91ce07e666e7eae9c59ff8",
  "pubkey": "16f1a0100d4cfffbcc4230e8e0e4290cc5849c1adc64d6653fda07c031b1074b",
  "created_at": 1712710577,
  "kind": 1,
  "tags": [
    [
      "e",
      "fe072305e59643151e625e788dfb8c4beb52bd101f48b3f3fbc4d4effdfc2a76",
      "",
      "root"
    ],
    [
      "e",
      "d8e9671ade4f7f68885b25bf86797c27e6f0c15bc44fb49a11564de65877e807",
      "",
      "reply"
    ],
    [
      "p",
      "16f1a0100d4cfffbcc4230e8e0e4290cc5849c1adc64d6653fda07c031b1074b"
    ],
    [
      "p",
      "ae1008d23930b776c18092f6eab41e4b09fcf3f03f3641b1b4e6ee3aa166d760"
    ],
    [
      "p",
      "e2ccf7cf20403f3f2a4a55b328f0de3be38558a7d5f33632fdaaefc726c1c8eb",
      "",
      "mention"
    ]
  ],
"content": "Well I can help you if you have questions. But running large local LLMs still won't be able to achieve what Large Language Models at data enters can deliver. nostr:npub1utx00neqgqln72j22kej3ux7803c2k986henvvha4thuwfkper4s7r50e8 has more experience building a rig specifically for this with 3 2070s if I remember right. He may have something to say on how well that can realistically perform. ",
"sig": "c25f1f6556a6252c4f0a4723a5fd85869d68c6f733e429c2be5ac403cde67deb1a5d92b0349bfaa9887cd91b0c8d10df236b0d74c01dcae5ed97a09cab790a09"
}
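
The tags follow NIP-10 threading conventions: the "e" tags point at the thread root and the note being replied to, and the "p" tags list the pubkeys to notify, with "mention" marking a profile referenced inside the content. The "id" field is derived as specified in NIP-01: the sha256 of the UTF-8 JSON serialization [0, pubkey, created_at, kind, tags, content] with no extra whitespace, which the "sig" (a BIP-340 Schnorr signature) then signs. A minimal Python sketch of the id computation, using only the standard library (the function name nostr_event_id is ours for illustration):

import hashlib
import json

def nostr_event_id(event: dict) -> str:
    # NIP-01 canonical form: [0, pubkey, created_at, kind, tags, content],
    # serialized as compact JSON (no whitespace) with raw UTF-8 characters.
    serialized = json.dumps(
        [0, event["pubkey"], event["created_at"], event["kind"],
         event["tags"], event["content"]],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Usage: parse the event JSON above and compare the recomputed id to its
# "id" field. Because the hash commits to the exact bytes of every signed
# field, any change to content or tags yields a different id (and breaks
# the signature):
#
#   event = json.loads(raw_event_json)   # raw_event_json: the text above
#   assert nostr_event_id(event) == event["id"]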