Rm -rf on Nostr
I came to the same conclusion a while ago. A good way to think about an LLM is as a lossy archive of text data. You enter a text input as a path, and it extracts data based on that path. The smaller the model, the larger the loss of data. Models that are too large will have paths that lead nowhere.
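A toy sketch of that analogy (not a real LLM, just an illustration; the LossyArchive class and the fidelity parameter are made up for this example): a compressed store where the query string acts as a lookup path, a smaller capacity loses more of each entry, and an unknown path leads nowhere.

import zlib


class LossyArchive:
    def __init__(self, corpus, fidelity):
        # Keep only a truncated, compressed copy of each entry.
        # Lower fidelity stands in for a smaller model: more data is lost.
        self.store = {
            path: zlib.compress(text.encode()[:fidelity])
            for path, text in corpus.items()
        }

    def query(self, path):
        # A known path extracts a degraded copy of the original text;
        # an unknown path "leads nowhere".
        blob = self.store.get(path)
        return zlib.decompress(blob).decode() if blob else None


archive = LossyArchive(
    {"capital of france": "Paris, the capital and largest city of France."},
    fidelity=16,
)
print(archive.query("capital of france"))   # prints "Paris, the capit" (truncated, lossy)
print(archive.query("capital of wakanda"))  # prints None: a path that leads nowhere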
Published at 2025-03-29 12:20:29

Event JSON
{
  "id": "25348763a634b4f4a00882a09b5a40652bcbd0e8f36ea30c7e9c982c239ce147",
  "pubkey": "11deb685d519a1e1f2c30cad6bf30b7f6cde203440d4cef01065211cec4fa7dc",
  "created_at": 1743250829,
  "kind": 1,
  "tags": [
    [
      "p",
      "fcf70a45cfa817eaa813b9ba8a375d713d3169f4a27f3dcac3d49112df67d37e",
      "ws://192.168.18.7:7777"
    ],
    [
      "e",
      "6f4660ced8022cb2d0ed5e647ed3ce7120ef029e7a7d4b6a21c85dec0cb2939b",
      "ws://192.168.18.7:7777",
      "root",
      "fcf70a45cfa817eaa813b9ba8a375d713d3169f4a27f3dcac3d49112df67d37e"
    ]
  ],
  "content": "I came to the same conclusion a while ago. A good way to think about llm is as a lossy archive of text data. You enter a text input as a path, and it extracts data based on that path. The smaller the model, the lerger the loss in data. Too large models will have paths that lead nowhere",
  "sig": "79799f1e729ac87f3f9b3319551221e84dae9644a1f0e566410cd74608b50346be37f28b9fe7a80f1b00c7f5b364f66dbce1dd8c23e832f3ccc573ac9556ff9c"
}
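For reference, the "id" field above can be reproduced per NIP-01: it is the SHA-256 of the event serialized as [0, pubkey, created_at, kind, tags, content] with no extra whitespace. A minimal sketch, assuming the field values shown above are exact and that standard JSON escaping matches NIP-01's rules for this content:

import hashlib
import json

pubkey = "11deb685d519a1e1f2c30cad6bf30b7f6cde203440d4cef01065211cec4fa7dc"
created_at = 1743250829
kind = 1
tags = [
    ["p", "fcf70a45cfa817eaa813b9ba8a375d713d3169f4a27f3dcac3d49112df67d37e",
     "ws://192.168.18.7:7777"],
    ["e", "6f4660ced8022cb2d0ed5e647ed3ce7120ef029e7a7d4b6a21c85dec0cb2939b",
     "ws://192.168.18.7:7777", "root",
     "fcf70a45cfa817eaa813b9ba8a375d713d3169f4a27f3dcac3d49112df67d37e"],
]
content = ("I came to the same conclusion a while ago. A good way to think about llm "
           "is as a lossy archive of text data. You enter a text input as a path, and "
           "it extracts data based on that path. The smaller the model, the lerger the "
           "loss in data. Too large models will have paths that lead nowhere")

# Serialize [0, pubkey, created_at, kind, tags, content] with no extra whitespace,
# then hash; the hex digest should match the "id" field of the event above.
serialized = json.dumps(
    [0, pubkey, created_at, kind, tags, content],
    separators=(",", ":"),
    ensure_ascii=False,
)
event_id = hashlib.sha256(serialized.encode("utf-8")).hexdigest()
print(event_id)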