sway on Nostr
one of the things that turned me off about LLMs is that they can still misspell words. Google's Gemini, for example, spelled _lableing_ instead of _labeling_.
i get that it's learning, but which English dictionary did it ingest for it to misspell that word?
i asked it how this is possible. one of the reasons it gave is that it sometimes _hallucinates_:
"Like all large language models, I can sometimes "hallucinate" information, including incorrect spellings. This means that I can produce text that seems correct but is actually wrong."
Published at 2025-04-07 01:57:12

Event JSON
{
"id": "bb35148bc1c50d9a345b17b5c1a583c762c929e8d548548ee663ccb0d0d721a1",
"pubkey": "0b817937c9d0eafb3095e6b05cb4211f756dd20f3b39f1b7185b5fe7d4d97df8",
"created_at": 1743991032,
"kind": 1,
"tags": [],
"content": "one of the things that turned of off about llm's is that it can still misspell words. google's gemini for example spelled _lableing_ instead of labeling.\n\ni get that it's learning but which english dictionary did it ingest for it to misspell that word. \n\ni asked it how this is possible? one of its reasons is that it sometimes _hallucinates_\n\n\"Like all large language models, I can sometimes \"hallucinate\" information, including incorrect spellings. This means that I can produce text that seems correct but is actually wrong.\"",
"sig": "d3c101b1a53598453cc6f68db20a3975d618c41600848a29d79db06557aa416838b067e6b94bafe47722e78777f70ccd218a6d719fe148766aa29de8cd552443"
}