Tom Morris on Nostr:
When people point out the "hallucination" problem with LLMs, the response comes back: "it's not great for facts, but it's good at producing convincing prose".
I'm doubtful about that too.
I've tested a certain chatbot with a bunch of requests for persuasive language (academic essays, political speeches, sermons, work-related emails/letters) and it just churns out inanity ("innovative", "leverage", etc.) and cliché ("I have a passion for...").
Words disconnected from reality aren't very convincing.
Published at 2024-01-27 18:02:57