Marco Rogers on Nostr
That's right. It's sort of the core issue in educating people about LLMs. They don't "sometimes hallucinate". They always hallucinate. By design.
But I also think we should graduate from saying they "just" make things up. They have a sophisticated inner model that means they're more likely to hallucinate towards things that seem correct when read by humans. And studying how they are able to do that is actually interesting.
https://mastodon.social/@GeePawHill/112203183163246574

Published at
2024-04-02 19:29:01

Event JSON
{
  "id": "7c9760a2cf03d3d3022b5e5bd9559fec66ccec1ebd12086ee619d853c15cfbc5",
  "pubkey": "0dbd4906423c91cf2e53ae571a4794d00962f64d51345915416751fba70efa28",
  "created_at": 1712086141,
  "kind": 1,
  "tags": [
    [
      "proxy",
      "https://social.polotek.net/users/polotek/statuses/112203277400698082",
      "activitypub"
    ],
    [
      "L",
      "pink.momostr"
    ],
    [
      "l",
      "pink.momostr.activitypub:https://social.polotek.net/users/polotek/statuses/112203277400698082",
      "pink.momostr"
    ]
  ],
  "content": "That's right. It's sort of the core issue in educating people about LLMs. They don't \"sometimes hallucinate\". They always hallucinate. By design.\n\nBut I also think we should graduate from saying they \"just\" make things up. They have a sophisticated inner model that means they're more likely to hallucinate towards things that seem correct when read by humans. And studying how they are able to do that is actually interesting.\nhttps://mastodon.social/@GeePawHill/112203183163246574",
  "sig": "2312ea7c8e77280ccb386fc2529dffaa8a157253e3d12e62396c04ed0a55cd295d187e949c1b6737afb76c3854bc578db7feea070175cc213b9cbd919e539cc6"
}
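
For anyone who wants to sanity-check the event above, the following is a minimal Python sketch that recomputes the "id" field per NIP-01, where the id is the lowercase hex sha256 of the JSON-serialized array [0, pubkey, created_at, kind, tags, content] with no extra whitespace. It is not tied to any particular Nostr client; the filename "event.json" is a hypothetical placeholder for the JSON shown above, and verification of the "sig" field (a BIP-340 Schnorr signature over the id) is omitted since it requires a secp256k1 library.

import hashlib
import json

def compute_event_id(event: dict) -> str:
    # NIP-01: the event id is the lowercase hex sha256 of the UTF-8
    # JSON serialization of [0, pubkey, created_at, kind, tags, content],
    # serialized with no extra whitespace.
    serialized = json.dumps(
        [0, event["pubkey"], event["created_at"], event["kind"],
         event["tags"], event["content"]],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Usage: save the Event JSON above to "event.json" (hypothetical filename).
with open("event.json") as f:
    event = json.load(f)

print(compute_event_id(event) == event["id"])  # expect True for a well-formed event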