Lauren Weinstein on Nostr: Here's the bottom line on LLM AI systems. If you can't trust that their answers are ...
Here's the bottom line on LLM AI systems. If you can't trust that their answers are accurate, they are essentially worthless. It doesn't matter if parts of their answers are correct and parts are incorrect -- the incorrect parts "contaminate" the entire response and render it useless. Actually, even worse than useless, because this becomes a perfect vehicle for spreading misinformation that is combined and given gravitas by accurate information -- a horrific and dangerous toxic brew.
Published at 2023-12-06 17:37:13

Event JSON
{
  "id": "b214aa91d2aafc69b08948d52c39a6206ba8af8357e7796a3ea20759650bf2bd",
  "pubkey": "a447aea32251d6c533dc9cbbf15d3e41529f8a6719c99cee944bfd9aa0928ef0",
  "created_at": 1701884233,
  "kind": 1,
  "tags": [
    [
      "proxy",
      "https://mastodon.laurenweinstein.org/users/lauren/statuses/111534685124910646",
      "activitypub"
    ]
  ],
  "content": "Here's the bottom line on LLM AI systems. If you can't trust that their answers are accurate, they are essentially worthless. It doesn't matter if parts of their answers are correct and parts are incorrect -- the incorrect parts \"contaminate\" the entire response and render it useless. Actually, even worse than useless, because this becomes a perfect vehicle for spreading misinformation that is combined and given gravitas by accurate information -- a horrific and dangerous toxic brew.",
  "sig": "76ffb9a347098e56a185cdc21f3e867e078d10535e6401f6d033c5ca2c03f6542116b8a3d701c883eea7fd61f2093dc221f4bf6d3c351797befe6d5998b5d194"
}
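The record above is a Nostr event as defined by NIP-01: the `id` field is the SHA-256 hash of a canonical, whitespace-free JSON serialization of the array `[0, pubkey, created_at, kind, tags, content]`. A minimal sketch of that derivation in Python, assuming the standard NIP-01 serialization rules (all field values are copied verbatim from the JSON above):

```python
import hashlib
import json

# Event fields copied from the event JSON above.
pubkey = "a447aea32251d6c533dc9cbbf15d3e41529f8a6719c99cee944bfd9aa0928ef0"
created_at = 1701884233
kind = 1
tags = [
    [
        "proxy",
        "https://mastodon.laurenweinstein.org/users/lauren/statuses/111534685124910646",
        "activitypub",
    ]
]
content = (
    "Here's the bottom line on LLM AI systems. If you can't trust that "
    "their answers are accurate, they are essentially worthless. It "
    "doesn't matter if parts of their answers are correct and parts are "
    "incorrect -- the incorrect parts \"contaminate\" the entire response "
    "and render it useless. Actually, even worse than useless, because "
    "this becomes a perfect vehicle for spreading misinformation that is "
    "combined and given gravitas by accurate information -- a horrific "
    "and dangerous toxic brew."
)

# NIP-01 canonical form: a JSON array with no extra whitespace and
# non-ASCII characters left unescaped (UTF-8 encoded).
serialized = json.dumps(
    [0, pubkey, created_at, kind, tags, content],
    separators=(",", ":"),
    ensure_ascii=False,
)

# The event id is the lowercase hex SHA-256 of the serialized array;
# it should reproduce the "id" field shown in the event JSON.
event_id = hashlib.sha256(serialized.encode("utf-8")).hexdigest()
print(event_id)
```

Verifying `sig` would additionally require a BIP-340 Schnorr signature check of `id` against `pubkey` using a secp256k1 library, which is omitted here.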