Androcat on Nostr: If you understand Virtue Epistemology (VE), you cannot accept any LLM output as ...
If you understand Virtue Epistemology (VE), you cannot accept any LLM output as "information".
VE is an attempt to correct the various omniscience problems inherent in classical epistemologies, which all, to some extent, require a person to already know the Truth in order to evaluate whether some statement is true.
VE prescribes that we should look at how the information was obtained, particularly in two ways:
1) Was the information obtained using a well-understood method that is known to produce good results?
2) Does the method appear to have been applied correctly in this particular case?
LLM output always fails on point 1. An LLM will not look for the truth. It will just look for what is a probable combination of words. This means that an LLM is just as likely to combine a number of true statements in a way that is probable but false as it is to combine them in a way that is probable and true.
An LLM only samples from a probability distribution over word combinations. It doesn't understand the input, and it doesn't understand its own output.
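To make that concrete, here is a minimal sketch of what "sampling a probable combination of words" amounts to. The vocabulary, the scoring function, and all the numbers are invented for illustration; a real LLM computes the scores from the context with a large neural network, but the loop has the same shape, and nothing in it checks whether the output is true.

    # Toy sketch of next-token sampling (Python). The vocabulary and the
    # scoring function are made up for illustration; a real model would
    # compute the scores from the context with a neural network.
    import math
    import random

    vocab = ["the", "moon", "is", "made", "of", "cheese", "rock", "."]

    def toy_scores(context):
        # Stand-in for the model: assigns an arbitrary score to each
        # candidate next word. Nothing here consults reality.
        return [random.uniform(-1.0, 1.0) for _ in vocab]

    def sample_next(context):
        scores = toy_scores(context)
        exps = [math.exp(s) for s in scores]
        probs = [e / sum(exps) for e in exps]   # softmax: scores -> probabilities
        return random.choices(vocab, weights=probs, k=1)[0]  # pick a "probable" word

    context = ["the", "moon", "is", "made", "of"]
    for _ in range(3):
        context.append(sample_next(context))
    print(" ".join(context))  # fluent-looking, but truth never entered the loop

Whether the sentence that comes out is true or false never figures anywhere in the procedure; only the scores do.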
Only a damned fool would use it for anything, ever.
#epistemology #LLM #generativeAI #ArtificialIntelligence #ArtificialStupidity
philosophy group (nprofile…0ak9)
Published at 2025-04-03 10:16:28
Event JSON
{
  "id": "74cdbc058a9bbeec681b61bd634abce297ea79fd9829c127cb645fe9fa2851d1",
  "pubkey": "59061b8a4cac9a15cbd071bc85545bf2abf15b0fcb2e31a6c2a6bbb8763a6cba",
  "created_at": 1743675388,
  "kind": 1,
  "tags": [
    [
      "p",
      "aeb911155997514bf6c8b772028491584e7aaab5cf77731f7dbc76f3afbdcc89",
      "wss://relay.mostr.pub"
    ],
    [
      "t",
      "epistemology"
    ],
    [
      "t",
      "llm"
    ],
    [
      "t",
      "generativeAI"
    ],
    [
      "t",
      "artificialintelligence"
    ],
    [
      "t",
      "artificialstupidity"
    ],
    [
      "content-warning"
    ],
    [
      "proxy",
      "https://toot.cat/users/androcat/statuses/114273510267756123",
      "activitypub"
    ]
  ],
  "content": "If you understand Virtue Epistomology (VE), you cannot accept any LLM output as \"information\".\n\nVE is an attempt to correct the various omniscience-problems inherent in classical epistemologies, which all to some extent require a person to know what the Truth is in order to evaluate if some statement is true.\n\nVE prescribes that we should look to how the information was obtained, particularly in two ways:\n1) Was the information obtained using a well-understood method that is known to produce good results?\n2) Does the method appear to have been applied correctly in this particular case?\n\nLLM output always fails on pt1. An LLM will not look for the truth. It will just look for what is a probable combination of words. This means that an LLM is just as likely to combine a number of true statements in a way that is probable but false, as it is to combine them in a way that is probable and true. \n\nLLMs only sample the probability of word combinations. It doesn't understand the input, and it doesn't understand its own output.\n\nOnly a damned fool would use it for anything, ever.\n\n#epistemology #LLM #generativeAI #ArtificialIntelligence #ArtificialStupidity nostr:nprofile1qy2hwumn8ghj7un9d3shjtnddaehgu3wwp6kyqpq46u3z92ejag5hakgkaeq9py3tp884244eamhx8mah3m08taaejysly0ak9",
  "sig": "68f030da3e2c6b41945aee34257b848610099db8956e9caf65dae0e40b9bef3d1c0c1703dcb1ff06992d76b1778bedcf5a38f6b81fc9c7bf6b2106f6c2d1b26a"
}