Linux in a Bit on Nostr: Saying ‘but humans make mistakes too’ is not a valid argument for allowing ...
Saying ‘but humans make mistakes too’ is not a valid argument for allowing LLM-generated code contributions.
Humans make mistakes in ways that we have experience accounting for.
Humans can be held accountable for mistakes they make.
Humans can gain a real understanding of a codebase and the decisions that went into it.
LLMs can’t do any of that, and the humans that use them usually can’t either.
#LLM #genAI
Published at 2025-05-10 18:40:51

Event JSON
{
  "id": "fb553397961bb840d41297f9843238c7693318c4daaf96a408385abe26486cdf",
  "pubkey": "cb113a872ea9edc5846e4e73512cc5f6d0da7b4509f09b0007e378c2affb518c",
  "created_at": 1746902451,
  "kind": 1,
  "tags": [
    [
      "t",
      "llm"
    ],
    [
      "t",
      "genai"
    ],
    [
      "proxy",
      "https://infosec.exchange/users/Linux_in_a_Bit/statuses/114484999046603666",
      "activitypub"
    ],
    [
      "client",
      "Mostr",
      "31990:6be38f8c63df7dbf84db7ec4a6e6fbbd8d19dca3b980efad18585c46f04b26f9:mostr",
      "wss://relay.mostr.pub"
    ]
  ],
  "content": "Saying ‘but humans make mistakes too’ is not a valid argument for allowing LLM-generated code contributions.\n\nHumans make mistakes in ways that we have experience accounting for.\nHumans can be held accountable for mistakes they make.\nHumans can gain a real understanding of a codebase and the decisions that went into it.\n\nLLMs can’t do any of that, and the humans that use them usually can’t either.\n\n#LLM #genAI",
  "sig": "92ebc0fd9b053400342093c1118b1f311aeca11c8f0240ac24ea54cf7a24dce619645b1c42df3ad5385a399cb09682d2ecce7a7d0fe36e65c1de5bfba5efbf4e"
}
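For context on the JSON above: the "id" field is not arbitrary. Per the Nostr NIP-01 specification, it is the SHA-256 hash of a canonical JSON serialization of the array [0, pubkey, created_at, kind, tags, content], encoded as UTF-8 with no extra whitespace and non-ASCII characters left unescaped. A minimal sketch in Python (the function name is mine, not from any Nostr library):

```python
import hashlib
import json

def nostr_event_id(pubkey, created_at, kind, tags, content):
    # Per NIP-01, the event id is the SHA-256 of the UTF-8 encoded
    # JSON serialization of [0, pubkey, created_at, kind, tags, content].
    # json.dumps with these options produces compact output with
    # non-ASCII characters unescaped, matching the NIP-01 rules.
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()
```

Feeding this function the pubkey, created_at, kind, tags, and content fields from the event above should reproduce its "id"; the "sig" is then a Schnorr signature over that id by the pubkey's key.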