Tyler Vu on Nostr: If we aren’t racist, how did our #AI become so racist? 🤔 “Technology was more ...
Published at 2024-03-09 15:28:36

Event JSON
{
"id": "d7bffe313cdf4cb34a27c81d8dbe7a7458383b2efe680e1b4949de80f2274625",
"pubkey": "585d48f730ee4907a5e49c567588b41aff63ee3ad1c9b7b295ad583b72cda9ee",
"created_at": 1709998116,
"kind": 1,
"tags": [
[
"t",
"ai"
],
[
"t",
"LLMs"
],
[
"t",
"racism"
],
[
"t",
"BlackFedi"
],
[
"proxy",
"https://sfba.social/users/tylervu/statuses/112066436587105026",
"activitypub"
]
],
"content": "If we aren’t racist, how did our #AI become so racist? 🤔\n\n“Technology was more likely to ‘sentence defendants to death’ when they speak English often used by African Americans, without ever disclosing their race.\n\nThe regular way of teaching #LLMs new patterns of retrieving information, by giving human feedback, doesn’t help counter covert racial bias … it could teach language models to \"superficially conceal the #racism they maintain on a deeper level.\"\n\nhttps://www.euronews.com/next/2024/03/09/ai-models-found-to-show-language-bias-by-recommending-black-defendents-be-sentenced-to-dea\n\n#BlackFedi",
"sig": "f2ae6a686deb5bfda5bbd129cf0835eb7f85e119d48ab894cf6bfa8137aada4b5b14cc38691bc781bd657c640f801fe3fa62ba76a38bc777604c9b55ceb30d02"
}
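For context on the `id` field in the Event JSON above: under NIP-01, a Nostr event id is the SHA-256 hash of a canonical JSON serialization of `[0, pubkey, created_at, kind, tags, content]`. The sketch below is an approximation only — NIP-01's exact string-escaping rules differ slightly from Python's `json.dumps`, so it is not guaranteed to reproduce this particular event's id byte-for-byte.

```python
import hashlib
import json

def nostr_event_id(event: dict) -> str:
    """Approximate NIP-01 event id: SHA-256 over the canonical
    serialization [0, pubkey, created_at, kind, tags, content]."""
    serialized = json.dumps(
        [0, event["pubkey"], event["created_at"], event["kind"],
         event["tags"], event["content"]],
        separators=(",", ":"),   # no whitespace, per NIP-01
        ensure_ascii=False,      # keep UTF-8 characters literal
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()
```

The `sig` field is then a Schnorr signature over this 32-byte id, made with the key behind `pubkey`; verifying it requires a secp256k1 library and is omitted here.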