Per Axbom on Nostr
Here's what happens when machine learning needs vast amounts of data to build statistical models for responses: historical, debunked data makes it into the models and is favoured in their output. Far more outdated, harmful information has been published than updated, correct information, so the outdated material is statistically more viable.
"In some cases, they appeared to reinforce long-held false beliefs about biological differences between Black and white people that experts have spent years trying to eradicate from medical institutions."
In this regard the tools don't take us to the future, but to the past.
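To picture the mechanism, here is a deliberately simplified sketch in Python (a toy of my own, not code from the article or from any real model): a purely frequency-driven system prefers whichever answer appears most often in its training data, correct or not.

from collections import Counter

# Toy "training data": the debunked claim has been published far more often
# than the corrected one (made-up counts for illustration).
corpus = ["debunked claim"] * 90 + ["corrected claim"] * 10

# A purely statistical model favours the most frequent continuation,
# which here is the outdated, harmful one.
counts = Counter(corpus)
print(counts.most_common(1)[0])  # ('debunked claim', 90)

Real language models are vastly more complex than a frequency count, but the underlying pull toward whatever dominates the training data is the same.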
No, you should never use language models for health advice. But many people are arguing for exactly that. I also believe these kinds of harmful biases make it into many more machine learning applications than language models specifically.
In libraries across the world using the Dewey Decimal System (138 countries), LGBTI (lesbian, gay, bisexual, transgender and intersex) topics have throughout the 20th century variously been assigned to categories such as Abnormal Psychology, Perversion, Derangement, as a Social Problem and even as Medical Disorders.
Of course, many of these historical biases are part of the source material used to build today's "intelligent" machines - bringing with them the risk of erasing decades of progress.
It's important to understand how large language models work if you are going to use them. The way they have been released into the world means there are many people (including powerful decision-makers) with faulty expectations and a poor understanding of what they are using.
https://www.nature.com/articles/s41746-023-00939-z

#DigitalEthics #AIEthics
Published at 2023-10-22 08:19:08

Event JSON
{
"id": "1bce14bac37428c9a0c290b5f2c51670bce8e66c1507206212af74d136078164",
"pubkey": "a10318eb7d626f60c3cf54c89176dc43fdb58815bfd94c49e7aefb791e071ca0",
"created_at": 1697962748,
"kind": 1,
"tags": [
[
"t",
"aiethics"
],
[
"t",
"digitalethics"
],
[
"proxy",
"https://axbom.me/objects/e8108a24-f914-4f39-8637-276fc2e71f31",
"activitypub"
]
],
"content": "Here's what happens when machine learning needs vast amounts of data to build statistical models for responses. Historical, debunked data makes it into the models and is preferred by the model output. There is much more outdated, harmful information published than there is updated, correct information. Hence statistically more viable.\n\n\"In some cases, they appeared to reinforce long-held false beliefs about biological differences between Black and white people that experts have spent years trying to eradicate from medical institutions.\"\n\nIn this regard the tools don't take us to the future, but to the past.\n\nNo, you should never use language models for health advice. But there are many people arguing for exactly this to happen. I also believe these types of harmful biases make it into more machine learning applications than language models specifically.\n\nIn libraries across the world using the Dewey Decimal System (138 countries), LGBTI (lesbian, gay, bisexual, transgender and intersex) topics have throughout the 20th century variously been assigned to categories such as Abnormal Psychology, Perversion, Derangement, as a Social Problem and even as Medical Disorders.\n\nOf course many of these historical biases are part of the source material used to make today's \"intelligent\" machines - bringing with them the risk of eradicating decades of progress.\n\nIt's important to understand how large language models work if you are going to use them. The way they have been released into the world means there are many people (including powerful decision-makers) with faulty expectations and a poor understanding of what they are using.\n\nhttps://www.nature.com/articles/s41746-023-00939-z\n\n#DigitalEthics #AIEthics",
"sig": "784e2cebbed749845d17865eda7bb368f08a577a033841a93bfb8f4effc8d6004e823f5eddd06b79f84cd5bebbe4927943c4b7ad6d8527707bd8784a3435a687"
}