Lauren Weinstein on Nostr
I consider Google's newly reported orders to employees to "take risks" with users by pushing out AI answers that it knows may be inaccurate to be fundamentally unethical. At #Google scale, even a relatively small percentage of wrong answers can be vast numbers of users being shown incorrect information, potentially in harmful ways. And Google's continued "victim blaming" ("how dare our users ask questions that our AI will get wrong?") is utterly shameful.
Published at 2024-06-13 16:09:37

Event JSON
{
  "id": "1516e710d00646362c9090bac303a63d402b8d3a5c76a2d76c238cebb4c2d7f6",
  "pubkey": "a447aea32251d6c533dc9cbbf15d3e41529f8a6719c99cee944bfd9aa0928ef0",
  "created_at": 1718294977,
  "kind": 1,
  "tags": [
    [
      "t",
      "google"
    ],
    [
      "proxy",
      "https://mastodon.laurenweinstein.org/users/lauren/statuses/112610179639821014",
      "activitypub"
    ]
  ],
  "content": "I consider Google's newly reported orders to employees to \"take risks\" with users by pushing out AI answers that it knows may be inaccurate to be fundamentally unethical. At #Google scale, even a relatively small percentage of wrong answers can be vast numbers of users being shown incorrect information, potentially in harmful ways. And Google's continued \"victim blaming\" (\"how dare our users ask questions that our AI will get wrong?\") is utterly shameful.",
  "sig": "de98c84c29abfb7e0cb44bd5aac7d43cb1c55e3e1c7bc4b3c1dcba6259f833b95da44f8a1618c1a1c603d8aba47fac29b7a1b8ac60d7cec04b08755d384fa9ce"
}
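
For readers unfamiliar with the fields above: the "id" is derived from the event's signed contents per NIP-01, and "sig" is a BIP-340 Schnorr signature over that id by the key in "pubkey". The Python sketch below (not part of the original page) shows one way a client could recompute the id for this event; the function name is illustrative, and full signature verification would additionally require an external BIP-340 library, which is assumed and not shown here.

# Minimal sketch, assuming NIP-01 serialization rules:
# id = sha256 of the UTF-8 serialization of
# [0, pubkey, created_at, kind, tags, content] with no extra whitespace.

import hashlib
import json

def nostr_event_id(event: dict) -> str:
    """Recompute the NIP-01 event id from the event's signed fields."""
    serialized = json.dumps(
        [
            0,
            event["pubkey"],
            event["created_at"],
            event["kind"],
            event["tags"],
            event["content"],
        ],
        separators=(",", ":"),  # compact form, no whitespace between tokens
        ensure_ascii=False,     # keep non-ASCII characters as raw UTF-8
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Usage: load the event JSON above into a dict `event` and check
#   nostr_event_id(event) == event["id"]
# before trusting the "sig" verification step.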