felix stalder on Nostr:
Ok, Google got a lot of flak for generating historically inaccurate but vaguely diverse images of, say, medieval British kings. Obviously, none of the actual kings were black or indigenous.
I must admit, I feel a tiny sliver of sympathy for Google. They are caught up in a lose-lose situation.
On the one hand, they can try to represent the data accurately, which also means simply accepting the bias in the data itself, thus naturalizing and perpetuating the injustices from which it stems.
Or they can try to correct against that bias, thus misrepresenting the data while acknowledging that the data itself is not an accurate representation of the world, and/or that the world itself is biased (say, as in police records).
So, what do you do? There is not one single answer. Sometimes correcting is good, sometimes it's bad.
And this is why my sympathy is wafer-thin. The problem is scale, the attempt to find one (engineering) solution for everything. That cannot work the moment you enter the territory of meaning, which is what generative AI (but also earlier forms like machine vision) is doing.
The problem is this one-size-fits-all approach. But of course, this is what business demands. Silicon Valley is obsessed with scale. But scaling meaning-making is not just a form of colonial violence; it will also generate lots of internal contradictions.
https://thezvi.wordpress.com/2024/02/22/gemini-has-a-problem/

Published at 2024-02-23 10:55:13

Event JSON
{
  "id": "32e2805e66047d4db7917b30fbecceb35d7496afbf95abdf5d48a3901937e048",
  "pubkey": "93be551d537d06d43176a838fd1b58727bbc12cd9add76805cd678033d84d848",
  "created_at": 1708685713,
  "kind": 1,
  "tags": [
    [
      "proxy",
      "https://tldr.nettime.org/users/festal/statuses/111980426890106897",
      "activitypub"
    ]
  ],
  "content": "Ok, Google got a lot of flak for generating historically inaccurate but vaguely diverse images of, say, medieval British kings. Obviously, none of the actual kings were black or indigenous. \n\nI must admit, I feel a tiny sliver of sympathy for Google. They are caught up in a lose-lose situation. \n\nOn the one hand, they can try to represent the data accurately, which also means simply accepting the bias in the data itself, thus naturalizing and perpetuating the injustices from which it stems. \n\nOr can try to correct against that bias, thus misrepresenting the data, acknowledging that the data itself is not an accurate representation of the world, and/or that the world itself is biased (say, as in police records).\n\nSo, what do you do? There is not one single answer. Sometimes correcting is good, sometimes it's bad. \n\nAnd this is why my sympathy is waver-thin. The problem is scale, the attempt to find one (engineering) solution for everything. That cannot work the moment you enter the territory of meaning, which is what generative AI (but also earlier forms like machine vision) is doing.\n\nThe problem is this one-size-all approach. But of course, this is what business demands. Silicon Valley is obsessed with scale. But to scale meaning making is not just a form of colonial violence, but will also generate lots of internal contradictions.\n\nhttps://thezvi.wordpress.com/2024/02/22/gemini-has-a-problem/\n\nhttps://tldr.nettime.org/system_nettime/media_attachments/files/111/980/416/332/722/240/original/14937083be02d9b1.jpg",
  "sig": "2737e16fb15339014e8371f165cf4e8de6312e472e908d2f4bf2fa31828ee5a440a9d47bacb4747973d1ee37dbdb77b66d6a6f6d81d4a4b60fbd2dab671e2015"
}