2024-08-29 22:27:55

Dana Fried on Nostr

Just a reminder that the "existential risk" from AI is not that somehow we'll make Skynet or the computers from The Matrix.

Nobody is going to give a large language model the nuclear codes.

The existential risk is to marginalized people who will be silently refused jobs or health care or parole, or who will be targeted by law enforcement or military action because of an ML model's inherent bias, and because these models are black boxes, it will be nearly impossible for victims to appeal.
Author Public Key: npub1s5e3qtyggn5edfhqw6nxrgk52ul9npz24n26232dyc2u5sz3w60qqeffjs