The Washington Post on Nostr
OpenAI promised to make its AI safe. Employees say it ‘failed’ its first test.
==========
OpenAI employees say the company 'failed' its first test of its promise to make its AI safe. Last summer, OpenAI promised the White House it would safety-test new versions of its AI technology before release. However, some members of OpenAI's safety team felt pressured to speed through a new testing protocol to meet a May launch date. The episode raises questions about the federal government's reliance on tech companies' self-policing to protect the public from AI abuses. OpenAI is one of more than a dozen companies that made voluntary safety commitments to the White House last year. Its newest model, GPT-4o, was the company's first big chance to apply the safety framework, but testers compressed the evaluations into a single week. Former OpenAI employees have raised concerns about rushed safety work and the absence of solid safety processes. OpenAI spokesperson Lindsey Held said the company conducted extensive internal and external tests and initially held back some features to continue safety work. OpenAI had announced its preparedness initiative to guard against catastrophic risks and has launched two new safety teams in the past year. The incident sheds light on a changing culture at OpenAI, where company leaders have been accused of prioritizing commercial interests over public safety.
#Openai #AiSafety #Gpt4o #WhiteHouse #TechCompanies #Selfpolicing #SafetyProtocol
https://www.washingtonpost.com/technology/2024/07/12/openai-ai-safety-regulation-gpt4/

Published at: 2024-07-12 18:25:02

Event JSON
==========
{
  "id": "b1bb998a24fc7726be577e6e8732d76e474ac51b62099e1ae348f2c26f7ce2b3",
  "pubkey": "9bbadf1ef99272e96a2156abc4aa06bbfb4e8bd4885deef2181ff81ca64b1abd",
  "created_at": 1720808702,
  "kind": 1,
  "tags": [],
  "content": "OpenAI promised to make its AI safe. Employees say it ‘failed’ its first test.\n==========\n\nOpenAI employees say the company 'failed' its first test to make its AI safe. Last summer, OpenAI promised the White House to safety test new versions of its AI technology. However, some members of OpenAI's safety team felt pressured to speed through a new testing protocol to meet a May launch date. The incident raises questions about the federal government's reliance on self-policing by tech companies to protect the public from AI abuses. OpenAI is one of more than a dozen companies that made voluntary commitments to the White House last year. OpenAI's newest model, GPT-4o, was the company's first big chance to apply the safety framework, but testers compressed the evaluations into a single week. Former OpenAI employees have raised concerns about rushed safety work and a lack of solid safety processes. OpenAI spokesperson Lindsey Held said the company conducted extensive internal and external tests and held back some features initially to continue safety work. OpenAI announced the preparedness initiative to prevent catastrophic risks. The company has launched two new safety teams in the last year. The incident sheds light on the changing culture at OpenAI, where company leaders have been accused of prioritizing commercial interests over public safety.\n\n#Openai #AiSafety #Gpt4o #WhiteHouse #TechCompanies #Selfpolicing #SafetyProtocol\n\nhttps://www.washingtonpost.com/technology/2024/07/12/openai-ai-safety-regulation-gpt4/",
  "sig": "5d7ff306e15b4d3f5ff2bf691aeca2227960ef750858c8ffe8c131a0cf145a94d8e2ff83cfafa4a4171629b994351d74b3f2999fa22e709d8856c832f5444953"
}
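For readers curious how the raw event above fits together: under Nostr's NIP-01 specification, the "id" field is the SHA-256 hash of a canonical JSON serialization of the event's other fields, and the "sig" field is a BIP-340 Schnorr signature over that id by the "pubkey". A minimal sketch of the id check in Python, assuming the event JSON above has been saved to a local file (the filename event.json is hypothetical):

```python
import hashlib
import json

# Load the raw Nostr event shown above (hypothetical local copy).
with open("event.json") as f:
    event = json.load(f)

# NIP-01: the event id is the SHA-256 of the UTF-8 JSON serialization of
# [0, pubkey, created_at, kind, tags, content], with no extra whitespace
# and non-ASCII characters left unescaped.
serialized = json.dumps(
    [0, event["pubkey"], event["created_at"], event["kind"],
     event["tags"], event["content"]],
    separators=(",", ":"),
    ensure_ascii=False,
)
computed_id = hashlib.sha256(serialized.encode("utf-8")).hexdigest()

print(computed_id == event["id"])  # True if the id matches the canonical hash
```

Verifying the "sig" field additionally requires a BIP-340 Schnorr implementation over secp256k1, which the Python standard library does not provide.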