Wired Security RSS on Nostr: A Creative Trick Makes ChatGPT Spit Out Bomb-Making Instructions
Published at 2024-09-14 13:04:04

Event JSON
{
  "id": "9cbede9aaa6d65bd1f84b1237e9539b62998ebb338f386e5cd39e47d31907ad6",
  "pubkey": "0a96c51c0ca412c116c04f9bbb3c1338b1b19385e123682def6e7e803fdbf160",
  "created_at": 1726319044,
  "kind": 1,
  "tags": [
    ["t", "cybersecurity"],
    ["t", "security"],
    ["t", "infosec"]
  ],
  "content": "A Creative Trick Makes ChatGPT Spit Out Bomb-Making Instructions\n\nhttps://www.wired.com/story/chatgpt-jailbreak-homemade-bomb-instructions/",
  "sig": "591d03e22dcccff60531e9cf2a7332bf4e6da1a0dbbb7a5cf5099c9393eb8362b180d4b97fe3efd3772ed972a1f16edd4097be98cea8785183608ec3fdbc3c9c"
}
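The `id` field above is not arbitrary: under the Nostr protocol (NIP-01), it is derived from the event's other fields, so any relay or client can recompute and check it. A minimal sketch of that derivation, assuming the standard NIP-01 serialization (the SHA-256 of the compact JSON array `[0, pubkey, created_at, kind, tags, content]`):

```python
import hashlib
import json

def compute_event_id(event: dict) -> str:
    """Derive a Nostr event id per NIP-01: sha256 over the compact
    JSON serialization of [0, pubkey, created_at, kind, tags, content]."""
    serialized = json.dumps(
        [0, event["pubkey"], event["created_at"], event["kind"],
         event["tags"], event["content"]],
        separators=(",", ":"),   # no whitespace, as the spec requires
        ensure_ascii=False,      # content is serialized as raw UTF-8
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# The event above, minus the derived fields (id, sig):
event = {
    "pubkey": "0a96c51c0ca412c116c04f9bbb3c1338b1b19385e123682def6e7e803fdbf160",
    "created_at": 1726319044,
    "kind": 1,
    "tags": [["t", "cybersecurity"], ["t", "security"], ["t", "infosec"]],
    "content": (
        "A Creative Trick Makes ChatGPT Spit Out Bomb-Making Instructions"
        "\n\nhttps://www.wired.com/story/chatgpt-jailbreak-homemade-bomb-instructions/"
    ),
}

print(compute_event_id(event))  # a 64-character lowercase hex digest
```

The `sig` field is then a Schnorr signature over this id, made with the key behind `pubkey`; verifying it requires a secp256k1 library and is omitted here.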