Wired Security RSS on Nostr: A Creative Trick Makes ChatGPT Spit Out Bomb-Making Instructions
Published at 2024-09-14 15:35:42

Event JSON
{
  "id": "c1d41909a52a5895311e4f823c3c89f5fc82cadb33f2e241fd3f52475129707c",
  "pubkey": "0a96c51c0ca412c116c04f9bbb3c1338b1b19385e123682def6e7e803fdbf160",
  "created_at": 1726328142,
  "kind": 1,
  "tags": [
    ["t", "cybersecurity"],
    ["t", "security"],
    ["t", "infosec"]
  ],
  "content": "A Creative Trick Makes ChatGPT Spit Out Bomb-Making Instructions\n\nhttps://www.wired.com/story/chatgpt-jailbreak-homemade-bomb-instructions/",
  "sig": "4e15b1dc12530e5d75165b10a15370cdfe6c2728b5745b8d23045ba82c13fbe53fe0a6bad9ecfc53947b6f590ca696f3c7b6449323a313fff8a186046b3c505a"
}
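The `id` field of a Nostr event is not arbitrary: per NIP-01 it is the SHA-256 hash of a canonical JSON serialization of the event's other fields. The sketch below, assuming the NIP-01 serialization rule (a whitespace-free JSON array `[0, pubkey, created_at, kind, tags, content]`), shows how a client could recompute the id from the fields of the event above; the function name is illustrative, not from any particular library.

```python
import hashlib
import json

def nostr_event_id(pubkey: str, created_at: int, kind: int,
                   tags: list, content: str) -> str:
    # NIP-01: the event id is the lowercase hex SHA-256 of the JSON array
    # [0, pubkey, created_at, kind, tags, content], serialized with no
    # extra whitespace and without ASCII-escaping non-ASCII characters.
    serialized = json.dumps(
        [0, pubkey, created_at, kind, tags, content],
        separators=(",", ":"),
        ensure_ascii=False,
    )
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Fields taken from the event JSON above.
event_id = nostr_event_id(
    pubkey="0a96c51c0ca412c116c04f9bbb3c1338b1b19385e123682def6e7e803fdbf160",
    created_at=1726328142,
    kind=1,
    tags=[["t", "cybersecurity"], ["t", "security"], ["t", "infosec"]],
    content=(
        "A Creative Trick Makes ChatGPT Spit Out Bomb-Making Instructions"
        "\n\nhttps://www.wired.com/story/chatgpt-jailbreak-homemade-bomb-instructions/"
    ),
)
print(event_id)  # 64 lowercase hex characters
```

A relay or client that recomputes this hash and compares it to the published `id` (and then checks `sig` against `pubkey`) can verify the event has not been tampered with.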