Dan Goodin on Nostr: Spammers used OpenAI to generate messages that were unique for each recipient, making ...
Published at 2025-04-09 17:40:54

Event JSON
{
"id": "7c50779ea7ca2c9a6a470878b656620ac07ec00afa4a233c4b92fa8c80a22f18",
"pubkey": "213fab2c986489bc5cb7208142003791cb6efd20dae0ec4832d87d0d7b70d20b",
"created_at": 1744220454,
"kind": 1,
"tags": [
[
"proxy",
"https://infosec.exchange/users/dangoodin/statuses/114309231680314818",
"activitypub"
],
[
"client",
"Mostr",
"31990:6be38f8c63df7dbf84db7ec4a6e6fbbd8d19dca3b980efad18585c46f04b26f9:mostr",
"wss://relay.mostr.pub"
]
],
"content": "Spammers used OpenAI to generate messages that were unique for each recipient, making it possible to bypass filters and spam 80,000 sites since September. OpenAI said it has disabled the spammers' account. Is there anything more OpenAI could or should be doing to more proactively prevent this kind of thing. Probably not, but I want to ask people with experience in LLM security.\n\nhttps://www.sentinelone.com/labs/akirabot-ai-powered-bot-bypasses-captchas-spams-websites-at-scale/",
"sig": "3c8dca02bf7a4e66b9d780a367164a21a5665d7e2a4e9b97618753db82aea8da2912b056330c8536ed73e90c40840ed0b01805ff360e1b3961e3a1637ac0a3c6"
}
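
For reference, the "id" field above is not arbitrary: per NIP-01, it is the lowercase hex SHA-256 of a compact JSON serialization of the event's other fields, and "sig" is a BIP-340 Schnorr signature by "pubkey" over that id (verifying the signature needs a secp256k1 library and is omitted here). Below is a minimal Python sketch of the id derivation, assuming the event JSON has already been parsed into a dict; the helper name nostr_event_id is illustrative, not part of any library.

import hashlib
import json

def nostr_event_id(event: dict) -> str:
    # NIP-01: the id is the SHA-256 of the UTF-8 encoding of the compact
    # JSON array [0, pubkey, created_at, kind, tags, content].
    payload = [
        0,
        event["pubkey"],
        event["created_at"],
        event["kind"],
        event["tags"],
        event["content"],
    ]
    serialized = json.dumps(payload, separators=(",", ":"), ensure_ascii=False)
    return hashlib.sha256(serialized.encode("utf-8")).hexdigest()

# Usage sketch: parse the JSON shown above and compare the recomputed id
# against the event's "id" field.
# event = json.loads(event_json_text)
# assert nostr_event_id(event) == event["id"]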