Tim Chase on Nostr: Recently saw an interesting comparison between modern LLM prompt-injection and ...
Published at
2025-01-19 13:48:04Event JSON
{
"id": "4bbd6de3dc7e39bf32bc01376bba2539225d79166e0f875e0a1fec55aea7d353",
"pubkey": "9d1fe9f29c7a1e42464c3985f7185fe112b286140d32b8586dd34c6f92d6d9ee",
"created_at": 1737294484,
"kind": 1,
"tags": [
[
"proxy",
"https://mastodon.bsd.cafe/users/gumnos/statuses/113855331339534266",
"activitypub"
]
],
"content": "Recently saw an interesting comparison between modern LLM prompt-injection and old-school phreaking—interpreting data *and commands* over the same channel leads to arbitrary users being able to send commands.\n\nhttps://lobste.rs/s/bbrgdy/lessons_from_red_teaming_100_generative#c_d0tdc1",
"sig": "c5b1bdac72fb7913135b5e396c03c0b3a62efb9d652b153c2f6dd09d4ebfbbe0f26d0c97e100b5e9574eaa4b4c5231eef1207b18fca470156bfaf0a232f2c130"
}