Why Nostr? What is Njump?
2024-09-25 06:45:17

Aljoscha Rittner (beandev) on Nostr: >>Within three months of the rollout, Rehberger found that memories could be created ...

>>Within three months of the rollout, Rehberger found that memories could be created and permanently stored through indirect prompt injection, an AI exploit that causes an #LLM to follow instructions from untrusted content such as emails, blog posts, or documents.

https://arstechnica.com/security/2024/09/false-memories-planted-in-chatgpt-give-hacker-persistent-exfiltration-channel/

#InfoSec #OpenAI #ChatGPT
Author Public Key
npub15yyevvzcx6jwtu9he343mt5ssdrkmamnmj0pu3gx629cvy7phc5qrmqyfc