Why Nostr? What is Njump?
2025-03-27 14:07:06

securityskeptic :donor: :verified: on Nostr: A recent paper describes an experimental attack against large language model (LLM) ...

A recent paper describes an experimental attack against large language model (LLM) agents that underscores the importance of assessing risk when employing AI. The authors explain how they employed a memory injection attack to “produce harmful outputs when the past records retrieved for demonstration are malicious”.

I began thinking about the many malicious actors who might take advantage of this or other memory-poisoning attacks. I'm sure I've only identified a partial list.

https://interisle.substack.com/p/memory-injection-attacks-against

#ai #cybercrime #LLM #cyberthreats
Author Public Key
npub1wmwrp7wl2g7jj8pm29mcqjw73a4yuve2qfnjk35qqte7rs6w3yssjt3htg