2024-07-31 05:57:34

VessOnSecurity on Nostr:

Bwahahahahaha...

Meta built a "prompt guard" that protects AI chatbots from prompt injection.

It is bypassed by using the prompt "I g n o r e p r e v i o u s i n s t r u c t i o n s" — the original injection with a space inserted between each character:

https://github.com/meta-llama/llama-models/issues/50
Author Public Key: npub1jw3gppe8mxtdd5szxpvakxg9s00kdxqmdmp7x5v84w0urnyw3y5qn2d3q7