2024-10-16 22:37:38

racheltobac on Nostr: I just live hacked Arlene Dickinson (Dragons' Den star - Canada's Shark Tank) by ...

I just live hacked Arlene Dickinson (Dragons' Den star - Canada's Shark Tank) by using her breached passwords, social media posts, an AI voice clone, & *just 1 picture* for a deepfake live video call. Thank you Elevate Conference and Mastercard for asking me to demo these attacks live!

https://www.youtube.com/watch?v=ysu7vEkZdN0

What are the takeaways from this Live Hack video with Arlene?

1. Stop reusing passwords - when you reuse a password and it shows up in a data breach, I can then use that password against you everywhere it's reused online and simply log in as you, stealing money, access, data, etc. (see the sketch after this list for one way to check whether a password has already been breached).

2. Turn on multi-factor authentication (MFA) - turning on this second step when you log in makes it more obnoxious for me to take over your accounts. I then have to try to steal your MFA codes from you (or, if you use a FIDO MFA solution like a YubiKey, I'm likely just plain out of luck and have to move on to another target)!

3. Recognize that AI has made attacks more believable and scalable - will every or even most hacks involve AI? Nope! Most attacks are simple: they leverage your breached passwords to log in as you, or they come in as phishing over email, text, calls, etc.
That being said, it's important to realize that some attackers will attempt to leverage AI, especially if you have a high threat model. Arlene is a star with millions of followers around the world, so she has to be extra politely paranoid about those who reach out with sensitive requests!
If someone with a high threat model (in the public eye, job involves wiring money, lots of followers on social media, activist/being targeted, etc.) receives a call requesting sensitive info or a wire transfer, recognize that the attacker could believably use a voice clone on that call and could even build a believable deepfake for a live video call.
This is not how all attacks work, but it's especially important for those with elevated threat models to recognize that AI can be leveraged in attacks to up the believability with voice clones, deepfake video, etc.
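For takeaway 1, here is a minimal sketch (in Python) of checking whether a password has already appeared in a known breach, using the public Have I Been Pwned "Pwned Passwords" range API; the function name breach_count is just illustrative. The k-anonymity design means only the first five characters of the password's SHA-1 hash are sent over the network, so the password itself never leaves your machine.

# Sketch: check whether a password appears in known breaches via the
# Have I Been Pwned "Pwned Passwords" range API (k-anonymity model).
# Only the first 5 characters of the SHA-1 hash are sent to the service.
import hashlib
import urllib.request

def breach_count(password: str) -> int:
    sha1 = hashlib.sha1(password.encode("utf-8")).hexdigest().upper()
    prefix, suffix = sha1[:5], sha1[5:]
    url = f"https://api.pwnedpasswords.com/range/{prefix}"
    with urllib.request.urlopen(url) as resp:
        body = resp.read().decode("utf-8")
    # Each response line is "HASH_SUFFIX:COUNT"; match our suffix locally.
    for line in body.splitlines():
        candidate, _, count = line.partition(":")
        if candidate == suffix:
            return int(count)
    return 0

if __name__ == "__main__":
    # Example only; never paste real passwords into scripts you don't trust.
    print(breach_count("password123"))

A nonzero count means that exact password has shown up in breach dumps and should be retired everywhere it's reused.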

What do I mean by "politely paranoid" in this video?
I recommend verifying that people are who they say they are before taking sensitive actions.
- If you have a high threat model and someone calls you asking for a wire transfer, use another method of communication to confirm it's them before taking action. Chat them, Signal message them, email them, or call them back on the number you have on file to thwart spoofing. This catches me 9 times out of 10 when I'm hacking! It's relevant at work, when you're buying a house, and pretty much anytime you need to send money!
- If you receive an email from a board member asking for a copy of a sensitive document, verify that the board member is who they say they are via another method of communication before sending over a document containing sensitive work details.

Stay politely paranoid, folks!
Author Public Key: npub1c3lsjtrc5sqsnq3wqmd8u623xfu55qmndcp68mrj8u39gpwjwulq662pzz