Scamlexity
"Yet, our research shows that this convenience comes with a cost: security guardrails were missing or inconsistent, leaving the AI free to interact with phishing pages, fake shops, and even hidden malicious prompts, all without the human’s awareness or ability to intervene.
"We built and tested three scenarios, from a fake Walmart store and a real in-the-wild Wells Fargo phishing site to PromptFix —our AI-era take on the ClickFix scam that hides prompt injection inside a fake captcha to directly take control of a victim’s AI Agent.
"The results reveal an attack surface far wider than anything we’ve faced before, where breaking one AI model could mean compromising millions of users simultaneously."