Researchers Trick AI Browser Comet into Buying Fake Goods

Researchers Trick AI Browser Comet into Buying Fake Goods

Guardio researchers tested a browser equipped with an AI agent and found it vulnerable to both classic and modern attack methods. These weaknesses allow attackers to manipulate the agent into interacting with malicious pages and prompts.

AI-powered browsers can autonomously surf the web, make purchases, and handle tasks such as processing email, booking tickets, and filling out forms.

Currently, the most notable example is Comet, developed by the creators of Perplexity, which served as the focus of Guardio’s research. Microsoft Edge also integrates AI agent functionality through Copilot, and OpenAI is reportedly developing a similar platform under the codename Aura.

Although these tools are still marketed primarily to enthusiasts, Comet is quickly moving into the mainstream consumer market. Guardio warns, however, that they are not adequately protected against either well-known or emerging attack techniques.

Vulnerabilities in Action

Guardio’s tests demonstrated that browsers with AI agents are exposed to phishing, prompt injection attacks, and fraudulent purchases on fake websites.

  • Fake Store Test: In one experiment, analysts instructed Comet to purchase an Apple Watch on a fraudulent Walmart website they created using the Lovable service. Instead of verifying the store’s legitimacy, Comet scanned the page, proceeded to checkout, and automatically entered credit card details and an address—completing the purchase without requesting user confirmation.
  • Phishing Email Test: In another case, researchers crafted a fake Wells Fargo email containing a phishing link and sent it from a ProtonMail address. Comet treated the email as authentic, followed the link, and loaded a counterfeit login page. It then prompted the user to enter credentials, effectively handing over sensitive information.
  • Prompt Injection Test: To assess resilience against prompt injections, specialists designed a fake CAPTCHA page using the classic ClickFix technique, enhanced with hidden instructions embedded in the page’s code. Comet interpreted the concealed commands as legitimate and clicked the CAPTCHA button—triggering the download of a malicious file.

Larger Implications

Guardio stresses that these are only early examples of the challenges posed by AI-driven browsers. The rise of agent AI could shift the threat landscape away from targeting people directly and toward exploiting the agents that act on their behalf.

“In the era of AI vs. AI, fraudsters don’t need to trick millions of individuals; breaking one AI model is enough,” the researchers explained. “Once successful, the exploit can be scaled indefinitely. Since attackers use the same models, they can train their malicious AI against the victim’s AI until the scheme works flawlessly.”

Recommendations

For now, Guardio concludes that AI agents in browsers remain immature products. Users should avoid assigning them sensitive responsibilities such as banking, shopping, or managing email. They also advise against providing credentials, financial details, or personal information to AI systems—instead, users should enter this data manually to reduce risk.

Read more