Google Gemini Can Redirect Users to Phishing Sites

Google Gemini Can Redirect Users to Phishing Sites

Google Gemini for Workspace can be tricked into generating legitimate-looking email summaries that include malicious instructions, effectively redirecting users to phishing websites.

This attack leverages hidden prompt injections embedded in emails—commands that Gemini unknowingly follows when summarizing messages.


How the Attack Works

Marco Figueroa, cybersecurity expert and manager of Mozilla’s 0Din bug bounty program (launched in 2024 to reward vulnerability discoveries in LLMs and deep learning tools), revealed this critical flaw.

Attack Steps:

  1. Malicious Email Crafting
    An attacker embeds a hidden prompt within the email using HTML/CSS tricks—such as white text on a white background or font sizes set to zero.
    • The prompt is invisible to the recipient in Gmail.
    • With no obvious links or attachments, the email easily bypasses spam filters.
  2. Gemini Executes the Hidden Prompt
    When the user opens the email and asks Gemini to summarize it, the AI processes the hidden instructions as part of the message.
    • Example: Gemini might generate a fake security alert (e.g., “Your Gmail password is compromised”) along with a phony support phone number.
  3. User Deception
    Because Gemini is a trusted Google Workspace feature, users are more likely to trust and act on its output—such as calling the fake number or clicking a phishing link.

Risks & Impact

  • Higher Phishing Success Rate
    Users tend to trust AI-generated content, making them more vulnerable to manipulation.
  • No Obvious Red Flags
    Traditional email security tools may miss these attacks, since the emails contain no detectable malware, links, or attachments.

Mitigation Strategies

Figueroa recommends the following countermeasures:

  • Filter Hidden Text
    Email clients should be enhanced to detect and strip invisible HTML/CSS elements that may contain prompt injections.
  • Post-Process AI Summaries
    AI-generated outputs should be scanned for unexpected warnings, links, or phone numbers, and flagged for review before being shown to users.
  • User Education
    Train users to approach AI-generated security alerts with skepticism, especially those suggesting urgent action.

Google’s Response

Google responded by affirming that:

  • It is actively strengthening safeguards, including red-team testing to make models more resilient to such manipulations.
  • There are no known real-world cases of Gemini being exploited in this way—yet.

Affected Products

  • Google Gemini for Workspace (primarily enterprise and business users relying on AI to summarize emails).

Key Takeaway

While tools like Gemini boost productivity, they also introduce new vectors for social engineering and phishing attacks. Combating this evolving threat requires a combination of technical defenses and user awareness.

Stay vigilant—phishing is evolving with AI.

Read more