Vulnerabilities in Cursor AI Could Allow Arbitrary Code Execution

Vulnerabilities in Cursor AI Could Allow Arbitrary Code Execution

Several critical vulnerabilities have been patched in Cursor AI, a popular AI-powered code editor. The flaws allowed attackers to silently modify MCP (Model Context Protocol) configuration files and execute arbitrary code—without any user approval.


🔍 Key Findings

CVE-2025-54136 ("MCPoison")CVSS 7.2
Discovered by Check Point, this vulnerability enabled attackers to inject malicious MCP server configurations into shared GitHub repositories or local project files.
Once a user approved the configuration, attackers could replace benign commands with malicious payloads, resulting in remote code execution every time the project was opened.

  • Proof-of-concept: A reverse shell was demonstrated, giving persistent remote access to the attacker.

CVE-2025-54135 ("CurXecute")CVSS 8.6
Reported by Aim Labs, this flaw exploited indirect prompt injection and allowed attackers to:

  • Create malicious .cursor/mcp.json files
  • Execute arbitrary commands by abusing MCP server definitions

No user approval was required for creating these files, making attacks completely stealthy.


Auto-Run Bypass(CVE pending)
Identified by BackSlash and HiddenLayer, this issue allowed attackers to embed malicious prompts in Git repository READMEs. When the repo was cloned, Cursor’s Auto-Run mode would execute the embedded commands without warning, enabling:

  • Data exfiltration
  • Abuse of legitimate developer tools for stealthy file transfers
  • Silent malware deployment

🧪 Root Causes

  • One-time approval flaw: Cursor did not re-validate MCP config changes after the initial approval.
  • Prompt injection risks: Insufficient input sanitization in AI-driven workflows enabled command injection.
  • Auto-Run design flaw: No user confirmation was required for commands embedded in project files.

🛡️ Patches & Mitigations

Cursor v1.3 (released July 29, 2025) introduces the following protections:

  • Re-approval required for all MCP configuration changes
  • Blocking of unauthorized MCP file creation
  • Restriction of Auto-Run commands from untrusted sources

⚠️ Broader Implications

  • Supply chain risk: Compromised Git repositories could now serve as a mass infection vector for developer machines.
  • AI integration risks: Tools powered by LLMs (large language models) introduce new attack surfaces, especially when AI actions are automated or embedded into dev environments.

Expert Warning:
“As AI coding assistants reshape software development, we’ll see more overlooked threats like these. The ecosystem must adopt stricter validation for AI-generated workflows.”

Read more