Close Menu
  • Home
  • News
  • Cyber Security
  • Internet of Things
  • Tips and Advice

Subscribe to Updates

Get the latest creative news from FooBar about art, design and business.

What's Hot

Researchers Uncover Chrome Extensions Abusing Affiliate Links and Stealing ChatGPT Access

February 7, 2026

China-Linked UAT-8099 Targets IIS Servers in Asia with BadIIS SEO Malware

February 7, 2026

Badges, Bytes and Blackmail

February 7, 2026
Facebook X (Twitter) Instagram
Saturday, February 7
Facebook X (Twitter) Instagram Pinterest Vimeo
Cyberwire Daily
  • Home
  • News
  • Cyber Security
  • Internet of Things
  • Tips and Advice
Cyberwire Daily
Home»News»Cursor AI Code Editor Flaw Enables Silent Code Execution via Malicious Repositories
News

Cursor AI Code Editor Flaw Enables Silent Code Execution via Malicious Repositories

Team-CWDBy Team-CWDSeptember 22, 2025No Comments5 Mins Read
Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
Share
Facebook Twitter LinkedIn Pinterest Email


A security weakness has been disclosed in the artificial intelligence (AI)-powered code editor Cursor that could trigger code execution when a maliciously crafted repository is opened using the program.

The issue stems from the fact that an out-of-the-box security setting is disabled by default, opening the door for attackers to run arbitrary code on users’ computers with their privileges.

“Cursor ships with Workspace Trust disabled by default, so VS Code-style tasks configured with runOptions.runOn: ‘folderOpen’ auto-execute the moment a developer browses a project,” Oasis Security said in an analysis. “A malicious .vscode/tasks.json turns a casual ‘open folder’ into silent code execution in the user’s context.”

Cursor is an AI-powered fork of Visual Studio Code, which supports a feature called Workspace Trust to allow developers to safely browse and edit code regardless of where it came from or who wrote it.

With this option disabled, an attacker can make available a project in GitHub (or any platform) and include a hidden “autorun” instruction that instructs the IDE to execute a task as soon as a folder is opened, causing malicious code to be executed when the victim attempts to browse the booby-trapped repository in Cursor.

“This has the potential to leak sensitive credentials, modify files, or serve as a vector for broader system compromise, placing Cursor users at significant risk from supply chain attacks,” Oasis Security researcher Erez Schwartz said.

To counter this threat, users are advised to enable Workplace Trust in Cursor, open untrusted repositories in a different code editor, and audit them before opening them in the tool.

The development comes as prompt injections and jailbreaks have emerged as a stealthy and systemic threat plaguing AI-powered coding and reasoning agents like Claude Code, Cline, K2 Think, and Windsurf, allowing threat actors to embed malicious instructions in sneaky ways to trick the systems into performing malicious actions or leaking data from software development environments.

Software supply chain security outfit Checkmarx, in a report last week, revealed how Anthropic’s newly introduced automated security reviews in Claude Code could inadvertently expose projects to security risks, including instructing it to ignore vulnerable code through prompt injections, causing developers to push malicious or insecure code past security reviews.

“In this case, a carefully written comment can convince Claude that even plainly dangerous code is completely safe,” the company said. “The end result: a developer – whether malicious or just trying to shut Claude up – can easily trick Claude into thinking a vulnerability is safe.”

Another problem is that the AI inspection process also generates and executes test cases, which could lead to a scenario where malicious code is run against production databases if Claude Code isn’t properly sandboxed.

The AI company, which also recently launched a new file creation and editing feature in Claude, has warned that the feature carries prompt injection risks due to it running in a “sandboxed computing environment with limited internet access.”

Specifically, it’s possible for a bad actor to “inconspicuously” add instructions via external files or websites – aka indirect prompt injection – that trick the chatbot into downloading and running untrusted code or reading sensitive data from a knowledge source connected via the Model Context Protocol (MCP).

“This means Claude can be tricked into sending information from its context (e.g., prompts, projects, data via MCP, Google integrations) to malicious third parties,” Anthropic said. “To mitigate these risks, we recommend you monitor Claude while using the feature and stop it if you see it using or accessing data unexpectedly.”

That’s not all. Late last month, the company also revealed browser-using AI models like Claude for Chrome can face prompt injection attacks, and that it has implemented several defenses to address the threat and reduce the attack success rate of 23.6% to 11.2%.

“New forms of prompt injection attacks are also constantly being developed by malicious actors,” it added. “By uncovering real-world examples of unsafe behavior and new attack patterns that aren’t present in controlled tests, we’ll teach our models to recognize the attacks and account for the related behaviors, and ensure that safety classifiers will pick up anything that the model itself misses.”

CIS Build Kits

At the same time, these tools have also been found susceptible to traditional security vulnerabilities, broadening the attack surface with potential real-world impact –

  • A WebSocket authentication bypass in Claude Code IDE extensions (CVE-2025-52882, CVSS score: 8.8) that could have allowed an attacker to connect to a victim’s unauthenticated local WebSocket server simply by luring them to visit a website under their control, enabling remote command execution
  • An SQL injection vulnerability in the Postgres MCP server that could have allowed an attacker to bypass the read-only restriction and execute arbitrary SQL statements
  • A path traversal vulnerability in Microsoft NLWeb that could have allowed a remote attacker to read sensitive files, including system configurations (“/etc/passwd”) and cloud credentials (.env files), using a specially crafted URL
  • An incorrect authorization vulnerability in Lovable (CVE-2025-48757, CVSS score: 9.3) that could have allowed remote unauthenticated attackers to read or write to arbitrary database tables of generated sites
  • Open redirect, stored cross-site scripting (XSS), and sensitive data leakage vulnerabilities in Base44 that could have allowed attackers to access the victim’s apps and development workspace, harvest API keys, inject malicious logic into user-generated applications, and exfiltrate data
  • A vulnerability in Ollama Desktop arising as a result of incomplete cross-origin controls that could have allowed an attacker to stage a drive-by attack, where visiting a malicious website can reconfigure the application’s settings to intercept chats and even alter responses using poisoned models

“As AI-driven development accelerates, the most pressing threats are often not exotic AI attacks but failures in classical security controls,” Imperva said. “To protect the growing ecosystem of ‘vibe coding’ platforms, security must be treated as a foundation, not an afterthought.”



Source

computer security cyber attacks cyber news cyber security news cyber security news today cyber security updates cyber updates data breach hacker news hacking news how to hack information security network security ransomware malware software vulnerability the hacker news
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Previous ArticleGoogle Pixel 10 Adds C2PA Support to Verify AI-Generated Media Authenticity
Next Article FBI Says Threat Actors Are Spoofing its IC3 Site
Team-CWD
  • Website

Related Posts

News

Researchers Uncover Chrome Extensions Abusing Affiliate Links and Stealing ChatGPT Access

February 7, 2026
News

China-Linked UAT-8099 Targets IIS Servers in Asia with BadIIS SEO Malware

February 7, 2026
News

Badges, Bytes and Blackmail

February 7, 2026
Add A Comment
Leave A Reply Cancel Reply

Latest News

North Korean Hackers Turn JSON Services into Covert Malware Delivery Channels

November 24, 202522 Views

macOS Stealer Campaign Uses “Cracked” App Lures to Bypass Apple Securi

September 7, 202517 Views

North Korean Hackers Exploit Threat Intel Platforms For Phishing

September 7, 20256 Views

U.S. Treasury Sanctions DPRK IT-Worker Scheme, Exposing $600K Crypto Transfers and $1M+ Profits

September 5, 20256 Views

Ukrainian Ransomware Fugitive Added to Europe’s Most Wanted

September 11, 20255 Views
Stay In Touch
  • Facebook
  • YouTube
  • TikTok
  • WhatsApp
  • Twitter
  • Instagram
Most Popular

North Korean Hackers Turn JSON Services into Covert Malware Delivery Channels

November 24, 202522 Views

macOS Stealer Campaign Uses “Cracked” App Lures to Bypass Apple Securi

September 7, 202517 Views

North Korean Hackers Exploit Threat Intel Platforms For Phishing

September 7, 20256 Views
Our Picks

2025’s most common passwords were as predictable as ever

January 21, 2026

Beware of Winter Olympics scams and other cyberthreats

February 2, 2026

AI-powered financial scams swamp social media

September 11, 2025

Subscribe to Updates

Get the latest news from cyberwiredaily.com

Facebook X (Twitter) Instagram Pinterest
  • Home
  • Contact
  • Privacy Policy
  • Terms of Use
  • California Consumer Privacy Act (CCPA)
© 2026 All rights reserved.

Type above and press Enter to search. Press Esc to cancel.