Close Menu
  • Home
  • News
  • Cyber Security
  • Internet of Things
  • Tips and Advice

Subscribe to Updates

Get the latest creative news from FooBar about art, design and business.

What's Hot

China-Linked UAT-8099 Targets IIS Servers in Asia with BadIIS SEO Malware

February 7, 2026

Badges, Bytes and Blackmail

February 7, 2026

Ex-Google Engineer Convicted for Stealing AI Secrets for China Startup

February 7, 2026
Facebook X (Twitter) Instagram
Saturday, February 7
Facebook X (Twitter) Instagram Pinterest Vimeo
Cyberwire Daily
  • Home
  • News
  • Cyber Security
  • Internet of Things
  • Tips and Advice
Cyberwire Daily
Home»News»Two Critical Flaws Uncovered in Wondershare RepairIt Exposing User Data and AI Models
News

Two Critical Flaws Uncovered in Wondershare RepairIt Exposing User Data and AI Models

Team-CWDBy Team-CWDOctober 3, 2025No Comments6 Mins Read
Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
Share
Facebook Twitter LinkedIn Pinterest Email


Cybersecurity researchers have disclosed two security flaws in Wondershare RepairIt that exposed private user data and potentially exposed the system to artificial intelligence (AI) model tampering and supply chain risks.

The critical-rated vulnerabilities in question, discovered by Trend Micro, are listed below –

  • CVE-2025-10643 (CVSS score: 9.1) – An authentication bypass vulnerability that exists within the permissions granted to a storage account token
  • CVE-2025-10644 (CVSS score: 9.4) – An authentication bypass vulnerability that exists within the permissions granted to an SAS token

Successful exploitation of the two flaws can allow an attacker to circumvent authentication protection on the system and launch a supply chain attack, ultimately resulting in the execution of arbitrary code on customers’ endpoints.

Trend Micro researchers Alfredo Oliveira and David Fiser said the AI-powered data repair and photo editing application “contradicted its privacy policy by collecting, storing, and, due to weak Development, Security, and Operations (DevSecOps) practices, inadvertently leaking private user data.”

The poor development practices include embedding overly permissive cloud access tokens directly in the application’s code that enables read and write access to sensitive cloud storage. Furthermore, the data is said to have been stored without encryption, potentially opening the door to wider abuse of users’ uploaded images and videos.

To make matters worse, the exposed cloud storage contains not only user data but also AI models, software binaries for various products developed by Wondershare, container images, scripts, and company source code, enabling an attacker to tamper with AI models or the executables, paving the way for supply chain attacks targeting its downstream customers.

“Because the binary automatically retrieves and executes AI models from the unsecure cloud storage, attackers could modify these models or their configurations and infect users unknowingly,” the researchers said. “Such an attack could distribute malicious payloads to legitimate users through vendor-signed software updates or AI model downloads.”

Beyond customer data exposure and AI model manipulation, the issues can also pose grave consequences, ranging from intellectual property theft and regulatory penalties to erosion of consumer trust.

The cybersecurity company said it responsibly disclosed the two issues through its Zero Day Initiative (ZDI) in April 2025, but not that it has yet to receive a response from the vendor despite repeated attempts. In the absence of a fix, users are recommended to “restrict interaction with the product.”

“The need for constant innovations fuels an organization’s rush to get new features to market and maintain competitiveness, but they might not foresee the new, unknown ways these features could be used or how their functionality may change in the future,” Trend Micro said.

“This explains how important security implications may be overlooked. That is why it is crucial to implement a strong security process throughout one’s organization, including the CD/CI pipeline.”

The Need for AI and Security to Go Hand in Hand

The development comes as Trend Micro previously warned against exposing Model Context Protocol (MCP) servers without authentication or storing sensitive credentials such as MCP configurations in plaintext, which threat actors can exploit to gain access to cloud resources, databases, or inject malicious code.

Each MCP server acts as an open door to its data source: databases, cloud services, internal APIs, or project management systems,” the researchers said. “Without authentication, sensitive data such as trade secrets and customer records becomes accessible to everyone.”

In December 2024, the company also found that exposed container registries could be abused to gain unauthorized access and pull target Docker images to extract the AI model within it, modify the model’s parameters to influence its predictions, and push the tampered image back to the exposed registry.

“The tampered model could behave normally under typical conditions, only displaying its malicious alterations when triggered by specific inputs,” Trend Micro said. “This makes the attack particularly dangerous, as it could bypass basic testing and security checks.”

The supply chain risk posed by MCP servers has also been highlighted by Kaspersky, which devised a proof-of-concept (PoC) exploit to highlight how MCP servers installed from untrusted sources can conceal reconnaissance and data exfiltration activities under the guise of an AI-powered productivity tool.

“Installing an MCP server basically gives it permission to run code on a user machine with the user’s privileges,” security researcher Mohamed Ghobashy said. “Unless it is sandboxed, third-party code can read the same files the user has access to and make outbound network calls – just like any other program.”

The findings show that the rapid adoption of MCP and AI tools in enterprise settings to enable agentic capabilities, particularly without clear policies or security guardrails, can open brand new attack vectors, including tool poisoning, rug pulls, shadowing, prompt injection, and unauthorized privilege escalation.

CIS Build Kits

In a report published last week, Palo Alto Networks Unit 42 revealed that the context attachment feature used in AI code assistants to bridge an AI model’s knowledge gap can be susceptible to indirect prompt injection, where adversaries embed harmful prompts within external data sources to trigger unintended behavior in large language models (LLMs).

Indirect prompt injection hinges on the assistant’s inability to differentiate between instructions issued by the user and those surreptitiously embedded by the attacker in external data sources.

Thus, when a user inadvertently supplies to the coding assistant third-party data (e.g., a file, repository, or URL) that has already been tainted by an attacker, the hidden malicious prompt could be weaponized to trick the tool into executing a backdoor, injecting arbitrary code into an existing codebase, and even leaking sensitive information.

“Adding this context to prompts enables the code assistant to provide more accurate and specific output,” Unit 42 researcher Osher Jacob said. “However, this feature could also create an opportunity for indirect prompt injection attacks if users unintentionally provide context sources that threat actors have contaminated.”

AI coding agents have also been found vulnerable to what’s called an “lies-in-the-loop” (LitL) attack that aims to convince the LLM that the instructions it’s been fed are much safer than they really are, effectively overriding human-in-the-loop (HitL) defenses put in place when performing high-risk operations.

“LitL abuses the trust between a human and the agent,” Checkmarx researcher Ori Ron said. “After all, the human can only respond to what the agent prompts them with, and what the agent prompts the user is inferred from the context the agent is given. It’s easy to lie to the agent, causing it to provide fake, seemingly safe context via commanding and explicit language in something like a GitHub issue.”

“And the agent is happy to repeat the lie to the user, obscuring the malicious actions the prompt is meant to guard against, resulting in an attacker essentially making the agent an accomplice in getting the keys to the kingdom.”



Source

computer security cyber attacks cyber news cyber security news cyber security news today cyber security updates cyber updates data breach hacker news hacking news how to hack information security network security ransomware malware software vulnerability the hacker news
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Previous ArticleConfucius Shifts from Document Stealers to Python Backdoors
Next Article Expired US Cyber Law Puts Data Sharing and Threat Response at Risk
Team-CWD
  • Website

Related Posts

News

China-Linked UAT-8099 Targets IIS Servers in Asia with BadIIS SEO Malware

February 7, 2026
News

Badges, Bytes and Blackmail

February 7, 2026
News

Ex-Google Engineer Convicted for Stealing AI Secrets for China Startup

February 7, 2026
Add A Comment
Leave A Reply Cancel Reply

Latest News

North Korean Hackers Turn JSON Services into Covert Malware Delivery Channels

November 24, 202522 Views

macOS Stealer Campaign Uses “Cracked” App Lures to Bypass Apple Securi

September 7, 202517 Views

North Korean Hackers Exploit Threat Intel Platforms For Phishing

September 7, 20256 Views

U.S. Treasury Sanctions DPRK IT-Worker Scheme, Exposing $600K Crypto Transfers and $1M+ Profits

September 5, 20256 Views

Ukrainian Ransomware Fugitive Added to Europe’s Most Wanted

September 11, 20255 Views
Stay In Touch
  • Facebook
  • YouTube
  • TikTok
  • WhatsApp
  • Twitter
  • Instagram
Most Popular

North Korean Hackers Turn JSON Services into Covert Malware Delivery Channels

November 24, 202522 Views

macOS Stealer Campaign Uses “Cracked” App Lures to Bypass Apple Securi

September 7, 202517 Views

North Korean Hackers Exploit Threat Intel Platforms For Phishing

September 7, 20256 Views
Our Picks

What’s at stake if your employees post too much online

December 1, 2025

‘What happens online stays online’ and other cyberbullying myths, debunked

September 11, 2025

Common Apple Pay scams, and how to stay safe

January 22, 2026

Subscribe to Updates

Get the latest news from cyberwiredaily.com

Facebook X (Twitter) Instagram Pinterest
  • Home
  • Contact
  • Privacy Policy
  • Terms of Use
  • California Consumer Privacy Act (CCPA)
© 2026 All rights reserved.

Type above and press Enter to search. Press Esc to cancel.