Close Menu
  • Home
  • News
  • Cyber Security
  • Internet of Things
  • Tips and Advice

Subscribe to Updates

Get the latest creative news from FooBar about art, design and business.

What's Hot

Codespaces RCE, AsyncRAT C2, BYOVD Abuse, AI Cloud Intrusions & 15+ Stories

February 14, 2026

The Buyer’s Guide to AI Usage Control

February 13, 2026

Fake AI Assistants in Google Chrome Web Store Steal Passwords

February 13, 2026
Facebook X (Twitter) Instagram
Saturday, February 14
Facebook X (Twitter) Instagram Pinterest Vimeo
Cyberwire Daily
  • Home
  • News
  • Cyber Security
  • Internet of Things
  • Tips and Advice
Cyberwire Daily
Home»News»AI Skills Represent Dangerous New Attack Surface, Says TrendAI
News

AI Skills Represent Dangerous New Attack Surface, Says TrendAI

Team-CWDBy Team-CWDFebruary 12, 2026No Comments3 Mins Read
Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
Share
Facebook Twitter LinkedIn Pinterest Email


The so-called “AI skills” used to scale and execute AI operations are dangerously exposed to data theft, sabotage and disruption, TrendAI has warned.

The newly named business unit of Trend Micro explained in a report published this week that AI skills are artifacts combining human-readable text with instructions that large language models (LLMs) can read and execute.

“AI skills encapsulate everything, from elements like human expertise, workflows, and operational constraints, to decision logic,” the report explained. “By capturing this knowledge into something executable, AI skills enable organizations to achieve scalability and knowledge transfer at previously unattainable levels.”

Examples of this approach are Anthropic’s Agent Skills, GPT Actions by OpenAI and Copilot Plugin by Microsoft.

Read more on AI threats: AI Security Threats Loom as Enterprise Usage Jumps 91%.

In this way, these artifacts could support use of AI for trading in financial services, enhanced service delivery in the public sector, or content generation in the media sector, TrendAI said.

However, these skills also pose a risk to enterprise security because they may expose customer/proprietary data, and decision-making logic.

“If an attacker gains access to the logic behind a skill, it can give them substantial opportunity for exploitation,” the report warned. “An attacker might also simply decide to trade or leak acquired data, thus exposing sensitive organizational information.”

With access to operational data and business logic, adversaries could disrupt public services, sabotage manufacturing processes, steal patient data, and much more.

AI-Enabled SOCs Face Rising Risks

The risks for these attack scenarios are particularly acute for AI-enabled SOCs.

Threat actors could identify and exploit detection blind spots in a SOC. Injection attacks are a major challenge in this regard, the TrendAI report claimed.

“AI skills mix user-supplied data with user-supplied instructions, and skill definitions might also mix both data and instructions and can reference external data sources,” TrendAI explained.

“This combination of data and executable logic creates an ambiguity, which in turn makes it difficult for defense tools – and even the AI engine itself – to safely differentiate between genuine analyst instructions and attacker-supplied content. Hence, the inability to defend against injection attacks.”

Principles for Securing AI Skills

The challenge for network defenders is that many of their security tools are unable to effectively detect, analyze and mitigate threats from unstructured text data, which AI skills are.

To help these teams, the report outlined a new eight-phase kill chain model specific to AI skills, and where there are new opportunities to detect malicious activity. It recommended running skills integrity monitoring, looking for SOC logic manipulation, and hunting for execution, credential access and data flow anomalies.

Established security best practices can also help. The report concluded with the following:

  • Treat skills as sensitive IP by assessing and mitigating risk throughout the lifecycle, with proper access control, versioning and change management
  • Separate skill logic and data from untrusted user-supplied data. The latter can lead to exploitation opportunities
  • Limit execution privileges by applying least-privilege principles when designing skills, and limiting execution context to minimum-required permissions in order to prevent lateral movement
  • Test how adversaries might exploit operational logic before deployment
  • Monitor, log and audit continuously, as you should for any business process. This is especially important in AI-enabled environments where traditional security boundaries blur



Source

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Previous ArticleDEAD#VAX Malware Campaign Deploys AsyncRAT via IPFS-Hosted VHD Phishing Files
Next Article Microsoft Develops Scanner to Detect Backdoors in Open-Weight Large Language Models
Team-CWD
  • Website

Related Posts

News

Codespaces RCE, AsyncRAT C2, BYOVD Abuse, AI Cloud Intrusions & 15+ Stories

February 14, 2026
News

The Buyer’s Guide to AI Usage Control

February 13, 2026
News

Fake AI Assistants in Google Chrome Web Store Steal Passwords

February 13, 2026
Add A Comment
Leave A Reply Cancel Reply

Latest News

North Korean Hackers Turn JSON Services into Covert Malware Delivery Channels

November 24, 202522 Views

macOS Stealer Campaign Uses “Cracked” App Lures to Bypass Apple Securi

September 7, 202517 Views

North Korean Hackers Exploit Threat Intel Platforms For Phishing

September 7, 20256 Views

U.S. Treasury Sanctions DPRK IT-Worker Scheme, Exposing $600K Crypto Transfers and $1M+ Profits

September 5, 20256 Views

Ukrainian Ransomware Fugitive Added to Europe’s Most Wanted

September 11, 20255 Views
Stay In Touch
  • Facebook
  • YouTube
  • TikTok
  • WhatsApp
  • Twitter
  • Instagram
Most Popular

North Korean Hackers Turn JSON Services into Covert Malware Delivery Channels

November 24, 202522 Views

macOS Stealer Campaign Uses “Cracked” App Lures to Bypass Apple Securi

September 7, 202517 Views

North Korean Hackers Exploit Threat Intel Platforms For Phishing

September 7, 20256 Views
Our Picks

How the always-on generation can level up their cybersecurity game

September 11, 2025

How chatbots can help spread scams

October 14, 2025

Can password managers get hacked? Here’s what to know

November 14, 2025

Subscribe to Updates

Get the latest news from cyberwiredaily.com

Facebook X (Twitter) Instagram Pinterest
  • Home
  • Contact
  • Privacy Policy
  • Terms of Use
  • California Consumer Privacy Act (CCPA)
© 2026 All rights reserved.

Type above and press Enter to search. Press Esc to cancel.