Close Menu
  • Home
  • News
  • Cyber Security
  • Internet of Things
  • Tips and Advice

Subscribe to Updates

Get the latest creative news from FooBar about art, design and business.

What's Hot

Substack Confirms Data Breach, “Limited User Data” Compromised

February 6, 2026

SmarterMail Fixes Critical Unauthenticated RCE Flaw with CVSS 9.3 Score

February 6, 2026

Here’s what you should know

February 6, 2026
Facebook X (Twitter) Instagram
Saturday, February 7
Facebook X (Twitter) Instagram Pinterest Vimeo
Cyberwire Daily
  • Home
  • News
  • Cyber Security
  • Internet of Things
  • Tips and Advice
Cyberwire Daily
Home»Cyber Security»UK NCSC Raises Alarms Over Prompt Injection Attacks
Cyber Security

UK NCSC Raises Alarms Over Prompt Injection Attacks

Team-CWDBy Team-CWDDecember 9, 2025No Comments4 Mins Read
Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
Share
Facebook Twitter LinkedIn Pinterest Email


Prompt injection vulnerabilities may never be fully mitigated as a category and network defenders should instead focus on ways to reduce their impact, government security experts have warned.

Then National Cyber Security Centre (NCSC) technical director for platforms research, David C, warned security professionals not to treat prompt injection like SQL injection.

“SQL injection is … illustrative of a recurring problem in cybersecurity; that is, ‘data’ and ‘instructions’ being handled incorrectly,” he explained.

“This allows an attacker to supply ‘data’ that is executed by the system as an instruction. It’s the same underlying issue for many other critical vulnerability types that include cross-site scripting and exploitation of buffer overflows.”

However, the same rules don’t apply to prompt injection, because large language models (LLMs) don’t distinguish between data and instructions.

“When you provide an LLM prompt, it doesn’t understand the text it in the way a person does. It is simply predicting the most likely next token from the text so far,” the blog continued.

“As there is no inherent distinction between ‘data’ and ‘instruction’, it’s very possible that prompt injection attacks may never be totally mitigated in the way that SQL injection attacks can be.”

This is why mitigations such as detecting prompt injection attempts, training models to prioritize “instructions” over “data,” and explaining to a model what “data” is are doomed to failure, David C argued.

Read more on prompt injection attacks: “PromptFix” Attacks Could Supercharge Agentic AI Threats

A better way to approach the challenge is to look at prompt injection not as code injection but exploitation of an “inherently confusable deputy.”

David C argued that LLMs are “inherently confusable” because the risk can’t be fully mitigated.

“Rather than hoping we can apply a mitigation that fixes prompt injection, we instead need to approach it by seeking to reduce the risk and the impact. If the system’s security cannot tolerate the remaining risk, it may not be a good use case for LLMs,” he explained.

Reducing Prompt Injection Risks

The NCSC suggested the following steps to reduce prompt injection risk, all of which are aligned to ETSI (TS 104 223) on Baseline Cyber Security Requirements for AI Models and Systems.

  • Developer/security team/organizational awareness of this class of vulnerabilities and that there will always be a residual risk that can’t be fully mitigated with a product or appliance
  • Secure LLM design, especially if the LLM calls tools or uses APIs based on its output. Protections should focus on non-LLM safeguards that constrain the actions of the system, such as preventing a model that processes emails from external individuals from having access to privileged tools
  • Make it harder to inject malicious prompts, such as marking “data” sections as separate to “instructions”
  • Monitoring logging information to identify suspicious activity, such as failed tool/API calls

Failure to address the challenge early on could lead to a similar situation to SQL injection bugs, which have only recently become much rarer.

“We risk seeing this pattern repeated with prompt injection, as we are on a path to embed genAI into most applications,” David C concluded.

“If those applications are not designed with prompt injection in mind, a similar wave of breaches may follow.”

Exabeam chief AI officer, Steve Wilson, agreed that current approaches to tackling prompt injection are failing.

“CISOs need to shift their mindset. Defending AI agents is less like securing traditional software and far more like defending the humans inside an organization. Agents, like people, are messy, adaptive, and prone to being manipulated, coerced or confused,” he added.

“That makes them more analogous to insider threats than to classic application components. Whether dealing with a malicious prompt, compromised upstream data or unintended reasoning pathways, constant vigilance is required. Effective AI security will come not from magical layers of protection, but from operational discipline, monitoring, containment and the expectation that these systems will continue to behave unpredictably for years to come.”



Source

Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Previous ArticleGartner Calls For Pause on AI Browser Use
Next Article New Albiriox MaaS Malware Targets 400+ Apps for On-Device Fraud and Screen Control
Team-CWD
  • Website

Related Posts

Cyber Security

Why AI’s Rise Makes Protecting Personal Data More Critical Than Ever

February 6, 2026
Cyber Security

New Hacking Campaign Exploits Microsoft Windows WinRAR Vulnerability

February 5, 2026
Cyber Security

Two Critical Flaws Found in n8n AI Workflow Automation Platform

February 4, 2026
Add A Comment
Leave A Reply Cancel Reply

Latest News

North Korean Hackers Turn JSON Services into Covert Malware Delivery Channels

November 24, 202522 Views

macOS Stealer Campaign Uses “Cracked” App Lures to Bypass Apple Securi

September 7, 202517 Views

North Korean Hackers Exploit Threat Intel Platforms For Phishing

September 7, 20256 Views

U.S. Treasury Sanctions DPRK IT-Worker Scheme, Exposing $600K Crypto Transfers and $1M+ Profits

September 5, 20256 Views

Ukrainian Ransomware Fugitive Added to Europe’s Most Wanted

September 11, 20255 Views
Stay In Touch
  • Facebook
  • YouTube
  • TikTok
  • WhatsApp
  • Twitter
  • Instagram
Most Popular

North Korean Hackers Turn JSON Services into Covert Malware Delivery Channels

November 24, 202522 Views

macOS Stealer Campaign Uses “Cracked” App Lures to Bypass Apple Securi

September 7, 202517 Views

North Korean Hackers Exploit Threat Intel Platforms For Phishing

September 7, 20256 Views
Our Picks

What if your romantic AI chatbot can’t keep a secret?

November 18, 2025

How to help older family members avoid scams

October 31, 2025

Chronology of a Skype attack

February 5, 2026

Subscribe to Updates

Get the latest news from cyberwiredaily.com

Facebook X (Twitter) Instagram Pinterest
  • Home
  • Contact
  • Privacy Policy
  • Terms of Use
  • California Consumer Privacy Act (CCPA)
© 2026 All rights reserved.

Type above and press Enter to search. Press Esc to cancel.