Close Menu
  • Home
  • News
  • Cyber Security
  • Internet of Things
  • Tips and Advice

Subscribe to Updates

Get the latest creative news from FooBar about art, design and business.

What's Hot

Substack Confirms Data Breach, “Limited User Data” Compromised

February 6, 2026

SmarterMail Fixes Critical Unauthenticated RCE Flaw with CVSS 9.3 Score

February 6, 2026

Here’s what you should know

February 6, 2026
Facebook X (Twitter) Instagram
Saturday, February 7
Facebook X (Twitter) Instagram Pinterest Vimeo
Cyberwire Daily
  • Home
  • News
  • Cyber Security
  • Internet of Things
  • Tips and Advice
Cyberwire Daily
Home»News»ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts
News

ServiceNow AI Agents Can Be Tricked Into Acting Against Each Other via Second-Order Prompts

Team-CWDBy Team-CWDNovember 28, 2025No Comments3 Mins Read
Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
Share
Facebook Twitter LinkedIn Pinterest Email


Malicious actors can exploit default configurations in ServiceNow’s Now Assist generative artificial intelligence (AI) platform and leverage its agentic capabilities to conduct prompt injection attacks.

The second-order prompt injection, according to AppOmni, makes use of Now Assist’s agent-to-agent discovery to execute unauthorized actions, enabling attackers to copy and exfiltrate sensitive corporate data, modify records, and escalate privileges.

“This discovery is alarming because it isn’t a bug in the AI; it’s expected behavior as defined by certain default configuration options,” said Aaron Costello, chief of SaaS Security Research at AppOmni.

“When agents can discover and recruit each other, a harmless request can quietly turn into an attack, with criminals stealing sensitive data or gaining more access to internal company systems. These settings are easy to overlook.”

The attack is made possible because of agent discovery and agent-to-agent collaboration capabilities within ServiceNow’s Now Assist. With Now Assist offering the ability to automate functions such as help-desk operations, the scenario opens the door to possible security risks.

For instance, a benign agent can parse specially crafted prompts embedded into content it’s allowed access to and recruit a more potent agent to read or change records, copy sensitive data, or send emails, even when built-in prompt injection protections are enabled.

The most significant aspect of this attack is that the actions unfold behind the scenes, unbeknownst to the victim organization. At its core, the cross-agent communication is enabled by controllable configuration settings, including the default LLM to use, tool setup options, and channel-specific defaults where the agents are deployed –

  • The underlying large language model (LLM) must support agent discovery (both Azure OpenAI LLM and Now LLM, which is the default choice, support the feature)
  • Now Assist agents are automatically grouped into the same team by default to invoke each other
  • An agent is marked as being discoverable by default when published

While these defaults can be useful to facilitate communication between agents, the architecture can be susceptible to prompt injections when an agent whose main task is to read data that’s not inserted by the user invoking the agent.

“Through second-order prompt injection, an attacker can redirect a benign task assigned to an innocuous agent into something far more harmful by employing the utility and functionality of other agents on its team,” AppOmni said.

“Critically, Now Assist agents run with the privilege of the user who started the interaction unless otherwise configured, and not the privilege of the user who created the malicious prompt and inserted it into a field.”

Following responsible disclosure, ServiceNow said the system works as intended, but the company has since updated its documentation to state potential risks associated with the configurations more clearly. The findings demonstrate the need for strengthening AI agent protection, as enterprises increasingly incorporate AI capabilities into their workflows.

To mitigate such prompt injection threats, it’s advised to configure supervised execution mode for privileged agents, disable the autonomous override property (“sn_aia.enable_usecase_tool_execution_mode_override”), segment agent duties by team, and monitor AI agents for suspicious behavior.

“If organizations using Now Assist’s AI agents aren’t closely examining their configurations, they’re likely already at risk,” Costello added.



Source

computer security cyber attacks cyber news cyber security news cyber security news today cyber security updates cyber updates data breach hacker news hacking news how to hack information security network security ransomware malware software vulnerability the hacker news
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Previous ArticleThreat Actors Exploit Calendar Subscriptions for Phishing and Malware
Next Article EdgeStepper Implant Reroutes DNS Queries to Deploy Malware via Hijacked Software Updates
Team-CWD
  • Website

Related Posts

News

Substack Confirms Data Breach, “Limited User Data” Compromised

February 6, 2026
News

SmarterMail Fixes Critical Unauthenticated RCE Flaw with CVSS 9.3 Score

February 6, 2026
News

Chinese-Made Malware Kit Targets Chinese-Based Edge Devices

February 6, 2026
Add A Comment
Leave A Reply Cancel Reply

Latest News

North Korean Hackers Turn JSON Services into Covert Malware Delivery Channels

November 24, 202522 Views

macOS Stealer Campaign Uses “Cracked” App Lures to Bypass Apple Securi

September 7, 202517 Views

North Korean Hackers Exploit Threat Intel Platforms For Phishing

September 7, 20256 Views

U.S. Treasury Sanctions DPRK IT-Worker Scheme, Exposing $600K Crypto Transfers and $1M+ Profits

September 5, 20256 Views

Ukrainian Ransomware Fugitive Added to Europe’s Most Wanted

September 11, 20255 Views
Stay In Touch
  • Facebook
  • YouTube
  • TikTok
  • WhatsApp
  • Twitter
  • Instagram
Most Popular

North Korean Hackers Turn JSON Services into Covert Malware Delivery Channels

November 24, 202522 Views

macOS Stealer Campaign Uses “Cracked” App Lures to Bypass Apple Securi

September 7, 202517 Views

North Korean Hackers Exploit Threat Intel Platforms For Phishing

September 7, 20256 Views
Our Picks

What are brushing scams and how do I stay safe?

December 24, 2025

Your information is on the dark web. What happens next?

January 13, 2026

Beware of Winter Olympics scams and other cyberthreats

February 2, 2026

Subscribe to Updates

Get the latest news from cyberwiredaily.com

Facebook X (Twitter) Instagram Pinterest
  • Home
  • Contact
  • Privacy Policy
  • Terms of Use
  • California Consumer Privacy Act (CCPA)
© 2026 All rights reserved.

Type above and press Enter to search. Press Esc to cancel.