Cyberwire Daily
Cyber Security

How Security Teams Can Manage Agentic AI Risks

By Team-CWD · October 4, 2025


Security teams are well-versed in managing insider threats. These threats come from trusted individuals with legitimate system access who exploit that trust, whether through malicious intent or reckless behavior.

According to the UK Cyber Security Breaches Survey 2025, insider threats contributed to 50% of UK businesses experiencing a cyber breach or attack in the past 12 months.

Now, AI agents represent a new type of insider on the horizon. Without proper consideration, these digital entities are poised to become the ultimate agents of chaos within existing authorization frameworks.

What Works for Humans Doesn’t Always Work for Agents

Authorization (AuthZ) systems manage users’ access to resources, ensuring that people can only perform the actions they’re supposed to. However, most AuthZ systems weren’t built to stop everything users might attempt because they were designed with the expectation that external factors would constrain human misbehavior.

This is why over-provisioning access to users is common and has traditionally been manageable. When someone joins a company, it’s simpler to copy an existing set of permissions to their account rather than carefully consider minimal access rights. This approach works because humans understand context, but AI agents have no such awareness.
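The gap between permission cloning and least-privilege provisioning can be sketched in a few lines. This is a hypothetical illustration; the grant store and function names are not from any particular IAM product:

```python
# Hypothetical grant store: {principal: {(resource, action), ...}}
grants = {
    "template_user": {("crm", "read"), ("billing", "read"), ("billing", "export")},
}

def clone_permissions(grants: dict, template: str, new_user: str) -> None:
    """Over-provisioning: the new account inherits every grant its
    template had, including access it may never need."""
    grants[new_user] = set(grants[template])

def provision_minimal(grants: dict, new_user: str, required: set) -> None:
    """Least privilege: grant only what the role demonstrably requires."""
    grants[new_user] = set(required)

clone_permissions(grants, "template_user", "new_hire")
provision_minimal(grants, "new_agent", {("crm", "read")})

print(len(grants["new_hire"]))   # 3 -- cloned every grant the template had
print(len(grants["new_agent"]))  # 1 -- only the explicitly required grant
```

A human given the cloned set rarely abuses the extra access; an agent optimizing for its goal will eventually use it, which is why the minimal path matters far more for agents.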

Agentic AI systems operate with the same trusted access as human users, but without the social constraints, fear of consequences or common sense that typically keep humans from overstepping boundaries. While a human employee might hesitate before accessing sensitive data they don’t need, an AI agent will optimize for efficiency and exploit every permission it’s been granted in pursuit of its goals.

This creates a perfect storm for AuthZ systems that were designed around human behavior patterns. AI agents require a new approach.

Three Ways Security Teams Can Minimize Agentic AI Chaos

Responsible governance can limit the chaos that agentic AI may cause within AuthZ systems by focusing on three key areas:

Implement Composite Identities

Current authentication (AuthN) and AuthZ systems cannot distinguish between human users and AI agents. When AI agents take actions, they operate under human identities or use access credentials based on human-centric permission models.

This complicates simple questions: Who authored this code? Who initiated this merge request? Who created this Git commit?

It also creates accountability gaps around questions such as: Who told the AI agent to create this code? What context did the agent need to build it? What resources did the AI have access to?

Composite identities solve this problem by linking an AI agent’s digital identity with the human user instructing it. When an AI agent attempts to access a resource, the system can authenticate and authorize both the agent and its human operator, creating a complete audit trail.

This approach maintains accountability while enabling organizations to set more granular permissions based on the specific human-AI pairing.
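A composite identity check can be sketched as follows. This is a minimal illustration of the idea described above, not any vendor's implementation; all class and identifier names are hypothetical:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass(frozen=True)
class CompositeIdentity:
    """Links an AI agent's digital identity to the human operator instructing it."""
    agent_id: str
    operator_id: str

@dataclass
class AuditEvent:
    identity: CompositeIdentity
    resource: str
    action: str
    allowed: bool
    timestamp: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

class CompositeAuthorizer:
    """Grants access only when BOTH the agent and its human operator hold the
    permission, and records every decision for a complete audit trail."""

    def __init__(self, agent_grants: dict, operator_grants: dict):
        self.agent_grants = agent_grants        # {agent_id: {(resource, action), ...}}
        self.operator_grants = operator_grants  # {operator_id: {(resource, action), ...}}
        self.audit_log: list = []

    def authorize(self, identity: CompositeIdentity, resource: str, action: str) -> bool:
        allowed = (
            (resource, action) in self.agent_grants.get(identity.agent_id, set())
            and (resource, action) in self.operator_grants.get(identity.operator_id, set())
        )
        self.audit_log.append(AuditEvent(identity, resource, action, allowed))
        return allowed

authz = CompositeAuthorizer(
    agent_grants={"code-agent-1": {("repo/main", "commit")}},
    operator_grants={"alice": {("repo/main", "commit"), ("repo/main", "merge")}},
)
pair = CompositeIdentity(agent_id="code-agent-1", operator_id="alice")
print(authz.authorize(pair, "repo/main", "commit"))  # True: both hold the grant
print(authz.authorize(pair, "repo/main", "merge"))   # False: the agent was never granted merge
```

The key design choice is the intersection: the effective permission set of the pairing is narrower than either identity alone, and every decision, allowed or denied, lands in the audit log with both identities attached.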

Deploy Comprehensive Monitoring Frameworks

Operations, development and security teams need ways to track the activities of AI agents across multiple workflows, processes and systems. It's not sufficient to know what an agent is doing in a codebase, for instance; teams also need to monitor its activity in the staging and production environments, in associated databases and in any applications it might have access to.

Organizations should consider using Autonomous Resource Information Systems (ARIS) that mirror existing Human Resource Information Systems (HRIS). These frameworks maintain profiles of autonomous agents, document their abilities and specializations, and manage their operational boundaries.

We can see the beginnings of these technologies in LLM data management systems like Knostic, but the field is rapidly evolving.
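An ARIS record could be modeled much like an HRIS employee profile. The sketch below is a hypothetical minimal version under that assumption; the field and class names are illustrative, not drawn from any existing product:

```python
from dataclasses import dataclass

@dataclass
class AgentProfile:
    """One ARIS record, mirroring an HRIS employee profile."""
    agent_id: str
    owner: str                 # the human or team accountable for this agent
    capabilities: set          # documented abilities, e.g. {"deploy", "rollback"}
    allowed_environments: set  # the agent's documented operational boundary

class AgentRegistry:
    """A minimal ARIS: tracks agents the way an HRIS tracks employees."""

    def __init__(self):
        self._profiles: dict = {}

    def register(self, profile: AgentProfile) -> None:
        self._profiles[profile.agent_id] = profile

    def within_boundary(self, agent_id: str, environment: str) -> bool:
        """False for unregistered agents or activity outside the documented boundary."""
        profile = self._profiles.get(agent_id)
        return profile is not None and environment in profile.allowed_environments

registry = AgentRegistry()
registry.register(AgentProfile(
    agent_id="deploy-agent-7",
    owner="platform-team",
    capabilities={"deploy", "rollback"},
    allowed_environments={"staging"},
))
print(registry.within_boundary("deploy-agent-7", "staging"))     # True: inside boundary
print(registry.within_boundary("deploy-agent-7", "production"))  # False: outside boundary
```

Monitoring pipelines across codebase, staging, production and databases can then consult one registry to flag any activity that falls outside an agent's documented boundary.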

Establish Transparency and Accountability Structures

Even with sophisticated monitoring frameworks, organizations must maintain clear accountability structures for autonomous AI agents. This means establishing policies that require disclosure when AI tools are being used and designating individuals responsible for agent oversight.

Regular human review of agent actions and outputs is essential, but more importantly, organizations need clear escalation procedures when agents overstep their boundaries.

This accountability structure should include audits of agent permissions, review of unusual behavior patterns and established playbooks for rapidly revoking or modifying agent access when problems arise.
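One playbook step, rapidly revoking an agent's access when problems arise, can be sketched as a single revocation-and-alert routine. This is an illustrative assumption about how such a playbook might look in code; the grant store shape and function names are hypothetical:

```python
def quarantine_agent(grants: dict, agent_id: str, reason: str, notify) -> set:
    """Playbook step: strip every permission an agent holds in one action,
    then alert the designated overseer. `grants` maps agent_id -> set of grants."""
    revoked = grants.pop(agent_id, set())  # atomic from the store's point of view
    notify(f"agent {agent_id} quarantined ({reason}); {len(revoked)} grants revoked")
    return revoked

alerts = []
grants = {"report-agent-2": {("finance-db", "read"), ("mail", "send")}}
revoked = quarantine_agent(grants, "report-agent-2", "unusual access pattern", alerts.append)

print(len(revoked))                 # 2: both grants were revoked at once
print("report-agent-2" in grants)   # False: the agent no longer holds any access
```

Revoking the whole set at once, rather than permission by permission, keeps the response fast when an agent has overstepped; the returned set preserves what was revoked so access can be audited and selectively restored later.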

Responsible Agent Deployment

In many cases, the use of AI agents will lead to remarkable innovations and breakthroughs. They will also force teams to reimagine the structure of current AuthZ systems.

This form of disruption is not unprecedented. The shift to cloud computing similarly challenged existing security frameworks, forcing organizations to develop new approaches to identity management, network security, and data protection.

Security often follows innovation, and success requires learning to strike a balance. Facing this transformation head-on ensures AI agents deliver on their promise of productivity without becoming agents of chaos.


