Close Menu
  • Home
  • News
  • Cyber Security
  • Internet of Things
  • Tips and Advice

Subscribe to Updates

Get the latest creative news from FooBar about art, design and business.

What's Hot

Conduent Data Breach Impacts Over 10.5 Million Individuals

October 31, 2025

Chinese Threat Actors Exploit ToolShell SharePoint Flaw Weeks After Microsoft’s July Patch

October 31, 2025

Chinese-Linked Hackers Exploit Windows Flaw to Spy on EU Diplomats

October 31, 2025
Facebook X (Twitter) Instagram
Friday, October 31
Facebook X (Twitter) Instagram Pinterest Vimeo
Cyberwire Daily
  • Home
  • News
  • Cyber Security
  • Internet of Things
  • Tips and Advice
Cyberwire Daily
Home»News»Securing AI to Benefit from AI
News

Securing AI to Benefit from AI

Team-CWDBy Team-CWDOctober 29, 2025No Comments6 Mins Read
Share Facebook Twitter Pinterest LinkedIn Tumblr Reddit Telegram Email
Share
Facebook Twitter LinkedIn Pinterest Email


Artificial intelligence (AI) holds tremendous promise for improving cyber defense and making the lives of security practitioners easier. It can help teams cut through alert fatigue, spot patterns faster, and bring a level of scale that human analysts alone can’t match. But realizing that potential depends on securing the systems that make it possible.

Every organization experimenting with AI in security operations is, knowingly or not, expanding its attack surface. Without clear governance, strong identity controls, and visibility into how AI makes its decisions, even well-intentioned deployments can create risk faster than they reduce it. To truly benefit from AI, defenders need to approach securing it with the same rigor they apply to any other critical system. That means establishing trust in the data it learns from, accountability for the actions it takes, and oversight for the outcomes it produces. When secured correctly, AI can amplify human capability instead of replacing it to help practitioners work smarter, respond faster, and defend more effectively.

Establishing Trust for Agentic AI Systems

As organizations begin to integrate AI into defensive workflows, identity security becomes the foundation for trust. Every model, script, or autonomous agent operating in a production environment now represents a new identity — one capable of accessing data, issuing commands, and influencing defensive outcomes. If those identities aren’t properly governed, the tools meant to strengthen security can quietly become sources of risk.

The emergence of Agentic AI systems make this especially important. These systems don’t just analyze; they may act without human intervention. They triage alerts, enrich context, or trigger response playbooks under delegated authority from human operators. Each action is, in effect, a transaction of trust. That trust must be bound to identity, authenticated through policy, and auditable end to end.

The same principles that secure people and services must now apply to AI agents:

  • Scoped credentials and least privilege to ensure every model or agent can access only the data and functions required for its task.
  • Strong authentication and key rotation to prevent impersonation or credential leakage.
  • Activity provenance and audit logging so every AI-initiated action can be traced, validated, and reversed if necessary.
  • Segmentation and isolation to prevent cross-agent access, ensuring that one compromised process cannot influence others.

In practice, this means treating every agentic AI system as a first-class identity within your IAM framework. Each should have a defined owner, lifecycle policy, and monitoring scope just like any user or service account. Defensive teams should continuously verify what those agents can do, not just what they were intended to do, because capability often drifts faster than design. With identity established as the foundation, defenders can then turn their attention to securing the broader system.

Securing AI: Best Practices for Success

Securing AI begins with protecting the systems that make it possible — the models, data pipelines, and integrations now woven into everyday security operations. Just as

we secure networks and endpoints, AI systems must be treated as mission-critical infrastructure that requires layered and continuous defense.

The SANS Secure AI Blueprint outlines a Protect AI track that provides a clear starting point. Built on the SANS Critical AI Security Guidelines, the blueprint defines six control domains that translate directly into practice:

  • Access Controls: Apply least privilege and strong authentication to every model, dataset, and API. Log and review access continuously to prevent unauthorized use.
  • Data Controls: Validate, sanitize, and classify all data used for training, augmentation, or inference. Secure storage and lineage tracking reduce the risk of model poisoning or data leakage.
  • Deployment Strategies: Harden AI pipelines and environments with sandboxing, CI/CD gating, and red-teaming before release. Treat deployment as a controlled, auditable event, not an experiment.
  • Inference Security: Protect models from prompt injection and misuse by enforcing input/output validation, guardrails, and escalation paths for high-impact actions.
  • Monitoring: Continuously observe model behavior and output for drift, anomalies, and signs of compromise. Effective telemetry allows defenders to detect manipulation before it spreads.
  • Model Security: Version, sign, and integrity-check models throughout their lifecycle to ensure authenticity and prevent unauthorized swaps or retraining.

These controls align directly NIST’s AI Risk Management Framework and the OWASP Top 10 for LLMs, which highlights the most common and consequential vulnerabilities in AI systems — from prompt injection and insecure plugin integrations to model poisoning and data exposure. Applying mitigations from those frameworks inside these six domains helps translate guidance into operational defense. Once these foundations are in place, teams can focus on using AI responsibly by knowing when to trust automation and when to keep humans in the loop.

Balancing Augmentation and Automation

AI systems are capable of assisting human practitioners like an intern that never sleeps. However, it is critical for security teams to differentiate what to automate from what to augment. Some tasks benefit from full automation, especially those that are repeatable, measurable, and low-risk if an error occurs. However, others demand direct human oversight because context, intuition, or ethics matter more than speed.

Threat enrichment, log parsing, and alert deduplication are prime candidates for automation. These are data-heavy, pattern-driven processes where consistency outperforms creativity. By contrast, incident scoping, attribution, and response decisions rely on context that AI cannot fully grasp. Here, AI should assist by surfacing indicators, suggesting next steps, or summarizing findings while practitioners retain decision authority.

Finding that balance requires maturity in process design. Security teams should categorize workflows by their tolerance for error and the cost of automation failure. Wherever the risk of false positives or missed nuance is high, keep humans in the loop. Wherever precision can be objectively measured, let AI accelerate the work.

Join us at SANS Surge 2026!

I’ll dive deeper into this topic during my keynote at SANS Surge 2026 (Feb. 23-28, 2026), where we’ll explore how security teams can ensure AI systems are safe to depend on. If your organization is moving fast on AI adoption, this event will help you move more securely. Join us to connect with peers, learn from experts, and see what secure AI in practice really looks like.

Register for SANS Surge 2026 here.

Note: This article was contributed by Frank Kim, SANS Institute Fellow.



Source

computer security cyber attacks cyber news cyber security news cyber security news today cyber security updates cyber updates data breach hacker news hacking news how to hack information security network security ransomware malware software vulnerability the hacker news
Share. Facebook Twitter Pinterest LinkedIn Tumblr Email
Previous ArticlePHP Servers and IoT Devices Face Growing Cyber-Attack Risks
Next Article Open Source “b3” Benchmark to Boost LLM Security for Agents
Team-CWD
  • Website

Related Posts

News

Conduent Data Breach Impacts Over 10.5 Million Individuals

October 31, 2025
News

Chinese Threat Actors Exploit ToolShell SharePoint Flaw Weeks After Microsoft’s July Patch

October 31, 2025
News

Chinese-Linked Hackers Exploit Windows Flaw to Spy on EU Diplomats

October 31, 2025
Add A Comment
Leave A Reply Cancel Reply

Latest News

macOS Stealer Campaign Uses “Cracked” App Lures to Bypass Apple Securi

September 7, 202512 Views

North Korean Hackers Exploit Threat Intel Platforms For Phishing

September 7, 20256 Views

U.S. Treasury Sanctions DPRK IT-Worker Scheme, Exposing $600K Crypto Transfers and $1M+ Profits

September 5, 20256 Views

Ukrainian Ransomware Fugitive Added to Europe’s Most Wanted

September 11, 20255 Views

The risks of unsupported IoT tech

September 11, 20255 Views
Stay In Touch
  • Facebook
  • YouTube
  • TikTok
  • WhatsApp
  • Twitter
  • Instagram
Most Popular

macOS Stealer Campaign Uses “Cracked” App Lures to Bypass Apple Securi

September 7, 202512 Views

North Korean Hackers Exploit Threat Intel Platforms For Phishing

September 7, 20256 Views

U.S. Treasury Sanctions DPRK IT-Worker Scheme, Exposing $600K Crypto Transfers and $1M+ Profits

September 5, 20256 Views
Our Picks

Look out for phony verification pages spreading malware

September 14, 2025

Why you should never pay to get paid

September 15, 2025

It’s all fun and games until someone gets hacked

September 26, 2025

Subscribe to Updates

Get the latest news from cyberwiredaily.com

Facebook X (Twitter) Instagram Pinterest
  • Home
  • Contact
  • Privacy Policy
  • Terms of Use
  • California Consumer Privacy Act (CCPA)
© 2025 All rights reserved.

Type above and press Enter to search. Press Esc to cancel.