News

AI is Everywhere, But CISOs are Still Securing It with Yesterday’s Skills and Tools, Study Finds

By Team-CWD · March 25, 2026 · 3 Mins Read


A majority of security leaders are struggling to defend AI systems with tools and skills that are not fit for the challenge, according to the AI and Adversarial Testing Benchmark Report 2026 from Pentera.

The report, based on a survey of 300 US CISOs and senior security leaders, examines how organizations are securing AI infrastructure and highlights critical gaps tied to skills shortages and reliance on security controls not designed for the AI era.

AI adoption is outpacing security visibility

AI systems are rarely deployed in isolation. They are layered across and integrated into existing corporate technology, from cloud platforms and identity systems to applications and data pipelines. With ownership spread across disparate teams, effective centralized oversight has broken down.

As a result, 67 percent of CISOs reported limited visibility into how AI is being used across their organization. None of the respondents reported full visibility; rather, they acknowledged being aware of, or accepting, some form of unmanaged or unsanctioned AI usage.

Without a clear view of where AI systems operate or what resources they can access, security teams struggle to assess risk effectively. Basic questions, such as which identities AI systems rely on, what data they can reach, or how they behave when controls fail, often remain unanswered.
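Answering even the first of those questions can begin with a crude inventory of which AI-provider credentials are present in an environment. A minimal sketch in Python follows; the variable names and provider mapping are illustrative assumptions, not taken from the report:

```python
import os

# Assumed mapping of credential variable names to AI providers.
# This list is illustrative only, not exhaustive or authoritative.
AI_KEY_VARS = {
    "OPENAI_API_KEY": "OpenAI",
    "ANTHROPIC_API_KEY": "Anthropic",
    "HUGGINGFACE_TOKEN": "Hugging Face",
    "AZURE_OPENAI_API_KEY": "Azure OpenAI",
}


def inventory_ai_identities(env=None):
    """Return (variable, provider) pairs for AI credentials found in env."""
    if env is None:
        env = os.environ
    return [(name, provider)
            for name, provider in AI_KEY_VARS.items()
            if name in env]


if __name__ == "__main__":
    for name, provider in inventory_ai_identities():
        print(f"{provider}: credential present in {name}")
```

A real inventory would of course extend far beyond environment variables, to secret stores, CI configuration, and service-account permissions, but even this level of enumeration surfaces identities that security teams may not know are in use.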

Skills, not budget, are the primary barrier

Although AI security is now a regular topic in boardrooms and executive discussions, the study shows that the biggest challenges are not financial.

CISOs identified the following as their top obstacles to securing AI infrastructure:

  • Lack of internal expertise (50 percent)
  • Limited visibility into AI usage (48 percent)
  • Insufficient security tools designed specifically for AI systems (36 percent)

Only 17 percent cited budget constraints as a primary concern. This suggests that many organizations are willing to invest in AI security, but do not yet have the specialized skills needed to evaluate AI-related risks in real environments.

AI systems introduce behaviors that security teams are still learning to assess, including autonomous decision-making, indirect access paths, and privileged interaction between systems. Without the right expertise and active testing, it becomes difficult to evaluate whether existing controls perform as intended.

Legacy controls are carrying most of the load

In the absence of AI-specific best practices, skills, and tooling, most enterprises are extending existing security controls to cover AI infrastructure.

The study found that 75 percent of CISOs rely on legacy security controls, such as endpoint, application, cloud, or API security tools, to protect AI systems. Only 11 percent reported having security tools designed specifically to secure AI infrastructure.

This approach reflects a familiar pattern seen during previous technology shifts, where organizations initially adapt existing defenses before more tailored security practices emerge. While this can provide basic coverage, controls built for traditional systems may not account for how AI changes access patterns and expands potential attack paths.
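One way this extension of existing defenses plays out in practice is repurposing an API-security or egress control as an allowlist for approved AI endpoints. A minimal sketch, with hostnames chosen purely as assumptions for illustration:

```python
from urllib.parse import urlparse

# Assumed allowlist of sanctioned AI provider hosts; illustrative only.
APPROVED_AI_HOSTS = {"api.openai.com", "api.anthropic.com"}


def is_approved_ai_call(url: str) -> bool:
    """Return True if an outbound request targets a sanctioned AI endpoint."""
    return urlparse(url).hostname in APPROVED_AI_HOSTS
```

A control like this can catch unsanctioned AI usage at the network edge, but it says nothing about what data is sent to an approved endpoint or how the model's output is used downstream, which is precisely the kind of AI-specific risk the report argues legacy tooling misses.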

A familiar challenge, now applied to AI

Taken together, the findings show that AI security challenges stem from foundational gaps rather than a lack of awareness or intent.

As AI becomes a core part of enterprise infrastructure, the report suggests that organizations will need to focus on building expertise and improving how they validate security controls across environments where AI is already operating.

To explore the full findings, download the AI and Adversarial Testing Benchmark Report 2026 for a deeper discussion of the data and key takeaways.

Note: This article was written by Ryan Dory, Director, Technical Advisors at Pentera. 


