Cyber Security

Cybersecurity Staff Don’t Know How Fast They Could Stop AI Attacks

By Team-CWD | March 23, 2026

Over half (56%) of IT and cybersecurity professionals have no idea how quickly they could shut down AI systems affected by a cyber-attack or security incident, new research by ISACA has found.

Published on 23 March by the global certification body, the research is based on a survey of over 3,400 security and digital professionals.

Just under a third of respondents (32%) said that they believed they could halt potentially compromised AI systems within an hour, while 7% said they thought it would take over an hour.

Confusion Over Enterprise AI Ownership Raises Security and Governance Risks

Part of the issue stems from confusion over who is responsible for managing enterprise AI applications. A fifth (20%) of survey respondents said they didn’t know who was accountable for AI apps.

Meanwhile, 28% of those surveyed said managing AI was the responsibility of board-level executives, 18% said it was the responsibility of the CIO or CTO, while 13% said it was the responsibility of their CISO.

No matter where responsibility lies, under half (43%) of security professionals surveyed said they have high confidence in their organization’s ability to investigate a serious AI incident and explain what happened to leadership or regulators.

Just over a quarter (27%) said they had little to no confidence in their organization’s ability to do so.

According to the ISACA research, many security professionals believe that their organization would struggle to identify a potential security issue related to AI, due to a lack of human oversight of systems.

Only 36% of those surveyed said that humans must approve most AI actions before they happen within their organization. A further 26% said AI activity was only reviewed after the action had taken place.

Meanwhile, 11% said AI actions are only reviewed in the event of specifically flagged activity and 20% said they did not know what role humans played in overseeing decisions made by AI at their organization.
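The survey describes three distinct human-oversight models: pre-approval, after-the-fact review, and review of flagged activity only. Purely as an illustration (this sketch is not from the ISACA research, and the function and class names are hypothetical), the difference between the models can be expressed as a simple gating policy:

```python
from enum import Enum

class OversightMode(Enum):
    """The three human-oversight models described in the survey."""
    PRE_APPROVAL = "human approves before the action runs"
    POST_REVIEW = "action runs, human reviews afterwards"
    FLAGGED_ONLY = "human reviews only specifically flagged activity"

def route_ai_action(action: str, mode: OversightMode, flagged: bool = False) -> str:
    """Hypothetical gate: routes an AI-proposed action through an oversight model."""
    if mode is OversightMode.PRE_APPROVAL:
        # Nothing executes until a human signs off.
        return f"queued for human approval: {action}"
    if mode is OversightMode.POST_REVIEW:
        # Executes immediately; a human audits the log later.
        return f"executed, logged for later review: {action}"
    # FLAGGED_ONLY: a human sees it only if a detector flagged it.
    if flagged:
        return f"executed, escalated to human reviewer: {action}"
    return f"executed without review: {action}"
```

The practical point of the survey finding is which branch an organization sits in by default: under the flagged-only model, an unflagged malicious action passes through with no human in the loop at all.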

“While organizations may feel the push to adopt AI technology quickly to keep pace and leverage its capabilities, it is imperative they have the proper guardrails and governance in place before doing so,” said Jenai Marinkovic, vCISO and CTO of Tiro Security, co-founder and board chair of GRCIE, and ISACA Emerging Trends Working Group member.

“Enterprises need to ensure the right people, policies, processes, and plans are in place to be able to not only use AI effectively and responsibly, but also to avoid potential major disruption if a crisis hits,” she added.



