Cyberwire Daily
AI Adoption Outpaces Safety Policies, Leaving Organizations Exposed

By Team-CWD | May 6, 2026 | 3 Mins Read


AI has become embedded in organizations, yet fewer than half have any form of AI safety or security policies in place, potentially leaving them exposed to data breaches, privacy failures and other cyber threats.

According to new research published by ISACA on May 5, 90% of digital trust professionals believe that employees in their organization use AI tools.

However, only 38% said their organization has a formal, comprehensive AI policy in place to manage use of AI tools, while 30% said they have a limited policy in place.

Despite the rise of AI in the workplace, 25% of respondents said their organization has no AI policies in place at all.

The lack of formal policies on appropriate AI usage has fueled the rise of Shadow AI, with employees turning to tools such as LLMs in their day-to-day work. This can lead to sensitive company information being shared with AI models.
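The leakage risk described above is often mitigated by screening prompts before they leave the organization. The sketch below is purely illustrative and not from the ISACA report: it masks a couple of hypothetical sensitive-token patterns with regexes; a real deployment would rely on a proper DLP or classification service.

```python
import re

# Illustrative patterns only; real deployments would use a DLP service,
# not hand-rolled regexes.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "API_KEY": re.compile(r"\b(?:sk|key)-[A-Za-z0-9]{16,}\b"),
}

def redact(prompt: str) -> str:
    """Mask sensitive tokens before a prompt is sent to an external LLM."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label} REDACTED]", prompt)
    return prompt

print(redact("Contact alice@corp.example with key sk-abcdefgh12345678"))
```

Even a minimal gate like this gives security teams a single choke point through which outbound AI traffic can be observed, which is exactly what Shadow AI usage bypasses.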

Those polled as part of ISACA’s annual AI Pulse Poll said they were unsure whether they could prevent a security incident caused by a Shadow AI tool unknown to security and IT teams.

Uncertainties Over Ability to Shut Down AI

In total, 56% of respondents said they do not know how long it would take to halt an AI system due to a security incident.

Only 20% said their organization has a process in place to shut down or override AI systems if something goes wrong, such as a system behaving maliciously or being compromised by a data poisoning attack.
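A shutdown process like the one the 20% describe is often implemented as a kill switch that every serving path must check. The sketch below is a hypothetical illustration, not any specific vendor's mechanism: the names (`KillSwitch`, `serve_model`) are invented for this example.

```python
import threading

class KillSwitch:
    """Central flag every AI-serving code path must check before running.

    Hypothetical sketch; in practice halting would also page on-call
    and write an audit log entry.
    """
    def __init__(self) -> None:
        self._halted = threading.Event()
        self._reason = ""

    def halt(self, reason: str) -> None:
        self._reason = reason
        self._halted.set()

    def guard(self) -> None:
        if self._halted.is_set():
            raise RuntimeError(f"AI system halted: {self._reason}")

switch = KillSwitch()

def serve_model(prompt: str) -> str:
    switch.guard()          # refuse to serve once the switch is thrown
    return "model output"   # placeholder for the real inference call

print(serve_model("hello"))
switch.halt("suspected data poisoning")
```

The design point is that the override lives outside the model itself: once `halt()` is called, every subsequent request fails fast instead of depending on the compromised system to police its own behavior.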

“With only 38% of practitioners confident in their board’s understanding of AI risks, the leadership deficit is as real as the technology one,” said Ulrika Dellrud, member of ISACA’s Emerging Trends Working Group and chief privacy and data ethics officer at Smarter Contracts.

“Effective AI governance also starts with mastering your data: without strong data and privacy governance as a foundation, organizations cannot manage AI risk, ensure trust, or unlock sustainable value. The path forward is clear: AI success will depend not just on innovation, but on disciplined governance, informed leadership and responsible data stewardship.”

The research also found that data privacy and security professionals believe AI-powered cybersecurity threats are escalating, and many think these threats are going unnoticed by their organizations.

In the AI Pulse Poll, respondents highlighted several growing challenges linked to AI threats:

  • 71% said AI-powered phishing and social engineering attacks are now more difficult to spot
  • 58% said AI has made it significantly harder to authenticate digital information
  • 38% said their trust in traditional threat detection methods has declined as a result

Despite this, many respondents still see AI as an advantage for cyber defenders: 43% said deploying AI-based cybersecurity tools has improved their organization’s ability to detect and respond to cyber threats.

The ISACA AI Pulse Poll is based on responses from 3,400 digital trust professionals worldwide, spanning IT audit, governance, cybersecurity, privacy and emerging technology roles.


