How CISOs Can Drive Effective AI Governance

By Team-CWD | September 26, 2025


AI’s growing role in enterprise environments has heightened the urgency for Chief Information Security Officers (CISOs) to drive effective AI governance. When it comes to any emerging technology, governance is hard – but effective governance is even harder. The first instinct for most organizations is to respond with rigid policies. Write a policy document, circulate a set of restrictions, and hope the risk is contained. However, effective governance doesn’t work that way. It must be a living system that shapes how AI is used every day, guiding organizations through safe transformative change without slowing down the pace of innovation.

For CISOs, finding that balance between security and speed is critical in the age of AI. This technology simultaneously represents the greatest opportunity and the greatest risk enterprises have faced since the dawn of the internet. Move too fast without guardrails, and sensitive data leaks into prompts, shadow AI proliferates, or regulatory gaps become liabilities. Move too slowly, and competitors pull ahead with transformative efficiencies that are hard to match. Either path carries ramifications that can cost CISOs their jobs.

CISOs therefore cannot lead a “department of no” where AI adoption initiatives are stymied by the organization’s security function. It is crucial to instead find a path to yes, mapping governance to organizational risk tolerance and business priorities so that the security function serves as a true revenue enabler. In this article, I’ll share three components that can help CISOs make that shift and drive AI governance programs that enable safe adoption at scale.

1. Understand What’s Happening on the Ground

When ChatGPT first arrived in November 2022, most CISOs I know scrambled to publish strict policies that told employees what not to do. The intent was positive: sensitive data leakage was a legitimate concern. However, while policies written with that “document backward” approach look great in theory, they rarely work in practice. Because AI is evolving so quickly, AI governance must be designed with a “real-world forward” mindset that accounts for what’s actually happening on the ground inside an organization. This requires CISOs to have a foundational understanding of AI: the technology itself, where it is embedded, which SaaS platforms are enabling it, and how employees are using it to get their jobs done.

AI inventories, model registries, and cross-functional committees may sound like buzzwords, but they are practical mechanisms that can help security leaders develop this AI fluency. For example, an AI Bill of Materials (AIBOM) offers visibility into the components, datasets, and external services that will feed an AI model. Just as a software bill of materials (SBOM) clarifies third-party dependencies, an AIBOM ensures leaders know what data is being used, where it came from, and what risks it introduces.
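As a minimal sketch of what an AIBOM record might capture (the field names, example model, and risk labels below are illustrative assumptions, not a standard schema), a lightweight entry could look like this in Python:

from dataclasses import dataclass, field
from typing import List

@dataclass
class AIBOMEntry:
    """One AI Bill of Materials record: what feeds a model and where it came from."""
    model_name: str
    model_version: str
    base_model: str                    # upstream foundation model or framework
    training_datasets: List[str]       # data sources used to train or fine-tune
    external_services: List[str]       # third-party APIs the model calls at runtime
    data_classifications: List[str]    # e.g. "public", "internal", "customer PII"
    owner: str                         # accountable team or individual
    known_risks: List[str] = field(default_factory=list)

# Hypothetical example entry for an internal support chatbot
support_bot = AIBOMEntry(
    model_name="support-assistant",
    model_version="1.4.0",
    base_model="vendor-hosted LLM",
    training_datasets=["support-tickets-2024", "public-product-docs"],
    external_services=["vendor LLM API", "internal ticketing API"],
    data_classifications=["internal", "customer PII"],
    owner="customer-support-engineering",
    known_risks=["PII exposure in prompts", "vendor data retention"],
)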

Model registries serve a similar role for AI systems already in use. They track which models are deployed, when they were last updated, and how they’re performing to prevent “black box sprawl” and inform decisions about patching, decommissioning, or scaling usage. AI committees ensure that oversight doesn’t fall on security or IT alone. Often chaired by a designated AI lead or risk officer, these groups include representatives from legal, compliance, HR, and business units – turning governance from a siloed directive into a shared responsibility that bridges security concerns with business outcomes.
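For illustration, a lightweight model registry check might look like the Python sketch below, which flags deployed models whose last review falls outside an assumed 90-day window; the record fields, threshold, and example entries are hypothetical.

from dataclasses import dataclass
from datetime import date, timedelta
from typing import List

@dataclass
class ModelRecord:
    """One model registry entry for a deployed AI system."""
    name: str
    version: str
    deployed_on: date
    last_reviewed: date
    performance_metric: float   # e.g. rolling accuracy on a holdout set
    owner: str

def stale_models(registry: List[ModelRecord], max_age_days: int = 90) -> List[ModelRecord]:
    """Return models whose last review is older than the allowed window."""
    cutoff = date.today() - timedelta(days=max_age_days)
    return [m for m in registry if m.last_reviewed < cutoff]

# Hypothetical registry contents
registry = [
    ModelRecord("fraud-scorer", "2.1", date(2025, 3, 1), date(2025, 4, 15), 0.94, "risk-analytics"),
    ModelRecord("resume-screener", "1.0", date(2024, 11, 5), date(2024, 11, 5), 0.81, "hr-tech"),
]

for model in stale_models(registry):
    print(f"Review overdue: {model.name} v{model.version} (owner: {model.owner})")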

2. Align Policies to the Speed of the Organization

Without real-world forward policies, security leaders often fall into the trap of codifying controls they cannot realistically deliver. I’ve seen this firsthand through a CISO colleague of mine. Knowing employees were already experimenting with AI, he worked to enable the responsible adoption of several GenAI applications across his workforce. However, when a new CIO joined the organization and felt there were too many GenAI applications in use, the CISO was directed to ban all GenAI until one enterprise-wide platform was selected. A year later, that single platform still hadn’t been implemented, and employees were using unapproved GenAI tools that exposed the organization to shadow AI vulnerabilities. The CISO was stuck trying to enforce a blanket ban he couldn’t execute, fielding criticism without the authority to implement a workable solution.

This kind of scenario plays out when policies are written faster than they can be executed, or when they fail to anticipate the pace of organizational adoption. Policies that look decisive on paper can quickly become obsolete if they don’t evolve with leadership changes, embedded AI functionality, and the organic ways employees integrate new tools into their work. Governance must be flexible enough to adapt, or else it risks leaving security teams enforcing the impossible.

The way forward is to design policies as living documents. They should evolve as the business does, informed by actual use cases and aligned to measurable outcomes. Governance also can’t stop at policy; it needs to cascade into standards, procedures, and baselines that guide daily work. Only then do employees know what secure AI adoption really looks like in practice.
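To show how a policy might cascade into a machine-checkable baseline, here is a minimal Python sketch of an assumed allowlist mapping approved GenAI tools to the data classifications they may handle; the tool names and categories are purely illustrative, not a recommended baseline.

# Hypothetical AI usage baseline: which tools are approved for which data classifications.
APPROVED_AI_TOOLS = {
    "enterprise-copilot": {"public", "internal"},
    "internal-llm-gateway": {"public", "internal", "confidential"},
}

def is_use_permitted(tool: str, data_classification: str) -> bool:
    """Check a proposed AI use against the baseline before granting access."""
    allowed = APPROVED_AI_TOOLS.get(tool)
    return allowed is not None and data_classification in allowed

# Example checks an access-request workflow might run
print(is_use_permitted("enterprise-copilot", "internal"))      # True
print(is_use_permitted("enterprise-copilot", "confidential"))  # False
print(is_use_permitted("unapproved-chatbot", "public"))        # False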

3. Make AI Governance Sustainable

Even with strong policies and roadmaps in place, employees will continue to use AI in ways that aren’t formally approved. The goal for security leaders shouldn’t be to ban AI, but to make responsible use the easiest and most attractive option. That means equipping employees with enterprise-grade AI tools, whether purchased or homegrown, so they do not need to reach for insecure alternatives. In addition, it means highlighting and reinforcing positive behaviors so that employees see value in following the guardrails rather than bypassing them.

Sustainable governance also stems from Utilizing AI and Protecting AI, two pillars of the SANS Institute’s recently published Secure AI Blueprint. To govern AI effectively, CISOs should empower their SOC teams to effectively utilize AI for cyber defense – automating noise reduction and enrichment, validating detections against threat intelligence, and ensuring analysts remain in the loop for escalation and incident response. They should also ensure the right controls are in place to protect AI systems from adversarial threats, as outlined in the SANS Critical AI Security Guidelines.
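As a hedged sketch of the Utilizing AI pillar (not a SANS-prescribed implementation), the Python example below triages alerts by suppressing likely noise with an assumed model score, enriching the remainder through a placeholder threat-intelligence lookup, and queuing the results for analyst review so a human stays in the loop. The scorer, lookup, and thresholds are stand-ins.

from typing import Dict, List

# Placeholder components: in practice these would wrap a real model and a real TI platform.
def model_noise_score(alert: Dict) -> float:
    """Assumed ML scorer returning the probability that an alert is benign noise (0.0-1.0)."""
    return 0.9 if alert.get("signature") == "known-benign-scanner" else 0.2

def threat_intel_lookup(indicator: str) -> Dict:
    """Placeholder for a threat-intelligence enrichment call."""
    return {"indicator": indicator, "reputation": "suspicious", "related_campaigns": []}

def triage(alerts: List[Dict], noise_threshold: float = 0.8) -> List[Dict]:
    """Auto-suppress likely noise, enrich the rest, and queue them for analyst review."""
    escalation_queue = []
    for alert in alerts:
        if model_noise_score(alert) >= noise_threshold:
            alert["disposition"] = "auto-suppressed"   # logged, not silently dropped
            continue
        alert["intel"] = threat_intel_lookup(alert["source_ip"])
        alert["disposition"] = "analyst-review"        # human stays in the loop
        escalation_queue.append(alert)
    return escalation_queue

alerts = [
    {"source_ip": "203.0.113.10", "signature": "known-benign-scanner"},
    {"source_ip": "198.51.100.7", "signature": "suspicious-powershell"},
]
for item in triage(alerts):
    print(item["source_ip"], item["disposition"], item["intel"]["reputation"])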

Learn More at SANS Cyber Defense Initiative 2025

This December, SANS will be offering LDR514: Security Strategic Planning, Policy, and Leadership at SANS Cyber Defense Initiative 2025 in Washington, D.C. This course is designed for leaders who want to move beyond generic governance advice and learn how to build business-driven security programs that steer organizations to safe AI adoption. It will cover how to create actionable policies, align governance with business strategy, and embed security into culture so you can lead your enterprise through the AI era securely.

If you’re ready to turn AI governance into a business enabler, register for SANS CDI 2025 here.

Note: This article was contributed by Frank Kim, SANS Institute Fellow.
