Cyberwire Daily
Cyber Security

OpenClaw Exposes the Real Cybersecurity Risks of Agentic AI

By Team-CWD | April 17, 2026 | 6 Mins Read


One of today’s hot topics in infosec is Agentic AI. For senior leaders it looks like magic: reduce your headcount, be more efficient and move more quickly. But does the hype match the reality? And do business leaders understand the security risks?

Agentic AI typically involves one AI system orchestrating multiple other tools or agents to execute a chain of tasks. In more advanced deployments, agents operate autonomously, selecting which tools to use and how to complete an objective without human intervention. While this architecture can drive efficiency, it also introduces a fragmented and dynamic attack surface and, in some organizations, a loss of control.
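The orchestration pattern described above can be sketched in a few lines. This is a deliberately minimal illustration, not OpenClaw’s actual design: the tool names and the naive keyword-based planner are assumptions chosen for clarity. The point is that the agent, not a human, decides which tools run.

```python
# Minimal sketch of an agentic loop: an orchestrator selects tools to
# satisfy an objective. Tool names and the keyword "planner" are
# illustrative assumptions, not a real framework's API.

def send_email(task: str) -> str:
    return f"email sent for: {task}"

def query_calendar(task: str) -> str:
    return f"calendar checked for: {task}"

TOOLS = {"email": send_email, "calendar": query_calendar}

def run_agent(objective: str) -> list[str]:
    results = []
    # The agent, not a human, decides which tools to invoke --
    # this autonomy is what widens the attack surface.
    for name, tool in TOOLS.items():
        if name in objective.lower():
            results.append(tool(objective))
    return results

print(run_agent("Check my calendar and email the summary"))
```

Every tool the agent can reach is another set of credentials and permissions an attacker inherits if the orchestrator is compromised.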

Without effective governance, visibility and control, risks can escalate rapidly. Until recently, these risks were largely theoretical; however, the OpenClaw investigation shows how quickly those concerns can become real, and how quickly regulators can get involved.

The OpenClaw Exposure

OpenClaw was built in late 2025 as a “weekend project” by its author, Peter Steinberger, and quickly gained traction. Steinberger said that his GitHub repository attracted around 2 million visitors in a single week, with many developers incorporating the code into their Agentic AI infrastructure.

However, on 9 February 2026, a report identified significant vulnerabilities. Researchers discovered more than 42,000 unique IP addresses hosting exposed OpenClaw control panels across 82 countries, many with full system access.

The report identified almost 50,000 instances where devices appeared vulnerable to remote code execution (RCE). In practical terms, this could allow an attacker to exploit the OpenClaw gateway to take control of the affected system.
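The kind of exposure the researchers measured can be approximated with a simple reachability check: does a host answer an unauthenticated HTTP request on a management port? The port and path below are assumptions for illustration, not confirmed OpenClaw defaults, and such probes should only ever be run against systems you own or are authorized to test.

```python
# Hedged sketch: flag a host whose web control panel answers without
# credentials. Port 8080 and the root path are illustrative assumptions.
import urllib.request

def panel_exposed(host: str, port: int = 8080, path: str = "/") -> bool:
    url = f"http://{host}:{port}{path}"
    try:
        with urllib.request.urlopen(url, timeout=3) as resp:
            # An HTTP 200 with no authentication suggests an open panel.
            return resp.status == 200
    except Exception:
        # Refused, timed out, or non-HTTP: treat as not exposed.
        return False
```

Internet-wide scans of this shape are how researchers arrive at figures like “42,000 unique IP addresses across 82 countries.”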

OpenClaw deployments were heavily concentrated across major cloud and hosting providers. Depending on configuration, these vulnerabilities could also allow threat actors to access connected third-party services, including email, calendars, chat applications, social media and browser sessions.

Further concerns emerged when a cybersecurity investigation reportedly identified a misconfigured database exposing approximately 1.5 million authentication tokens, around 35,000 email addresses, and private communications between AI agents. Taken together, these issues point not just to isolated weaknesses, but to broader challenges around access control, credential management and system design.
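Finding leaked tokens and addresses in an exposed database dump is typically done with pattern matching. The sketch below uses generic patterns (long alphanumeric strings for tokens, a simple email regex); the actual OpenClaw token format is not public, so these expressions are assumptions for illustration only.

```python
# Naive secret scanning over a text dump: flag token-like strings and
# email addresses. The token pattern is a generic assumption, not
# OpenClaw's real token format.
import re

TOKEN_RE = re.compile(r"\b[A-Za-z0-9_\-]{32,}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def scan_dump(text: str) -> dict:
    return {
        "tokens": TOKEN_RE.findall(text),
        "emails": EMAIL_RE.findall(text),
    }

sample = "user=alice@example.com token=a1B2c3D4e5F6g7H8i9J0k1L2m3N4o5P6"
print(scan_dump(sample))
```

The same technique, pointed at a misconfigured database, is how an investigator (or an attacker) turns one exposure into 1.5 million usable credentials.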

Regulatory Concerns Are Emerging

Regulators have already begun to respond. On 12 February 2026, the Dutch data protection authority, Autoriteit Persoonsgegevens (AP), warned users and organizations against using OpenClaw and similar experimental systems. It noted that such open-source tools may not meet basic security requirements and advised against deploying them on systems containing sensitive or confidential data.

The AP reminded organizations that it has powers under the GDPR to intervene: regulators can suspend processing, launch dawn raids or levy fines. We have seen all three options used to police AI.

The AP’s warning covers the use of tools like OpenClaw in environments holding access credentials, financial information, employee data, private documents or identity records. The AP also emphasized that local deployment does not guarantee security, a point that remains widely misunderstood in practice.

Why Uninstalling OpenClaw is Not a Solution

For many organizations, fixing the risks associated with OpenClaw is not as simple as uninstalling the software. One challenge is visibility: some companies may not know whether OpenClaw has been deployed, as the tool may have been adopted by developers or staff experimenting with AI tools without formal approval or oversight.

Shadow AI risk is already significant. A Microsoft study suggested that 71% of UK employees admitted using unapproved AI tools at work. Given the rapid adoption of AI since then, the true figure could now be higher.

OpenClaw also integrates with widely used communication and collaboration platforms, including WhatsApp, Telegram, Discord, Slack and Teams. If OpenClaw has been linked to multiple applications, manually resetting credentials and access tokens across those services could be a substantial task.
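When an agent is linked to many platforms, the remediation work scales with the number of integrations. A minimal sketch of that task, assuming the service list above, is an inventory that emits one revocation action per linked service; the actual revocation step differs per platform and is deliberately left as a placeholder here.

```python
# Sketch: turn a list of linked services into a token-rotation checklist.
# The service names mirror the integrations named in the article; real
# revocation requires each platform's own token-management API.

SERVICES = ["WhatsApp", "Telegram", "Discord", "Slack", "Teams"]

def rotation_plan(linked: list[str]) -> list[str]:
    """Emit one action item per recognized linked service."""
    return [f"Revoke and reissue tokens for {s}" for s in linked if s in SERVICES]

for step in rotation_plan(["Slack", "Teams"]):
    print(step)
```

Even this trivial bookkeeping makes the scale problem visible: five platforms across hundreds of users quickly becomes thousands of tokens to rotate.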

Practical Steps Organizations Should Consider

For many organizations, the OpenClaw case is a reminder that AI innovation must be matched with appropriate risk management. Some practical steps include:

  • Looking at Technical Settings: Organizations need to restrict the use of applications like OpenClaw on their networks. Tools are available to assess Shadow AI risk; organizations that have them should add OpenClaw to the list of prohibited applications. It has been reported that it is currently not possible to delete an OpenClaw account, at least through common settings, so organizations that believe they have been exposed may want to take specialist advice.
  • Check Your Socials: It has been reported that OpenClaw collects X (formerly Twitter) usernames, display names and passwords. A threat actor could therefore use OpenClaw to gain access to the organization’s social networking output, which can create reputational risk and expose the organization to phishing attacks.
  • Literacy is Key: AI literacy has become a regulatory expectation, including under the EU AI Act, and staff need to understand both the opportunities and risks of AI systems.
  • Take Measures to Protect Against Shadow AI: Whilst a literacy program will be part of this, organizations may want to include traditional software solutions such as data loss prevention (DLP) software, and specialist Shadow AI monitoring and blocking services.
  • Look at Contracts and Developer Due Diligence: For some organizations the issue may stem from sub-contracted developers, so they need to ensure contractual protections are in place to meet their compliance and regulatory obligations. This might also include specific insurance policies, since developers with just one or ten employees are unlikely to have the financial means to pay up when things go wrong.
  • Do a Proper Data Protection Impact Assessment: This isn’t just common sense but may well be a legal requirement. Whilst organizations want to move quickly in the new AI world, sometimes it is necessary to step back and check whether legal and compliance obligations are being considered.
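The first item in the list above, restricting unapproved applications at the network level, often starts with matching outbound DNS queries against a blocklist. A minimal sketch follows; the domain names and the log format (client IP, queried domain, record type) are hypothetical examples, not confirmed OpenClaw infrastructure.

```python
# Sketch of a "technical settings" control: flag DNS log entries whose
# queried domain is on a blocklist of unapproved AI tools. Domains and
# log format are illustrative assumptions.

BLOCKLIST = {"openclaw.example", "unapproved-ai.example"}

def flag_shadow_ai(dns_log: list[str]) -> list[str]:
    """Return log lines whose queried domain (second field) is blocklisted."""
    hits = []
    for line in dns_log:
        parts = line.split()
        if len(parts) >= 2 and parts[1].lower() in BLOCKLIST:
            hits.append(line)
    return hits

log = ["10.0.0.5 openclaw.example A", "10.0.0.6 github.com A"]
print(flag_shadow_ai(log))
```

In practice the same blocklist would feed a DNS filter or secure web gateway rather than an after-the-fact log scan, but the detection logic is the same.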

A Broader Lesson

Agentic AI has the potential to transform the way organizations operate. However, the OpenClaw exposure highlights how quickly innovation can outpace governance. For security professionals, the issue is not just a single vulnerable tool, but a broader shift towards autonomous, highly integrated systems operating with extensive permissions and limited oversight. Without appropriate controls, these systems can introduce significant and systemic risk.

Organizations that improve visibility, strengthen governance and invest in AI literacy will be far better placed to realize the benefits of Agentic AI while managing its risks effectively.

Image credit: Stockinq / Shutterstock.com
