Cyberwire Daily

OpenAI and Anthropic LLMs Used in Critical Infrastructure Cyber-Attack

By Team-CWD | May 7, 2026

Commercial large language models (LLMs) were used as part of a cyber-attack which targeted a municipal water and drainage utility provider in Mexico, cybersecurity researchers at Dragos have warned.

A “significant compromise” of the water infrastructure provider’s IT environment escalated into an attempted attack against the organization’s operational technology (OT) environment, said a Dragos report published on May 6.

The research suggested that attackers used Anthropic’s Claude AI and OpenAI’s GPT models to aid with planning and conducting the campaign.

The cyber-attack against the water facility in the Monterrey metropolitan area of Mexico took place between December 2025 and February 2026.

Dragos analyzed 350 artifacts associated with the attack, most of them AI-generated malicious scripts used as offensive tooling during the intrusions. The firm found that the adversary leveraged commercially available tools to aid the campaign.

Attribution remains unclear, with no named threat actor publicly identified.

AI Exploited to Accelerate the Attack

Anthropic’s Claude AI was used as “the primary technical executor of the intrusion,” handling prompt-and-response interactions, intrusion planning, and the development and deployment of malicious tools.

Meanwhile, OpenAI’s GPT models were used for what Dragos described as “analytical roles,” as well as processing collected data and generating outputs in Spanish.

The AI models helped the campaign operate faster and more efficiently, allowing the attackers to refine their techniques in real time based on what was and was not working.

According to Dragos, Claude was also deployed to analyze vendor documentation for the SCADA systems at the water facility, and was even used to generate lists of default and known login credentials for brute-force attacks against those systems.
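Defenders can invert this technique: the same kind of vendor-default credential list an attacker would brute-force with can be used to audit accounts first. A minimal sketch — the credential set, account records, and function name below are hypothetical illustrations, not material from the Dragos report:

```python
# Hypothetical audit: flag accounts still configured with a known
# factory-default user/password pair. KNOWN_DEFAULTS stands in for a
# real vendor-default credential list.
KNOWN_DEFAULTS = {
    ("admin", "admin"),
    ("admin", "1234"),
    ("operator", "operator"),
}

def find_default_credentials(accounts):
    """Return the subset of accounts whose (user, password) pair is a known default."""
    return [a for a in accounts if (a["user"], a["password"]) in KNOWN_DEFAULTS]

if __name__ == "__main__":
    configured = [
        {"device": "plc-01", "user": "admin", "password": "admin"},
        {"device": "hmi-02", "user": "admin", "password": "Str0ng!Passphrase"},
    ]
    for acct in find_default_credentials(configured):
        print(f"{acct['device']}: default credentials for user '{acct['user']}'")
```

In practice such a check would run against an asset inventory rather than a hard-coded list, but the lookup itself is this simple — which is exactly why default credentials are such a cheap target.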

While the attempted breach of the OT system was ultimately unsuccessful, Dragos pointed out that the AI-assisted campaign should serve as a warning about how commercial AI models can be exploited by threat actors. In this case, the attackers appeared to have no prior experience with targeting OT.

“This investigation showed how commercial AI tools assisted an adversary with no prior objective in OT targeting to identify an OT environment and develop and refine a viable access pathway to OT infrastructure,” Jay Deen, associate principal adversary hunter at Dragos, wrote in the blog post.

“These findings demonstrate how the adoption of commercial AI tools as an intrusion aid has made OT more visible to adversaries already operating within IT,” he added.

To help counter cyber-attacks against OT, Dragos recommended that security teams put secure remote access policies in place and apply strong authentication controls to limit unauthorized progression into OT environments.
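One common authentication control of the kind Dragos describes is account lockout over a sliding window of failed logins, which blunts exactly the brute-force attempts seen in this campaign. A minimal sketch — the class name and thresholds are illustrative assumptions, not part of Dragos's guidance:

```python
import time

class LockoutPolicy:
    """Lock an account after too many failed logins inside a sliding window."""

    def __init__(self, max_failures=5, window_s=300, lockout_s=900):
        self.max_failures = max_failures
        self.window_s = window_s
        self.lockout_s = lockout_s
        self._failures = {}      # user -> timestamps of recent failures
        self._locked_until = {}  # user -> time at which the lock expires

    def record_failure(self, user, now=None):
        now = time.time() if now is None else now
        # Keep only failures inside the sliding window, then add this one.
        recent = [t for t in self._failures.get(user, []) if now - t < self.window_s]
        recent.append(now)
        self._failures[user] = recent
        if len(recent) >= self.max_failures:
            self._locked_until[user] = now + self.lockout_s

    def is_locked(self, user, now=None):
        now = time.time() if now is None else now
        return now < self._locked_until.get(user, 0.0)
```

A real deployment would persist this state and alert on lockouts rather than silently blocking — a burst of lockouts across SCADA accounts is itself a strong intrusion signal — but the sliding window is the core idea.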

The Dragos findings build on previous research by Gambit Security into attacks against government and infrastructure operators in Mexico, which exposed the personal data of millions of people.

Infosecurity has contacted both Anthropic and OpenAI for comment.


