Personal LLM Accounts Drive Shadow AI Data Leak Risks

By Team-CWD | January 7, 2026


The rising use of generative AI tools such as large language models (LLMs) in the workplace is increasing the risk of cybersecurity violations, as organizations struggle to keep tabs on how employees are using them.

One of the key challenges facing IT and security teams is the continued use of Shadow AI – employees using personal accounts for tools such as ChatGPT, Google Gemini, and Microsoft Copilot at work.

According to Netskope’s Cloud and Threat Report for 2026, nearly half (47%) of people using generative AI tools in the workplace are using personal accounts and applications to do so.

With this comes a lack of visibility into, and control over, how employees are using these personal generative AI accounts at work.

The result is an increase in cybersecurity risks and data-policy violations, and with them the risk of sensitive corporate information being leaked.

Meanwhile, the number of prompts being sent to generative AI applications is on the rise.

“While the number of users tripled on average, the amount of data being sent to SaaS gen AI apps grew sixfold, from 3,000 to 18,000 prompts per month. Meanwhile, the top 25% of organizations are sending more than 70,000 prompts per month, and the top 1% are sending more than 1.4 million prompts per month,” said Netskope in its report.

Generative AI Data Policy Violations Average 223 Per Month

This increase in data being sent to AI tools creates additional security risks for organizations. According to Netskope, the number of known data policy violations resulting from employees using generative AI and LLMs has doubled in the last year – and this is likely an underestimate, given how organizations are struggling to monitor Shadow AI usage.

“In the average organization, both the number of users committing data policy violations and the number of data policy incidents has increased twofold over the past year, with an average of 3% of gen AI users committing an average of 223 gen AI data policy violations per month,” said the report.

It also warns that the more enthusiastically organizations and their employees adopt AI applications and services, the higher the risk of a data policy violation: the top 25% of organizations by generative AI use saw an average of 2,100 incidents a month.

These incidents involve sensitive data being sent to AI tools – including source code, confidential data, intellectual property and even login credentials – leading to an increase in accidental data exposure and compliance risks.
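To make that concrete, the sketch below shows the kind of pattern matching a data-policy control might apply to outbound prompts before they leave the network. It is a minimal illustration only – the pattern names, rules, and blocking behavior are assumptions for demonstration, not Netskope's engine or any vendor's product.

```python
import re

# Illustrative patterns only -- real DLP engines ship far richer rule sets.
SENSITIVE_PATTERNS = {
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA |EC )?PRIVATE KEY-----"),
    "password_assignment": re.compile(r"(?i)\bpassword\s*[:=]\s*\S+"),
}

def flag_sensitive_content(prompt: str) -> list[str]:
    """Return the names of any sensitive-data patterns found in a prompt."""
    return [name for name, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]

# Example: a prompt pasting a config file would be flagged before submission.
hits = flag_sensitive_content("Debug this config: password = hunter2")
print(hits)  # ['password_assignment']
```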

This is especially the case when employees use personal accounts which, without proper procedures in place, security teams might not even be aware are being used.

There’s also the risk that attackers take advantage of information entered into LLMs, using carefully crafted prompts to draw out sensitive information they can exploit directly or use to make targeted campaigns more customized and efficient.

As the use of generative AI and LLMs continues to grow, organizations need effective policies in place to maximize visibility of AI tool usage across the network, and to educate employees on what constitutes risky use of AI.
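As one illustration of what such visibility could look like, the sketch below tallies per-user requests to known generative AI domains from a web proxy log. The log format, column names, and domain lists here are assumptions for the example; in practice this monitoring is typically done with a secure web gateway or CASB rather than ad hoc scripts.

```python
import csv
from collections import Counter

# Hypothetical domain lists -- substitute your own mappings.
GENAI_DOMAINS = {"chat.openai.com", "gemini.google.com", "copilot.microsoft.com"}
SANCTIONED = {"example-corp.openai.azure.com"}  # assumed enterprise tenant

def tally_personal_genai_use(log_path: str) -> Counter:
    """Count per-user requests to gen AI domains outside sanctioned tenants,
    reading a CSV proxy log with 'user' and 'host' columns."""
    counts = Counter()
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            host = row["host"].lower()
            if host in GENAI_DOMAINS and host not in SANCTIONED:
                counts[row["user"]] += 1
    return counts

# Example: surface the heaviest personal-account users for follow-up training.
# for user, n in tally_personal_genai_use("proxy.csv").most_common(10):
#     print(user, n)
```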

“The combination of the surge in data policy violations and the high sensitivity of the data regularly being compromised should be a primary concern for organizations that haven’t taken initiatives to bring AI risk under control,” said Netskope.

“Without stronger controls, the probability of accidental leakage, compliance failures, and downstream compromise continues to rise month over month.”

While data policy violations via generative AI remain a significant risk, it appears organizations are starting to take notice: the percentage of employees using personal AI accounts in the workplace has dropped from 78% to 47% over the past twelve months, suggesting data governance policies are beginning to curb the use of Shadow AI.


