
Why AI’s Rise Makes Protecting Personal Data More Critical Than Ever

By Team-CWD
February 6, 2026


January 28 marks Data Privacy Day. Founded in 2007 by the Council of Europe, the day aims to raise public awareness about the right to personal data protection and privacy.

Now in its 19th year, Data Privacy Day faces a world much changed from the one in which it originated: the first iPhone was revealed just two weeks before the first Data Privacy Day, setting off the smartphone revolution – and a revolution in how we access and interact with the internet, both at work and at home.

In 2026, we are in the middle of another technological revolution, this time brought about by the rise of artificial intelligence (AI).

AI isn’t new; it has been around for many years, commonly labelled machine learning and often used for applications like scientific research.

But when OpenAI launched ChatGPT in late 2022, everything changed – the Large Language Model (LLM) opened AI use to the wider world. It didn’t take long for Microsoft, Google and many other technology companies and software vendors to release their own LLMs and AI tools.

The advantage of these AI tools, we’re told, is that using them makes us more efficient. For example, people can use AI tools embedded in Microsoft 365 or third-party LLMs to help write emails, summarize documents and more. But are employees keeping data privacy in mind as they do so?

AI Agents and LLMs as a Data Privacy Risk

According to The LayerX Enterprise AI & SaaS Data Security Report 2025, 77% of employees said they have pasted company information into AI or LLM services, while 82% of those who have done so said they used a personal account.

This creates cybersecurity and data privacy risks on two fronts. The first is that the information employees are feeding to an LLM as prompts potentially contains sensitive corporate data. That means it runs the risk of being leaked.

In theory, AI companies have guardrails in place to ensure that any information used to prompt an LLM, no matter the context or the source, cannot be reverse engineered to reveal the data.

However, attackers have been known to bypass such security features using prompt injection: malicious queries disguised as legitimate input that manipulate the model into revealing data it shouldn’t.

Secondly, the use of personal ChatGPT, Claude, Gemini or Copilot accounts presents a problem for enterprises: corporate information is being transferred and uploaded to models via accounts that aren’t monitored by security teams.

That doesn’t just create a data privacy risk around the data being uploaded to LLMs. If the sensitive corporate information used to make those queries is left sitting in employees’ personal email or cloud storage accounts, the business is also at risk of a data breach should those accounts be hacked.

In each of these scenarios, the employee is using AI to be more efficient at work: they’re not actively attempting to jeopardize data privacy, but without the right tools and rules in place, there are risks.

Securing AI in the Enterprise

One step that is key to ensuring data privacy and data security is thoroughly auditing what data an organization holds and where it is stored. Without this knowledge, ensuring data privacy is a significant challenge.

“The area where more work is required is putting technology-based controls around AI policies and procedures. That is still one of the best ways to make sure they’re being adhered to,” Kamran Ikram, senior managing director and cyber security and resiliency lead at Accenture, told Infosecurity.

“That starts with having a good inventory of the data that exists. Because if you don’t know if it exists, you don’t know if it’s being used,” he added.

Organizations should also ensure that appropriate protections are in place to identify and prevent potential data privacy breaches.

“Have the right controls around that data to make sure only the right people can access and use it. There’s another benefit of that which is if a threat actor infiltrates your organization, if you have the right controls, it limits what they can see and access,” said Ikram.

Technical controls on how data is used with AI tools are a must for data privacy, but they aren’t the only protection that should be put in place. It’s also vital for organizations to provide training to ensure that employees know how to use AI appropriately – and know what counts as inappropriate, risky use with implications for data privacy.

“Focus on the employees. Empower the workforce to be able to use these tools by giving them that proper guidance,” Chris Gow, senior director of EU public policy and head of government affairs at Cisco, told Infosecurity.

For Gow, it’s also important that businesses that expect staff to use AI tools provide them with enterprise versions of those tools, to reduce the risk of data leaks or breaches via unauthorized personal AI applications.

“As a company you can get enterprise versions of these tools: that’s going to encourage your employees to use them, rather than looking for shadow AI externally,” he said.

As with other areas of corporate data privacy, such as GDPR compliance, training should also form part of the strategy to ensure that staff are informed about how to appropriately handle data. They should also be provided with guides on what to do if they suspect a potential breach of data via an AI tool.

As AI becomes more embedded in workplaces, applications, services and society, people will expect their data to be handled carefully. Having full-fledged data privacy plans in place to ensure that is the case is therefore vital.

“Curating your data and having a privacy program in place isn’t just a compliance cost. There are clear benefits from that and it’s something that will be recognized more in an AI world,” said Gow.

Conclusion 

At a time when many businesses are moving swiftly to adopt AI into their ecosystems, Data Privacy Day should act as a catalyst for those organizations to think about the privacy and security issues which could occur if AI solutions are implemented incorrectly.

As well as understanding what data the organization holds, where it is and how it is used, data privacy leaders should ensure that their staff are trained and knowledgeable about how to appropriately handle data when using AI.


