News

OpenAI Launches ChatGPT Health with Isolated, Encrypted Health Data Controls

By Team-CWD · January 12, 2026


Artificial intelligence (AI) company OpenAI on Wednesday announced the launch of ChatGPT Health, a dedicated space that allows users to have conversations with the chatbot about their health.

To that end, the sandboxed experience offers users the optional ability to securely connect medical records and wellness apps, including Apple Health, Function, MyFitnessPal, Weight Watchers, AllTrails, Instacart, and Peloton, to get tailored responses, lab test insights, nutrition advice, personalized meal ideas, and suggested workout classes.

The new feature is rolling out for users with ChatGPT Free, Go, Plus, and Pro plans outside of the European Economic Area, Switzerland, and the U.K.

“ChatGPT Health builds on the strong privacy, security, and data controls across ChatGPT with additional, layered protections designed specifically for health — including purpose-built encryption and isolation to keep health conversations protected and compartmentalized,” OpenAI said in a statement.

Stating that over 230 million people globally ask health and wellness-related questions on the platform every week, OpenAI emphasized that the tool is designed to support medical care, not to replace it, and that it is not a substitute for diagnosis or treatment.

The company also highlighted the privacy and security features built into the Health experience:

  • Health operates in its own silo with enhanced privacy and its own memory, safeguarding sensitive data with “purpose-built” encryption and isolation
  • Conversations in Health are not used to train OpenAI’s foundation models
  • Users who start a health-related conversation in regular ChatGPT are prompted to switch over to Health for additional protections
  • Health information and memories are not used to contextualize non-Health chats
  • Conversations outside of Health cannot access files, conversations, or memories created within Health
  • Apps can connect to users’ health data only with their explicit permission, even if they are already connected to ChatGPT for conversations outside of Health
  • All apps available in Health must meet OpenAI’s privacy and security requirements, such as collecting only the minimum data needed, and undergo additional security review before they can be included in Health
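Taken together, these rules describe a per-space isolation model: memories and app grants live inside the space that created them and are invisible everywhere else. The toy sketch below illustrates that model only; it is not OpenAI's implementation, and every class, method, and value in it (`Space`, `grant_app`, `context_for`, the sample memory and app names) is an illustrative assumption.

```python
# Toy model of the isolation rules described above -- NOT OpenAI's actual
# implementation. All names and data here are illustrative assumptions.

class Space:
    """A conversation space with its own isolated memory and app grants."""

    def __init__(self, name: str, trainable: bool):
        self.name = name
        self.trainable = trainable          # may conversations train models?
        self.memories: list = []            # memories created in this space
        self.app_grants: set = set()        # apps explicitly granted here

    def grant_app(self, app: str) -> None:
        # Permission is per-space: granting an app in the general space
        # does not grant it access to Health data, and vice versa.
        self.app_grants.add(app)

    def app_can_read(self, app: str) -> bool:
        return app in self.app_grants

    def remember(self, fact: str) -> None:
        self.memories.append(fact)

    def context_for(self, requesting_space: "Space") -> list:
        # Memories are visible only to the space that created them.
        return list(self.memories) if requesting_space is self else []


general = Space("general", trainable=True)
health = Space("health", trainable=False)   # Health chats never train models

health.remember("lab result: LDL 95 mg/dL")
general.grant_app("AllTrails")              # app connected outside Health

# A regular chat cannot see Health memories...
assert health.context_for(general) == []
# ...but Health itself can.
assert health.context_for(health) == ["lab result: LDL 95 mg/dL"]
# An app connected to general ChatGPT is not automatically granted in Health.
assert not health.app_can_read("AllTrails")
```

The key design point the bullets imply is that isolation is enforced at the boundary of each space, not by filtering at query time: a non-Health chat never receives Health context in the first place.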

Furthermore, OpenAI pointed out that it has evaluated the model that powers Health against clinical standards using HealthBench, a benchmark the company released in May 2025 to better measure the capabilities of AI systems for health, with a focus on safety, clarity, and escalation of care.

“This evaluation-driven approach helps ensure the model performs well on the tasks people actually need help with, including explaining lab results in accessible language, preparing questions for an appointment, interpreting data from wearables and wellness apps, and summarizing care instructions,” it added.

OpenAI’s announcement follows an investigation from The Guardian that found Google AI Overviews to be providing false and misleading health information. OpenAI and Character.AI are also facing several lawsuits claiming their tools drove people to suicide and harmful delusions after confiding in them. A report published by SFGate earlier this week detailed how a 19-year-old died of a drug overdose after trusting ChatGPT for medical advice.


