AI Watch

ChatGPT Health: When AI Enters Your Medical Records

OpenAI launches a dedicated version of ChatGPT for healthcare. Between promises of patient empowerment and unprecedented concentration of sensitive data, this announcement deserves critical examination.

The Facts

What OpenAI Announces

OpenAI is launching ChatGPT Health, a dedicated experience within ChatGPT allowing users to connect their health data for personalized responses.

Massive Pre-existing Usage

According to OpenAI, over 230 million people already ask health and wellness-related questions on ChatGPT every week. ChatGPT Health aims to formalize and secure this existing usage.

Main Features

  • Medical data connection: Electronic Health Records (EHR), test results, consultation reports
  • Wellness app integration: Apple Health, Function, MyFitnessPal, Weight Watchers, Peloton, AllTrails, Instacart
  • Announced use cases: understanding test results, preparing medical appointments, tracking diet and physical activity, comparing health insurance options

Architecture and Privacy

OpenAI highlights several protections, illustrated in a toy sketch after the list:

  • Isolated space: health conversations are separated from other chats
  • Dedicated encryption and enhanced isolation
  • Not used for training: conversations are not used to train foundation models
  • Compartmentalized memories: health context doesn’t “leak” into regular conversations
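
None of this is externally verifiable: OpenAI has not published the system’s internals. But the claims describe a recognizable design pattern, compartmentalized storage with a dedicated key per compartment and a training-exclusion flag. Below is a minimal, purely illustrative Python sketch of that pattern; every name and structure in it is hypothetical, not OpenAI’s code.

```python
# Purely illustrative sketch of the *claims* above: isolated storage,
# a dedicated encryption key per compartment, exclusion from training,
# and memories that never cross compartments. All names are hypothetical.
# Requires the third-party `cryptography` package.
from dataclasses import dataclass, field

from cryptography.fernet import Fernet


@dataclass
class Compartment:
    """An isolated conversation store with its own encryption key."""
    name: str
    trainable: bool  # may this compartment ever feed model training?
    _fernet: Fernet = field(init=False, repr=False)
    _messages: list[bytes] = field(init=False, default_factory=list, repr=False)

    def __post_init__(self) -> None:
        # A dedicated key per compartment: compromising the general-chat
        # key would not decrypt health data, and vice versa.
        self._fernet = Fernet(Fernet.generate_key())

    def append(self, text: str) -> None:
        self._messages.append(self._fernet.encrypt(text.encode()))

    def read_all(self) -> list[str]:
        return [self._fernet.decrypt(m).decode() for m in self._messages]


class ConversationStore:
    """Routes reads so health context cannot 'leak' into regular chats."""

    def __init__(self) -> None:
        self.general = Compartment("general", trainable=True)
        self.health = Compartment("health", trainable=False)

    def memory_for(self, context: str) -> list[str]:
        # Memories resolve strictly inside the requesting compartment.
        return (self.health if context == "health" else self.general).read_all()

    def training_corpus(self) -> list[str]:
        # Only compartments flagged trainable are ever exported.
        return [m for c in (self.general, self.health)
                if c.trainable for m in c.read_all()]


store = ConversationStore()
store.health.append("My HbA1c is 6.1% -- should I worry?")
store.general.append("Draft an email to my landlord.")
assert store.memory_for("general") == ["Draft an email to my landlord."]
assert store.training_corpus() == ["Draft an email to my landlord."]
```

The value of the pattern, if implemented as promised: a breach of the general store would not decrypt health messages, and the training export never touches the health compartment. Whether OpenAI’s actual architecture matches this is exactly what cannot be audited from the outside.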

Development and Evaluation

  • 260 physicians from 60 countries consulted over 2+ years
  • 600,000 feedback entries on model responses
  • HealthBench: a clinical evaluation framework developed with practitioners (rubric scoring sketched below)
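
OpenAI describes HealthBench as scoring model answers against physician-written rubric criteria that carry positive or negative point values, with a grader model judging each criterion. To give a rough intuition of rubric scoring, here is a toy sketch; the criteria, weights, and the keyword matcher standing in for the grader are all invented for illustration.

```python
# Toy illustration of rubric-style scoring in the spirit of HealthBench.
# Invented criteria and weights; per OpenAI's description, the real
# benchmark uses physician-written rubrics and a model-based grader,
# not keyword matching.
from dataclasses import dataclass


@dataclass
class Criterion:
    description: str
    points: int   # negative points penalize harmful content
    keyword: str  # toy stand-in for a model-graded judgment


def score_response(response: str, rubric: list[Criterion]) -> float:
    """Fraction of achievable rubric points earned, floored at zero."""
    earned = sum(c.points for c in rubric if c.keyword in response.lower())
    achievable = sum(c.points for c in rubric if c.points > 0)
    return max(earned / achievable, 0.0)


rubric = [
    Criterion("Recommends confirming with a clinician", 5, "doctor"),
    Criterion("Explains the result in plain language", 3, "normal range"),
    Criterion("Asserts a definitive diagnosis (harmful)", -5, "you have"),
]
answer = ("Your value sits near the normal range; "
          "please confirm what it means with your doctor.")
print(score_response(answer, rubric))  # 1.0: both positive criteria met
```

Note the negative criterion: a response loses points for asserting a diagnosis. This is one way an evaluation can encode the “support, not replace” boundary the announcement insists on.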

Explicit Limitations

OpenAI specifies that ChatGPT Health:

  • Does not establish diagnoses
  • Does not propose treatments
  • Aims to “support, not replace, healthcare professionals”

Availability

  • Limited access launch (waitlist)
  • EHR integrations and some app connections limited to the United States
  • Apple Health connection requires iOS

Implications and Open Questions

Mental Health: The Conspicuous Absence

OpenAI’s announcement has plenty to say about “health and wellness”: blood test results, nutrition, physical activity. But it makes no mention of mental health, psychotherapy, or psychological support.

Is this silence legitimate caution given the complexity of the human psyche, or an implicit admission that AI isn’t ready for this terrain? And among those 230 million weekly questions, how many concern anxiety, depression, or relationship difficulties?

Question for mental health professionals: Should we be relieved by this apparent exclusion of the psychological field, or worried that it will be circumvented by users anyway?

The Privacy Paradox

OpenAI promises enhanced protection for health data: dedicated encryption, isolation, non-use for training. Yet the business model rests on an unprecedented concentration of sensitive medical data in the hands of a single private American company.

Users are invited to connect:

  • Their complete medical records
  • Their biological test results
  • Their sleep, activity, and nutrition data
  • Their consultation history

This centralization creates a single point of vulnerability. In case of a security breach, the consequences would be considerable. And even without a breach, the question of long-term data governance remains open.

Question for debate: Can we entrust our most intimate health data to a company whose business model and jurisdiction (the United States, outside the GDPR) largely escape European patients’ control?

“Support, Not Replace”: A Rhetoric to Question

This phrase, repeated by OpenAI, has become the mantra of all tech companies entering the medical field. It deserves critical examination.

De facto substitution already exists. With 230 million people asking health questions each week, ChatGPT was already a major player in medical information well before this announcement. ChatGPT Health merely formalizes and amplifies an existing phenomenon.

Empowerment can become dependency. The “better-informed” patient who “prepares their questions” with AI arrives at the office with an already-formed interpretive frame. This profoundly changes the dynamic of the consultation: does the caregiver become a mere validator of AI hypotheses?

Question for healthcare professionals: How do we welcome a patient who arrives with a ChatGPT summary of their medical file? Should we celebrate it as time saved, or worry about a pre-installed confirmation bias?

The Permanent Medicalization of Daily Life

ChatGPT Health encourages continuous data connection: sleep, physical activity, nutrition, biological analyses. This permanent self-surveillance raises questions.

The quantified self, pushed to the extreme, can generate:

  • Performance anxiety (did I sleep enough? walk enough?)
  • Pathologization of normal variations (my heart rate varied by 3 bpm, is that serious?)
  • Dependency on external validation (what does the AI say about my data?)

For clinical psychologists, this phenomenon isn’t new — but its industrialization through a tool as accessible as ChatGPT gives it unprecedented scale.

Clinical question: How do we support patients whose anxiety is fueled by AI-assisted health surveillance?

The Irony of AI Psychosis

It is striking that OpenAI is launching a health tool at the very moment media outlets are reporting cases of “AI-induced psychosis”: people developing pathological relationships with chatbots, or even delusional episodes nourished by these interactions.

The ChatGPT Health announcement mentions no specific safeguards to detect or prevent these deviations. Yet a tool that invites users to share their medical intimacy and “personalize their experience” creates the conditions for potentially problematic attachment.

Ethical question: What responsibility does OpenAI bear if a user develops pathological dependency on their “health companion”?

Access Asymmetry

The most advanced features (medical record connection, certain apps) are limited to the United States. This asymmetry raises the question of equity in digital health.

European patients will get a stripped-down version, but will they be protected by the GDPR? Server location, applicable jurisdiction, the right to erasure: all remain gray areas.

Health policy question: Should we wait for a “European ChatGPT Health” compliant with GDPR, or accept dependency on American standards?

What Should Be Debated

In light of this announcement, several questions would merit in-depth debate among healthcare professionals, psychologists, ethicists, and regulators:

  1. Training for caregivers: How do we prepare healthcare professionals to interact with AI-“augmented” patients?

  2. Medical liability: Who is responsible if a patient makes a bad decision based on a ChatGPT Health interpretation?

  3. Access inequalities: Will health AI widen inequalities between “connected” patients and others?

  4. Place of human relationship: What becomes of care when part of the listening, information, and preparation work is delegated to a machine?

  5. Specific regulation: Should consumer health AI tools be subject to the same regulations as medical devices?

  6. Mental health and AI: Should we prohibit, regulate, or support the use of ChatGPT for psychological health questions?


Our Position

At AI & Psychotherapy, we are neither starry-eyed technophiles nor knee-jerk technophobes. ChatGPT Health represents a significant development that deserves both attention and vigilance.

What seems positive to us:

  • OpenAI’s recognition that health requires specific protections
  • Physician involvement in development
  • (Relative) transparency about the tool’s limitations

What concerns us:

  • The total absence of the psychological dimension
  • Concentration of sensitive data with a private actor
  • The risk of anxiety-inducing medicalization of daily life
  • The absence of safeguards against pathological uses

What we call for:

  • A public debate involving mental health professionals
  • Specific European regulation
  • Independent research on the psychological impact of these tools
  • Training for psychologists to support patients who use health AI

This article inaugurates our AI Watch section, dedicated to critical analysis of news at the intersection of AI and health. Our goal: to expose biases, add nuance, and reopen debate.

Keywords

OpenAI, ChatGPT, digital health, health data, ethics