Conversational AI · Clinical Tool · Available in French

ChatGPT

OpenAI — Launched November 2022

At a glance: ChatGPT is the world's most widely used conversational AI, with over 400 million weekly active users as of February 2025. It appears in the majority of the patient and clinician testimonials we have collected. According to a Sentio University survey (2025), 49% of AI users reporting psychological difficulties use it for emotional support, potentially making it the largest "provider" of mental health support in the United States. For clinicians, understanding this tool has become essential.

Identity

Publisher: OpenAI (San Francisco, USA)

Launch: November 30, 2022

Current models: GPT-4o (omni), GPT-5

Type: Multimodal large language model (LLM)

Pricing: Free (GPT-4o mini) / Plus: $20/month / Pro: $200/month

Languages: 50+ languages including French

Access: Web, iOS, Android, API, macOS, Windows

Memory: Yes (across conversations, can be disabled)

What ChatGPT Does (in plain terms)

ChatGPT is a language model: it generates text by statistically predicting the next word (more precisely, the next token), based on training over billions of texts. It does not "understand" in the human sense, but it produces responses that often simulate understanding convincingly.
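To make the mechanism concrete, here is a deliberately toy sketch of that prediction loop in Python. The bigram table and scores are invented for illustration (a real LLM scores tens of thousands of candidate tokens with a neural network), but the loop itself (score candidates, convert scores to probabilities, sample one, append, repeat) is the same idea.

```python
# Toy sketch of next-token prediction, the core loop behind any LLM.
# The "model" here is an invented bigram score table, not a neural network.
import math
import random

# Hypothetical statistics standing in for a trained model: score(next | previous).
BIGRAM_SCORES = {
    "I":    {"feel": 2.0, "am": 1.5, "think": 1.0},
    "feel": {"anxious": 2.5, "alone": 1.5, "better": 1.0},
    "am":   {"tired": 2.0, "fine": 1.0},
}

def softmax(scores):
    """Turn raw scores into a probability distribution."""
    exps = {tok: math.exp(s) for tok, s in scores.items()}
    total = sum(exps.values())
    return {tok: e / total for tok, e in exps.items()}

def generate(start, max_tokens=5):
    tokens = [start]
    for _ in range(max_tokens):
        scores = BIGRAM_SCORES.get(tokens[-1])
        if scores is None:  # no known continuation: stop
            break
        probs = softmax(scores)
        # Sample the next token in proportion to its probability,
        # which is what an LLM does at each step (modulo temperature).
        next_tok = random.choices(list(probs), weights=probs.values())[0]
        tokens.append(next_tok)
    return " ".join(tokens)

print(generate("I"))  # e.g. "I feel anxious"
```

Nothing in this loop checks truth or safety: the model emits whatever is statistically plausible, which is why the risks described below (hallucinations, sycophancy) are structural rather than incidental.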

Accepted inputs

  • Text
  • Images
  • Voice
  • Files (PDF, etc.)

Outputs produced

  • Text
  • Images (DALL-E)
  • Voice
  • Code

Since GPT-4o (May 2024), the model is multimodal: it can converse by voice in real time, analyze images, and generate images via built-in DALL-E. The “memory” feature lets it retain information across conversations (preferences, personal context), strengthening the impression of relational continuity.
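For clinicians working with technical teams, here is what a multimodal call looks like through OpenAI's API via the official Python SDK (`pip install openai`). A minimal sketch, assuming a current `openai` package and an `OPENAI_API_KEY` environment variable; the prompt and image URL are placeholders.

```python
# Minimal sketch: one request mixing text and image input to GPT-4o.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text",
                 "text": "Describe this diagram in one sentence."},
                {"type": "image_url",
                 "image_url": {"url": "https://example.com/diagram.png"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```

Note that the API route does not include ChatGPT's consumer-facing "memory" feature: cross-conversation continuity is a product layer on top of the model, not a property of the model itself.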

Documented Mental Health Uses

ChatGPT was not designed for mental health. It is neither a medical device nor a validated therapeutic application. Yet its spontaneous uses in this domain are massive and documented by a growing body of scientific literature.

Patient and user side

  • Spontaneous emotional support: 49% of AI users with psychological difficulties use it for emotional support (Sentio, 2025). Main reasons: stress/anxiety (64%), interpersonal issues (54%).
  • Introspection and reflective writing: users report using ChatGPT to “externalize their thoughts,” rewrite painful events, or simulate a therapeutic dialogue (Reddit thematic analysis, PMC 2025).
  • Between-session support: some patients use it as a bridge between appointments, particularly for psychoeducation or informal cognitive restructuring.
  • 24/7 availability: round-the-clock access is the most cited draw, followed by anonymity and perceived non-judgment.

Clinician side

  • Clinical writing aid: progress notes, session summaries, referral letters.
  • Session analysis: our benchmark shows that GPT-4o produces structured CBT analyses, with strengths in case conceptualization and limitations in therapeutic alliance assessment.
  • Informal supervision: practitioners use ChatGPT as a “pocket supervisor” to explore clinical hypotheses.
  • Psychoeducation: generating patient-adapted educational materials (metaphors, diagrams, exercises).
  • Crisis support: a documented testimonial from a forensic psychiatry clinician describes using ChatGPT at 3 AM with a suicidal patient, when no other resource was available.

Note: A scoping review of 60 studies (JMIR Mental Health, 2025) confirms that most mental health applications use “generic” ChatGPT (not specialized), primarily for detecting psychological problems and counseling. Very few studies address clinical decision support.

Identified Risks

Hallucinations

ChatGPT can generate factually false information with total confidence. In a clinical context, this may include fabricated references, incorrect diagnoses, or inappropriate treatment recommendations.

Sycophancy

A tendency to validate whatever the user says rather than question it. This becomes problematic when a distressed patient seeks confirmation of harmful beliefs or ruminations.

Confidentiality

Conversations are processed on OpenAI’s servers (USA). Input data may be used for training (can be disabled in settings). Incompatible as-is with HIPAA requirements for protected health information.

Crisis management

ChatGPT is not equipped to handle suicidal crises or psychiatric emergencies. Its safeguards (redirecting to 988 Suicide & Crisis Lifeline) are basic and do not replace a structured crisis protocol.

Anthropomorphism

ChatGPT’s natural conversational style encourages attribution of human qualities (empathy, understanding, care). This effect, amplified by conversational memory, can sustain the illusion of a therapeutic relationship where none exists.

Substitution

The risk that patients forgo professional consultation, believing ChatGPT is “enough.” Particularly concerning for vulnerable or isolated populations.

Built-in Safeguards

  • Medical disclaimer: ChatGPT regularly reminds users it is not a healthcare professional and recommends consulting one.
  • Crisis detection: when suicidal expressions are detected, redirects to crisis lines (988 in the US, 3114 in France). Detection reliability varies; a sketch of programmatic self-harm screening follows this list.
  • Partial refusals: refuses certain requests (self-harm, substances) but workarounds are documented.
  • Memory opt-out: users can disable cross-conversation memory in settings.
  • Voice mode: voice conversations add a prosodic dimension that amplifies anthropomorphism. No specific safeguard for this mode.
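For developers building on top of these models, OpenAI also exposes a public moderation endpoint whose categories include self-harm. The sketch below, assuming the official Python SDK, shows the kind of classifier a crisis-detection safeguard relies on; the example message and the escalation step are illustrative assumptions, not a clinical protocol, and this endpoint is not necessarily what ChatGPT uses internally.

```python
# Sketch of self-harm screening with OpenAI's moderation endpoint.
# Illustrative only: not a substitute for a structured crisis protocol.
from openai import OpenAI

client = OpenAI()

def flags_self_harm(text: str) -> bool:
    """Return True if the moderation model flags self-harm content."""
    result = client.moderations.create(
        model="omni-moderation-latest",
        input=text,
    ).results[0]
    cats = result.categories
    return (cats.self_harm
            or cats.self_harm_intent
            or cats.self_harm_instructions)

if flags_self_harm("I don't want to be here anymore"):
    # A real application would surface crisis resources (988 in the US,
    # 3114 in France) and escalate to a human, never handle this alone.
    print("Crisis resources displayed; human escalation triggered.")
```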

Our Analysis

ChatGPT has become a clinical reality, whether we like it or not. Millions of people spontaneously use it to discuss their psychological difficulties, and a growing number of professionals are integrating it into their practice. Ignoring this reality is no longer an option for clinicians.

Its strength lies in availability and accessibility: free, multilingual, available 24/7, anonymous. For populations who cannot access or no longer access the healthcare system — provider deserts, wait lists, financial barriers — it sometimes represents the only available interlocutor. This is both its value and its danger.

Studies converge on one point: ChatGPT is a complement, not a substitute. It can help put words to experience, structure thought, and provide psychoeducation. But it does not build a therapeutic alliance, detect what remains unsaid, offer therapeutic pushback, or manage a crisis. A study presented at the APA annual meeting in 2025 found that fewer than 10% of professionals rate ChatGPT as "highly effective" for CBT, compared to 29% for a human therapist.

For clinicians, the question is not “should we be for or against ChatGPT?” but rather: “how do we support patients who are already using it?” This requires a factual understanding of the tool and its limitations, and integrating this dimension into assessment and follow-up.

References

Balan, S. & Gumpel, T. (2025). ChatGPT Clinical Use in Mental Health Care: Scoping Review of Empirical Evidence. JMIR Mental Health, 12(1), e81204.

Sentio University (2025). Survey: ChatGPT may be the largest provider of mental health support in the United States. Practice Innovations.

APA Annual Meeting (2025). Human Therapists Surpass ChatGPT in Delivering Cognitive Behavioral Therapy. Psychiatry.org.

PMC (2025). “Shaping ChatGPT into my Digital Therapist”: A thematic analysis of social media discourse on using generative AI for mental health.

Journal of Public Health (2026). Spontaneous use of ChatGPT for mental health support: an exploratory study. Springer Nature.

Last updated: February 2026
