Conversational AI · Clinical Tool · Available in French

Claude

Anthropic (San Francisco) — Launched March 2023

At a glance: Claude is Anthropic’s conversational AI model, designed with a particular emphasis on safety, nuance, and refusal of potentially harmful responses. Less publicized than ChatGPT, it is nonetheless used by clinicians for clinical elaboration and informal supervision. Anthropic is also the first AI company to have published a formal study on the affective uses of its own model (June 2025), acknowledging that 2.9% of conversations involve emotional support. Since January 2026, “Claude for Healthcare” has offered dedicated, HIPAA-compliant tools for the healthcare sector.

Identity

Publisher: Anthropic (San Francisco, USA)

Launch: March 2023 (Claude 1), current model: Opus 4.5 (Nov. 2025)

Type: Multimodal large language model (LLM)

Founders: Dario and Daniela Amodei (ex-OpenAI)

Pricing: Free (limited) / Pro: $20/month / Max: $100/month

Languages: Multilingual including French (strong proficiency)

Access: Web (claude.ai), iOS, Android, API

Memory: Yes (“Projects” with persistent context)

What Claude Does (in plain terms)

Like ChatGPT, Claude is a language model that generates text through statistical prediction. It does not “understand” in the human sense. What sets it apart: Anthropic has placed strong emphasis on alignment (ensuring the model behaves according to human intentions) and constitutional safety (Claude is trained to refuse certain requests rather than satisfy the user at all costs).

Accepted inputs

  • Text
  • Images
  • Files (PDF, etc.)
  • Long context (200K–1M tokens)

Outputs produced

  • Text
  • Code
  • Document analysis

Three model tiers

  • Haiku: fast and lightweight, for simple tasks
  • Sonnet: balanced, the most commonly used day-to-day
  • Opus: the most capable, for complex tasks (long analysis, reasoning)

A notable difference from ChatGPT: Claude has a very large context window (200,000 tokens on standard plans, roughly 600 pages). The 1 million token window (roughly 3,000 pages) is currently reserved for Enterprise and API subscriptions. Even at 200K, this allows analysis of lengthy clinical documents — a complete patient file, an expert report, a session transcript — in a single request.
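For practitioners who access Claude through the API rather than the web interface, a single-request analysis of a long document might look like the minimal sketch below. It assumes the Anthropic Python SDK (the `anthropic` package); the model identifier, file name, and prompt are illustrative placeholders, and any document sent this way should first be fully anonymized.

```python
# Minimal sketch: sending a long, anonymized clinical document to Claude in one request.
# Assumes the Anthropic Python SDK (pip install anthropic) and an ANTHROPIC_API_KEY
# environment variable; the model ID and file name are illustrative placeholders.
import anthropic

client = anthropic.Anthropic()  # API key is read from the environment

# Load the full document (e.g., an anonymized session transcript).
with open("session_transcript_anonymized.txt", encoding="utf-8") as f:
    transcript = f.read()

response = client.messages.create(
    model="claude-sonnet-4-5",  # placeholder; check Anthropic's docs for current model names
    max_tokens=2000,
    messages=[
        {
            "role": "user",
            "content": (
                "Here is a fully anonymized session transcript.\n\n"
                f"{transcript}\n\n"
                "Summarize the main clinical themes and note any points worth "
                "bringing to supervision."
            ),
        }
    ],
)

# The reply is a list of content blocks; print the text of the first one.
print(response.content[0].text)
```

Because the standard context window accepts roughly 200,000 tokens, a document of several hundred pages fits in a single call of this kind, without the chunking workarounds that shorter-context models require.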

Safety Approach: What Sets Claude Apart

Anthropic distinguishes itself through an explicitly AI safety-centered posture. Founded in 2021 by former OpenAI executives (Dario and Daniela Amodei), the company has made alignment and risk reduction its founding identity.

Constitutional AI

Claude is trained using a method called “Constitutional AI”: rather than relying solely on human feedback, the model is guided by a set of written principles (“constitution”) that steer responses toward honesty, helpfulness, and harmlessness.

Resistance to sycophancy

Claude is designed to resist user pressure when a request involves risk. According to Anthropic, Claude refuses or reframes in fewer than 10% of emotional support conversations, but does so systematically when safety is at stake (self-harm, eating disorders, etc.).

Claude for Healthcare

Launched in January 2026, “Claude for Healthcare” offers dedicated tools for the healthcare sector: patient record summaries, test result explanations, and pattern detection in health data. The offering is HIPAA-compliant.

Mental health usage policy

Anthropic’s usage policy explicitly requires that a “qualified professional must review content or decisions before dissemination or finalization” for any use involving “diagnosis, treatment, therapy, or mental health.”

Documented Mental Health Uses

Like ChatGPT, Claude is not designed as a therapeutic tool. But Anthropic is the first AI company to have formally studied and published findings on the affective uses of its model.

Anthropic study on affective uses (June 2025)

Using their aggregate analysis tool Clio, Anthropic researchers published a study on how users talk to Claude about their emotional difficulties.

  • 2.9% of conversations on claude.ai involve emotional support (counseling, coaching, companionship)
  • Fewer than 0.5% involve companionship or role-play
  • Topics addressed: career development, relationship navigation, persistent loneliness, existential questions (meaning, consciousness)
  • Key finding: as conversations progress, user-expressed sentiment tends to become more positive
  • Caveat: “We cannot claim these shifts represent lasting emotional benefits” (Miles McCain, Anthropic researcher)

Reported professional uses

  • Clinical elaboration: psychologists use Claude as a “cognitive amplifier” to explore hypotheses, draw theoretical connections, and reframe case conceptualizations. Anna’s testimonial on this site illustrates this use.
  • Long document analysis: the extended context window enables analyzing a complete session transcript, expert report, or clinical file in a single request.
  • Writing assistance: clinical reports, referral letters, research articles, psychoeducation materials.
  • Informal supervision: formulating diagnostic hypotheses, exploring alternative therapeutic approaches, case preparation.

Patient uses

  • Fewer documented spontaneous uses than ChatGPT (smaller market share)
  • User profile tends to be more tech-savvy or specifically drawn to Claude for its reputation for nuance
  • Laura, whose testimonial appears on this site, uses both Claude and ChatGPT for emotional well-being and notes stylistic differences between the two

Note: The Anthropic study measures only expressed language, not actual psychological states. Improving sentiment over the course of a conversation does not necessarily mean clinical improvement. And as Anthropic notes, patterns may shift with the introduction of voice interactions.

Identified Risks

Hallucinations

Like any LLM, Claude can generate false information with confidence. While Anthropic works to reduce this problem, it persists, particularly for bibliographic references and numerical data.

Boundless empathy

Anthropic itself identifies the risk of “endless empathy”: a model that is always available and always kind could foster unhealthy attachment, particularly among isolated individuals or those with relational difficulties.

Confidentiality

Conversations are processed on Anthropic’s servers (USA). “Claude for Healthcare” is HIPAA-compliant, but not GDPR-compliant. Entering identifiable patient data remains problematic for European practitioners.

Excessive caution

The safety orientation can produce the opposite effect: excessive refusals on legitimate topics, overly cautious responses lacking substance, or repetitive disclaimers that hinder conversational flow.

Crisis management

Like ChatGPT, Claude is not equipped to handle a psychiatric emergency. Anthropic is working with crisis organizations to improve referrals, but the system remains basic.

No voice generation

Unlike ChatGPT, Claude does not (yet) offer real-time voice interactions. This limits accessibility for some populations but also reduces anthropomorphism risks associated with vocal interaction.

Our Analysis

Claude occupies a distinctive position in the conversational AI landscape. Less widely used than ChatGPT by the general public, it is proportionally more adopted by professionals and discerning users who value nuance and depth of response.

For clinicians, Claude presents an interesting profile. Its very large context window enables analyzing complete clinical documents in a single pass — a real asset for supervision, session analysis, or case conceptualization. Anna’s testimonial on this site shows how a psychologist uses this capacity as a “cognitive amplifier”: not to replace her own thinking, but to extend and enrich it.

Anthropic’s approach to safety is both a strength and a limitation. The model is less likely to produce dangerous or sycophantic responses — valuable in a clinical context. But it can also be perceived as overly restrictive, refusing legitimate discussions on sensitive topics that practitioners need to explore.

The fact that Anthropic publishes its own research on Claude’s affective uses is noteworthy. It represents a rare form of transparency in the industry, even if it remains partial (the company does not publish raw data and controls the narrative). This positions Claude as a tool whose maker explicitly acknowledges mental health implications, rather than ignoring them.

Transparency note: This site (IA-Psy) uses Claude as a working tool for writing, analysis, and development. This profile strives to maintain the same critical distance as for other models. Our use of it does not constitute an endorsement.

References

Anthropic (2025). How people use Claude for support, advice, and companionship. anthropic.com/news.

Anthropic (2026). Claude for Healthcare and Life Sciences. anthropic.com/news/healthcare-life-sciences.

Psychiatric Times (2025). Comparing the Clinical Utility of 4 Major Chatbots.

Axios (2025). New Anthropic report on chatbots for therapy and companionship.

Anthropic (2025). Introducing Claude Opus 4.5. anthropic.com/news/claude-opus-4-5.

Last updated: February 2026
