
AI Watch

Critical analysis of AI and mental health news. Our goal: expose biases, bring nuance, and reopen the debate.

In-depth Analysis

When AI makes headlines in mental health, we analyze, question, and invite reflection.

Latest Analysis

Archives

Deep Dive
External source

CONSORT 2025 and SPIRIT 2025: Open Science yes, Artificial Intelligence no

In April 2025, CONSORT and SPIRIT published their first coordinated update since 2010 and 2013. Major advance: Open Science is now integrated into the base standards. But artificial intelligence? Absent. Despite CONSORT-AI existing since 2020, the new guidelines mention neither AI, nor existing extensions, nor algorithmic transparency issues.

reporting guideline · clinical trial · open science · methodological transparency · normative fragmentation
Editorial
From Lab to Practice: Why AI Health Studies Don't Measure What They Claim

Major AI health studies — published in Nature, JAMA, The Lancet — rely on Prolific workers, text-based vignettes, and inadequate comparators. Five layers of distance separate these protocols from clinical reality. A detailed analysis with data and a practical reading framework for the clinician.

ecological validity · methodology · critical appraisal · Bean et al. · Nature Medicine · JAMA · mental health · double standards · AI clinical trials
Deep Dive
External source

Fewer than 40% of health chatbot studies report their prompting strategy: the CHART Statement

Out of 137 studies published in the year following ChatGPT's launch, fewer than 40% report key elements of their prompting strategy. An international consortium of 531 experts proposes 12 criteria to fix this — and change how we read these studies.

reporting guideline · chatbot · methodological transparency · reproducibility · AI evaluation
Deep Dive
External source

Why an AI That 'Outperforms Doctors' in the Lab Can Fail in the Clinic: Choudhury's Framework

An AI that achieves 95% accuracy on standardized cases can fail in real clinical practice. A human factors researcher at West Virginia University explains why — and offers a three-level framework every clinician should know before trusting an AI health study.

ecological validity · human factors · AI evaluation · trust · clinical adoption · accountability
Deep Dive
External source

Only 3 out of 52 journals require transparency for AI clinical trials: the CONSORT-AI case

Published in 2020, CONSORT-AI mandates 14 transparency criteria for clinical trials testing AI interventions. Five years on, adherence is declining and most journals ignore these standards. What this reveals — and how it changes the way we critically read studies.

reporting guideline · clinical trial · methodological transparency · reproducibility · AI evaluation
Deep Dive
External source

77% of LLM studies in mental health never get past the bench test stage: the Hua framework

Out of 160 studies reviewed, 77% of LLM evaluations are bench tests while only 16% are clinical trials. A Harvard team proposes a three-tier framework to clarify what studies actually prove — and what they don't.

ecological validity · AI evaluation · chatbot · mental health · LLM · methodology
Deep Dive
External source

Embedded ethics: What if ethicists joined the teams developing your AI tools?

A Munich-based team proposes integrating ethicists directly into medical AI development teams. The idea is appealing—but is it enough? Analysis of an approach directly relevant to future AI-assisted psychotherapy tools.

ethics · embedded ethics · AI development · interdisciplinarity · medical AI · bioethics
Deep Dive

When Perceived Existential Threat Biases the Analysis of AI: A Critical Reading of Allen Frances

A prominent psychiatrist publishes a warning about therapeutic chatbots in the British Journal of Psychiatry. His analysis is lucid on certain risks, but falls into cognitive traps that every clinician should know how to spot — in authors and in themselves.

cognitive biases · critical reading · AI psychotherapy · double standard · risk analysis · epistemology
Deep Dive

AI, chatbot, LLM, app: why we need to stop conflating everything

When a study mentions a 'therapeutic chatbot', are we talking about a scripted decision tree or a fine-tuned GPT-4? This terminological blur is far from trivial: it makes studies incomparable and public debate unintelligible.

terminology · chatbot · LLM · mental health apps · epistemology · research
Editorial

The APA Model: A Framework for Evaluating Mental Health Apps

More than 10,000 mental health apps are available on the app stores, but only 15% are backed by clinical evidence. The APA model offers a five-level framework to help clinicians navigate this landscape.

APA · mental health apps · evaluation · ethics · practical guide
Editorial

Sycophantic AI: Reframing the Debate

Chatbots are accused of being 'too nice' and creating dependent users. This assumption deserves scrutiny: research suggests that it is validation, not frustration, that fosters autonomy.

sycophancy · emotional validation · attachment · motivational interviewing · AI ethics
Deep Dive
External source

APA Ethical Guide: What Psychologists Should Know

The APA publishes an ethical guide for AI use in psychological practice. 71% of American psychologists have never used AI: what lessons can we draw?

APA · ethics · practical guide · regulation · AI in practice
Deep Dive
External source

ChatGPT Health: When AI Enters Your Medical Records

OpenAI launches a dedicated version of ChatGPT for healthcare. Between promises of patient empowerment and unprecedented concentration of sensitive data, this announcement deserves critical examination.

OpenAI · ChatGPT · digital health · health data · ethics

Go Further

Explore our practical resources for integrating AI into your clinical practice ethically.