
AI Watch

Critical analysis of AI and mental health news. Our goal: expose biases, add nuance, and reopen the debate.

In-depth Analysis

When AI makes headlines in mental health, we analyze, question, and open the discussion.

Latest Analysis


Explainer
External source

PROBAST+AI: 34 questions that most AI prediction models in healthcare don't survive

Published in the BMJ in March 2025, PROBAST+AI is the first quality assessment tool for clinical prediction models that holds classical statistical and artificial intelligence approaches to the same standard of rigour. Its opening finding is damning: most published models are of poor quality, their performance is overestimated, and their biases go unnoticed. Sixth instalment in our series on AI evaluation frameworks in healthcare.

Explainer
External source

CONSORT 2025 and SPIRIT 2025: Open Science yes, Artificial Intelligence no

In April 2025, CONSORT and SPIRIT published their first coordinated update since 2010 and 2013. The major advance: Open Science is now integrated into the core standards. But artificial intelligence? Absent. Although CONSORT-AI has existed since 2020, the new guidelines mention neither AI, nor the existing AI extensions, nor issues of algorithmic transparency.

Editorial

From Lab to Practice: Why AI Health Studies Don't Measure What They Claim

Major AI health studies — published in Nature, JAMA, The Lancet — rely on Prolific workers, text-based vignettes, and inadequate comparators. Five layers of distance separate these protocols from clinical reality. A detailed analysis with data and a practical reading framework for the clinician.

Explainer
External source

Fewer than 40% of health chatbot studies report their prompting strategy: the CHART Statement

Out of 137 studies published in the year following ChatGPT's launch, fewer than 40% report key elements of their prompting strategy. An international consortium of 531 experts proposes 12 criteria to fix this — and change how we read these studies.

Explainer
External source

Why an AI That 'Outperforms Doctors' in the Lab Can Fail in the Clinic: Choudhury's Framework

An AI that achieves 95% accuracy on standardized cases can fail in real clinical practice. A human factors researcher at West Virginia University explains why — and offers a three-level framework every clinician should know before trusting an AI health study.

Explainer
External source

Only 3 out of 52 journals require transparency for AI clinical trials: the CONSORT-AI case

Published in 2020, CONSORT-AI mandates 14 transparency criteria for clinical trials testing AI interventions. Five years on, adherence is declining and most journals ignore these standards. What this reveals — and how it changes the way we critically read studies.

Explainer
External source

77% of LLM studies in mental health never get past the bench test stage: the Hua framework

Out of 160 studies reviewed, 77% of LLM evaluations stop at the bench-test stage and only 16% reach clinical trials. A Harvard team proposes a three-tier framework to clarify what these studies actually prove, and what they don't.

Explainer
External source

Embedded ethics: What if ethicists joined the teams developing your AI tools?

A Munich-based team proposes integrating ethicists directly into medical AI development teams. The idea is appealing—but is it enough? Analysis of an approach directly relevant to future AI-assisted psychotherapy tools.

Explainer

AI, chatbot, LLM, app: why we need to stop conflating everything

When a study mentions a 'therapeutic chatbot', are we talking about a scripted decision tree or a fine-tuned GPT-4? This terminological blur is far from trivial: it makes studies incomparable and public debate unintelligible.

Editorial

The APA Model: A Framework for Evaluating Mental Health Apps

Over 10,000 mental health apps are available in the app stores, but only 15% have clinical evidence. The APA model offers a 5-level framework to help clinicians navigate the landscape.

Explainer
External source

APA Ethical Guide: What Psychologists Should Know

The APA has published an ethics guide for AI use in psychological practice. With 71% of American psychologists having never used AI, what lessons can we draw?

Explainer
External source

ChatGPT Health: When AI Enters Your Medical Records

OpenAI launches a dedicated version of ChatGPT for healthcare. Between promises of patient empowerment and an unprecedented concentration of sensitive data, this announcement deserves critical examination.

Go further

Explore our practical resources for integrating AI into your clinical practice ethically.