When Perceived Existential Threat Biases the Analysis of AI: A Critical Reading of Allen Frances
A prominent psychiatrist publishes a warning about therapeutic chatbots in the British Journal of Psychiatry. His analysis is lucid on certain risks, but falls into cognitive traps that every clinician should know how to spot — in authors and in themselves.
The Article
In August 2025, Allen Frances — American psychiatrist, Professor Emeritus at Duke University, and most notably former chair of the task force that produced the DSM-IV — published in the British Journal of Psychiatry an article entitled “Warning: AI chatbots will soon dominate psychotherapy”.
The article is substantial. Frances lists eleven benefits of chatbot-delivered therapy and fourteen dangers, and formulates recommendations for the profession. He closes with a dramatic question: will AI be “the great new servant of humanity, or is it destined to replace us in a Darwinian struggle for survival?”
Coming from an author of this stature, published in one of the most prestigious psychiatry journals in the world, the article deserves close reading. That is what we did — and it is precisely this close reading that reveals problems.
Not that Frances is wrong about everything. On the contrary, several of his concerns are well-founded. But the way he constructs his analysis falls into identifiable, reproducible cognitive biases, and — this is what interests us here — biases that are generalizable to most debates about AI in mental health.
If even a clinician of this caliber is not immune, none of us are. Hence the value of spotting them.
What Frances Gets Right
Before critiquing, let’s acknowledge what is solid. Frances identifies real risks that we share:
- Iatrogenic harm to vulnerable patients: an unsupervised chatbot interacting with a psychotic or suicidal person poses a genuine risk. This is not speculation — it is a legitimate clinical concern.
- Regulatory circumvention: many mental health apps label themselves “wellness tools” to avoid regulations applicable to medical devices. Frances is right to flag this.
- Commercial exploitation of data: the fact that companies use transcripts from online therapy sessions to train their models raises ethical questions about informed consent.
- Training through simulated patients: one of the most constructive ideas in the article — AI could train therapists without exposing real patients to learning mistakes.
These points deserve to be taken seriously. The problem is not what Frances says — it’s what he doesn’t say, and how he weighs what he does say.
Bias #1 · The Double Standard
The bias: Applying different evaluation criteria depending on whether one is judging AI or humans.
This is the bias that most deeply shapes the article. When a chatbot makes a mistake, Frances calls it “terrifying.” When a human therapist makes a mistake, he notes parenthetically, with humor: “in fairness, we human therapists also sometimes say dumb things and don’t own up to them.”
Same phenomenon, two radically different treatments.
Yet the literature on iatrogenic harm in human psychotherapy is far from anecdotal. Lilienfeld (2007) documented multiple psychological treatments that cause harm, including widely practiced approaches. Crawford et al. (2016), in a national survey published in the British Journal of Psychiatry itself, found that roughly 5% of patients report lasting negative effects from psychological treatment, and deterioration rates during psychotherapy are commonly estimated at 5 to 10% — a rate comparable to pharmacotherapy.
Frances mentions none of these data. Iatrogenic harm is presented as a problem specific to AI, when it is inherent to any therapeutic intervention.
Why this matters: if we want to rigorously evaluate therapeutic AI, we must apply the same criteria to both modalities. Same outcome measures, same safety standards, same transparency about adverse effects. This is not defending AI — it is defending the scientific method.
How to spot it: every time an article about AI in mental health describes a risk, ask yourself: does this risk also exist in human therapy? And if so, why isn’t it mentioned?
Bias #2 · Probability Neglect
The bias: Evaluating a risk by the severity of the worst imaginable scenario, without estimating its probability of occurrence.
This bias has a name in cognitive psychology: probability neglect (Sunstein, 2005). It is the tendency to judge a risk by the terror it inspires rather than by its actual frequency. It is why we fear flying more than driving, even though driving kills incomparably more people.
Frances catalogs spectacular dangers: totalitarian brainwashing, AI that “rebels” against its programmers, chatbots that “blackmail” their developers, suicide caused by a chatbot. Each danger is presented at its most extreme, never weighted by its probability.
The irony is profound. Frances chaired the DSM-IV task force — a monument of evidence-based psychiatry, built on epidemiology: prevalence, incidence, risk factors, number needed to treat, number needed to harm. The entire intellectual tradition of which he is one of the architects rests on probabilistic risk assessment.
That a psychiatrist of this stature abandons all probabilistic estimation in favor of a catalog of possible catastrophes is a remarkable inconsistency with his own tradition.
What’s missing: for each listed danger, what proportion of users is actually affected? What is the rate of serious incidents per number of interactions? How does this rate compare to the base rate in human therapy? Without these data, we are not doing risk analysis — we are doing fear rhetoric.
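What such an analysis might look like is not mysterious. Below is a minimal sketch, in Python, of the back-of-the-envelope arithmetic the article never performs: an incident rate per 10,000 interactions with a confidence interval, set against a base rate from human therapy. Every number in it is an invented placeholder, and the comparison itself is crude (the denominators differ: interactions for the chatbot, patients for human therapy); the point is the shape of the reasoning, not the figures.

```python
import math

def rate_per_10k(events: int, exposures: int) -> float:
    """Crude incident rate per 10,000 units of exposure."""
    return 10_000 * events / exposures

def wilson_interval(events: int, exposures: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson confidence interval for a proportion (standard library only)."""
    p = events / exposures
    denom = 1 + z**2 / exposures
    centre = (p + z**2 / (2 * exposures)) / denom
    half = (z * math.sqrt(p * (1 - p) / exposures + z**2 / (4 * exposures**2))) / denom
    return centre - half, centre + half

# Purely illustrative placeholders, not real data: 12 serious incidents reported
# across 2 million chatbot interactions, set against an assumed 5% deterioration
# base rate in human psychotherapy (the figure cited in the literature above).
# Note the mismatch of denominators (interactions vs. patients): acknowledging
# such limits is itself part of doing the analysis honestly.
chatbot_incidents, chatbot_interactions = 12, 2_000_000
human_base_rate = 0.05

low, high = wilson_interval(chatbot_incidents, chatbot_interactions)
print(f"Chatbot serious incidents: {rate_per_10k(chatbot_incidents, chatbot_interactions):.2f} per 10,000 interactions")
print(f"95% CI: {10_000 * low:.2f} to {10_000 * high:.2f} per 10,000 interactions")
print(f"Human therapy deterioration base rate: {10_000 * human_base_rate:.0f} per 10,000 patients")
```

Frances’s own tradition, from prevalence studies to number needed to harm, supplies exactly these tools; the article simply does not reach for them.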
How to spot it: when an article lists dangers without ever estimating their frequency, ask yourself: am I looking at a risk assessment or an emotional argument?
Bias #3 · The Demonized Tool
The bias: Attributing to a technical tool the risks that stem from its instrumentalization by human systems.
Frances describes very real dangers: commercial exploitation of therapeutic data, political manipulation via chatbots, intrusive marketing, abusive collection of personal information. These risks exist. But they are not properties of AI — they are properties of surveillance capitalism applied to mental health.
Human teleconsultation poses the same confidentiality issues. Electronic medical records are hacked regularly. Pharmaceutical companies have been doing targeted marketing to prescribers for decades. None of these problems has led to the conclusion that teleconsultation, medical records, or medications should be abolished.
Frances goes so far as to invoke Goebbels and Nazi Germany to illustrate the danger of AI manipulation. But Goebbels used radio — not artificial intelligence. Language itself, the printing press, television: every communication technology has been instrumentalized for destructive purposes. The relevant question is never “can this tool be misused?” (yes, like any tool) but “what institutional, regulatory, and ethical frameworks enable beneficial use?”
It is precisely this constructive question that Frances does not explore.
How to spot it: when an article attributes a risk to a technology, ask yourself: is this risk intrinsic to the technology, or to the system in which it is deployed?
Bias #4 · The Vanishing Benefits
The bias: Documenting advantages, then excluding them from the conclusion and recommendations.
This is perhaps the most troubling bias in the article, because it is structural.
Frances devotes a substantial section to the benefits of therapeutic AI: 24/7 accessibility, reduced cost, absence of judgment, ability to integrate techniques from different therapeutic schools. He acknowledges that chatbots are “good, some brilliant.” He acknowledges that the majority of users benefit from them.
Then, in his conclusion, these benefits vanish entirely. The final word is exclusively alarmist: “existential threat,” “David versus Goliath,” “Darwinian struggle for survival.” No recommendation addresses how to maximize the benefits he has just documented.
It is as if a physician documented that a treatment is effective for 80% of patients, has serious side effects for 5%, and concluded that it should never be prescribed.
The stakes are not abstract. The WHO estimates that more than 75% of people suffering from mental disorders in low- and middle-income countries have no access to psychological care. If even a fraction of these people benefited from AI support — even imperfect, even limited to mild cases — the public health impact would be considerable. An article in the British Journal of Psychiatry should integrate these data into the balance, not mention them as a preamble only to forget them afterward.
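The order of magnitude is worth making explicit. Here is a deliberately rough calculation; every figure is an assumption chosen for illustration, not a number taken from Frances, the WHO, or any trial.

```python
# Deliberately rough, illustrative assumptions (not data from Frances or the WHO):
people_without_access = 300_000_000   # order-of-magnitude placeholder for untreated mental disorders in LMICs
reachable_fraction = 0.10             # assumed share plausibly reachable by low-cost AI support
benefit_rate = 0.20                   # assumed share of those reached who get a meaningful benefit
serious_harm_rate = 0.005             # assumed share of those reached who experience serious harm

reached = people_without_access * reachable_fraction
print(f"Reached: {reached:,.0f}")
print(f"Helped:  {reached * benefit_rate:,.0f}")
print(f"Harmed:  {reached * serious_harm_rate:,.0f}")
print(f"Benefit/harm ratio: {benefit_rate / serious_harm_rate:.0f} to 1")
```

Whatever values one plugs in, the exercise forces the benefit side onto the same page as the harm side, which is precisely what the article’s conclusion fails to do.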
Frances paradoxically reproduces the pattern he describes in Weizenbaum: horrified to discover that “it works,” he responds with alarm rather than constructive exploration.
The Question He Doesn’t Ask
Throughout the article, Frances maintains a competitive framing: human versus AI, “adapt or die,” David versus Goliath. This framing excludes what may be the most promising answer: collaboration.
What might this look like in practice?
- AI provides between-session support (exercises, psychoeducation, mood monitoring) while the human therapist focuses on relational work during sessions
- AI serves as a real-time clinical supervision tool — detecting suicidal risk signals, suggesting evidence-based interventions
- Stepped care protocols where AI handles the first level of support, and the human therapist intervenes when complexity demands it (a minimal sketch of this escalation logic follows the list)
- The therapist as supervisor of AI agents, guaranteeing clinical quality and ethics
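To make the stepped-care idea in the list above concrete, here is a minimal sketch of what the escalation logic might look like. Everything in it is hypothetical: the thresholds, the screening fields, and the step definitions would in reality be set by clinicians and validated instruments, not chosen by a developer.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Step(Enum):
    AI_SELF_HELP = auto()          # psychoeducation, exercises, mood monitoring
    AI_WITH_REVIEW = auto()        # AI support with transcripts reviewed by a clinician
    HUMAN_THERAPIST = auto()       # human-led care, AI only for between-session tasks
    URGENT_HUMAN_CONTACT = auto()  # immediate escalation to a clinician

@dataclass
class Screening:
    """Hypothetical screening signals; a real protocol would use validated instruments."""
    phq9_score: int                # depression severity, 0-27
    suicidal_ideation: bool
    psychotic_symptoms: bool
    prior_treatment_failures: int

def assign_step(s: Screening) -> Step:
    """Route a user to the lowest safe level of care, escalating on risk or complexity."""
    if s.suicidal_ideation or s.psychotic_symptoms:
        return Step.URGENT_HUMAN_CONTACT
    if s.phq9_score >= 20 or s.prior_treatment_failures >= 2:
        return Step.HUMAN_THERAPIST
    if s.phq9_score >= 10:
        return Step.AI_WITH_REVIEW
    return Step.AI_SELF_HELP

# Example: moderate symptoms, no acute risk -> AI support with clinician review
print(assign_step(Screening(phq9_score=14, suicidal_ideation=False,
                            psychotic_symptoms=False, prior_treatment_failures=0)))
```

The design point is that the human is not removed from the loop but repositioned: the clinician defines the thresholds, reviews the escalations, and carries the ethical responsibility. That is the partnership the competitive framing makes invisible.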
Frances mentions in passing the idea of “leading teams of artificial intelligence agents.” But he does not develop it. His competitive framing prevents him: if AI is an enemy in a Darwinian struggle, it cannot simultaneously be a partner.
Frances’s question — “servant or replacement?” — admits a third answer: partner.
What This Article Teaches Us About Ourselves
The biases we have just identified are not Frances’s “mistakes.” They are predictable cognitive reactions to a perceived existential threat. Every clinician will recognize the pattern:
- Dichotomous thinking: friend or foe, adapt or die
- Selective attention: dangers capture attention, benefits are minimized
- Overgeneralization: a few dramatic cases become the rule
- Emotional reasoning: “it’s terrifying, therefore it’s likely”
These are exactly the cognitive distortions we help our patients identify. It would be paradoxical if the profession that formalized these concepts failed to apply them to its own thinking about AI.
The value of Frances’s article is therefore not only in what it says about AI. It is also a textbook case of what happens to our thinking when we feel threatened — even when we are recognized experts, even when we write in prestigious journals, even when we have the best intentions in the world.
Conclusion: Thinking Tools, Not Camps to Join
The debate about AI in psychotherapy does not need more alarm. Nor does it need blind enthusiasm. It needs what our discipline does best: rigorous, nuanced, data-driven analysis.
This means:
- Applying the same criteria to evaluate human therapy and AI therapy
- Assessing risks by their probability, not just their imagined severity
- Including benefits in the balance, especially for populations currently without access to care
- Exploring collaboration models rather than preparing for war
- Developing our own AI literacy as a professional competency
Frances does us a service by forcing the profession to confront this transformation. But a profession that defines itself by the rigorous analysis of human complexity cannot afford a simplified analysis of technological complexity.
Neither naive technophile nor catastrophist technophobe: this is the hardest position to hold, and it is exactly the one our patients need.
This article is the first in a series of critical analyses of the scientific literature on AI in mental health. Our goal is not to defend AI or condemn it, but to equip clinicians to read these publications with the same critical eye they apply to their own patients.
Reference analyzed: Frances, A. (2025). Warning: AI chatbots will soon dominate psychotherapy. The British Journal of Psychiatry, 1–5. https://doi.org/10.1192/bjp.2025.10380
Further reading:
- Crawford, M. J., Thana, L., Farquharson, L., Palmer, L., Hancock, E., Bassett, P., Clarke, J., & Parry, G. D. (2016). Patient experience of negative effects of psychological treatment: Results of a national survey. British Journal of Psychiatry, 208(3), 260–265. https://doi.org/10.1192/bjp.bp.114.162628
- Lilienfeld, S. O. (2007). Psychological treatments that cause harm. Perspectives on Psychological Science, 2(1), 53–70. https://doi.org/10.1111/j.1745-6916.2007.00029.x
- Sunstein, C. R. (2005). Laws of fear: Beyond the precautionary principle. Cambridge University Press.