What the HAS-CNIL Guide Reveals (and Conceals) About AI in Psychotherapy
The HAS-CNIL guide on AI in healthcare (February 2026) mentions neither psychotherapy nor mental health. Three critical blind spots for psychologists: consent, professional secrecy, situation awareness. Analysis and practical tools.
Source analyzed
https://www.cnil.fr/sites/default/files/2026-03/guide_has_cnil_recommandations_ia.pdf
As a clinical psychologist, I had high expectations for the first official French guide on AI in healthcare. Sixty pages, co-signed by the Haute Autorité de Santé (HAS) and the CNIL, articulating the European AI Regulation (the “AI Act”), the GDPR, and the French Public Health Code. Serious work that lays necessary foundations.
And then I looked for our practice. Psychotherapy. Psychological care. The therapeutic relationship. Nothing.
The word “psychotherapy” does not appear once in the HAS-CNIL guide published in February 2026. Neither does “mental health” as a specific domain. Yet the document claims to cover “the care context” in its entirety.
If you felt uneasy reading it — a diffuse sense that this text doesn’t really concern you, that the situations it describes don’t resemble what you experience in your office — that unease is well-founded. It’s not a reading failure on your part. It’s a blind spot in the guide.
This article builds on our previous analysis: What the HAS Says (and Doesn’t Say) About AI in Mental Health. The joint HAS-CNIL guide of February 2026 confirms and amplifies the institutional blind spot we had identified.
What the guide says — and says well
The guide clearly articulates the regulatory triptych. It structures risk levels for AI systems. It reminds us that “the healthcare professional remains fully responsible, even when the act is performed with the assistance of an SIA” (système d’intelligence artificielle, the guide’s term for an AI system). It insists on training, transparency, vigilance.
Read between the lines, though, and the guide assigns the clinician the role of a rubber stamp: validate, affix one’s seal of human expertise on AI outputs, guarantee compliance. The problem is not that this stamp lacks ink; it is that the position itself reduces the professional to a passive validator, where psychological care demands an active co-constructor.
The cost of this blind spot
This void is not an academic oversight. It is a space into which unregulated uses are already rushing.
440,000
connections to personal generative AI by hospital staff in a single month at Nancy University Hospital [1]
This is the phenomenon we proposed to analyze using an adapted theoretical framework in a forthcoming article [3]. While the guide regulates an ideal use case, actual use has already outpaced it.
Human oversight seen as a rubber stamp
Naming the problem: Fictitious Human Oversight
The guide makes human oversight its cornerstone. It prescribes “meaningful” oversight that must enable the professional to “understand the system’s capabilities and limitations, detect anomalies, avoid automation bias, and correctly interpret results.”
But this requirement presupposes a specific model: that of the radiologist supervising a detection tool. In psychotherapy, this model is structurally unsuitable. The therapeutic relationship is not a protocol applied to algorithmic output. It IS the vehicle of care.
This gap has a name. We call it Fictitious Human Oversight: the situation where human oversight over an AI system is formally exercised — procedures, validation, documented supervision — but cognitively empty. The operator approves without understanding, monitors without grasping, validates without any real capacity for critical intervention.
This is not an argument against human oversight. It is an observation: certain oversight mechanisms are organizational fictions that create the illusion of control.
Fictitious Human Oversight rests on five converging mechanisms:
1. Formal presence without cognitive substance
The clinician checks a validation box but has neither the time nor the technical expertise to genuinely evaluate what the tool produces.
2. Automation bias
A documented tendency to treat algorithmic recommendations as decisions already made rather than as suggestions to be evaluated [4].
3. Degradation of situation awareness
The more we delegate to the tool, the more we lose the ability to intervene when it matters [5].
4. Diffuse accountability gap
No one — neither the designer, nor the deployer, nor the clinician — can be held fully accountable for a harmful outcome [6].
5. Institutional legitimation effect
The formal presence of a human in the loop legitimizes the arrangement and shields it from scrutiny: the record proves that supervision took place, whatever its actual substance.
Automation bias in session
Let us imagine a concrete situation. A psychologist uses an AI tool that analyzes a patient’s PHQ-9 responses and generates a risk profile: “moderate risk, CBT orientation recommended.” The score seems consistent with their clinical impression.
The risk is not that the AI is grossly wrong. The risk is that the AI is roughly right, and that the clinician stops mobilizing their own judgment on the nuances the algorithm cannot capture: the patient’s tone of voice, the hesitation on certain questions, the relational context coloring the responses. The score becomes a diagnosis rather than a signal to be clinically contextualized.
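To see what such a scoring step actually computes, here is a minimal sketch in Python. The severity bands are the published PHQ-9 cut-offs; the function name, the example scores, and the output wording are hypothetical, and the “CBT orientation recommended” part of the scenario has no equivalent here because it cannot be derived from a sum of nine items.

```python
from typing import List

# Published PHQ-9 severity bands: the algorithm can only ever return
# one of these labels from the sum of nine integers.
SEVERITY_BANDS = [
    (0, 4, "minimal"),
    (5, 9, "mild"),
    (10, 14, "moderate"),
    (15, 19, "moderately severe"),
    (20, 27, "severe"),
]

def phq9_severity(item_scores: List[int]) -> str:
    """Sum nine items scored 0-3 and map the total to a severity band."""
    if len(item_scores) != 9 or not all(0 <= s <= 3 for s in item_scores):
        raise ValueError("PHQ-9 expects exactly nine items scored 0-3")
    total = sum(item_scores)
    for low, high, label in SEVERITY_BANDS:
        if low <= total <= high:
            return f"{label} (total={total})"
    raise ValueError("unreachable")

# What the function receives: nine integers.
# What it never receives: tone of voice, hesitations, relational context.
print(phq9_severity([1, 2, 1, 1, 2, 1, 0, 1, 1]))  # "moderate (total=10)"
```

The point is not the arithmetic. It is that everything the clinician perceives in session (tone, hesitation, relational context) is, by construction, absent from the function’s inputs.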
This slippage is all the more insidious because it is rendered invisible by a social paradox documented by research:
| What the studies measure | Score | Source |
|---|---|---|
| Psychotherapists’ favorability toward AI (self-assessment) | 4.30 / 7 | Wagner & Schwind, 2025 [7] |
| Score given by therapists to AI empathic support | 1.77 / 6 | Wagner & Schwind, 2025 [7] |
| Competence of a clinician using AI, as judged by peers | 3.79 / 7 | Yang et al., 2025 [8] |
| Competence of a clinician without AI, as judged by peers | 5.93 / 7 | Yang et al., 2025 [8] |
The result? A double bind. The clinician can neither openly adopt AI (at the risk of losing professional credibility) nor refuse it entirely (it is already in their work environment). The foreseeable consequence — and one already observable — is discreet, unsupervised use, undiscussed among peers. Exactly the breeding ground in which Fictitious Human Oversight thrives.
Three blind spots that change everything for the psychologist
Consent is not the same in psychotherapy
The guide recommends NOT requiring specific consent for the use of an SIA in routine care, citing the “imbalance of the patient-professional relationship.” In somatic medicine, this argument may hold.
In psychotherapy, the framework is qualitatively different. The relationship of trust is not a context of care — it is the care.
The therapeutic transference is active. The relational asymmetry is not a bias to be compensated by a form — it is the very lever of the therapeutic process. Introducing an algorithmic third party into this relationship without consent that takes these specific dimensions into account means modifying the framework of care while pretending to only modify the tool.
Professional secrecy is a condition of care, not merely a legal obligation
For a psychotherapist, professional secrecy is the very condition of the therapeutic process. Without a guarantee of absolute confidentiality, no free speech. Without free speech, no transference. Without transference, no psychological care in the clinical sense of the term.
When a mental health professional uses a public LLM to rephrase a clinical note or explore a diagnostic hypothesis, fragments of clinical material — potentially identifying, always intimate — pass through servers whose business model relies on data exploitation. The guide treats confidentiality as a standard GDPR issue. In psychotherapy, it is a structural issue of care.
Psychological data are identifying by their very content: the combination of a diagnosis, a family context, and a relational dynamic is often enough to identify a patient, with or without a name attached.
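A minimal sketch of why this matters, using entirely fictitious records and hypothetical field names: even with names removed, the combination of a few clinical attributes can single out one record in a small cohort.

```python
from collections import Counter

# Entirely fictitious, name-free records: the kind of fragments that can
# end up in a prompt sent to a public LLM.
records = [
    {"diagnosis": "PTSD", "family_context": "recent divorce", "dynamic": "avoidant"},
    {"diagnosis": "GAD", "family_context": "caregiver for parent", "dynamic": "dependent"},
    {"diagnosis": "PTSD", "family_context": "recent divorce", "dynamic": "controlling"},
    {"diagnosis": "MDD", "family_context": "recent divorce", "dynamic": "avoidant"},
]

# Count how many records share each combination of attributes.
combos = Counter(
    (r["diagnosis"], r["family_context"], r["dynamic"]) for r in records
)

# A combination carried by a single record is, in practice, identifying:
# anyone who knows the patient can recognize them, name or no name.
unique = [combo for combo, n in combos.items() if n == 1]
print(f"{len(unique)} of {len(records)} records are unique on 3 attributes")
```

In a real caseload the proportion is rarely this stark, but in small cohorts it is routinely high; removing the name does not remove the signature.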
Situation awareness IS the care
Mica Endsley theorized the loss of situation awareness linked to automation — the “out-of-the-loop” problem [5]. When an operator is confined to a passive monitoring role, their active comprehension skills progressively deteriorate.
In most domains, this loss is a risk to be managed. In psychotherapy, it is a structural catastrophe.
Because the clinician’s situation awareness is not a complement to care — it is the care itself. Freudian free-floating attention, Rogerian empathic resonance, the micro-behavioral adjustment of the therapeutic alliance — all of this requires active, sustained, irreplaceable cognitive presence.
A psychotherapist who is “out-of-the-loop” is no longer a psychotherapist. They are an operator rubber-stamping algorithmic outputs with the seal of clinical expertise.
But — and this is essential — this does not mean that all AI in psychotherapy should be prohibited. It means that the regulatory framework is not yet up to the specificity of our practice. And it is up to clinicians to signal this.
Refusing the role of rubber stamp
Three levels of use, three different frameworks
To move beyond the binary reflex of “adopt or refuse,” it is useful to distinguish three levels:
Level 1 — Pure administrative use
Scheduling, billing, non-clinical documentation. Low risk to the therapeutic relationship. Standard GDPR framework applicable.
Level 2 — Indirect clinical use
Literature review, session preparation, hypothesis exploration — but also session transcription and assistance with report writing. Note: automatic transcription is not a simple administrative act in psychotherapy. A patient who knows that an AI is “listening” to their session may alter their disclosure, activate self-presentation biases, or inhibit the expression of intimate content.
Level 3 — Direct clinical use
Tool used in session, chatbot offered to the patient, automated psychometric scoring. Full ethical framework required: specific and informed consent, peer supervision, regular assessment of impact on the therapeutic alliance.
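As a rough aid to self-audit, the three levels can be expressed as a small lookup structure. This is a sketch, not a normative tool: the level names restate the distinctions above, and the safeguard lists are an illustrative reading of them, not the guide’s wording.

```python
from dataclasses import dataclass

@dataclass
class UsageLevel:
    name: str
    examples: list
    safeguards: list

LEVELS = {
    1: UsageLevel(
        name="Pure administrative use",
        examples=["scheduling", "billing", "non-clinical documentation"],
        safeguards=["standard GDPR framework"],
    ),
    2: UsageLevel(
        name="Indirect clinical use",
        examples=["literature review", "session preparation",
                  "session transcription", "report-writing assistance"],
        safeguards=["patient informed of any recording or transcription",
                    "attention to disclosure and self-presentation effects"],
    ),
    3: UsageLevel(
        name="Direct clinical use",
        examples=["in-session tool", "patient-facing chatbot",
                  "automated psychometric scoring"],
        safeguards=["specific and informed consent", "peer supervision",
                    "regular assessment of impact on the therapeutic alliance"],
    ),
}

def required_safeguards(level: int) -> list:
    """Return the safeguards associated with a usage level (1, 2 or 3)."""
    return LEVELS[level].safeguards

print(required_safeguards(3))
```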
Five questions before using an AI tool in your practice
To transform fictitious human oversight into meaningful oversight:
1. Do I understand what this tool actually does?
Not “what it claims to do” on its marketing page, but its actual functioning, its training data, its documented limitations.
2. Could I reach this conclusion without the tool?
If the answer is no, the risk of automation bias is maximal. The tool should enrich a competence you already have.
3. Does my patient know that an AI tool is involved in their care pathway?
And was this information given in a context that allows a genuine choice — not just another form to sign.
4. Could the processed data compromise the therapeutic framework if exposed?
If so, the tool must guarantee a level of confidentiality at least equivalent to what you ensure in your office.
5. Am I using this tool to speed up my work or to improve care?
Speeding up can be legitimate. But if the tool changes the quality of your attention to the patient, it is the care itself that is at stake.
From passive validator to practitioner-researcher
The guide implicitly draws a model of the clinician as operator: they receive AI outputs, validate them according to a protocol, and remain “responsible.” This model is not only unsuitable for psychotherapy — it falls short of what clinicians are already doing.
The practitioners we interviewed as part of our research — such as Beatrice Perez-Dandieu (director of the CEFTI, specialist in schema therapy) and Dr. Isabelle Leboeuf (clinical psychologist, author and researcher, specialist in CFT/CMT) — maintain a relationship with AI that has nothing to do with rubber-stamping. In their practice, it is clinical discernment forged by experience, not protocol, that separates relevant uses from risky ones.
This model of deliberate integration — what we call cognitive hybridization — is the horizon toward which we can collectively strive. The practitioner-researcher who understands AI mechanisms, evaluates their effects, and preserves in every use the primacy of the therapeutic relationship.
Conclusion
The HAS-CNIL guide lays solid foundations, but its lack of mental health specificity reveals a broader assumption: that psychological care can be framed like somatic care. One treats organs, the other treats through the relationship. It is not the same profession, and it cannot be the same framework.
It is time to refuse the role of rubber stamp. Not by rejecting AI, but by redefining our role in relation to it — from passive validator to co-constructor of a hybrid dynamic.
The stance this article defends is that of the practitioner-researcher: a clinician who deliberately integrates AI into their thinking and practice, understanding its mechanisms, evaluating its effects, and never yielding on the primacy of the relationship. This is not a position “between two extremes” — it is a position that demands more rigor than either, because it refuses to simplify the complexity of the field and the dialogue to which this complexity invites.
The HAS-CNIL guide is a starting point. It is the clinicians who will write what comes next.
Missing from the guide: the users
One final point deserves attention. The guide was developed through stakeholder hearings, a multidisciplinary working group, and a public consultation. But the primary stakeholders — the patients themselves — are remarkably absent from this process. The text speaks for them (fair information, the right to object, data protection), but never with them. They appear as objects of protection, rarely as agents capable of formulating their own expectations regarding AI in their care pathway.
This absence is all the more striking given that, as we have noted, nearly half of LLM users in distress are already spontaneously using them as psychological support [2] — without waiting for the regulatory framework or the opinion of professionals.
When patients act massively before institutions consult them, the question is no longer merely ethical: it is democratic.
We will return to this in a forthcoming article.
References
1. Haute Autorité de Santé & CNIL. (2026). Accompagner le bon usage des systèmes d’intelligence artificielle en contexte de soins. Working document, February 16, 2026.
2. Rousmaniere, T., Zhang, Y., Li, X., & Shah, S. (2025). Large language models as mental health resources: Patterns of use in the United States. Practice Innovations. https://doi.org/10.1037/pri0000292
3. Ferry, M., & Malo, R. (2026, in review). The Pocket Therapist: Emergence of a Clinical Phenomenon and Proposal of an Analytical Schema-Therapy Framework. Manuscript submitted for publication.
4. Parasuraman, R., & Manzey, D. H. (2010). Complacency and Bias in Human Use of Automation: An Attentional Integration. Human Factors, 52(3), 381-410. https://doi.org/10.1177/0018720810376055
5. Endsley, M. R., & Kiris, E. O. (1995). The Out-of-the-Loop Performance Problem and Level of Control in Automation. Human Factors, 37(2), 381-394. https://doi.org/10.1518/001872095779064555
6. Sparrow, R. (2007). Killer Robots. Journal of Applied Philosophy, 24(1), 62-77. https://doi.org/10.1111/j.1468-5930.2007.00346.x
7. Wagner, J., & Schwind, A.-S. (2025). Investigating psychotherapists’ attitudes towards artificial intelligence in psychotherapy. BMC Psychology, 13(1), 719. https://doi.org/10.1186/s40359-025-03071-7
8. Yang, H., Dai, T., Mathioudakis, N., Knight, A. M., Nakayasu, Y., & Wolf, R. M. (2025). Peer perceptions of clinicians using generative AI in medical decision-making. npj Digital Medicine, 8(1), 530. https://doi.org/10.1038/s41746-025-01901-x