Dr Arnaud Gauthier

Healthcare professional

Dr Arnaud Gauthier, physician psychotherapist

"Could AI be a cognitive prosthesis?" As president of the French Institute of Schema Therapy, he explores the uses of AI in mental health with curiosity and without dogmatism.

Physician psychotherapist and president of the French Institute of Schema Therapy, Arnaud Gauthier explores the uses of AI in mental health with open-minded curiosity. From a cognitive prosthesis for patients with ASD to the dream of digital immortality, his account moves between clinical practice and science fiction.

"A somatic physician turned psychotherapist"

Arnaud Gauthier is a general practitioner and psychotherapist in private practice. His path is atypical: initially trained in emergency medicine and traumatology, he gradually turned to psychotherapy. This dual grounding — somatic and psychic — pervades his entire reflection on AI.

His therapeutic trajectory is itself integrative: Reichian analytical therapy (already body-centered), Ericksonian hypnosis, EMDR, CBT, schema therapy, positive psychology. Now president of the French Institute of Schema Therapy (IFTS) and an ISST-certified supervisor, he sees adults and older adolescents with anxiety disorders, depression, and personality disorders in his private practice.

At the time of the interview, he was finishing a book on chair work — the experiential technique where patients dialogue with different parts of themselves. His philosophy: to transmit a grammar of tools rather than rigid scripts.

AI as a cognitive prosthesis: when the crutch becomes legitimate

The idea came during a supervision session. A therapist described a patient with autism facing major relational difficulties at work: messages perceived as harsh, misread social codes, repeated job losses. Though not an ASD specialist, Arnaud had an immediate intuition.

"Wouldn't it be interesting to suggest she create a prompt to proofread the messages she needs to send at work? Something like a secretary or coach that would help her function more easily in everyday life."

The analogy is central: a patient with a broken leg receives a cast and no one sees a problem with it. The help is accepted because the deficit is physical, visible. But for a cognitive deficit — ASD communication, ADHD organization, memory — the social reaction is entirely different.

"Why should it be any different if we move beyond motor functions? For cognitive functions, memory functions, emotional functions, why would it be a bad thing to use crutches?"

Arnaud also distinguishes chatbots (text production) from AI agents (concrete actions on the environment). For an ADHD patient, the ideal would not just be a tool that rephrases, but an agent capable of managing a schedule, sending reminders, adjusting a plan — acting concretely on deficient executive functions.

The physical/psychological double standard

"You're depressed, but come on, just give yourself a kick in the pants, just get moving. It's just a matter of willpower. I think this is much more prevalent for psychological disorders than for somatic ones."

The diagnosis is clear: the same type of help (external compensation for a functional deficit) is accepted when it concerns the body and stigmatized when it concerns the psyche. Telling someone "you should just get going" when they have a cast would be absurd. But for depression, anxiety, ADHD? The social response is often: "it's a matter of willpower".

This double standard sheds light on the cultural resistance to AI in mental health. If technological help is suspect for the psyche in general, it is doubly so when it comes from a machine. The fear of "AI dependence" echoes an older reproach: that of not "pulling through on one's own".

AI in the practitioner's daily life: bibliographic assistant and creative limits

Beyond the concept, Arnaud uses AI in his daily professional practice. His main use is literature monitoring: he searches for articles on PubMed, has ChatGPT produce simplified summaries, then decides whether or not to dig deeper. This process feeds the IFTS monthly newsletter.

The most telling example: a foundational article by Arnoud Arntz updating the reparenting protocol in schema therapy. Fifteen pages in English to read, summarize, and translate.

"I gave it to ChatGPT, and in two minutes it produced a clear summary. I went to verify the information in parallel, and it took me half an hour. It would have taken me an entire day without it."

But the enthusiasm has its limits. For conceptual creation — making cross-disciplinary connections between approaches, synthesizing original ideas — Arnaud is more ambivalent. He estimates that the results are satisfactory more than half the time, but notes a recurring frustration when he looks for syntheses that don't yet exist in the literature.

When ChatGPT critiques your book: creativity, intentionality, and grammar

The episode is revealing. Arnaud had nearly finished his book on chair work. He submitted it to ChatGPT for review. The response: many compliments, then a list of recommendations — add more protocols, include clinical cases with well-defined scripts.

"It gave me a bit of the blues. I thought: have I been on the wrong track? And it took me a moment to realize that actually, no. That's not what I want for this book."

The tension runs deep. AI, trained on thousands of psychotherapy manuals, reproduces the genre's norm: theory, clinical case, protocol. Yet this is precisely what Arnaud wants to avoid. His project is to transmit a grammar — a set of principles that the clinician can combine creatively — not a script to apply mechanically.

The analogy with publishing is illuminating: Arnaud suspects that his editor will make the same remarks as ChatGPT. This isn't a flaw specific to AI; it's a structural tendency toward convention. But the lesson remains: AI follows norms; the human carries the intention.

Science fiction and foresight: the digital double, immortality, and the pharmakon

Change of register

This section reports Arnaud Gauthier's prospective and speculative reflections. These are intellectual explorations, not clinical proposals. This shift in register is initiated by the practitioner himself.

Arnaud grew up immersed in science fiction — his parents owned more than 2,000 SF books. This culture permeates his prospective thinking. If we consider the brain as "a biological information medium", then the transfer of consciousness to a digital medium is not conceptually absurd.

From this reflection comes the idea of the "digital double": an AI model fed with all our preferences, ideas, ways of thinking, dreams, and vulnerabilities — a functional copy of our personality that would persist independently of our biological body. A "backup" of oneself, so to speak, that could survive "as long as there's a bit of electricity in a hard drive".

The imagined applications are numerous: post-stroke cognitive implants using this double to restore language, memory assistance for Alzheimer's patients, persistence of personality after death to accompany the grief of loved ones. But Arnaud acknowledges the tension with the embodiment he defended earlier.

"There might not be the biological feeling — embodiment in this case would perhaps be more complicated, or it would need to be programmed. But in a sense, it would be a form of immortality."

Then comes the ethical counterpoint: "to what extent should we fully compensate? Our humanity might also lie in facing certain difficult things, not just always choosing the easy path." The reflection converges on the notion of the pharmakon: AI is neither good nor bad; it's the dosage and intentionality that matter.

"Rather than thinking in terms of good and evil, we need to bring back nuance, adapt to our context, and simply question ourselves."

What this testimonial teaches us

Arnaud Gauthier's account is that of a practitioner-explorer: a clinician who observes, tests, gets enthusiastic, gets frustrated, and theorizes from his experience. His dual training — somatic and psychotherapeutic — allows him to ask an essential question: why do we accept a crutch for the body but not for the mind?

What is remarkable is the progression of the interview: from a concrete clinical case (the ASD patient) to philosophy (embodiment, the pharmakon) and then science fiction (the digital double). This trajectory reflects how many clinicians think about AI: starting from the field, not from theory.

The epistemological humility that Arnaud calls for in conclusion is perhaps the most valuable stance: accepting that we don't yet know, while remaining curious.

Testimonial collected on February 11, 2026. Dr Arnaud Gauthier practices privately and chairs the French Institute of Schema Therapy (IFTS).

Go further

Testimonials and firsthand accounts

This testimonial is part of our series on AI uses in mental health. Would you like to share your experience?