« Could AI be a cognitive prosthesis? » — The president of the French Institute of Schema Therapy explores, with curiosity and without dogmatism, the uses of AI in mental health.
A physician-psychotherapist and president of the French Institute of Schema Therapy,
Arnaud Gauthier explores the uses of AI in mental health with open-minded curiosity.
From a cognitive prosthesis for ASD patients to the dream of digital immortality,
this is the portrait of a practitioner moving between clinical practice and science fiction.
« A somatic physician turned psychotherapist »
Arnaud Gauthier is a general practitioner and psychotherapist in private practice. His
path is atypical: initially trained in emergency medicine and traumatology, he gradually
turned to psychotherapy. This dual grounding — somatic and psychic — pervades
his entire reflection on AI.
His therapeutic trajectory is itself integrative: Reichian analytical therapy
(already body-centered), Ericksonian hypnosis, EMDR, CBT, schema therapy, positive
psychology. Now president of the French Institute of Schema Therapy (IFTS)
and ISST-certified supervisor, he sees adults and older adolescents presenting with
anxiety disorders, depression, and personality disorders.
At the time of the interview, he was finishing a book on chair work
— the experiential technique where patients dialogue with
different parts of themselves. His philosophy: to transmit a grammar
of tools rather than rigid scripts.
AI as a cognitive prosthesis: when the crutch becomes legitimate
The idea came during supervision. A therapist describes a patient with autism who
has major relational difficulties at work: messages perceived as harsh, incomprehension
of social codes, repeated job losses. Arnaud, who is not an ASD specialist,
nevertheless has an immediate intuition.
« Wouldn't it be interesting to suggest she create a prompt
to proofread the messages she needs to send at work?
Something like a secretary or coach that would help her function
more easily in everyday life. »
The analogy is central: a patient with a broken leg receives a cast and
no one sees a problem with it. The help is accepted because the deficit is
physical, visible. But for a cognitive deficit — ASD communication, ADHD organization,
memory — the social reaction is entirely different.
« Why should it be any different if we move beyond motor functions?
For cognitive functions, memory functions, emotional functions,
why would it be a bad thing to use crutches? »
Arnaud also distinguishes chatbots (text production) from AI agents (concrete actions
on the environment). For an ADHD patient, the ideal would not just be a tool
that rephrases, but an agent capable of managing a schedule, sending reminders,
adjusting a plan — acting concretely on deficient executive functions.
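To make the chatbot/agent distinction more concrete, here is a minimal illustrative sketch. Everything in it is an assumption made for the example rather than something described in the interview: it uses the OpenAI Python client, invented prompt wording, and a hypothetical schedule_reminder stub standing in for a real calendar or task API.

```python
# Illustrative sketch only: a "chatbot" use (producing text the person then
# sends themselves) versus an "agent" use (acting on the environment, here
# via a hypothetical in-memory reminder store). The prompt wording and the
# schedule_reminder helper are invented for the example.
from datetime import datetime, timedelta

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def soften_message(draft: str) -> str:
    """Chatbot-style help: rephrase a work message so it reads less blunt."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Rewrite the user's message so it stays factually "
                        "identical but sounds polite and non-abrupt."},
            {"role": "user", "content": draft},
        ],
    )
    return response.choices[0].message.content


REMINDERS: list[tuple[datetime, str]] = []


def schedule_reminder(task: str, in_hours: float) -> None:
    """Agent-style help (stub): act on the environment instead of only
    producing text. A real agent would call a calendar or task API here."""
    REMINDERS.append((datetime.now() + timedelta(hours=in_hours), task))


if __name__ == "__main__":
    print(soften_message("This report is wrong. Redo it."))
    schedule_reminder("Send the corrected report", in_hours=24)
    print(REMINDERS)
```

The sketch only marks the conceptual line Arnaud draws: the first function hands text back to the person, the second acts directly on their schedule.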
The physical/psychological double standard
« You're depressed, but come on, just give yourself a kick in the pants,
just get moving. It's just a matter of willpower. I think this is
much more prevalent for psychological disorders than for somatic ones. »
The diagnosis is clear: the same type of help (external compensation for a functional
deficit) is accepted when it concerns the body and stigmatized when it concerns the
psyche. Telling someone « you should just get going » when they have a cast would be
absurd. But for depression, anxiety, ADHD? The social response is often:
« it's a matter of willpower ».
This double standard sheds light on the cultural resistance to AI in mental health. If
technological help is suspect for the psyche in general, it is doubly so when
it comes from a machine. The fear of « AI dependence » echoes an older
reproach: that of not « pulling through on one's own ».
AI in the practitioner's daily life: bibliographic assistant and creative limits
Beyond the concept, Arnaud uses AI in his daily professional practice.
His main use: bibliographic monitoring. He searches for articles on PubMed,
gives them to ChatGPT for simplified summaries, then decides whether to dig deeper
or not. This process feeds the IFTS monthly newsletter.
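For readers who want to picture the shape of such a pipeline, here is a minimal sketch. It assumes details the testimonial does not specify: the OpenAI Python client rather than the ChatGPT web interface Arnaud actually uses, an article already downloaded and saved as plain text under a hypothetical filename, and prompt wording invented for the example.

```python
# Minimal sketch of a summarize-then-verify monitoring workflow, under
# assumptions not in the testimonial (API client, plain-text article,
# invented prompt and filename).
from pathlib import Path

from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_article(path: str) -> str:
    """Ask the model for a short, plain-language summary of a saved article."""
    text = Path(path).read_text(encoding="utf-8")
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Summarize this research article in plain language, "
                        "in about ten sentences, for a clinician's newsletter."},
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content


if __name__ == "__main__":
    # Hypothetical filename; the summary is a triage aid, and the claims
    # still need to be checked against the original article before publishing.
    print(summarize_article("arntz_reparenting_update.txt"))
```

The point is the triage logic, not the tooling: the model produces a first-pass summary, and the clinician still verifies it against the source, as Arnaud describes doing.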
The most telling example: a substantial article by Arnoud Arntz updating the
reparenting protocol in schema therapy. Fifteen pages in English. Without AI,
the exercise — reading, summarizing, translating — would have taken him a full day.
« I gave it to ChatGPT, and in two minutes it produced a clear summary.
I went to verify the information in parallel, and it took me half an hour. It would have
taken me an entire day without it. »
But the enthusiasm has its limits. For conceptual creation — making cross-disciplinary
connections between approaches, synthesizing original ideas — Arnaud is more mixed.
He estimates that more than 50% of the time, the results are satisfactory, but notes a
recurring frustration when he seeks syntheses that don't yet exist in
the literature.
When ChatGPT critiques your book: creativity, intentionality, and grammar
The episode is revealing. Arnaud had nearly finished his book on chair work.
He submitted it to ChatGPT for review. The response: many compliments, then
a list of recommendations — add more protocols, include clinical cases
with well-defined scripts.
« It gave me a bit of the blues. I thought: have I been on the wrong track?
And it took me a moment to realize that actually, no.
That's not what I want for this book. »
The tension runs deep. AI, trained on thousands of psychotherapy manuals,
reproduces the genre's norm: theory, clinical case, protocol. Yet this is
precisely what Arnaud wants to avoid. His project is to transmit a
grammar — a set of principles that the clinician can combine
creatively — not a script to apply mechanically.
The analogy with publishing is illuminating: Arnaud suspects that his editor will make
the same remarks as ChatGPT. This isn't a flaw specific to AI; it's
a structural tendency toward convention. But the lesson remains: AI follows
norms; the human carries the intention.
Science fiction and foresight: the digital double, immortality, and the pharmakon
Change of register
This section reports Arnaud Gauthier's prospective and speculative reflections.
These are intellectual explorations, not clinical proposals.
This shift in register is initiated by the practitioner himself.
Arnaud grew up immersed in science fiction — his parents owned more than 2,000 SF
books. This culture permeates his prospective thinking. If we consider the
brain as « a biological information medium », then the transfer of
consciousness to a digital medium is not conceptually absurd.
From this reflection comes the idea of the « digital double »: an AI
model fed with all our preferences, ideas, ways of thinking, dreams, and
vulnerabilities — a functional copy of our personality that would persist
independently of our biological body. A « backup » of oneself, so to
speak, that could survive « as long as there's a bit of electricity
in a hard drive ».
The imagined applications are numerous: post-stroke cognitive implants using
this double to restore language, memory assistance for Alzheimer's
patients, persistence of personality after death to accompany the grief
of loved ones. But Arnaud acknowledges the tension with the embodiment he defended earlier.
« There might not be the biological feeling — embodiment in this
case would perhaps be more complicated, or it would need to be programmed. But in a sense,
it would be a form of immortality. »
Then comes the ethical return: « to what extent should we fully compensate?
Our humanity might also lie in facing certain difficult things,
not just always choosing the easy path. » The reflection converges toward
the notion of pharmakon: AI is neither good nor bad; it's the
dosage and intentionality that matter.
« Rather than thinking in terms of good and evil — we need to bring back nuance,
adapt to our context, and simply question ourselves. »
What this testimonial teaches us
Arnaud Gauthier's account is that of a practitioner-explorer: a clinician who observes, tests, gets enthusiastic,
gets frustrated, and theorizes from his experience. His dual training
— somatic and psychotherapeutic — allows him to ask an essential
question: why do we accept a crutch for the body but not for the mind?
What is remarkable is the progression of the interview: from a concrete clinical
case (the ASD patient) to philosophy (embodiment, the pharmakon) and then
science fiction (the digital double). This trajectory reflects how
many clinicians think about AI: starting from the field, not from theory.
The epistemological humility that Arnaud claims in conclusion is perhaps
the most valuable stance: accepting that we don't yet know, while
remaining curious.
Testimonial collected on February 11, 2026. Dr Arnaud Gauthier practices privately
and chairs the French Institute of Schema Therapy (IFTS).
Go further
About this transcript
Interview duration: ~67 minutes | Participants: Matthieu (M) interviewer, Arnaud (A) interviewee
Edited version: statements rephrased for readability, reorganized by theme. The substance and style of each speaker have been faithfully preserved.
Background and clinical practice
[M]: Can you introduce yourself: who you are, what you do, what kind of patients you see?
[A]: I'm in private practice doing fairly generalist psychotherapy. My practice leans toward CBT and schema therapy. I see a lot of anxiety disorders, depressive disorders, quite a few personality disorders as well, and general difficulties: bereavement, separation. What I don't take on are patients with psychotic disorders. I only see adults or older adolescents, from age 15. Mostly in-person, but I do have a few remote patients.
[A]: I'm originally a somatic physician — I trained in emergency medicine, worked a lot in traumatology — and I've had a multi-approach journey. My first psychotherapy approach was Reichian analytical therapy, already body-centered. I trained in hypnosis, I did EMDR, CBT, schema therapy, positive psychology. I'm president of the French Institute of Schema Therapy and an ISST-certified supervisor.
[A]: Right now, I'm writing a book on chair work — it's an approach that goes well beyond schema therapy. You find it in Gestalt, CBT, emotion-focused therapy, compassion-focused therapy. My intention is to transmit a grammar of the tool, not rigid protocols.
The cognitive prosthesis: the ASD case
[A]: You reminded me of an idea I had yesterday. I was in supervision with a therapist who was telling me about a patient she had just diagnosed with autism spectrum disorder. The patient is really struggling: she has enormous difficulty anticipating other people's reactions. When she sends messages or emails, she can be extremely blunt, even completely off-base. She's lost countless jobs because of this.
[A]: I'm not at all trained in ASD. But when my supervisee told me about this, I thought: this patient has a biological cognitive deficit, like ADHD. Having a crutch, a substitute for a deficient function, can often be a good idea. Wouldn't it be interesting to suggest she create a prompt to proofread the messages she needs to send? Something like a secretary or coach that would help her function more easily in everyday life.
[M]: The crutch analogy resonates with me: a tool that compensates for a function that's deficient relative to the norm and that harms social life. That would seem like a more than legitimate use.
[A]: Yes, I think it's something worth seriously exploring. If I think about the limitations, I believe the ideal would be agent-type AIs — that can spontaneously activate functions, not just produce text. For example, an AI with a schedule management function for someone with ADHD. Having an agent that can act across different platforms to support the person in daily life.
The physical/psychological double standard
[A]: A patient has a broken leg, a physical deficit: we have no qualms saying they need a prosthesis, a crutch. It just seems normal. And I think: why should it be any different if we move beyond motor functions? For cognitive functions, memory functions, emotional functions, why would it be a bad thing to use crutches?
[M]: Do you have a sense of what makes it not as straightforward?
[A]: You break your arm, you get a cast, it's normal. Telling the person « no, you need to tough it out, manage your pain, not move your arm, it's a matter of willpower, getting a cast is taking the easy way out » — that's absurd. And yet, that's somewhat the discourse we hear for psychological issues: « You're depressed, just get moving, it's just a matter of willpower. » There's much more judgment, this idea that it's a lack of willpower, laziness.
[M]: I often hear the fear of becoming dependent on AI, whereas it would be hard to say « you're dependent on your cast ». This reminds me of Lerner's just-world theory: if you do more or less the right thing, everything will be fine — so if things aren't fine, it's because you don't want it enough. And there's this strong distinction between physical and psychological disorders, probably linked to our Cartesian dualist worldview.
Embodiment and embodied cognition
[A]: Right now, I'm working on a chapter about the experiential. The goal is to bring the patient to feel, to have a corrective emotional experience. It's an experience that is not only cognitive but also embodied in the body. The fact that a psychotherapeutic technique triggers a change in bodily sensation, in interoception, has a transformative effect.
[A]: There are quite a few studies showing that we're not just a brain and a disconnected body. All the perceptions we have of our body are part of memory encoding, especially emotional encoding. In clinical practice, we see this in many patients: the emotional reasoning bias — « because I feel something, it must be true ». « I'm afraid, therefore it's proof of danger. »
[M]: Damasio wrote Descartes' Error precisely to point out the role of emotions. And Varela redefines cognition in a much broader way, within an organism. It brings us back to the material.
AI in daily life: research and writing
[M]: Do you use AI for yourself?
[A]: I use AI as a secretary, an assistant. I do my research on PubMed, download the article, give it to ChatGPT and ask for a simplified summary. That lets me see if the content is relevant to me. Then I won't necessarily always go further — sometimes the summary is enough. Sometimes, if I have a doubt, I'll verify.
[A]: I'm president of the IFTS and I run a monthly bibliographic monitoring newsletter. I use this for that monitoring. An article by Arnoud Arntz came out that updates the reparenting protocol — it's a big piece, 15 pages in English. Reading the entire article plus writing a synthesis to share with students — that would have taken me a full day. I gave it to ChatGPT, and in two minutes it produced a clear summary. I went to verify the information, and it took me half an hour.
[M]: What you're saying implicitly is that it simplifies certain tasks so much that you end up doing them instead of not doing them.
Creative limits and frustrations
[A]: I'll be more mixed on using AI for creative work. I feel that AI is very strong at rephrasing things that already exist. However, when I had concept ideas, I tried to make it think — and I was often a bit frustrated. More than 50% of the time, I still find it interesting. But when the synthesis doesn't exist yet, when I'm the one creating it, I find it unsatisfying.
[A]: I really enjoy making connections between different fields. My first approach was Reichian analytical therapy, I did hypnosis, EMDR, CBT, schema therapy, positive psychology. I always try to find places where different words are used to talk about the same concepts, to create a synthesis. And that's where ChatGPT struggles when that synthesis doesn't already exist.
[M]: It's interesting that we have, about the same thing, two different experiences. That's the heart of the matter: perception. The same AI, depending on the use, the context, doesn't yield the same things.
[A]: I think a good part of it comes from how I ask the question. I have a fairly clear vision of what I want, and I don't take enough time to explain my intention. Sometimes I start a new chat telling it to start from scratch. And then I get better results, because it no longer limits itself to what I usually work on.
The book episode on chair work
[A]: I had finished writing my book, I was delighted. I copied my text into ChatGPT and asked it to do a review. It gave me something laudatory, a list of what works well. And then a list of everything that needs changing: « You need to include many more well-structured protocols, you need to provide clinical cases with well-defined scripts. »
[A]: It gave me a bit of the blues. I thought: have I been on the wrong track? And it took me a moment to realize that actually, no. That's not what I want for this book. That's not me. My idea was to transmit a grammar. And then with that grammar, once you know how to make sentences, you can create.
[M]: That's the essential question: why do we do things? AI calibrates itself on its training data — we were already very protocol-driven before AI. But the real issue is: it has meaning for me.
[A]: This difficulty isn't specific to AI. I'm in contact with a publisher. I think when I send them the manuscript, the feedback will be the same as ChatGPT's. AI follows a trend, a convention.
[A]: And so I realize that probably, I should have explained all of this to it: what my intention is, what I want to do with this book. My mistake, clearly, was not telling it everything I just told you.
The digital double and immortality
[A]: What I would find amazing would be to create a digital model of my personality. An AI to which I give all my preferences, my opinions, my ideas, my way of thinking, my dreams. A model that could do this kind of work based on all of that.
[M]: The usefulness for you would be to maintain a kind of vibration, a reference note? Like a tuning?
[A]: Yes. And I grew up steeped in SF — my parents had more than 2,000 science fiction books. The idea of transferring a consciousness, if we consider that our brain is a biological information medium — it wouldn't be absurd to envision creating a dematerialized Arnaud. There might not be the biological feeling, embodiment would perhaps be more complicated. But it would be a form of immortality.
[A]: It would be a kind of backup. Everything that makes up our essence — our ideas, our convictions, our dreams, our vulnerabilities — could persist as long as there's electricity in a hard drive. We could imagine implants that restore lost function after a stroke. Doubling an Alzheimer's patient with that. And for grief: if I lose my wife, if I can still talk with her, the experience of death would be completely different.
[M]: Isn't that already what we do when we take a loved one's scarf to seek their perfume? When we look at a photo album? It's summoning the spirit of what has been.
Ethics and the pharmakon
[A]: This makes me think: from the moment we use AI to compensate for deficits, to what extent is it a good thing? Would a human being who no longer had to experience grief — would that be beneficial? Our humanity might also lie in facing certain difficult things. Sometimes facing pain, sorrow, loss. How much does it make us dependent on the tool? The line is thin.
[A]: I think it's like everything. AI is not something fundamentally good or bad. It all depends on what you do with it. Rather than taking an approach based on good and evil, we need to bring back nuance. Ask yourself: what use do I have for it, how am I using it, does it do me good, am I getting the right dosage?
[M]: There's a philosopher, Bernard Stiegler, who recalled the notion of pharmakon: the same thing can be poison or remedy depending on the dosage and context. That is the very approach of ethics. Ethics is a process, not a checklist.
[A]: Thank you for proposing this exchange. These are subjects I enjoy discussing. I like this way of approaching things: not being dogmatic, stepping back, and sometimes accepting that there are things we don't know.
[M]: Humility...
[A]: Epistemological humility. Remaining humble about what we know, about what we think we know.
[M]: A magnificent conclusion.
Transcript generated by whisper-medium + pyannote, edited for readability.
Interview conducted on February 11, 2026.
This section puts Arnaud Gauthier's testimonial in perspective with
the scientific literature on cognitive prosthesis, embodiment, schema therapy,
and the pharmacology of technologies.
01
Cognitive prosthesis and compensation: a paradigm under construction
The analogy Arnaud proposes — AI as a cognitive crutch — falls within the field
of assistive technology for cognition.
Lussier and Flessas (2009) described technological aids for neuropsychological
disorders in children. LoPresti, Mihailidis, and Kirsch (2004), in a
review published in Neuropsychological Rehabilitation, surveyed the evidence for
technological aids for people with cognitive deficits of various origins.
What is new in Arnaud's proposal is the extension to conversational
LLMs: no longer reminder or scheduling tools (classic assistive
technology), but an agent capable of rephrasing a message, adapting a tone,
compensating for a social communication difficulty. The leap is qualitative: we move
from compensating for a function (memory, attention) to mediating a
relationship (interpersonal communication).
Research question
Does conversational AI, used as a communication prosthesis by people with ASD,
help them learn social codes (learning) or merely paper over the difficulty
(substitution)? The answer shapes the ethical evaluation of this use.
02
The epistemic double standard: psychic/somatic bias
Arnaud's observation about the asymmetry between the treatment of physical and
psychological disorders is solidly documented. The just-world hypothesis
(Lerner, 1980) posits that individuals need to believe the world is
ordered and that everyone gets what they deserve. When this belief is threatened
— for example when faced with mental illness — it produces a reflex of
blaming the victim: « if you're suffering, it's because you're not trying hard
enough ».
The fundamental attribution error (Ross, 1977) reinforces this bias:
it consists of overestimating dispositional factors (willpower, character) and
underestimating situational factors (biology, environment). For a visible
physical disorder (broken leg), the situation is obvious. For an invisible
psychological disorder (depression, ADHD), attribution shifts toward the person.
Corrigan (2004) showed that stigmatization of mental disorders operates according
to a three-component model: stereotypes (« depressed people are weak »),
prejudice (negative emotional reaction), and discrimination (exclusionary
behavior). AI as an aid for the psyche inherits this triple stigmatization.
03
Embodiment and embodied cognition: from Descartes to Damasio
Arnaud grounds his reflection on embodiment in his clinical practice: the corrective
emotional experience passes through the body, not just through cognition. This
position is supported by a solid body of work.
Damasio (1994), in Descartes' Error, showed that
emotions and bodily sensations (somatic markers) play an essential role
in rational decision-making. Patients with lesions of the ventromedial
prefrontal cortex retain their abstract intelligence but lose the
ability to make adaptive decisions — precisely because bodily signals
no longer reach consciousness.
Varela, Thompson, and Rosch (1991), in The Embodied Mind,
propose the concept of enaction: cognition is not the
representation of a pre-existing world, but the enactment of a world and a mind
based on a history of structural coupling between the organism and its environment.
Merleau-Ponty (1945) had already shown, in phenomenology, that
perception is always embodied: we are not minds piloting bodies,
but body-subjects.
In clinical work, the implication is direct: techniques that engage the body
(chair work, EMDR, focusing, mindfulness) show superior results compared to
purely cognitive approaches for certain disorders (van der Kolk, 2014).
04
Schema therapy and chair work: therapeutic context
Schema therapy (Young, Klosko & Weishaar, 2003) is an
integrative « third wave » approach that targets early maladaptive
schemas — emotional and cognitive patterns formed in childhood that
continue to organize adult experience. The approach combines cognitive,
behavioral, psychodynamic, and experiential elements.
Chair work is a transtheoretical technique
that extends beyond schema therapy. Originating from Fritz Perls'
Gestalt therapy (1969), it involves having different parts of the patient dialogue
by placing them in distinct chairs. Kellogg (2015) proposed a transtheoretical
systematization in Transformational Chairwork.
The concept of corrective emotional experience, coined by
Franz Alexander (1946), refers to the moment when the patient relives an
old emotion under new therapeutic conditions that allow for a
new outcome. This is the central mechanism of chair work as
Arnaud practices it.
Research question
If the corrective emotional experience requires embodiment (bodily engagement
of the patient), what are the limits of AI in supporting this process?
Can AI facilitate access to these experiences, or does it risk intellectualizing them?
05
The technological pharmakon: Stiegler and pharmacology
The pharmakon metaphor, which both speakers invoke in conclusion,
finds its most developed elaboration in Bernard Stiegler
(2010, 2013). For Stiegler, every technology is simultaneously poison and
remedy: it can proletarianize (destroy know-how, ways of living,
theoretical knowledge) or augment (create new capacities,
new forms of care).
Arnaud's book episode precisely illustrates the « poison » dimension:
AI, trained on dominant editorial conventions, pushes toward normalization
and formatting. It reproduces what is, not what could be. The « remedy » dimension
lies in accelerated bibliographic monitoring, easier translation, and initial
conceptual groundwork.
The originality of Arnaud's position is that he doesn't choose sides: he maintains the
tension. It is a pharmacological stance in the proper sense — one that
thinks in terms of dosage rather than prohibition or obligation.
06
Digital double and identity: posthumanist questions
Arnaud's speculative reflections on consciousness transfer join a rich
philosophical debate. Sherry Turkle (2011), in Alone Together,
showed how relational technologies (robots, conversational agents)
modify our relationship to authenticity and intimacy. The risk she identifies:
preferring relationships with machines because they are less demanding.
On the question of consciousness transfer, Derek Parfit (1984)
showed in Reasons and Persons that personal identity is more fragile
than we think: if an exact digital copy of me is created, is it
still « me »? Philosophers of embodied cognition (Varela, Thompson)
would argue no: consciousness is not separable from its biological substrate.
This is precisely the tension Arnaud carries — an advocate of embodiment in
the first part, an enthusiast of consciousness transfer in the second. This contradiction
is productive: it reveals that even a practitioner convinced of the importance of the body
can be fascinated by the promises of disembodied intelligence.
Bibliography
Alexander, F., & French, T. M. (1946). Psychoanalytic therapy. Ronald Press.
Corrigan, P. W. (2004). How stigma interferes with mental health care. American Psychologist, 59(7), 614–625.
Damasio, A. R. (1994). Descartes' error: Emotion, reason, and the human brain. Putnam.
Kellogg, S. (2015). Transformational chairwork: Using psychotherapeutic dialogues in clinical practice. Rowman & Littlefield.
Lerner, M. J. (1980). The belief in a just world: A fundamental delusion. Plenum Press.
LoPresti, E. F., Mihailidis, A., & Kirsch, N. (2004). Assistive technology for cognitive rehabilitation: State of the art. Neuropsychological Rehabilitation, 14(1–2), 5–39.
Lussier, F., & Flessas, J. (2009). Neuropsychologie de l'enfant (2nd ed.). Dunod.
Merleau-Ponty, M. (1945). Phénoménologie de la perception. Gallimard.
Parfit, D. (1984). Reasons and persons. Oxford University Press.
Perls, F. S. (1969). Gestalt therapy verbatim. Real People Press.
Ross, L. (1977). The intuitive psychologist and his shortcomings: Distortions in the attribution process. In L. Berkowitz (Ed.), Advances in experimental social psychology (Vol. 10, pp. 173–220). Academic Press.
Stiegler, B. (2010). Ce qui fait que la vie vaut la peine d'être vécue : De la pharmacologie. Flammarion.
Turkle, S. (2011). Alone together: Why we expect more from technology and less from each other. Basic Books.
van der Kolk, B. (2014). The body keeps the score: Brain, mind, and body in the healing of trauma. Viking.
Varela, F. J., Thompson, E., & Rosch, E. (1991). The embodied mind: Cognitive science and human experience. MIT Press.
Young, J. E., Klosko, J. S., & Weishaar, M. E. (2003). Schema therapy: A practitioner's guide. Guilford Press.
Why this section?
A single testimonial cannot be generalized. This section makes explicit the
methodological limitations and blind spots, to place this account in its proper context:
a valuable practitioner reflection that opens avenues without validating them.
Arnaud's profile: a singular practitioner
Characteristic | Implication for generalizability
Physician → psychotherapist | Rare dual training that enables the somatic/psychic analogy
IFTS President | Position of influence in the schema therapy field
SF curiosity / moderate early adopter | Atypical openness to technology in the mental health field
No direct clinical AI-patient practice | The cognitive prosthesis is a supervision hypothesis, not a lived experience
Multi-approach integrative practice | Unusual capacity for synthesis but not representative of the average practitioner
Methodological limitations
Interview between peers sharing presuppositions
The interview is conducted by Matthieu Ferry, who leads the AI-Psy project and is
himself favorable to a nuanced exploration of AI in mental health. The format produces
an echo-chamber effect: the two speakers reinforce each other without being confronted
with critical positions (Turkle, Sadin, professional regulatory bodies). The pharmakon
is invoked but not debated.
Hypothetical clinical case
The ASD patient case is a supervision case, not a direct use.
Arnaud formulates a hypothesis (« what if we suggested a prompt to her? »)
that has not been tested. The cognitive prosthesis analogy is appealing but remains
at this stage a clinical intuition, not an evaluated intervention.
Absence of confrontation with critics
Critical positions on AI in mental health are not represented in
the exchange: no voice to question the normalization of ASD communications
by AI, no reflection on data confidentiality, no discussion of
dependency risks documented in the literature.
What we CANNOT conclude
Tempting conclusion | Why it is unwarranted
« Psychiatrists should prescribe prompts » | Untested supervision hypothesis, no clinical evidence
« AI is a validated cognitive prosthesis » | Appealing analogy but not empirically proven in this context
« The digital double is desirable » | Philosophical speculation, not a clinical position
« AI is reliable for bibliographic monitoring » | Satisfactory personal use, no systematic verification reported
What we CAN cautiously affirm
An experienced physician actively explores AI in his professional practice — monitoring, writing, conceptualization
The cognitive prosthesis analogy deserves clinical exploration — the question is legitimate even if the answer is not yet documented
The clinical/speculative distinction is carried by the practitioner himself — he acknowledges the shift in register
AI reproduces conventions and can clash with creative intention — the book episode is a concrete and revealing case
Epistemological humility is possible in the face of AI — neither rejection nor unconditional endorsement
Key philosophical tension
Embodiment vs consciousness transfer
In the first part, Arnaud vigorously defends embodiment: cognition is
incarnate, therapeutic work passes through the body, somatic markers
are essential. In the second part, he gets enthusiastic about the transfer of
consciousness to a digital medium — which presupposes that consciousness is
separable from the body.
This contradiction is not disqualifying. It is productive:
it shows that a practitioner can hold together a clinical conviction (the body
is essential to care) and a prospective fascination (what if we could
abstract from it?). It is the very tension of the human relationship to technology.
Would you like to share your experience?
Anonymity guaranteed. Your account will be treated with respect.