Healthcare professional testimonial

Isabelle Leboeuf, clinical psychologist and researcher

“AI doesn’t take away our creativity. What it relieves us of is the tedious, constraining side.” — A practitioner-researcher describes how AI transforms her writing, her scientific rigor, and her relationship with creation.

Isabelle Leboeuf has been a clinical psychologist in private practice for twenty years and holds a doctorate in clinical psychology. A specialist in Compassion-Focused Therapy (CFT), she uses artificial intelligence in her daily professional practice: writing scientific articles, critically self-evaluating her reasoning, and creating digital content. Her testimony illustrates a use of AI that does not replace the clinician but enables them to do what they would not have done without it.

What immediately stands out in Isabelle’s journey is the constant dialogue between clinical practice and scientific research. Twenty years in private practice, a doctoral thesis, continuing education in multiple therapeutic approaches. This dual role as practitioner and researcher has prepared her for a pragmatic relationship with AI — neither blind enthusiasm nor fearful rejection.

Her path toward digital tools began before AI, through the discovery of remote therapeutic programs. Initially skeptical — “I thought therapy had to be embodied in the direct relationship” — she was convinced by the empirical data.

From skepticism to data: when science educates the clinician

It was during her doctoral research that Isabelle discovered the effectiveness of remote therapeutic programs. Her work focused on Compassionate Mind Training (CMT), a set of structured exercises derived from Compassion-Focused Therapy (CFT) developed by Paul Gilbert.

“With a simple PDF format containing 15-minute exercises over 28 days, you see average depression scores drop from 12 to 6 on the BDI. People who were at the threshold of mild depression end up genuinely improved — not just statistically significant, but clinically.”

This discovery transformed her perspective. If a protocol as simple as a PDF could produce clinically significant effects, then digital technology was not the enemy of the therapeutic relationship — it could be its extension. Video therapy, which she had initially met with the same reluctance as many of her colleagues, became a valuable tool, especially for patients suffering from social anxiety, agoraphobia, or severe depression.

From “I don’t do it” to “I do it”: AI as a writing enabler

The most concrete example Isabelle gives of her AI use is writing a clinical article for the Journal de santé mentale du Québec. She had a clinical case she had presented at a conference and theoretical material already prepared — but the meticulous work of writing represented a nearly insurmountable obstacle in a saturated schedule.

“I think if it hadn’t been for artificial intelligence, I wouldn’t have written it.”

This sentence encapsulates a fundamental shift. AI didn’t simply accelerate an existing process — it made possible what would not have happened. The distance between intention and realization collapsed.

Isabelle herself is dysorthographic. She insists: she is perfectly capable of producing structured writing without AI — she wrote her thesis without it. But the cognitive load of formalism (bibliographies, formatting, spelling corrections) consumed a disproportionate amount of energy relative to the added value. With AI, “I do my voice recording, it gives me an outline, I rework the outline and I get a flawless result.”

“When I was putting together the bibliography for my thesis, we were tearing our hair out. Hours and hours adjusting the italics. Now, in two clicks, it’s perfect. It’s a gain in time and energy — there’s no creativity involved. It’s tedious work.”

AI as a personal reviewing committee

What distinguishes Isabelle’s use from simply treating AI as a ghostwriter is the reversal she performs: she uses AI to critique her own work. Once the article is written, she submits it to AI with specific instructions: “Do you see errors in the bibliographic references? Can you critique my clinical-theoretical reasoning?”

And the AI responds: “This part is a bit vague, here you cite this reference but it’s not very clear why you use that one rather than another.” Isabelle sees it as a tool for scientific rigor: “You can see all your inconsistencies appear.”

“People think using AI means saying ‘do this, do that’ and then getting a pre-chewed output. If you do that, it doesn’t work very well. The idea is really to bring a theoretical construction, to tell the AI how you want it to work, and what output you expect.”

The comparison she offers is illuminating: even a calculator does nothing if you don’t know which equation to enter. AI is not an answer machine — it’s a process amplifier. You need to know what you’re building, how you’re building it, and what you want to achieve. Then AI helps clarify the steps, detect the flaws, structure the thinking.

The avatar and the voice: when the body resists synthesis

In her exploratory approach, Isabelle tested the creation of a video avatar with voice synthesis. The result is revealing of the boundary between the technical and the human.

“My face was a bit strange from my point of view, but people weren’t bothered — I think it was slightly embellished. However, they didn’t like the voice at all. Several messages saying: ‘No Isabelle, don’t do that, your voice is completely wrong.’”

As a clinician, Isabelle analyzes this rejection with subtlety. The voice carries far more than linguistic meaning: prosody, rhythm, fluency, bodily emotion. A patient who speaks quickly may be expressing anxiety, racing thoughts, or a manic symptom. These dimensions are at the heart of clinical work — and AI does not (yet) simulate them convincingly.

This is where the interviewer names the concept of the “uncanny valley”: when the resemblance to a human is almost perfect, a slight discrepancy — a failure of attunement — activates our error-detection system. Something feels off, and that something is everything that escapes content and belongs to the bodily, the situational, the relational.

What emotion do we start from? Curiosity versus fear

Isabelle’s most original reflection concerns the emotional relationship with AI. She proposes an interpretive framework drawn from her practice of Compassion-Focused Therapy: our relationship with technology depends on the emotion we start from.

“If you use AI, you need to ask yourself: what emotion am I starting from? Am I approaching it with anxiety? In that case, maybe you need to step back, look at what those anxieties are and how to feel secure. And if you approach it playfully, it’s going to be more creative.”

This analysis reverses the usual question — “Is AI dangerous?” — into a clinical one: “Which emotional system am I starting from when I interact with AI?” When we start from fear, we project our fears. When we start from curiosity and playful exploration, we grow, we awaken. This maps exactly onto Paul Gilbert’s three emotional regulation systems: threat, drive, soothing.

AI as a mirror: self-compassion and reflective space

The most striking moment of the interview comes when the interviewer shares a personal experience: after five hours of philosophical discussion with an AI, he asked it for a compassionate letter. Reading that letter moved him to tears — it was so accurate.

Isabelle’s reaction, as a specialist in Compassion-Focused Therapy, is illuminating:

“What matters in what you describe is creating a space between yourself and your inner dialogue, having this relationship recreated in space and time, defusing. In Compassion-Focused Therapy, we do this with chairs, with paper and pencil, writing a letter to oneself. AI is a medium I hadn’t yet considered.”

AI here is not a therapist. It is a medium for defusion — a support that creates a space between oneself and one’s thoughts, between oneself and one’s inner dialogue. And what moves us in the compassionate letter is not the AI’s performance: it is our own compassion, our own warmth, our own kindness, reflected by a mirror that patiently collected what we told it about ourselves.

As Isabelle summarizes: “You told it a lot of things, and what you told it was reflected back to you like a mirror.”

What this testimony teaches us

Isabelle Leboeuf’s testimony is that of a practitioner-researcher who learned from science to move beyond her preconceptions — first about remote therapy, then about artificial intelligence. Her use of AI is not spectacular: she writes articles, corrects bibliographies, creates content for her website. But it is precisely this ordinariness that is instructive.

AI didn’t turn her into a superwoman. It enabled her to do things she wouldn’t have done otherwise — an article that would have stayed in a drawer, a website she wouldn’t have had the technical skills to create, videos she wouldn’t have had the time to produce. The shift is not from “slow” to “fast” but from “impossible” to “possible.”

And her most original contribution is perhaps clinical: the idea that our relationship with AI begins in our relationship with ourselves. What emotion do we start from? How do we secure ourselves before exploring? Self-compassion as a prerequisite for healthy technology use — that is a path few researchers have explored.

Testimony collected on February 26, 2025. Isabelle Leboeuf is in private practice and holds a doctorate in clinical psychology (Université de Lille, SCALab).

Going further

Testimonials and field reports

This testimony is part of our series on AI use in mental health. Would you like to share your experience?