Background and career path
[M]: Welcome Isabelle. Could you introduce yourself and describe your work?
[I]: Fundamentally, I've been a clinician in private practice for about twenty years, with a strong appetite for learning new things. I've trained extensively in different approaches. I also love research, and I want a clinical practice that aligns with what is scientifically validated. As clinicians, we can use the scientific method in our practice with patients. We can also question our beliefs by going back to the research. I always say: when I have a good idea, others have had it before me, and I go see what they did with it.
[I]: As part of a doctoral thesis, I went back to the laboratory for experimental research, including therapy protocol research. This led me to discover the world of remote programs. I was quite skeptical at first — I thought therapy had to be embodied in the direct relationship. And science educated me on this. I discovered it could be highly effective remotely, at lower cost, with much more flexibility for people, increased accessibility — a real opportunity for the people we support.
The effectiveness of digital tools in psychotherapy
[M]: When you say “remote,” do you mean video sessions, content available with some degree of interactivity, or a blended approach?
[I]: All of these exist. If I take the Compassionate Mind Training protocol I worked on, I ran groups via video, but you can also have something very basic, a simple PDF. In my research, with a small PDF containing 15-minute exercises over 28 days, you see average depression scores drop from 12 to 6. People who were at the threshold of mild depression end up clinically improved.
[I]: There are all levels. Even video therapy — initially, like many, I had reservations. I still hear many colleagues say “no, I don’t do any therapy via video.” Personally, I find benefits in video sessions, particularly for patients with social anxiety, agoraphobia, or very severe depression. We gain more flexibility, more adaptability. It costs severely depressed individuals much less effort to attend a session. I think we can facilitate the therapeutic alliance — it helps avoid situations of alliance rupture.
First contact with artificial intelligence
[M]: How did you come across artificial intelligence — what was your first contact?
[I]: My first encounters with artificial intelligence concepts go back to university, when I was doing my master's. Twenty years ago, we had courses on AI from a cognitive perspective, studying the models being built at the time. That gave me a somewhat different view of what artificial intelligence was: an understanding of these learning models. Many people project things onto it, because the term "intelligence" gives the impression there's an intelligent monster that's going to consume our skills. But very pragmatically, an understanding of how probabilistic learning works goes way back.
[M]: What you’re pointing to when you talk about projection is the difficulty of defining intelligence, and the overlap between different levels of description. There are philosophical, anthropological, ontological, and epistemological implications.
Photography and impressionism: a historical analogy
[I]: I often use the image of photography. When that technology emerged, realist artists felt challenged, felt their work was being stolen. But it was also during that period that impressionism emerged — a form of art that questioned the meaning of the artist’s message beyond mere copying. I don’t think it took anything away from art, but it profoundly transformed it.
[I]: Artificial intelligence profoundly transforms our relationship with writing, reading, and content. It’s a real change that is frightening, and I understand why it’s frightening, but I’m not convinced it takes away our creativity or our skills. As a psychologist, I feel absolutely no anxiety about it. It’s truly complementary and enriches our practice.
Writing a scientific article with AI
[M]: You actively use AI to help formulate things. Can you give a concrete example?
[I]: An article for the Journal de santé mentale du Québec. I was asked to write a clinical article. I think if it hadn’t been for AI, I wouldn’t have written it — I have an enormous workload. I had all the material, the theory, but the meticulous work of writing takes an extremely long time.
[I]: When I did my thesis five years ago, putting together a bibliography had us tearing our hair out: one little item not properly italicized, hours of adjusting. Now, in two clicks, it's perfect, the reference is correct. It's a gain in time and energy, and there's no creativity involved.
[M]: It’s removing the part of formalism that adds no value.
From spoken to written: overcoming barriers
[I]: We can move from a spoken format to writing very easily thanks to AI. There are many writing barriers — dyslexia, dysorthography. I’m dysorthographic myself. I wrote my thesis without AI; I’m perfectly capable of producing clean written work. But now I do my voice recording, it gives me an outline, I rework it and I get a flawless result. And I use AI to critique what I’ve written, to challenge things. It’s not just about confirming my views. It brings a demand for quality.
[M]: What you’re saying is that AI allows you to leverage skills that are already there without facing a crushing burden. It’s not that AI lets you go faster: it removes so much unnecessary effort that it allows you to go from “I don’t do it” to “I do it.” That’s completely different from simple acceleration.
[I]: It's like epidemiology: the numbers are the same for everyone. We're not in a case where creativity is taken away; it's the tedious, constraining side that gets relieved.
Using AI to challenge yourself
[M]: You use AI to challenge your assumptions. How exactly?
[I]: People think using AI means saying "do this, do that" and getting a ready-made output. That doesn't work. The idea is to bring a theoretical construction: to tell the AI what you want, how you want it to process the information, and what output you expect.
[I]: It's like a calculator: if you don't enter the right equation, if you don't have the reasoning behind it, it won't do anything for you. AI works on the same principle. You need to know what you're building. And AI will help us in the process, either by clarifying the steps or by itself being the process that leads to a result.
[I]: Take my article: I paste it in and ask the AI to reduce the word count, or to critique my reasoning. It tells me: "This part is vague; here you cite this reference, but it's not clear why." You see all your inconsistencies appear.
Transparency and responsibility
[I]: AI helped me formulate how to be transparent about its use in writing. To clarify that I am responsible for the theory, the clinical work, the ethical reflection, but that there was assistance. To put in writing which software I used — GPT and Perplexity.
[M]: What do you think about AI detectors?
[I]: It’s as if someone had said “Word will prevent the French language from evolving.” Word never prevented creativity. Human beings are not diminished in their creativity by artificial intelligence.
AI and learning: the school example
[I]: Take my son's English homework: his friends feed the exercise to AI and it does it for them. That's pointless. I tell him: "Ask AI to correct you and explain your mistakes." Then it's not the AI doing the correcting for you; it shows you your errors and your options for correcting them. You stay in the learning process.
[I]: Schools need to clarify this process. We must teach children to use AI as a process companion, not as copy-paste. This isn’t an AI problem — children aren’t aware of the learning process being offered to them. That’s the real issue.
AI and patients
[M]: Do you have patients who use AI?
[I]: It's more that I encourage them. For isolated patients who have difficulty accessing certain resources, and in whom I sense the ability to discover this new tool, I suggest they try it. The idea isn't to replace psychotherapy. For example, a patient was looking for animal-assisted therapy for a relative and couldn't find it on Google; AI can help her in her search for resources.
[M]: You don’t spontaneously ask about their AI use?
[I]: I bring it up in situations where it can be a problem-solving tool for them. But I'm not worried; no ethical warning bells. As for the studies that dramatize things, I'm not sure they're really valid. I'm more into experimentation than opinion.
The video avatar and the uncanny valley
[M]: You use AI for videos with voice synthesis?
[I]: I published a video of myself, an avatar with the same face and a synthetic voice. My face was a bit strange from my perspective but people weren’t bothered — it was slightly embellished. However, they didn’t like the voice. Several messages: “No Isabelle, your voice is completely wrong.”
[I]: People often comment on my voice. In language, there is infinitely more than meaning. The voice can be in the body, in the throat; we have prosody, rhythm. I’m very sensitive to fluency in my patients. When people say “that’s not your voice,” it’s all that emotion, that corporeality being expressed.
[I]: When doing clinical work, AI will never venture onto that terrain: interpersonal distance, the angle between two chairs, that nervousness in the body. It’s know-how, relational and clinical expertise that will never be threatened by AI.
[M]: The distinction I would make: prosody is technically learnable by AI, but the contextual attunement specific to an interview situated in a particular space and time — that may be the irreducible difference.
[M]: It’s also called the uncanny valley: when something closely resembles a human, a slight discrepancy activates our error and danger detection system.
AI as a creative medium and the playful spirit
[M]: When you first saw yourself with your avatar, how did it feel?
[I]: It was fun, amusing. I work a lot with mental imagery and play in therapy. We can use AI as a medium for creating our mental images. That’s already what artists do with all media — drawing, painting. AI serves that purpose.
[I]: When we start from fear, we project our fears onto AI. When we start from curiosity, from experimentation saying “if it doesn’t work out, it doesn’t matter, we’ll have learned something,” it’s in that playful space that we grow, that we awaken. If you use AI, you need to ask yourself: what emotion am I starting from?
No-code and technical empowerment
[M]: Do you use AI for programming, developing things?
[I]: Creating my website, the online behavioral activation program — technically, I knew nothing about it. AI guided me. It can also help program automations. There’s a whole universe of no-code today.
[I]: A colleague had built an app with a developer; it took a year. He redid the whole thing in a few weeks with no-code and AI. Right now I’m creating webinars: registrations, automated emails, links — before, you needed expensive software.
[I]: We can also create small questionnaire or guidance apps that we share with patients, without depending on expensive applications. Though we must be very careful with patient data.
Data protection
[M]: How do you handle data protection?
[I]: It’s a point on which I’m a bit paranoid. Maintaining discretion and confidentiality is the core of our profession. These are sensitive data. My strategy: make tools freely available online — the guided exercises, the practice sessions — and tell people they can find them on my website. But for personal data, I use a scheduling software with a secure database hosted in Europe.
AI as a compassionate mirror
[M]: Since you’re a compassion specialist, I want to share a personal experience: I had a five-hour philosophical discussion with an AI, and I asked it to write me a compassionate letter. When I read the letter, I cried because it was so accurate. As a specialist in Compassion-Focused Therapy, have you tried this kind of experience?
[I]: I haven’t done it specifically with AI, but it’s a medium I hadn’t yet considered. What matters in what you describe is creating a space between yourself and your inner dialogue — defusing.
[I]: In Compassion-Focused Therapy, we do this with chairs, with paper and pencil, writing a letter to oneself. People make time capsules. It’s true that it’s moving, it’s deeply touching, because you can receive your own compassion, your own warmth, your own kindness.
[I]: Cultivating this space of benevolent inner dialogue — we know scientifically that it’s an incredible resilience factor for mental, social, and physical health.
[M]: The use of AI starts in the relationship with oneself. Creating this space of welcome, listening, clarification. AI plays the role of mirror, of reflective space.
[I]: You told it a lot of things, and what you told it was reflected back to you like a mirror.
[M]: Fascinating, thank you for sharing. Thank you, Isabelle.
Transcript generated by whisper-medium + pyannote, edited for readability.
Interview conducted on February 26, 2025.