Background and career path
[M]: Béatrice, thank you for being here for this interview. Would you start by introducing yourself and describing your career path?
[B]: I'm a clinical psychologist by training. I trained in family therapy and Solution-Focused Interventions. In the 2000s, I trained with David Servan-Schreiber in EMDR, of which I'm a certified practitioner. While working in addiction treatment, I realised that EMDR healed trauma but not attachment issues — that's what led me to schema therapy.
[B]: In 2007, I started working with Géraldine Tapia, a researcher at the University of Bordeaux. We conducted studies showing that EMDR alone did not reduce drug consumption. It was by introducing schema therapy and imagery work — the therapist caring for the vulnerable child, reconstructing past scenes — that we observed a decline in addiction.
[B]: In 2021, I brought over Eckhard Roediger and Arnoud Arntz. Their approach opened an extraordinary door for me. Since then, I've become an ISST-certified therapist, currently pursuing supervisor certification. In parallel, I founded CEFTI to teach schema therapy in France. We have trained over 1,800 people and certified around 200 professionals.
Discovering AI: from administration to training
[M]: How did you discover AI and what attracted you to these tools?
[B]: At the beginning, it was the basics: I had an email to write, I didn't feel like writing a long one, so I put it into AI and it rephrased it for me. Gradually, for training, I used it for syllabi: hour-by-hour programmes, learning objectives. Why waste time doing that? Afterwards, I check, correct and verify.
[B]: Since my international training courses are in English, AI helped me translate almost in real time and produce summaries. When I didn't understand something, I'd paste the passage back into the AI and ask, “Can you explain this clearly and give me examples?” It allowed me to understand things in depth and even to find additional studies.
[M]: What you describe is a trajectory where the ease provided by AI led you to develop your curiosity rather than to lose skills.
[B]: I have a bit of a dopamine issue; I constantly need stimulation. With AI, I rediscover what I experienced as a child with my encyclopaedias: browsing randomly, discovering things. It pushes me to be more curious. On the other hand, I've become a bit lazier with formatting: a table, for instance, I no longer do it myself.
Personalised AI: style and images
[M]: Which AI tools do you use?
[B]: I have the paid version of ChatGPT. I like it because it knows me. Sometimes it produces a rather dry text; I tell it “make it more like me” and it rewrites in my style. It has understood that I prefer something warm, not too formal. And I also told it to stop with the superlatives.
[B]: I like having it create images for my PowerPoints. At first, it made biblical, golden images. Now it understands that I prefer children's drawings, colourful and warm.
[M]: ChatGPT doesn't have a fixed style you have to adapt to: you give it instructions and it adapts to your preferences.
The reparenting image: when AI extends imagery work
[B]: Since September, quite a few patients had been telling me they were going on Gemini to generate an image of an adult holding the child they once were. I've seen quite a few images like that. It extends the imagery work between sessions.
[M]: It allowed them to extend the reparenting through a concrete image, where in session we do things in imagination. What was their feedback?
[B]: At first, they would tell me “it doesn't do anything for me, it's just fun.” When I asked them to show me the image and told them it was moving, they eventually admitted: “Yes, it does me good to have it. It reminds me that she is safe.”
[M]: Was the shame related to the vulnerable emotion or to the use of AI?
[B]: To the use of AI. Many people say it's not proper, that it's American, that it's polluting. And then, many are afraid of being seen as stupid. That's when I tell them: someone who is stupid and uses AI is still stupid.
AI stigma and gender stereotypes
[M]: What you're pointing out is that there is already a stigma around using AI, and that this stigma could prevent people from talking about its positive effects.
[B]: There may also be the Vulnerable Child aspect. Women tell me more about this emotional effect than men do. I wondered whether, for women, it might trigger the stereotype that women are less intelligent. Using AI would confirm that. Whereas men would be entitled to use AI, but using it to care for their vulnerable child — that's a bit too much.
[M]: AI as a stereotype threat activator! That's an excellent research question.
[B]: In training, when I ask “Who is good at maths?”, I have 90% women in the room and three or four raise their hand. The stereotypes are there: girls supposedly aren't good at maths, even though studies show the opposite.
The Healthy Adult letter generated by AI
[B]: Two patients fed their imagery recordings into ChatGPT. I had told them about the possibility of writing letters as the Vulnerable Child and the Healthy Adult. The AI wrote them a letter as the Healthy Adult to their child.
[M]: And what did that do for them?
[B]: They told me they had cried. It had moved them. The feeling of being understood by someone.
[M]: They felt understood by AI. I know this sparks a lot of debate around notions of empathy, feeling understood when AI isn't a real person. But in any case, what you're saying is that it produces a real effect in patients.
[B]: AI is not at all empathic, that's not the point. In the imagery audio, there is empathy, there is the Vulnerable Child and the Healthy Adult protecting them. The AI quickly understood that the Healthy Adult's job was to protect them. It only writes a summary, but it touches exactly the points that matter.
[M]: There is therapeutic material that serves as a seed, and the AI grows that seed.
AI as a between-session coach
[B]: Some patients go further. The AI proposes a therapeutic plan with breathing exercises, walking. They can build an entire programme to get better. Some would say a coach would be better, but a coach costs 80 euros an hour and you see them once a month.
[M]: How do you perceive that as their therapist? Competition, complementarity...?
[B]: It's not competition. I work a lot on “small steps” between sessions. As a result, I no longer have to worry, because they're the ones doing them. When they come back and tell me they didn't follow the programme, that's very interesting: we can work on what's preventing them.
A clinical case: when schemas resist coaching
[B]: A patient had everything he needed, he was truly getting better. And then he came back and he wasn't better. He told me he had stopped: “None of this is any use.” Going back to imagery work, we discover that as a child, he came first in his class, but his mother hadn't come to the prize-giving. And the day he brings her all his prizes, she slaps him because he had dirtied his shirt. Even when he won, it was no use. So getting better — in the end, that's not going to be any use either.
[M]: This reveals that AI will trigger effects, but those effects are integrated into the overall psychic dynamics. That's where you need a psychologist with a broader vision.
Limits of AI and discernment
[B]: AI has no empathy, it's not human, it only knows what you feed it. It can help, but it can also say anything to please you. It's also true that it's very American, that it uses a lot of energy, that it will eliminate certain jobs. But is it actually interesting as a job, for example, sifting through documents? That's a question worth asking.
[B]: My patients are often psychologists or psychiatrists. They don't feel threatened — and they may be partly wrong about that, actually. Doctors are aware that AI outperforms them on certain diagnoses, like detecting breast cancer, for instance[1]. Most psychologists I talk to say they don't use it: they're interested but don't really see what it's going to bring them.
Future projects: resilience workshops
[B]: I'm preparing resilience workshops — self-awareness, emotional regulation, mental strength, cognitive biases, coping, sociability, communication, optimism. When the CPF validates them, I'll revamp them by integrating AI to make them more engaging, more educational. I'm also working on a Serious Game with the École Nationale Supérieure de Cognitique.
[M]: These workshops are aimed at professionals and also at the patients they will support?
[B]: Yes. The people I train will get everything for free, will be able to practise and use it with their own clients. Roediger confirmed it: group work is even more effective than individual therapy in schema therapy.
Discernment as a compass
[M]: Any final words or advice for colleagues?
[B]: You need to stay alert and not use AI as a sole source. I see it as a tool that helps me achieve what I want to do, rather than a tool that does everything for me. If I have no motivation or ideas, nothing happens. If I tell it “write me a text about schema therapy”, it will be bland. But if I give it plenty of detail, it will produce a decent text — which will still need correcting.
[M]: Your testimonial illustrates this well: you are the one driving the reflection, the curiosity. AI adapts to the framework you give it.
[B]: That's what discernment is. When you look at social media, if you don't have discernment, it's a mess. And I always ask for the sources, and when it gives them to me, I go and verify them.
[1] Some of these diagnoses rely on specialised medical imaging AI (mammography, dermatology, ophthalmology), not on large language models (LLMs) like ChatGPT. These are very different types of AI systems.
Transcript generated by whisper-medium + pyannote, edited with the help of Claude for readability.
Interview conducted on 19 February 2026.