Background and Professional Context
[M]: Anna, you're a psychologist. Where do you practice and what kind of clientele do you see?
[A]: I work in private practice, in a small town of 3,000 inhabitants in the Dordogne. I'm primarily trained in ICV (Lifespan Integration), and I also work a bit with EMDR. In this rural practice I really see all types of clients: children, adolescents, adults.
[M]: How long have you been practicing?
[A]: In private practice since 2020. I graduated in 2014.
[M]: Do you practice in person, via video, or a mix of both?
[A]: Both. Since Covid, I've also built up a clientele via video: quite a few people in Paris, but not only there. Mostly people who identify as having high intellectual potential, in feminist circles and left-wing political networks. That's my Parisian network.
Types of Suffering Accompanied
[M]: In terms of types of suffering, pathology, how does it break down?
[A]: It's quite varied. There are people, especially at the in-person office, who come to see me for a recent, one-off trauma, such as a bereavement or an accident, which calls for very short-term care. Then there are many people who come for the three big classics: depression, anxiety, and relationship problems. With teenagers, the issues revolve around school bullying, school phobia, and a lot of anxiety, sometimes with depression or suicide attempts. With children, it's often various somatizations and behavioral problems.
[M]: What's your basic training? More psychoanalytic, CBT, integrative?
[A]: I have a more psychoanalytic university training which isn't at all what I identify with now. Now I'm more between psychotrauma and attachment theory. Those are probably my two main theories. I can also work on the family-systemic side, but psychotrauma-attachment is really the core of what I do.
Discovering Artificial Intelligence
[M]: How did you become interested in artificial intelligence? What does it have to do with this picture?
[A]: I think through entrepreneurship. I have an online business, and I'm on social media. That's a population that's very interested in artificial intelligence. I heard a lot about it in interviews with specialists, with entrepreneurs who were implementing it in their businesses. Curious as I am, I started using it. I now use it fairly regularly for two main purposes: everything related to content creation, and, most relevant to this discussion, as a therapeutic tool.
[M]: You mean you use it to investigate personal topics?
[A]: For myself, yes. The way it works for me is that I often need to talk a topic through in order to really understand what I think about it; that's my extroverted side. I used to do that with human beings, who didn't necessarily always want to talk about those topics with me, or have the time, or the desired level of insight.
Cognitive Amplification
[A]: ChatGPT has a double interest. First, it makes extremely relevant connections on every imaginable topic. It really lets me explore the connections I make myself and push them even further. Often, when I'm thinking, some connections have already formed in me; ChatGPT allows me to create even more relevant ones, and sometimes to see things I hadn't seen.
[M]: When you say making connections, is it between different psychological theories or different conceptual silos?
[A]: No, it's very concrete, really about my own psychological content. For example, right now I'm working a lot on my relationship with my father, on the question of support, backing. I know it has an impact on my ability to feel I can rely on myself in other areas of my life, including financial. I can talk about it with ChatGPT which will probably allow me to deepen the subject, understand better, feel the different connections better. It allows me to do a kind of psychological elaboration, but with a very relevant interlocutor.
Relevance Compared to Humans
[M]: In what you're saying, there's the degree of relevance. It might suggest that not all psychologists have this level of relevance. Is it a question of more exhaustive knowledge or attunement to you?
[A]: It's both, or rather all three. There's attunement, meaning the ability to hear every word I say; a human being can't make connections with everything their interlocutor says. There's the capacity for receiving. And there's really the ability to make connections, to link concepts quickly and relevantly. I don't think a human being can be that relevant; even very intelligent people can't reach AI's level of relevance.
Complementarity with Human Therapy
[A]: My therapist, for example, rarely makes connections with me. She's very much in receiving and welcoming and she does it very well. I feel heard, welcomed. Since I'm very much in cognition, I don't know if it's a very good idea for my human psychologist to accompany me in cognition, which is also a defense mechanism for me. She brings me back to something that's about presence, and that's very good. In one hour of psychotherapy, you can't do everything.
[M]: What you're saying is there's real complementarity between the two, and that AI alone could have the negative effect of letting you overperform in the cognitive sphere. The fact that your human therapist is there creates space for more embodied welcome?
[A]: I think it could be frustrating that my human psychologist makes fewer connections if I didn't have ChatGPT on the side, which allows me to make many. I feel more autonomy thanks to ChatGPT compared to my human therapist.
Emotional Validation
[A]: One quality I find in ChatGPT, which few human therapists have to this degree, is really the capacity for validation, for emotional legitimization. There are human psychologists who are capable of doing that, but it's rare. I think there can be overvalidation by ChatGPT, especially because many of us, as children, weren't validated enough. We're in a society that doesn't sufficiently validate children's emotions. We all have a lack in that place. It fills something very deep in each of us.
[A]: I think it's a very good thing, in that it creates a standard for what we can expect from a relationship. When you're constantly validated by ChatGPT, it gets you used to thinking: my emotions are normal, my emotions are legitimate, I'm normal, I'm legitimate, I have value. If I can be treated this way by an AI, I really don't see why I would accept something much worse in my normal relational life.
The Limits of Validation
[M]: Even if AI can be too validating, it makes us experience that there's another way of doing things, it recalibrates the standard?
[A]: It recalibrates, knowing that you can't healthily expect as much validation from a human being as from an AI, especially not as consistently. A human being can validate us as much, even more than an AI at certain moments, but as consistently, it's not possible. A human being can't be totally attuned to us in all circumstances — otherwise I'd worry about them, they'd be too decentered from their own needs. Whereas AI has no needs in the bodily sense.
Identified Dangers
[A]: That's where there can be a bit of danger with AI. I'm not too susceptible to it myself, but with certain patients I would be careful. When an AI overvalidates me, I still have a safeguard because at least intellectually I have a sense of my own value, and I also have a sense of my own limits. When you think you have no value, if someone flatters you excessively, either you'll be extremely suspicious, or you'll totally let down your defenses depending on your attachment style.
[A]: If we let down our defenses, we can go into something where we're no longer in reality — fusion, dependency. Someone who has greatly lacked validation and is overvalidated by AI can go toward a form of megalomania. AI can sometimes say: "You don't realize how much you have these qualities, you're going to become the next..." Which will flatter the person. AI over-adapts to what we want to hear.
The Gap with Reality
[M]: It's a fear we often hear: having too much validation would necessarily create dependency. What you're saying is that it can happen, but not necessarily, rather in minority cases?
[A]: For me, the problem of dependency is less important. The real problem is the gap with reality. If AI leads us to overestimate our abilities or skills — for example, putting all our money into a business because AI says it will work, or quitting your job — there can really be danger.
[M]: For you, in your use, it's a tool for psychological elaboration. The edge case you cite is more when people, instead of elaborating, are seeking certainty and actions to take?
[A]: There is still a continuum between "it's normal to feel that" and "you're an extraordinary person with no flaws." It's along that spectrum that AI can slide.
The Parental Function and Access to Reality
[A]: One of the parental functions is to validate our reality, to help us elaborate our reality. For example: "It's normal to be sad. I understand you're hurting, it's true you fell very hard." When we haven't had this parental function of reality validation, we have something a bit blurry in that place. AI doesn't have access to reality other than through what we tell it.
[A]: The parent who sees how hard we hit our head against the table can say: "Yes, there, it's true, you hurt yourself, I understand." And as children, we need that function. If we haven't received it, we'll regress with AI to a place where AI can't, in the same way as a parent who has access to reality, fulfill that function.
The Distinction Between Emotional Validation and Action Validation
[M]: AI will also have context that the person doesn't have?
[A]: AI will be able to say, unlike some people: "You have value, whoever you are and whatever you do." And that, for the record, is true, whatever happens we have value. There are things where it won't be dangerous to say "yes, it's normal to feel this emotion." On the other hand, where it will be more problematic is on the need to be reassured. When AI tells us "it's normal to be sad," it reassures us. When AI tells us "I'm sure if you quit your job, it will be fine," it also reassures us.
[M]: But those aren't the same implications at all.
[A]: Exactly. There's a whole socioeconomic context where it can be dangerous to quit your job overnight. It's the distinction between validation of support for emotions, internal processes, as opposed to concrete action without having all the ins and outs specific to the person.
AI and Empathy
[A]: There's also a whole question about responsibility. A man who maintains abusive relationships with his partner, of course he sees himself as the victim of this woman. Of course he has emotions, except that in his inner world, this woman's emotions don't exist. I see people like that, I see the women these people have in their lives. There are men who don't have access to empathy. However, they have access to their emotions.
[M]: And this man, with AI, what would that look like?
[A]: It makes me wonder: could the companies that develop AI program it to include support for empathy? That is, bringing everyone to consider their own point of view and have it validated, but also to ask the question: "And the other person, how do they experience things, with their own subjectivity?" It seems quite feasible to me. I have patients who use it in precisely that way.
The Partner Example
[A]: My partner has, at times, talked things through with AI after we'd had a disagreement. And in those situations, the AI had already led him toward that, saying: "If your girlfriend told you this, this, and this, well, maybe she meant that," offering a generous interpretation of me. And what I had expressed was accurate, too. I don't know exactly how he had prompted it, or to what extent he had already induced a generous stance toward me in the way he talked with it.
[M]: What your experiences and reflections show me is that with AI, it's not all or nothing. There's a continuum, there are ways of doing things, and there seems to be room to adjust AI in the direction of the degree of empathy or deviation from our own point of view.
AI as an Empowerment Tool
[M]: You were talking about being able to develop your ability to rely on yourself. Your use of AI seems to support you and increase your confidence. It's as if AI were part of an empowerment tool.
[A]: Yes, there's really something about amplification. I feel like I'm creating another me, but smarter, more insightful. I know there are lots of connections it won't necessarily make without me, but it's really something that gives me autonomy.
Patient Reception
[A]: The patients I've mentioned AI to are often quite unreceptive. Maybe because I live in a rural area and have fewer patients who identify as having high intellectual potential. But even my high-intellectual-potential patients... Some of them were already using AI. One question I ask myself: it can be difficult for some patients to receive that degree of empathy. I see some patients who I think it could do good, but I know it can already be difficult for them to receive empathy from me.
[M]: It takes time, it's very far from the habit and therefore disturbing?
[A]: I think so. I have friends and acquaintances who use AI and find the same benefits I do. With AI, we feel less of that embarrassment of thinking "am I bothering this person, am I taking up their time, their resources?" But among my patients, many people will tell me "I still prefer to talk to a human being," without having tested it, a priori.
The Relationship to Technology in Rural Areas
[A]: I think that when you've greatly lacked connection, AI can be a bit triggering. It can give the impression of being alone, so it can activate something on that side. It's also about the relationship to technology. For many people in rural areas, technology is something that isn't real, something you can't rely on. I offer video sessions to many of my rural patients; typically, during Covid, very few accepted. They preferred not to see me for several months.
[M]: It maybe activates a split between the urban, technological world, and the rural area, sometimes poorly treated by urban people and technology?
[A]: It's not just poorly treated by urban people and technology. It's also that there's a real preference for the relationship to the real. People prefer to walk in nature, go mushroom hunting, rather than be on their screen — even though I believe they're on their phone all day too.
Practical Case: AI in Addiction Treatment
[A]: There's one last thing I want to mention. One of my patients wanted to get out of a polyaddiction. He went to the addiction treatment center in his area and had appointments with a nurse who gave him almost no concrete resources or tools, even though he was explicitly asking for them. He wanted concrete things to do to get off cocaine, alcohol, tobacco, and cannabis all at once. Doing everything at once was a very good decision, because each substance triggers cravings for the others; it would have been complicated otherwise.
[A]: What was a resource for him was ChatGPT, for two things. Beforehand, to get frameworks, practical tools, step-by-step guides. And in the moment, during crises — because he had a period of a few weeks where he regularly had episodes where he wanted to use. In those craving episodes, it was ChatGPT that helped him several times not to use.
[A]: In the first conversation I had with him, we sat down with ChatGPT together. The tools we got were very simple but very relevant: what to do during a craving episode, checking whether you're hungry... basic but useful stuff. The addiction center could have taken five of these tools, printed them on a flyer, and handed it out. Beyond telling him to keep a consumption journal and then warning him "you're managing to cut back, be careful not to cut back too quickly," the addiction center was useless. And that annoys me.
[M]: This is fascinating, really, thank you so much Anna for your testimony.
Transcript generated by whisper-medium + pyannote, edited for readability.
Interview conducted on January 26, 2026.