Key Concepts

Essential theoretical frameworks for understanding the psychological dimensions of human-AI interaction.

These resources present concepts from psychology, sociology, and philosophy research, adapted for clinicians. Each resource includes clinical examples, points of caution, and the concept's limitations.

The goal: to help you better understand and support your patients in their use of AI, while avoiding overgeneralization across very different technologies.

21 resources available

AI Hallucinations & Confabulations

Epistemology

When AI invents with confidence: understanding false content generated by LLMs

For the clinician: Identifying misinformation risks in clinical uses of AI

Algorithm Appreciation

Cognitive Psychology

When AI advice outweighs human advice: the mirror image of algorithm aversion

For the clinician: Understanding why some patients over-value LLM recommendations compared to human advice

Algorithm Aversion

Cognitive Psychology

Why a single AI mistake is enough to reject it, while we forgive the same mistakes in humans

For the clinician: Identifying the human error vs. AI error double standard in your patients and your own practice

Anthropomorphism

Cognitive Psychology

Attributing human characteristics to non-humans

For the clinician: Understanding why patients attribute intentions to AI

CASA (Computers Are Social Actors)

Social Psychology

Why we respond to machines as if they were people

For the clinician: Understanding patients' spontaneous reactions to chatbots

Cognitive vs Affective Empathy

Psychotherapy

Understanding vs feeling with: what AI can and cannot offer

For the clinician: Precisely evaluating what AI offers when a patient finds it "empathetic"

Computational Creativity

Cognitive Psychology

Boden's framework for analyzing creativity as a modelable process, not a mystery

For the clinician: Providing a precise vocabulary (P/H-creativity) for AI-assisted creative processes in therapy

Digital Phenotyping

Digital Psychiatry

Inferring psychological state from everyday digital traces

For the clinician: Evaluating the potential and ethical risks of passive monitoring in mental health

Ecological Momentary Assessment (EMA)

Digital Psychiatry

Capturing psychological experience in real time in the patient's daily life

For the clinician: Integrating ecological monitoring tools into digital clinical practice

Emotional Validation (Linehan)

Psychotherapy

The 6 levels of validation: what AI can and cannot offer

For the clinician: Analyzing precisely what patients receive when they feel "validated" by AI

Ethics of Care

Ethics

Why the care relationship — not abstract principles — should guide AI tool design

For the clinician: Evaluating AI tools through the lens of care quality, not just principle compliance

Fictophilia

Social Psychology

Intense emotional attachment to fictional characters and conversational AI

For the clinician: Understanding the lasting attachments some patients develop with their AI

HADD (Hyperactive Agency Detection Device)

Evolutionary Psychology

Why we detect intentional agents even where there are none

For the clinician: Understanding the spontaneous attribution of intentions to AI by patients

Informed Consent & AI

Ethics

When clicking "Accept" is not informed consent: the unique challenges of AI in mental health

For the clinician: Transposing your expertise in therapeutic consent to the digital context

Parasocial Relationships

Social Psychology

Unilateral attachments to media figures... and AI

For the clinician: Framing the emotional bonds some patients develop with their chatbot

Precision Psychiatry

Digital Psychiatry

Tailoring psychiatric treatments to individual profiles using digital data

For the clinician: Understanding the promises and limits of algorithmic care personalization

Social Penetration Theory

Social Psychology

How relationships deepen through self-disclosure

For the clinician: Understanding why some patients confide so quickly in AI chatbots

Therapeutic Computational Creativity

Psychotherapy

A framework for integrating creative AI as a digital "third hand" in therapy sessions

For the clinician: Deciding when, how, and under what conditions to use creative AI (Suno, Midjourney) in sessions

Turing Test

Philosophy

If a machine perfectly imitates a human, can we say it "thinks"?

For the clinician: Decoding discourse about AI intelligence and distinguishing imitation from understanding

Uncanny Valley

Cognitive Psychology

The instinctive unease with almost-human entities — robots, avatars, synthetic voices, and "too empathetic" chatbots

For the clinician: Normalizing discomfort with certain AI and guiding toward interfaces that avoid the valley

WEIRD Sample

Epistemology

Western, Educated, Industrialized, Rich, and Democratic: the sampling bias that shapes AI training data and our conception of the mind

For the clinician: Understanding the cultural assumptions of LLMs and our discipline

These resources are regularly updated. Last update: January 2026.