Key Concepts
Essential theoretical frameworks for understanding the psychological dimensions of human-AI interaction.
These resources present concepts from research in psychology, sociology, and philosophy, adapted for clinicians. Each resource includes clinical examples, points of caution, and the concept's limitations.
The goal: to help you better understand and support your patients in their use of AI, while avoiding overgeneralizations across very different technologies.
21 resources available
AI Hallucinations & Confabulations
Epistemology · When AI invents with confidence: understanding false content generated by LLMs
For the clinician: Identifying misinformation risks in clinical uses of AI
Algorithm Appreciation
Cognitive Psychology · When AI advice outweighs human advice: the mirror image of algorithm aversion
For the clinician: Understanding why some patients over-value LLM recommendations compared to human advice
Algorithm Aversion
Cognitive Psychology · Why a single AI mistake is enough to reject it, while we forgive the same mistakes in humans
For the clinician: Identifying the human error vs. AI error double standard in your patients and your own practice
Anthropomorphism
Cognitive Psychology · Attributing human characteristics to non-human entities
For the clinician: Understanding why patients attribute intentions to AI
CASA (Computers Are Social Actors)
Social Psychology · Why we respond to machines as if they were people
For the clinician: Understanding patients' spontaneous reactions to chatbots
Cognitive vs Affective Empathy
Psychotherapy · Understanding vs. feeling with: what AI can and cannot offer
For the clinician: Precisely evaluating what AI offers when a patient finds it "empathetic"
Computational Creativity
Cognitive Psychology · Boden's framework for analyzing creativity as a modelable process, not a mystery
For the clinician: Providing a precise vocabulary (P/H-creativity) for AI-assisted creative processes in therapy
Digital Phenotyping
Digital Psychiatry · Inferring psychological state from everyday digital traces
For the clinician: Evaluating the potential and ethical risks of passive monitoring in mental health
Ecological Momentary Assessment (EMA)
Digital Psychiatry · Capturing psychological experience in real time in the patient's daily life
For the clinician: Integrating ecological monitoring tools into digital clinical practice
Emotional Validation (Linehan)
Psychotherapy · The 6 levels of validation: what AI can and cannot offer
For the clinician: Finely analyzing what patients receive when they feel "validated" by AI
Ethics of Care
Ethics · Why the care relationship, not abstract principles, should guide AI tool design
For the clinician: Evaluating AI tools through the lens of care quality, not just principle compliance
Fictophilia
Social Psychology · Intense emotional attachment to fictional characters and conversational AI
For the clinician: Understanding the lasting attachments some patients develop with their AI
HADD (Hyperactive Agency Detection Device)
Evolutionary Psychology · Why we detect intentional agents even where there are none
For the clinician: Understanding patients' spontaneous attribution of intentions to AI
Informed Consent & AI
Ethics · When clicking "Accept" is not informed consent: the unique challenges of AI in mental health
For the clinician: Transposing your expertise in therapeutic consent to the digital context
Parasocial Relationships
Social Psychology · One-sided attachments to media figures... and to AI
For the clinician: Framing the emotional bonds some patients develop with their chatbot
Precision Psychiatry
Digital Psychiatry · Tailoring psychiatric treatments to individual profiles using digital data
For the clinician: Understanding the promises and limits of algorithmic care personalization
Social Penetration Theory
Social Psychology · How relationships deepen through self-disclosure
For the clinician: Understanding why some patients confide so quickly in AI chatbots
Therapeutic Computational Creativity
Psychotherapy · A framework for integrating creative AI as a digital "third hand" in therapy sessions
For the clinician: Deciding when, how, and under what conditions to use creative AI (Suno, Midjourney) in sessions
Turing Test
Philosophy · If a machine perfectly imitates a human, can we say it "thinks"?
For the clinician: Decoding discourse about AI intelligence and distinguishing imitation from understanding
Uncanny Valley
Cognitive Psychology · The instinctive unease with almost-human entities: robots, avatars, synthetic voices, and "too empathetic" chatbots
For the clinician: Normalizing discomfort with certain AI and guiding toward interfaces that avoid the valley
WEIRD Samples (Western, Educated, Industrialized, Rich, Democratic)
Epistemology · A pervasive sampling bias that shapes AI and our conception of the mind
For the clinician: Understanding the cultural assumptions of LLMs and our discipline
These resources are regularly updated. Last update: January 2026.