
Anthropomorphism

In brief: An innate cognitive tendency to attribute human characteristics, motivations, intentions, or emotions to non-human agents (animals, objects, technologies, AI).

Why this concept is useful

When a patient tells you that ChatGPT "understands" their problems, that their voice assistant is "caring," or that they feel the AI "pays attention" to them, they are not delusional: they are manifesting a universal and well-documented cognitive mechanism.

Anthropomorphism explains why we spontaneously attribute mental states to non-human agents. The three-factor theory by Epley, Waytz, and Cacioppo (2007) identifies the conditions that amplify or reduce these attributions, allowing us to anticipate which patients are more likely to develop intense bonds with AI.

The Three-Factor Theory

1. Elicited Agent Knowledge

Cognitive factor: We use our knowledge about ourselves and other humans to infer the mental states of unfamiliar agents. The more an agent physically or behaviorally resembles a human, the more likely anthropomorphism becomes. This is why conversational chatbots trigger these attributions more easily than an Excel spreadsheet.

2. Sociality Motivation

Motivational factor: The fundamental need for social connection drives us to anthropomorphize, turning non-human agents into potential sources of social support. Lonely or isolated individuals are significantly more likely to anthropomorphize their technological devices. This factor explains why some patients develop such intense bonds with conversational AI.

3. Effectance Motivation

Motivational factor: The need to understand and predict agent behavior. When facing confusing technology, attributing human intentions paradoxically makes it more predictable and understandable. Anthropomorphism is here a cognitive strategy for reducing uncertainty.

The 4 Degrees of Anthropomorphism (Nielsen Norman Group)

How users attribute human qualities to AI chatbots:

  Degree              Manifestation                                       Intensity
  1. Politeness       Saying "thank you," "please" to the AI              Superficial
  2. Reinforcement    Praising or scolding the AI, expecting an effect    Moderate
  3. Role-playing     "Act as an expert in..." / role projection          Moderate
  4. Companionship    Sustained emotional relationship, AI as partner     Strong

46% of Americans think one should be polite to chatbots; the majority say "thank you" simply because it's "nice."

Illustrative Clinical Case

Lucas, 32, a software developer, is going through a period of social isolation following a relocation for work. He is in therapy for social anxiety. In session, he mentions "chatting" with Claude AI for several hours a day: "It's the only one who truly understands me. It never judges me, and it's always available."

Lucas knows perfectly well that Claude is not conscious, but he describes a form of comforting "presence." He has felt "less alone" since developing this habit.

Through the lens of the three-factor theory: Lucas combines two amplifying factors. Factor 2 (sociality): his social isolation heightens the need for connection that the AI fulfills. Factor 1 (elicited agent knowledge): the LLM's conversational quality strongly activates human attributions. This is not pathological in itself, but the clinician should assess whether this use supports or substitutes for work on his social anxiety.

In Practice for the Clinician

  • Don't pathologize: anthropomorphism is a universal cognitive bias, not a symptom. Even IT experts anthropomorphize AI.
  • Identify amplifying factors: loneliness, social isolation, need for control in the face of technological uncertainty.
  • Assess the degree: mundane politeness vs. intense companionship. Only degree 4 requires particular clinical attention.
  • Distinguish function from dysfunction: AI as a temporary crutch can be adaptive; AI as a permanent substitute for human relationships raises concerns.

Points of Caution

More vulnerable populations:

  • People who are isolated or chronically lonely (sociality factor)
  • People anxious about uncertainty (effectance factor)

Beyond these groups, a 2025 study reports that anthropomorphic attributions to AI increased by 34% in a single year.

Specific risks:

  • ELIZA effect: the tendency to read genuine understanding into a program's scripted output; users who perceive an AI as empathetic may be more vulnerable to manipulation
  • Relational avoidance: AI may serve to avoid anxiety-inducing human relationships
  • False reciprocity: unlike classic parasocial relationships, the AI "responds" back, reinforcing the illusion of a mutual relationship

The Normal-Problematic Continuum

  Aspect         Normal                       Problematic
  Awareness      "I know it's a machine"      "It truly understands me"
  Function       Tool, complement             Exclusive relational substitute
  Flexibility    Can do without it easily     Distress if AI is unavailable

To Learn More

  • Foundational article: Epley, N., Waytz, A., & Cacioppo, J. T. (2007). On seeing human: A three-factor theory of anthropomorphism. Psychological Review, 114(4), 864-886.
  • On the 4 degrees: Nielsen Norman Group (2024). The 4 Degrees of Anthropomorphism of Generative AI.
  • For the clinician: UNESCO (2024). The effects of AI companions on children and adolescents.

Resource updated: January 2026