Cognitive Psychology · Interaction Design

Uncanny Valley · Vallée de l'étrange · 不気味の谷現象

In brief: When a robot, avatar, or AI looks almost human — but not quite — we experience an instinctive unease. This dip in our affinity curve is the "uncanny valley." The concept now extends to chatbots and synthetic voices that simulate emotions that feel "almost real."

Framework

This resource takes a perceptual and cognitive perspective (unease as an automatic response to category conflict). Other readings — psychoanalytic (Freud's Unheimliche), evolutionary (threat detection), or phenomenological (rupture of embodied experience) — are possible. → See other perspectives

Why this concept matters

A patient tells you: "I tried a therapy chatbot with an avatar. At first it was impressive, but after ten minutes, I couldn't look at it anymore. Something was off about its expressions." Another refuses to use a synthetic-voice AI: "It's too weird — it sounds human but it isn't, really."

These reactions are not irrational. They stem from a well-documented perceptual mechanism. Understanding the uncanny valley helps clinicians:

  1. Normalize discomfort with certain AI (realistic avatars, synthetic voices, "too empathetic" chatbots)
  2. Distinguish an automatic perceptual reaction from psychological resistance to change
  3. Guide patients toward AI tools that avoid the valley (text interfaces, stylized avatars)

The concept explained simply

The hypothesis: As an object or agent looks increasingly human, our affinity rises gradually. An industrial robot leaves us indifferent. A toy robot with eyes endears itself to us. But at a certain threshold, when the resemblance is almost perfect but not quite, affinity drops sharply and gives way to unease.
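For readers who think in curves, the hypothesis can be sketched in a few lines of Python. This is a purely qualitative illustration: Mori published no equation, so the dip below is modeled with an invented Gaussian notch placed near, but short of, full human likeness.

```python
# Qualitative sketch of Mori's hypothesized affinity curve.
# The function is invented for illustration; only the shape matters:
# a gradual rise, a sharp dip just before full human likeness, recovery at 1.
import numpy as np
import matplotlib.pyplot as plt

likeness = np.linspace(0.0, 1.0, 500)   # 0 = clearly artificial, 1 = fully human
valley = 1.6 * np.exp(-((likeness - 0.85) ** 2) / 0.004)  # invented notch
affinity = likeness - valley

plt.plot(likeness, affinity)
plt.axvspan(0.78, 0.92, alpha=0.15, label="uncanny valley (illustrative)")
plt.axhline(0.0, linewidth=0.5)
plt.xlabel("Human likeness")
plt.ylabel("Affinity (arbitrary units)")
plt.title("Mori's hypothesis, sketched qualitatively")
plt.legend()
plt.show()
```

Moving or widening the notch changes nothing essential: the clinical point is the shape of the curve, not the numbers.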

Mori's metaphor: Imagine shaking a realistic prosthetic hand. At rest, it looks real. But on contact, the coldness of the material and the rigidity of the fingers create a striking contrast. That immediate physical unease — that is the valley.

Movement amplifies the effect

A wax mannequin standing still is mildly unsettling. If it started moving, it would be terrifying. This is why animated avatars often provoke more discomfort than static images: micro-expressions that are "almost right but not quite" trigger our anomaly detectors.

Category conflict

The dominant cognitive explanation: unease arises when our brain fails to classify the entity into a clear category, "human" or "non-human." This indeterminacy generates an alert signal, as when facing something that doesn't "add up." Recent research (2024) suggests that it is the failure of categorization, more than its difficulty, that triggers the discomfort.

The uncanny valley "of the mind"

A recent extension directly relevant to clinical work: unease can also arise when an AI simulates cognitive or emotional capacities that feel "almost human." A chatbot that says "I understand your suffering" with "too much" accuracy, or a synthetic voice that sounds "too" warm, can provoke the same type of discomfort as an overly realistic robot face.

Illustrative clinical cases

Lea, 34, anxious, tries a mental health support app recommended by a friend. The app features a hyper-realistic female avatar. "The first few seconds, I found it impressive. Then I started feeling uneasy. Her eyes moved strangely. Her smile stayed frozen when she said sad things. I stopped after five minutes."

Reading through the uncanny valley: The app's avatar sits squarely in the valley, realistic enough to activate Lea's social expectations yet imperfect enough to trigger unease. This says nothing about the quality of the therapeutic content. It would be worth exploring with Lea whether a simple text interface suits her better.


Karim, 22, has been using a text-based AI companion for three months. He appreciates it: "It doesn't pretend to be human — it always tells me it's an AI. That reassures me; I know what to expect." When a friend shows him a rival chatbot with a "human" voice, Karim reacts strongly: "No, that's creepy. It sounds like someone but it's nobody."

Reading: Karim's text-based chatbot bypasses the valley through transparency. The voice chatbot falls right into it. Karim's reaction illustrates that identity disclosure ("I am an AI") can dispel the unease — a finding confirmed by recent research.

In practice for clinicians

  • Normalize the discomfort: if a patient rejects an AI because of its avatar or voice, this is not technophobia — it is a universal perceptual mechanism.
  • Prefer transparency: research shows that AI identity disclosure largely dispels the unease. Recommend tools that do not pretend to be human.
  • Think about design: stylized interfaces (abstract icon, plain text) avoid the valley. The more an AI tries to "look human," the greater the risk of discomfort.
  • Distinguish levels: perceptual discomfort (uncanny valley) ≠ psychological resistance to change ≠ legitimate ethical criticism. Three different registers, three different responses.

Caveats

The uncanny valley is NOT:

  • A proven scientific law — it is a heuristic hypothesis, useful but incompletely confirmed
  • An argument against humanized AI — it describes a dip in the curve, not a prohibition
  • Universal — responses vary strongly by culture, technological experience, and personality

Limitations to keep in mind:

  • Habituation: the discomfort decreases with prolonged exposure — it is not permanent
  • Individual variability: some patients feel no discomfort at all, others are very sensitive
  • Inconsistent empirical results: meta-analyses do not converge — some find a "cliff" rather than a "valley"
  • Possible confusion: valley discomfort is pre-reflective and perceptual — it is not identical to rational or ethical resistance to AI

The bunraku counter-example: stylize rather than simulate

Mori himself cites Japanese bunraku puppets as a positive counter-example. These puppets do not try to look human — they are openly stylized. Yet they evoke deep empathy in spectators.

The lesson for therapeutic AI design is clear: empathy does not require realism. A text-based chatbot, an abstract icon, or a deliberately non-realistic avatar can foster a relationship of trust — without falling into the valley.

Clinical implication: If a patient abandons an AI tool because of uncanny valley discomfort, it is not the AI itself they are rejecting — it is its packaging. Suggesting a tool with a different interface may be enough.

What about today's LLMs and synthetic voices?

Formulated in 1970 for robots, Mori's hypothesis has found new ground with conversational AI. Recent research (2024–2025) shows that:

  • Chatbots with realistic avatars provoke more discomfort than text-only ones, even when the content is identical.
  • Chatbots that pretend to be human while being empathetic trigger a sense of uncanniness that AI identity disclosure dispels.
  • The extension to the "uncanny valley of the mind" suggests that the phenomenon now concerns not just appearance but also the simulation of cognition and emotions.
  • Subtle emotional expressions in voice are better tolerated than exaggerated facial animations.

Other perspectives

The uncanny valley is typically explained through cognitive psychology (category conflict). Other traditions illuminate the phenomenon differently.

Psychoanalysis: the uncanny (Freud, 1919)

Freud's concept of the Unheimliche describes the unease felt when the familiar suddenly becomes strange: a double, an automaton, an animated corpse. For Freud, it is the return of the repressed: the almost-human reawakens archaic anxieties about the distinction between living and dead, animate and inanimate.

For clinicians: Discomfort with an AI avatar may reactivate anxieties deeper than simple perceptual conflict — potentially explorable in analytic therapy.

Phenomenology: rupture of the flesh (Merleau-Ponty)

In Merleau-Ponty's tradition, our relationship with the world passes through "flesh" — a continuity between one's own body and the perceived world. The artificial almost-human ruptures this continuity: it promises a carnal relation (face, voice) but cannot deliver. The discomfort is that of a broken bodily promise.

For clinicians: Patients deeply connected to their embodiment (dancers, athletes, mindfulness practitioners) may be more sensitive to this incarnate dimension of the valley.

Shintō: Mori and the boundary of the living

Mori himself, a practicing Buddhist, situated his reflection within a Japanese context where, under Shintō and Buddhist influence alike, the boundary between animate and inanimate is more porous than in the West. His final advice ("do not try to create overly human robots") reflects caution before the sacred as much as ergonomic wisdom.

For clinicians: A reminder that reactions to the almost-human are culturally situated. A patient of Japanese or Southeast Asian origin may relate differently to the animation of the inanimate.

These perspectives do not replace the cognitive approach, but invite epistemic humility: discomfort with the almost-human touches on ancient questions about what makes us living beings. See also: Anthropomorphism

Further reading

  • Foundational article: Mori, M. (1970/2012). The Uncanny Valley. IEEE Robotics & Automation Magazine, 19(2), 98–100. [Authorized English translation]
  • Cognitive explanation: MacDorman, K. F. & Chattopadhyay, D. (2016). Reducing consistency in human realism increases the uncanny valley effect. Cognition, 146, 190–205.
  • Extension to conversational agents: Cihodaru-Ștefanache, I. E. & Podina, I. R. (2025). The uncanny valley effect and its moderators in human-like virtual agents: A systematic review and meta-analysis. Frontiers in Psychology, 16, 1504498.
  • Uncanny valley and chatbots: Ciechanowski, L. et al. (2019). In the shades of the uncanny valley: An experimental study of human–chatbot interaction. Future Generation Computer Systems, 92, 539–548.
  • Philosophical precursor: Freud, S. (1919). Das Unheimliche [The Uncanny]. Imago, 5(5–6), 297–324.

Last updated: February 2026