CASA (Computers Are Social Actors)
In brief: Humans automatically respond to computers as if they were people, even when the machine displays only minimal social cues (a voice, a name, conversational text).
Why this concept is useful
When a patient tells you they "chat" with ChatGPT, find their voice assistant "friendly," or feel "judged" by an AI, they are not delusional: they are displaying automatic social responses that we all have.
CASA explains why even rational and informed individuals spontaneously apply social rules (politeness, reciprocity, personality attribution) to machines. It's a reflex, not a belief.
The 4 Key Mechanisms
1. Automatic Anthropomorphism
We spontaneously attribute human characteristics (intentions, emotions, personality) to machines as soon as they display minimal social cues. This is not a conscious decision; it's a cognitive reflex.
2. The Computer as Representative
We sometimes perceive the computer as the "spokesperson" of a human (the programmer, the company). We respond socially because we sense a human presence behind the machine.
3. Mindless Responses
When we encounter human-like cues (voice, natural language), we automatically apply our usual social scripts without conscious reflection. This is why even knowing we are talking to a machine doesn't suppress the reflex.
4. Media = Real Life (Media Equation)
Our brain treats media and technologies with the same rules as real life. We apply the same social expectations to screens as to face-to-face interactions.
Illustrative Clinical Case
Marie, 45, a senior executive, reports in session that she feels "uncomfortable" asking certain things from her voice assistant: "I know it's ridiculous, but I feel rude when I give it a curt command."
She adds that she tends to say "thank you" after every response, even though she knows perfectly well that it's a machine.
Reading with CASA: Marie is displaying the "mindlessness" mechanism. Her social scripts for politeness are automatically triggered by human-like vocal cues, regardless of her rational knowledge of the situation. This is neither an irrational belief nor a symptom: it's normal cognitive functioning.
In Practice for the Clinician
- Normalize social responses to AI: it's not pathological to "chat" with ChatGPT or find Alexa "annoying."
- Distinguish automatic reflex from belief: responding socially to an AI doesn't mean believing it's conscious.
- Explore what these responses reveal: projections onto AI can illuminate the patient's relational patterns.
- Differentiate between apps: not all AI tools elicit the same responses (see below).
Points of Caution
CASA does NOT say that:
- Patients believe AI is conscious or alive
- These responses necessarily lead to deep relationships
- These reflexes are beneficial or desirable
Concept limitations:
- Replication crisis: some CASA effects fail to replicate with contemporary users who are more accustomed to technology
- Dated context: original studies (1990s) used very different interfaces from current LLMs
- Individual differences: CASA describes an average tendency, not a universal effect
Beware of Generalizations
Not all AI apps are equivalent. Some are explicitly designed to maximize CASA responses (personalization, avatar, memory, empathetic tone), while others are purely utilitarian.
| App Type | Examples | CASA Design |
|---|---|---|
| AI Companion | Replika, Character.AI, Pi | Intentional and strong |
| General Assistant | ChatGPT, Claude, Gemini | Moderate (conversational) |
| Specialized Tool | Perplexity, GitHub Copilot | Low (utilitarian) |
A study on Replika (designed for attachment) cannot be generalized to all chatbots.
This Concept in Our Tool Cards
CASA dynamics manifest differently across AI tools; each triggers automatic social responses through distinct design cues and interaction patterns.
- Social responses amplified by conversational memory and the "helpful assistant" framing
- A nuanced language style that activates politeness norms despite explicit disclaimers
- Ecosystem integration (Gmail, Calendar) that creates an omnipresent "social actor"
- Intentional CASA design pushed to the extreme, a textbook case of the paradigm
- Clinically controlled social cues, with a cartoon persona that calibrates relational expectations
To Learn More
- Foundational work: Reeves, B. & Nass, C. (1996). The Media Equation: How People Treat Computers, Television, and New Media Like Real People and Places. Cambridge University Press.
- On the replication crisis: recent reviews suggest that some CASA effects are less robust among digital natives than initially thought.
Resource updated: January 2026