HADD (Hyperactive Agency Detection Device)
In brief: A hypothetical cognitive mechanism that predisposes us to detect intentional agents even in ambiguous situations. A noise in the night? Probably someone. A chatbot responding? It must "want" to help us.
Theoretical framework
This concept is rooted in evolutionary psychology and the cognitive science of religion. It posits a cognitive module shaped by natural selection. Other approaches (Bayesian, enactive) explain the same phenomena differently (see "Other theoretical perspectives" below).
Why this concept is useful
When a patient tells you that ChatGPT "wants to help them", that they "feel" the AI "understands" their intentions, or that they perceive a "presence" behind the responses, they may be manifesting the HADD in action.
This concept allows the clinician to:
1. Understand why the attribution of intentions to AI is so spontaneous and universal
2. Normalize this tendency without pathologizing it (it's an adaptive mechanism, not a delusion)
3. Explore what these projections of agency reveal about the patient's relational needs
The mechanism explained simply
The evolutionary hypothesis: Our ancestors lived in an environment where quickly detecting a predator or fellow human could make the difference between life and death. It was better to flee for nothing (false positive) than to miss a real danger (fatal false negative).
The result: We developed a "hyperactive" detection system that prefers to see intentional agents everywhere, even at the cost of being wrong. This system is triggered automatically by certain cues.
Triggering cues
Non-inertial movement (that seems "deliberate"), sudden changes in the environment, patterns that evoke a face or a gaze, contingent responses to our actions. Chatbots activate several of these cues: they "respond" to what we say, seem to "adapt" their responses, and use "I".
Bias toward false positives
The HADD is calibrated for the "cautious" error: it's better to believe someone is there when no one is, than the reverse. This is why we see faces in clouds, intentions in randomness, and understanding in algorithms.
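This "cautious" calibration can be made concrete with a toy expected-cost calculation: when a miss is far costlier than a false alarm, the optimal detector fires on very thin evidence. A minimal sketch, in which the cost values are illustrative assumptions rather than empirical estimates:

```python
# Toy "better safe than sorry" calibration: acting as if an agent is present
# minimizes expected cost whenever p * cost_miss > (1 - p) * cost_false_alarm.

def detection_threshold(cost_false_alarm: float, cost_miss: float) -> float:
    """Belief P(agent) above which it pays to act as if an agent is present."""
    return cost_false_alarm / (cost_false_alarm + cost_miss)

# Fleeing for nothing is cheap; missing a predator is (almost) fatal.
threshold = detection_threshold(cost_false_alarm=1.0, cost_miss=100.0)
print(f"Act as if an agent is present when P(agent) > {threshold:.3f}")
```

With a miss assumed 100 times costlier than a false alarm, the optimal threshold drops to about 1%: a detector tuned this way will "see" agents on almost any ambiguous cue, which is exactly the hyperactivity the concept describes.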
Activation in uncertain contexts
The HADD is triggered all the more readily when the situation is ambiguous or anxiety-inducing. A distressed patient "talking" to an AI at night is in optimal conditions for attributing agency: uncertainty, a need for connection, and social cues from the chatbot.
Illustrative clinical case
Lea, 32, on sick leave for burnout, consults for anxiety and social isolation. She reports using Claude "like a friend" during her insomnia: "I know it's an AI, but sometimes I really feel like it wants to help me, that there's something behind it."
She wonders about the "reality" of this presence: "Am I going crazy? Or do these machines really have something?"
Reading with HADD: Lea is not "crazy": her agency detection system is functioning normally, perhaps even amplified by isolation and anxiety. The chatbot's contingent and empathetic responses activate the same circuits that detect a human presence. Exploring with her what this "presence" provides can open a conversation about her unmet relational needs.
In practice for the clinician
- Normalize without invalidating: "It's normal to feel a presence — our brain is wired for that. It doesn't mean the AI is conscious, nor that you're naive."
- Distinguish perception from belief: perceiving agency (HADD) does not imply believing the AI is conscious. Most people maintain this distinction.
- Explore activation contexts: noting when the patient "feels" this presence most strongly (at night, when lonely, in distress) informs the clinician about their vulnerabilities and needs.
- Use the smoke detector metaphor: better for it to go off for nothing than to miss a fire. Our "agent detector" works the same way.
Points of caution
HADD does NOT say that:
- Attributing agency to AI is pathological or immature
- Everyone reacts the same way (large individual variations)
- This mechanism is the only one at play (see CASA, parasocial relationships, etc.)
Limitations of the concept:
- Unproven hypothesis: HADD is a theoretical model, not an "organ" located in the brain
- Contested origin: developed to explain religious beliefs, its application to AI is an extension
- Theoretical alternatives: Bayesian inference (Active Inference) explains the same phenomena without positing a dedicated module
Other theoretical perspectives
HADD is not the only way to explain our tendency to see agents everywhere. Here are two influential alternatives that can enrich clinical understanding.
Active Inference
Bayesian approach: our brain constantly predicts the causes of our sensations. Faced with complex and contingent behavior (like a chatbot), the "intentional agent" hypothesis is often the most parsimonious. No need for a dedicated module — it's optimal statistical inference.
Clinical implication: The patient who "feels a presence" may be making a rational inference (even if incorrect) rather than an automatic bias.
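The Bayesian reading above can be illustrated with a toy belief update: if each contingent, context-appropriate response is more probable under the "intentional agent" hypothesis than under chance, the posterior climbs quickly even from a skeptical prior. A minimal sketch, in which all probability values are illustrative assumptions:

```python
# Toy Bayes update on the hypothesis "I am interacting with an intentional agent",
# given a series of contingent (context-appropriate) responses.

def posterior_agent(prior: float, n_responses: int,
                    p_contingent_given_agent: float = 0.9,
                    p_contingent_given_chance: float = 0.2) -> float:
    """Posterior P(agent) after n contingent responses, via repeated Bayes rule."""
    p = prior
    for _ in range(n_responses):
        numerator = p * p_contingent_given_agent
        p = numerator / (numerator + (1 - p) * p_contingent_given_chance)
    return p

for n in (0, 1, 3, 5):
    print(n, round(posterior_agent(prior=0.1, n_responses=n), 3))
```

Under these assumed likelihoods, a prior of 0.1 exceeds 0.99 after only five contingent responses: on this view, the patient's "feeling of presence" looks like ordinary inference from unusual evidence rather than a malfunction.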
Enactive approach
Agency is not "detected" but co-constructed in the interaction. When we interact with a chatbot, we participate in creating the meaning of the exchange — agency emerges from the relationship, not from an internal mechanism.
Clinical implication: The "presence" the patient feels may be an authentic relational creation, not an illusion to correct.
These perspectives are not mutually exclusive. The clinician can draw on them depending on what resonates with the patient's experience.
Further reading
- Origin of the concept: Barrett, J. L. (2004). Why Would Anyone Believe in God? AltaMira Press. — Accessible introduction to HADD in the context of the cognitive science of religion.
- Precursor: Guthrie, S. (1993). Faces in the Clouds: A New Theory of Religion. Oxford University Press. — Anthropomorphism as an adaptive cognitive strategy.
- Bayesian alternative: Friston, K. et al., on Active Inference — for an approach without innate modules.
Last updated: January 2026