Turing Test
In brief: A test proposed by Alan Turing (1950): if a machine can converse indistinguishably from a human, can we say it "thinks"? This historic test helps us understand why LLMs seem intelligent — and why that's not enough.
Framework
This resource is written from a naturalist perspective (the mind as natural phenomenon), functionalist (the mind defined by what it does), and individualist (intelligence as a property of individual agents). These assumptions, from the Western philosophical tradition, are not universal. → See other perspectives
Why this concept is useful
When a patient tells you ChatGPT "really understands" their problems, or a family member claims that "AI can now think," they are implicitly referring to the Turing Test: if it responds like a human, it's intelligent, right?
Understanding this test — and especially its limitations — allows the clinician to:
1. Decode media discourse about AI ("GPT-4 has passed the Turing Test!")
2. Help patients distinguish imitation from understanding
3. Nuance expectations (excessive or insufficient) about conversational AI
The Test Explained Simply
The setup: A human interrogator asks questions in writing to two hidden interlocutors — a human and a machine. If they cannot distinguish which is the machine, it has "passed" the test.
Turing's idea: Rather than asking philosophically "what is thinking?", let's pose a practical question: "can the machine imitate a human undetectably?"
What the test measures
The ability to simulate a human conversation convincingly. It's a purely behavioral criterion: only observable performance counts, not what happens "inside."
What the test does NOT measure
Consciousness, understanding, subjective experience, emotions. A machine could perfectly imitate a human without "feeling" or "understanding" anything in the way we mean it.
The famous objection: the Chinese Room
Philosopher John Searle imagined a person locked in a room, manipulating Chinese symbols according to rules, without understanding Chinese. From outside, they appear to "speak" Chinese. From inside, they understand nothing. Searle's argument applies directly to LLMs: on this view, they manipulate symbols without semantics.
Illustrative Clinical Case
Thomas, 28, a developer, consults for a difficult breakup. He reports having "tested" ChatGPT by asking it personal questions: "I was blown away. It understood exactly what I was feeling, sometimes better than my friends. How can anyone say this isn't intelligence?"
Thomas is technically informed (he knows it's a language model), but the quality of responses makes him doubt: "Maybe we underestimate these systems?"
Reading through the lens of the Turing Test: Thomas confuses conversational performance with understanding. ChatGPT "passes" the test in the sense that it produces convincing responses. But this doesn't prove it understands. Exploring with Thomas what he means by "understanding" — and what he's really looking for in these exchanges — can open a reflection on his relational expectations.
In Practice for the Clinician
- Don't pathologize the impression that AI "understands": producing that impression is precisely what these systems are designed to do.
- Distinguish levels: imitation (what AI does), understanding (what it probably doesn't do), consciousness (what we don't know how to measure).
- Explore the need behind the question: when a patient asks if AI "really understands," what are they seeking? Validation? Connection? Reassurance?
- Use the metaphor of the trained parrot: it can repeat sensible sentences without knowing what they mean. It's an imperfect but pedagogical analogy.
Points of Caution
The Turing Test does NOT say that:
- A machine that passes it is conscious or has emotions
- Imitation equals thought (that's precisely the debate)
- Behavioral criteria are sufficient to judge intelligence
Limitations to keep in mind:
- Anthropocentric test: it measures the ability to imitate humans, not intelligence in general
- Easily gamed: even very simple chatbots (ELIZA, 1966) created the illusion of understanding
- Depends on the interrogator: an AI expert easily detects an LLM; a novice, much less so
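To see how little machinery is needed to create an illusion of understanding, here is a minimal ELIZA-style sketch in Python. The rules below are illustrative, not Weizenbaum's original 1966 script: the program only matches keywords and reflects the patient's own words back, with no model of meaning whatsoever.

```python
import re

# Illustrative keyword rules in the spirit of ELIZA (not the original script):
# each pattern maps a fragment of the user's utterance to a canned reflection.
RULES = [
    (re.compile(r"\bI feel (.+)", re.IGNORECASE), "Why do you feel {0}?"),
    (re.compile(r"\bI am (.+)", re.IGNORECASE), "How long have you been {0}?"),
    (re.compile(r"\bmy (\w+)", re.IGNORECASE), "Tell me more about your {0}."),
]
DEFAULT = "Please, go on."

def respond(utterance: str) -> str:
    """Return the first matching reflection. No semantics are involved:
    the program never represents what the words mean."""
    for pattern, template in RULES:
        match = pattern.search(utterance)
        if match:
            return template.format(*match.groups())
    return DEFAULT

print(respond("I feel lost since the breakup"))
# → Why do you feel lost since the breakup?
print(respond("It rained today."))
# → Please, go on.
```

A dozen lines of string matching are enough to make some interlocutors feel "understood" — which is precisely why passing for human in conversation is a weak criterion for intelligence.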
And Today, with LLMs?
In 2025, studies showed that GPT-4.5 was identified as "human" by 73% of interrogators — more than actual humans (67%). Technically, we can say the test is "passed." But this doesn't close the debate: it mainly shows the limitations of the test itself.
The Turing Test remains useful as a starting point for thinking about what these systems really do. But it cannot settle the question of whether a machine "thinks" or "understands" — a question that remains open philosophically and scientifically.
Other Perspectives
The Turing Test is rooted in a Western conception of intelligence (individual, behavioral, dualist). Other philosophical traditions pose the question differently — and can enrich our clinical understanding.
Buddhism: no "self" that thinks
The concept of anatman (non-self) suggests there is no fixed entity that "thinks" — neither in humans nor in machines. The question "does AI think?" becomes ill-posed: there are only interdependent mental processes.
For the clinician: Helps relativize the distinction between "real" vs "false" relationship — all relationships are constructions, including those with AI.
Ubuntu: "I am because we are"
This African philosophy sees being as fundamentally relational. Isolated individual intelligence is an abstraction. A human-with-AI perhaps forms a new relational entity, not two separate entities.
For the clinician: Questions the patient/AI opposition. The patient who says "we think together" may be describing a relational reality.
Animism: multiple interiorities
In Amerindian perspectivism (Descola, Viveiros de Castro), all entities have an interiority — but not necessarily a human-type one. The question isn't "does AI have a soul like us?" but "what type of relationship are we establishing with it?".
For the clinician: A patient who talks to their AI as a sentient being adopts a different relational ontology — not a delusion.
These perspectives don't replace the scientific approach, but invite epistemic humility: our way of asking the question "does AI think?" is not the only possible one. See also: WEIRD Sample
To Learn More
- Foundational article: Turing, A.M. (1950). Computing Machinery and Intelligence. Mind, 59(236), 433-460.
- Major critique: Searle, J. (1980). Minds, Brains, and Programs. Behavioral and Brain Sciences, 3(3), 417-424. The "Chinese Room" argument.
- Encyclopedia: Stanford Encyclopedia of Philosophy - The Turing Test
Resource updated: January 2026