Algorithm Aversion
In brief: We forgive a human error more easily than an algorithmic error of the same magnitude. After seeing an algorithm make a single mistake, we tend to reject it — even when it systematically outperforms human judgment overall. This bias, formalized by Dietvorst, Simmons & Massey (2015), plays a central role in the adoption (or rejection) of AI tools in mental health.
Why this concept is useful
As a clinician, you face algorithm aversion on two fronts. On one hand, your patients may reject an AI tool outright after a single inadequate response, even though they would have tolerated the same error from a human professional. On the other, you yourself are not immune to this bias when evaluating a clinical decision support tool.
Understanding this mechanism helps distinguish rational skepticism (entirely appropriate when facing imperfect tools) from disproportionate distrust that leads to rejecting potentially useful resources based on a single incident.
The mistake to avoid: confusing aversion with caution
It would be tempting to use this concept to delegitimize any reluctance toward AI tools: "You're simply suffering from algorithm aversion." That would be a dangerous misinterpretation.
Algorithm aversion describes a specific bias: a disproportionate reaction to error, not an unjustified one. In mental health, caution toward AI tools is often perfectly rational — the stakes are high, efficacy data is limited, and ethical risks are real.
The concept does not say "trust algorithms." It says: evaluate your reactions to errors with the same rigor, whether they come from a human or a machine.
The 4 mechanisms of algorithm aversion
1. Forgiveness asymmetry
An algorithmic error is judged more harshly than a human error of the same magnitude. The algorithm is held to an implicit standard of perfection: we tolerate human mistakes ("to err is human"), but we view a machine's error as a fundamental failure. This asymmetry intensifies when the error is directly observable.
2. Need for control and agency
Delegating a decision to an algorithm means giving up a degree of control. This surrender is psychologically costly, especially in domains perceived as requiring human expertise. Research shows that simply allowing users to slightly modify an algorithm's recommendations — even in trivial ways — significantly reduces aversion.
3. Belief in unique human expertise
The conviction that humans possess an intuition, a "clinical sense," or a contextual understanding inaccessible to algorithms. This belief is sometimes justified (the singularity of lived experience does indeed escape statistical models), sometimes overestimated (studies show that clinical judgment is also subject to numerous cognitive biases).
4. Context sensitivity
Aversion is not uniform. It is stronger when the domain is perceived as subjective (psychotherapy vs. finance), when personal stakes are high (my health vs. an abstract calculation), and when errors are visible rather than statistical. Mental health combines all three factors — making it particularly fertile ground for algorithm aversion.
What the research shows
Algorithm aversion is one of the best-documented biases in human-AI interactions. Here are three findings particularly relevant to clinical practice.
In radiology: AI alone outperforms AI-assisted humans
Agarwal, Rajpurkar et al. (2023) showed that AI-assisted radiologists performed no better than unassisted radiologists — and that both performed worse than AI alone. Practitioners systematically dismissed AI suggestions that contradicted their initial judgment.
In medicine: patients ignore good LLM suggestions
Bean et al. (2025) observed that LLMs suggested relevant medical conditions in 65 to 73% of cases, yet once participants had witnessed an error, they generally stopped integrating the suggestions into their reasoning. One participant summarized: "the AI seemed pretty confident" — a single display of misplaced confidence during an error was enough to disqualify all subsequent responses.
The foundational experiment: a single error is enough
In the original experiment by Dietvorst et al. (2015), participants who had seen an imperfect algorithm make a prediction error overwhelmingly switched to a human forecaster — even when that forecaster had an objectively higher error rate.
Key takeaway: aversion is not proportional to the severity of the error. It is the very existence of the error — made visible — that triggers rejection.
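The cost of this reaction can be made concrete with a toy simulation (illustrative only; the error rates below are assumptions, not figures from the Dietvorst study). We compare a decision-maker who keeps using an algorithm with a 5% error rate against one who, after the algorithm's first visible error, switches permanently to a human forecaster with a 10% error rate:

```python
import random

random.seed(0)

ALGO_ERR = 0.05    # assumed algorithm error rate (illustrative)
HUMAN_ERR = 0.10   # assumed human error rate (illustrative)
N_TRIALS = 10_000

def overall_error_rate(switch_after_first_algo_error: bool) -> float:
    """Fraction of erroneous decisions over N_TRIALS."""
    using_algo = True
    errors = 0
    for _ in range(N_TRIALS):
        err_rate = ALGO_ERR if using_algo else HUMAN_ERR
        erred = random.random() < err_rate
        errors += erred
        # Algorithm aversion: one visible algorithmic error
        # triggers a permanent switch to the human forecaster.
        if erred and using_algo and switch_after_first_algo_error:
            using_algo = False
    return errors / N_TRIALS

print(f"stay with algorithm:     {overall_error_rate(False):.3f}")
print(f"switch after 1st error:  {overall_error_rate(True):.3f}")
```

Because the algorithm's first error tends to arrive early, the "averse" decision-maker spends almost the entire run at the human's higher error rate — roughly doubling their long-run error, despite having had access to the better forecaster all along.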
Illustrative clinical case
Yacine, 28, an engineer, seeks therapy for generalized anxiety. His therapist suggests using a CBT app between sessions to practice cognitive restructuring. Yacine tries the tool for a week.
At the next session, Yacine reports: "The app suggested an exercise on fear of rejection when I was talking about work overload. It was completely off the mark. So I don't trust it anymore." He uninstalled the app after this single incident.
Digging deeper, the therapist notes that Yacine had previously seen three doctors for his anxiety before finding the right one — without ever questioning "medicine" as a discipline after each disappointment.
Reading through algorithm aversion: Yacine applies a classic double standard: a human error is an individual error (that doctor wasn't the right one), while an AI error is a systemic error (this technology doesn't work). The clinician can explore this asymmetry without imposing the tool: "What makes you willing to give a doctor a second chance but not the app?" This question opens up implicit expectations and the relationship to technology, without invalidating the patient's feelings.
In practice for the clinician
- Identify the double standard: when a patient rejects an AI tool after an incident, explore whether they would apply the same criterion to a human professional. The goal is not to defend AI, but to make a judgment asymmetry visible.
- Restore a sense of control: research shows that allowing users to modify (even slightly) algorithmic recommendations strongly reduces aversion. When recommending an AI tool, emphasize that it provides suggestions to be adapted, not prescriptions.
- Prepare for imperfection: informing the patient beforehand that an AI tool will make errors (and that this is normal) reduces the impact of the first mistake. Aversion is mainly triggered when the error is unexpected.
- Examine yourself: as a clinician, you are also subject to this bias. If a decision support tool gave you an absurd result, ask yourself: are you rejecting it based on that error, or based on an overall evaluation of its performance?
The other side: algorithm appreciation
Logg, Minson & Moore (2019) identified an inverse phenomenon: in certain contexts, people prefer algorithmic recommendations over human advice. This is algorithm appreciation.
The two phenomena coexist and depend on context. Schematically:
Aversion (AI rejection):
- Subjective domains (emotions, relationships)
- High personal stakes
- Directly observable error
- Expertise perceived as intuitive

Appreciation (AI preference):
- Objective domains (calculation, prediction)
- Impersonal stakes
- Quickly measurable results
- Expertise perceived as technical
Psychotherapy sits at the extreme end of the "aversion" spectrum: a domain perceived as deeply subjective, intimate stakes, and expertise seen as irreducibly human. This positioning explains why AI tool adoption is particularly slow in this field — and why this slowness is not entirely irrational.
Points of caution
Algorithm aversion does NOT say that:
- All distrust of AI is irrational — critical skepticism is healthy
- Algorithms are always superior to human judgment — it depends on context
- We must "overcome" aversion to adopt AI — the goal is to calibrate, not eliminate
What this concept does NOT cover:
- Automation bias: the opposite phenomenon, where we place too much trust in the algorithm. Both biases can coexist in the same person.
- Systemic critiques of AI (surveillance, privacy, data bias) which are structural analyses, not individual cognitive biases.
- Resistance to change in general — algorithm aversion is a specific mechanism, not a synonym for technological conservatism.
- The question of accountability: part of the preference for human judgment stems from the fact that we can ask a human for explanations, question them, and hold them responsible (ethically, legally, professionally). With an algorithm, this possibility of dialogue and recourse fades — fueling a distrust that is not merely a cognitive bias.
Further reading
- Foundational article: Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err. Journal of Experimental Psychology: General, 144(1), 114-126.
- Mitigation strategy: Dietvorst, B. J., Simmons, J. P., & Massey, C. (2018). Overcoming Algorithm Aversion: People Will Use Imperfect Algorithms If They Can (Even Slightly) Modify Them. Management Science, 64(3), 1155-1170.
- Contrasting finding: Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm Appreciation: People Prefer Algorithmic to Human Judgment. Organizational Behavior and Human Decision Processes, 151, 90-103.
- Radiology application: Agarwal, N., Rajpurkar, P. et al. (2023). Combining Human Expertise with Artificial Intelligence: Experimental Evidence from Radiology. NBER Working Paper 31422.
Last updated: February 2026