Cognitive Psychology · Decision Science · Human-AI Interaction

Algorithm Appreciation

In brief: Contrary to the popular belief that algorithms are widely distrusted, research shows that non-experts often prefer algorithmic recommendations over human advice. Formalized by Logg, Minson & Moore (2019), algorithm appreciation is the mirror image of algorithm aversion: the two coexist on a continuum, modulated by context, expertise, and the nature of the decision.

Why this concept is useful

Much is said about distrust of AI, but an equally problematic phenomenon is underestimated: excessive trust. Some of your patients give ChatGPT or other LLMs a level of credibility they would never grant to a friend, a colleague, or even a healthcare professional.

This concept helps you understand why a patient might prefer a chatbot's opinion over yours — not because the chatbot is better, but because it is perceived as more objective, more neutral, more "scientific." Naming this mechanism opens a clinical workspace around the relationship to authority, expertise, and decision-making.

The mistake to avoid: pathologizing trust in AI

It would be tempting to view any preference for algorithmic advice as a lack of discernment or technological naivety. That would be an error.

Algorithm appreciation partly rests on a correct intuition: in many domains (forecasting, estimation, diagnosis based on quantitative data), algorithms do in fact outperform human judgment. The problem is not that people trust algorithms, but that they don't always calibrate that trust to the right level, in the right context.

A patient who trusts an LLM more than their social circle is not necessarily "alienated by technology." They may be expressing a rational distrust of biased human advice (judgment, morality, personal interests) — and a preference for what seems more neutral to them.

The 5 mechanisms of algorithm appreciation

1. An effect that extends beyond "technical" tasks

One might think that the preference for algorithms is limited to tasks perceived as computational or objective. In reality, Logg et al. demonstrated the effect across varied and sometimes surprising domains: visual quantity estimation, song success prediction, and even romantic attraction prediction — a domain where one would expect human judgment to be preferred. This unexpected breadth suggests that trust in algorithms does not rest on a rational assessment of their capabilities, but on a default perception of objectivity.

2. Psychological distance as a modulator

Algorithm appreciation is stronger for decisions about others and decreases for personal decisions. In other words, we trust an algorithm more to advise someone else than to advise ourselves. Clinically, this means a patient may find LLM recommendations very relevant for "people in general" while judging them inadequate for their own situation.

3. Expertise as a brake

Domain experts show significantly less algorithm appreciation. This is not surprising: having calibrated confidence in their own judgment, they are less inclined to delegate it. Clinical implication: as a psychologist, you are naturally less likely to over-value an AI tool in your area of expertise — but your patients do not have this safeguard.

4. Numeracy as a facilitator

Individuals with strong mathematical skills show more marked algorithm appreciation. They more readily perceive the statistical superiority of algorithms over intuitive human judgment. Clinically, this profile often corresponds to patients from scientific or technical backgrounds, who may be particularly receptive to the "data-driven" arguments of LLMs.

5. The aversion-appreciation continuum

Aversion and appreciation are not two types of people, but two poles of the same continuum. The same person can oscillate between the two depending on the domain, the stakes, and the moment. It is the context that determines the response, not the personality.

When appreciation becomes problematic

Algorithm appreciation becomes clinically concerning when it turns into automation bias: a systematic over-confidence that leads to suspending critical judgment in the face of algorithmic recommendations.

AI as a "source of truth"

When a patient says "ChatGPT told me I probably have ADHD" in the same tone they would say "my psychiatrist thinks I have ADHD," algorithm appreciation has moved beyond preference to become an attribution of epistemic authority. The LLM is no longer perceived as a tool that generates hypotheses, but as an entity that makes diagnoses.

The neutrality illusion

Algorithm appreciation partly rests on the perception that the algorithm is "objective" and "unbiased" — which is factually inaccurate. LLMs carry the biases of their training data, their empathetic tone is a design artifact, and their apparent confidence does not reflect a genuine degree of certainty.

Devaluing human expertise

In extreme cases, algorithm appreciation can lead to devaluing therapeutic work: "Why pay for a therapist when ChatGPT understands me better?" This increasingly common question deserves to be received without defensiveness — and explored as revealing the patient's relationship to help and vulnerability.

Key takeaway: uncalibrated algorithm appreciation is not a new disorder. It is a normal cognitive bias, amplified by a context where conversational AIs are designed to appear reliable, empathetic, and competent.

Illustrative clinical case

Nadia, 35, a manager in the technology sector, seeks therapy for marital difficulties. In session, she mentions that she regularly submits her relational dilemmas to Claude (Anthropic) before discussing them with her therapist or close ones.

"I prefer to ask the AI first. It doesn't judge me, it analyzes the situation logically, it doesn't take sides. My husband thinks it's weird, but I find its responses more objective than those of my friends who inevitably have their own perspective."

On further exploration, the therapist notes that Nadia particularly values the "affect-free" quality of the LLM's responses. She acknowledges that her friends' advice is sometimes relevant, but says she is "distracted" by the emotional tone in which it is delivered.

Reading through algorithm appreciation: Nadia illustrates several classic mechanisms — the perceived neutrality of the LLM, a preference for advice seen as free from personal interest, and a valuing of the analytical register. Rather than confronting this preference ("the AI doesn't really understand you"), the clinician can explore it as material: what is Nadia trying to avoid in human advice? What does this preference say about her relationship to others' judgment, to intimacy, to the vulnerability involved in asking a human for help?

In practice for the clinician

  • Explore without invalidating: when a patient prefers an LLM's opinion over yours, resist the temptation to disqualify AI. Instead, explore what this preference reveals: need for neutrality? Difficulty with human judgment? Seeking control over the help relationship?
  • Deconstruct the neutrality illusion: help the patient understand that the LLM is not neutral — it is designed to appear pleasant, validating, and competent. Its lack of affect is not objectivity, it is a design choice.
  • Distinguish domains: algorithm appreciation is sometimes justified (factual information search, idea structuring) and sometimes problematic (self-diagnosis, relational decisions). Help the patient identify where algorithmic advice is useful and where it reaches its limits.
  • Use the continuum as a psychoeducational tool: explaining to the patient that aversion and appreciation are two normal responses to algorithms, modulated by context, can help them step back from their own position and calibrate it more consciously.

The aversion-appreciation continuum

Algorithm appreciation does not oppose algorithm aversion: the two form a continuum. An individual's position on this continuum depends on three main factors.

The nature of the task

Quantitative and objectifiable task (calculation, statistical prediction) → appreciation. Qualitative and subjective task (moral judgment, emotional support) → aversion.

Level of expertise

Non-expert → appreciation (lack of internal reference for judgment). Expert → aversion (calibrated confidence in one's own judgment).

Personal distance

Decision for others → appreciation (low emotional stakes). Decision for oneself → aversion (increased need for control).

In psychotherapy, your patients sit at the intersection of these three factors: non-experts in the psychological domain (favors appreciation), but facing intimate personal stakes (favors aversion), on deeply subjective tasks (favors aversion). This complex positioning explains why the same patient can both over-value and reject AI depending on the moment.

Points of caution

Algorithm appreciation does NOT say that:

  • People blindly trust algorithms — the effect is modulated by context and expertise
  • Appreciation is always irrational — in many domains, algorithms do in fact outperform human judgment
  • All trust in AI tools should be discouraged — the goal is to calibrate it, not eliminate it

Concept limitations:

  • Sampling bias: Logg et al.'s (2019) results come from predominantly WEIRD samples. Cross-cultural generalizability remains to be demonstrated.
  • Framing sensitivity: how an algorithm is presented (name, interface, context) strongly influences observed appreciation or aversion. Experimental results are therefore partially artifactual.
  • Rapid evolution: with the massive spread of LLMs since 2023, attitudes toward algorithms are evolving quickly. The 2019 findings may not reflect current dynamics.

Further reading

  • Foundational article: Logg, J. M., Minson, J. A., & Moore, D. A. (2019). Algorithm Appreciation: People Prefer Algorithmic to Human Judgment. Organizational Behavior and Human Decision Processes, 151, 90-103.
  • Integrative review: MIS Quarterly (2023). An Integrative Perspective on Algorithm Aversion and Appreciation in Decision-Making.
  • Moderating effects: CHI 2023. Algorithmic Appreciation or Aversion? The Moderating Effects of Uncertainty on Algorithmic Decision Making.
  • Mirror concept: Dietvorst, B. J., Simmons, J. P., & Massey, C. (2015). Algorithm Aversion: People Erroneously Avoid Algorithms after Seeing Them Err. Journal of Experimental Psychology: General, 144(1), 114-126.

Last updated: February 2026