
Informed Consent & AI

In brief: Informed consent requires that the patient understands what they are consenting to and can freely choose. Applied to AI in mental health, this familiar principle faces unprecedented challenges: algorithmic opacity, invisible data collection, and the illusion of relationship.

Why this concept matters

For a clinical psychologist, informed consent is an everyday practice: you inform the patient about the therapeutic frame, the method, confidentiality, and its limits. You ensure they understand and freely choose.

But when a patient uses a therapeutic chatbot, a session transcription tool, or a mood tracking app, the conditions for informed consent change radically. Does the patient truly understand what's happening with their data? Can they assess the risks of a tool whose functioning is opaque? Does "clicking accept" on terms of service constitute consent in the clinical sense?

This concept allows you to transpose your expertise in therapeutic consent to the digital context — and to ask the right questions.

The 3 Conditions of Consent and Their AI Challenges

1. Complete and comprehensible information

In therapy: you explain the method, the frame, the objectives, the risks, and the alternatives.

AI challenge:

How do you inform a patient about how an LLM works when even the engineers don't fully understand why it produces a given response? Algorithmic opacity makes information structurally incomplete. The patient cannot understand what we cannot explain ourselves.

2. Capacity for discernment

In therapy: you assess that the patient can understand the information and weigh the consequences of their decision.

AI challenge:

A patient in a major depressive episode who downloads an "emotional support" app at 3 AM — are they in a position to discern the implications of their consent? The moment of acute suffering is precisely when discernment is most fragile — and when the temptation to use a digital tool is strongest.

3. Freedom of choice (absence of coercion)

In therapy: the patient can refuse or discontinue treatment at any time without negative consequences.

AI challenge:

Can the patient delete their data? Can they leave the app without losing their follow-up history? When a mental health app uses retention mechanisms (push notifications, gamification, streaks), is freedom of choice real? The coercion here is not physical but architectural.

What AI Changes About Consent

| Dimension | Traditional Therapy | AI Tool |
| --- | --- | --- |
| Nature of the interlocutor | Human, identified, credentialed | Machine, often not identified as such |
| Transparency | Explicable method | Opaque functioning ("black box") |
| Data | Clinical notes, clear GDPR framework | Continuous collection, multiple uses, invisible third parties |
| Evolution | Stable, predictable method | Model updated without notice, variable behavior |
| Renewal of consent | Ongoing dialogue | ToS accepted once, never re-read |

Illustrative Clinical Case

Thomas, 45, in treatment for generalized anxiety disorder, tells you he uses ChatGPT to "analyze" his anxieties between sessions. He copy-pastes entire passages from his personal journal and asks the AI to interpret them.

Thomas believes his conversations are "private." He doesn't know that their content can be used for model training unless he explicitly opts out, that the conversations are stored on servers in the United States, and that OpenAI staff can access them for security reasons.

Questions for the clinician: has Thomas given informed consent in the clinical sense? He clicked "Accept" on the terms of service — but does he understand the implications for the confidentiality of his most intimate psychological material? How do you address this topic in session without creating guilt or being patronizing?

In Practice for the Clinician

  • Integrate digital use into your therapeutic contract: ask your patients if they use AI tools and make it a subject of dialogue, not judgment.
  • Help with discernment: your role is not to forbid but to help the patient understand what they're entrusting to the tool, where their data goes, and what the limitations of AI are.
  • Evaluate tools yourself: before recommending a tool or commenting on it, test it. Read the terms of service. Check the data policy. This is an act of care.
  • Document: if a patient uses an AI tool as part of their treatment, note it in the clinical file. This is part of the therapeutic frame.

Points of Concern

Particularly affected populations:

  • Patients in crisis: impaired discernment at the time when AI use is most likely
  • Adolescents: reduced consent capacity, vulnerability to engagement mechanisms
  • Elderly persons: digital literacy sometimes insufficient to assess the stakes
  • Patients under mandated care: is freedom of choice real when the tool is prescribed?

Common pitfalls:

  • Consent by default: opt-out (unchecking a box) is not informed consent
  • Evolving consent: an updated AI model changes behavior — does initial consent cover this evolution?
  • Confidentiality illusion: many patients believe their AI conversations are private when they are not

Further Reading

  • Founding reference: Beauchamp, T. L. & Childress, J. F. (1979). Principles of Biomedical Ethics. Oxford University Press.
  • AI application: APA Task Force (2023-2025). Artificial Intelligence: Guidance for Psychologists. American Psychological Association.
  • European framework: General Data Protection Regulation (GDPR, 2018), articles 13-14 (right to information) and 7 (conditions of consent).

Last updated: February 2026