AI Watch

APA Ethical Guide: What Psychologists Should Know

The APA publishes an ethical guide for AI use in psychological practice. 71% of American psychologists have never used AI: what lessons can we draw?

The Facts

A Still Largely Unexplored Terrain

According to the APA’s Practitioner Pulse Survey 2024, 71% of American psychologists have never used AI in their clinical practice. Among those who do, about 1 in 10 uses it at least monthly, primarily for note-taking and administrative tasks.

These figures reveal a significant gap between the media hype around AI and its actual adoption by mental health professionals.

A Dedicated Ethical Guide

In June 2025, the APA published Ethical Guidance for AI in the Professional Practice of Health Service Psychology — a first of its kind. This guide specifically addresses clinical psychologists wishing to integrate AI into their practice ethically and responsibly.

Six Main Axes

The guide is organized around six major considerations:

  1. Transparency and informed consent: AI tool usage must be explicitly communicated to patients, other involved professionals, and any relevant third party (courts, insurers). Consent must be obtained in a culturally appropriate manner.

  2. Bias and equity: AI systems can perpetuate or amplify existing health disparities. Psychologists are invited to actively evaluate potential biases in the tools they use.

  3. Data protection: Any AI solution handling sensitive data must comply with HIPAA and applicable privacy regulations. The guide emphasizes the importance of robust cybersecurity.

  4. Accuracy and misinformation risk: AI tools must be rigorously validated before clinical use. The psychologist remains responsible for verifying generated content and discontinuing use of a tool if hallucination problems appear.

  5. Human oversight: AI should augment, not replace, clinical judgment. The psychologist remains responsible for all final decisions.

  6. Legal liability: The legal framework is still emerging. Psychologists are encouraged to anticipate liability risks related to AI tool selection and usage.

Tools Mentioned by Experts

The accompanying article in APA Monitor cites several AI solutions for practice:

  • Documentation: Mentalyc, Upheal, Zanda Health — transcription and clinical note generation tools
  • Clinical insights: Blueprint — evidence-based measurement, treatment suggestions
  • Assessment: Assessment Assistant — support for assessment reports
  • Secure general AI: BastionGPT — alternative to ChatGPT trained on reliable medical sources

International Applicability

GDPR vs HIPAA: Comparable but Distinct Requirements

The APA guide is designed for the American context and HIPAA compliance. In Europe, the GDPR applies instead, with requirements that are sometimes stricter, particularly regarding:

  • Explicit consent for health data processing
  • Right to erasure and portability
  • Data transfers outside the EU — particularly problematic for American tools

For psychologists outside the US: Using American AI tools (ChatGPT, Claude, etc.) to process patient data raises GDPR compliance questions that go beyond the scope of the APA guide.

The Need for Local Guidelines

Unlike the APA, many professional psychology organizations have not yet published specific ethical guides for AI in psychological practice. General codes of ethics offer a framework but don’t explicitly mention AI technologies.

This regulatory gap leaves practitioners without clear institutional benchmarks.

Transferable Recommendations

Despite contextual differences, several APA recommendations are directly applicable:

1. Systematic transparency

If you use an AI tool to prepare your sessions, generate notes, or analyze patient data, explicitly inform your patients. This transparency is both an ethical requirement and legal protection.

2. Content verification

Never use an AI output (session summary, diagnostic hypothesis, intervention approach) without critical verification. LLMs can produce plausible but erroneous content.

3. Maintaining human oversight

AI can inform your clinical reflection, never replace it. Responsibility for therapeutic decisions remains entirely human.

4. Bias evaluation

Before adopting a tool, ask yourself: what data was it trained on? Which populations are underrepresented? What cultural or diagnostic biases might it carry?


Questions for Reflection

For Practitioners

  • Do my patients know when I use AI tools in their care?
  • Have I verified the data protection compliance of the tools I use?
  • Does my AI usage actually improve my practice, or is it mainly a response to productivity pressure?

For the Profession

  • Who should draft ethical guidelines on AI in psychology?
  • Do psychology programs integrate these new competencies?
  • How do we balance practitioner autonomy and patient protection in this new context?

For Public Debate

  • Is it acceptable for psychologists to make widespread use of American tools without an adapted regulatory framework?
  • What role should digital sovereignty play in mental health?

Our Position

This APA guide constitutes a significant advance — and a wake-up call. Significant because it recognizes that AI is already transforming psychological practice and requires specific ethical framing. A wake-up call because it reveals how much work remains to be done globally on these questions.

What we take away:

  • 71% non-users: the field is still largely untouched; now is the moment to lay sound foundations rather than correct course later
  • Transparency and consent: the non-negotiable ethical minimum
  • AI augments, doesn’t replace: this phrase must become a professional reflex

What we call for:

  • Ethical guidelines developed by professional organizations worldwide
  • Continuing education integrating AI challenges
  • Dialogue between practitioners, lawyers, and developers to anticipate future developments

Sources: APA Ethical Guidance, Practitioner Pulse Survey 2024, APA Monitor article

Keywords

APA, ethics, practical guide, regulation, AI in practice