# Ethics of Care
In brief: A tradition in moral philosophy that places the care relationship, rather than abstract principles, at the center of ethical reflection. Developed by Carol Gilligan (1982) and formalized by Joan Tronto (1993).
## Why this concept matters
When you evaluate an AI tool for your practice, the available assessment frameworks (the APA model, the EU AI Act) rely primarily on principlism: does the tool respect patient autonomy? Is it beneficent? Non-maleficent? Just?
These questions are necessary but insufficient. They evaluate the tool as an isolated technical object. Yet in psychotherapy, what heals is not an object — it is a relationship. The ethics of care provides a framework that takes this reality seriously.
As a clinician, you already practice the ethics of care without knowing it: attention to the patient's suffering, responsibility for the therapeutic frame, technical competence in service of the relationship, verifying that the patient actually feels helped. This framework makes explicit what you do implicitly — and allows you to apply it to evaluating AI tools.
## The 4 Phases of Care (Joan Tronto, 1993)
### 1. Attention (caring about)
Recognizing that a need exists. Being attentive to the other's vulnerability, perceiving their suffering before they even explicitly express it.
For AI:
Does the chatbot detect when the user is in heightened distress? Does it adapt its response accordingly? Or does it apply the same script whether the user is doing well or is in crisis?
### 2. Responsibility (taking care of)
Assuming responsibility for responding to the identified need. It's not just about seeing — it's about deciding to act.
For AI:
Who takes responsibility for verifying that the tool doesn't worsen the patient's isolation? Who decides to escalate to a human when the situation requires it? If the answer is "no one," that's a major ethical problem.
### 3. Competence (care-giving)
The concrete skill of delivering care. Good intentions are not enough: the care actually given must be competent, adapted, and effective.
For AI:
Is the tool genuinely competent for this type of suffering, or does it give generic responses? A chatbot trained on general data that responds to a borderline patient as it would to someone seeking productivity tips fails at this phase.
### 4. Reception (care-receiving)
Verifying that the need has been met from the recipient's perspective. This is the most often neglected phase — and the most important.
For AI:
Have we asked patients themselves whether the tool helps them? Not through an NPS score or an app store rating, but through a clinical evaluation: does the patient feel better, or do they simply feel less alone? The difference is crucial.
## Principlism vs. ethics of care: two complementary perspectives
| Dimension | Principlism | Ethics of Care |
|---|---|---|
| Central question | Which principles apply? | What does this person need? |
| Moral agent | Autonomous, rational individual | Being in relationship, interdependent |
| Quality criterion | Compliance with principles | Quality of the care relationship |
| AI tool evaluation | Checklist: autonomy, beneficence, justice... | Does the tool improve the care relationship? |
| Blind spot | Relational quality | Formal individual rights |
The two approaches are not opposed — they complement each other. Principlism establishes a foundation of rights; the ethics of care adds the requirement of relational quality.
## Illustrative Clinical Case
Sarah, 28, in treatment for moderate depression, uses a therapeutic chatbot between sessions. She reports that the chatbot is "always there for her" and "asks the right questions."
Principlist evaluation: Consent is informed (Sarah knows it's a machine). Data is protected (GDPR). The tool is based on validated CBT protocols. Principles are respected. Green light.
Care evaluation: When Sarah mentions "I don't feel like doing anything anymore" on a Sunday evening, the chatbot responds with a cognitive restructuring exercise. It's technically correct, but relationally inadequate: Sarah needed to be heard, not redirected. The chatbot didn't detect the worsening (phase 1: attention). No one checks whether Sarah is substituting the chatbot for her human relationships (phase 2: responsibility). And Sarah was never asked about what the tool actually provides her vs. what it makes her believe it provides (phase 4: reception). Points of concern.
Principlism gives a green light that the ethics of care nuances. Both perspectives are necessary — the latter is currently absent from most evaluations of AI tools in mental health.
## In Practice for the Clinician
- Apply the 4 phases as an evaluation framework: for each AI tool your patients use, ask whether it meets the criteria of attention, responsibility, competence, and reception.
- Ask the patient what they actually receive: not "do you like the app?" but "what does it concretely change for you?" (phase 4: care reception).
- Complement principlism, don't replace it: questions of consent, data protection, and equity remain essential. The ethics of care adds a relational dimension.
- Your expertise is directly applicable: clinical training in relationship, transference, supervision, and self-observation constitutes care expertise that professional ethicists don't necessarily have.
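For readers who track tool evaluations systematically, the four-phase checklist above can be sketched as a simple structured record. This is a hypothetical illustration only: the `CareEvaluation` class and its field names are invented for this sketch and are not part of any published framework.

```python
from dataclasses import dataclass

# Hypothetical sketch: Tronto's four phases as a structured evaluation
# record for an AI tool. All names here are invented for illustration.

@dataclass
class CareEvaluation:
    tool: str
    attention: bool       # does the tool detect heightened distress?
    responsibility: bool  # is a human accountable for escalation?
    competence: bool      # are responses adapted to this type of suffering?
    reception: bool       # has the patient confirmed the tool actually helps?

    def concerns(self) -> list[str]:
        """Return the names of the phases that fail."""
        phases = {
            "attention": self.attention,
            "responsibility": self.responsibility,
            "competence": self.competence,
            "reception": self.reception,
        }
        return [name for name, ok in phases.items() if not ok]

# The chatbot from the Sarah vignette: principlist boxes ticked,
# yet three of the four care phases fail.
chatbot = CareEvaluation(
    tool="therapeutic chatbot",
    attention=False,       # misses the Sunday-evening worsening
    responsibility=False,  # no one monitors substitution for human contact
    competence=True,       # CBT protocols are technically correct
    reception=False,       # Sarah was never asked what she actually receives
)
print(chatbot.concerns())  # ['attention', 'responsibility', 'reception']
```

A tool only "passes" when all four phases hold; the point of the sketch is that the fourth phase (reception) becomes as visible as the first three, which is exactly what most evaluations currently omit.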
## Limitations of the Ethics of Care
Risks of the approach:
- Paternalism: deciding "for the patient's good" without respecting their autonomy — principlism is a necessary safeguard
- Care burden: the ethics of care can make caregiver exhaustion invisible by naturalizing care as a vocation
- Operationalization difficulty: "caring" is harder to translate into measurable criteria than "respecting autonomy"
- Internal feminist critiques: the risk of assigning care to women and reproducing gender inequalities
## Further Reading
- Founding work: Gilligan, C. (1982). In a Different Voice: Psychological Theory and Women's Development. Harvard University Press.
- Philosophical formalization: Tronto, J. (1993). Moral Boundaries: A Political Argument for an Ethic of Care. Routledge.
- Application to AI: Vallor, S. (2016). Technology and the Virtues: A Philosophical Guide to a Future Worth Wanting. Oxford University Press.
- Critique of principlism: Held, V. (2006). The Ethics of Care: Personal, Political, and Global. Oxford University Press.
See also: Informed Consent (AI), Principlism, Ethics Washing
Last updated: February 2026