Gemini
Google DeepMind (Mountain View) — Launched February 2024
At a glance: Gemini is Google’s generative AI model, developed by DeepMind. Its distinguishing feature: native multimodality (text, image, audio, video) and deep integration into the Google ecosystem (Docs, Gmail, Search, YouTube). It is also the model that takes the most conservative approach to suicide and crisis-related queries — to the point that researchers suggest Google may have “gone too far” with its safeguards. Google is also developing Med-Gemini, a family of models specifically trained for medicine.
Identity
Publisher: Google DeepMind (Mountain View, USA)
Launch: February 2024 (Gemini app, replacing Bard; the Gemini 1.0 model was announced December 2023). Current model: Gemini 2.5 Pro (March 2025)
Type: Natively multimodal large language model (LLM)
Predecessor: Bard (2023), itself based on LaMDA then PaLM
Pricing: Free / Google AI Pro: $19.99/month / Google AI Ultra: $249.99/month
Languages: Multilingual including French
Access: Web (gemini.google.com), Android, iOS, Google Workspace, API
Memory: Yes (custom “Gems” + conversational memory)
What Gemini Does (in plain terms)
Gemini is a natively multimodal language model: unlike ChatGPT or Claude, which added vision and audio on top of a text model, Gemini was designed from the ground up to simultaneously process text, images, audio, and video. Like all LLMs, it does not “understand” in the human sense.
Accepted inputs: text, images, audio, and video, which can be combined in a single prompt.
Outputs produced: text (including code); the consumer app can also generate images.
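For readers who want to see what "natively multimodal" means at the API level, here is a minimal sketch using Google's `google-generativeai` Python SDK: a text instruction and an image travel in the same request, with no separate vision pipeline. The model ID, file name, and prompt are illustrative, not a recommendation.

```python
# Minimal multimodal request via the Gemini API (illustrative sketch).
# Requires: pip install google-generativeai pillow
import google.generativeai as genai
import PIL.Image

genai.configure(api_key="YOUR_API_KEY")  # placeholder; supply your own key

model = genai.GenerativeModel("gemini-2.5-pro")  # model ID is an assumption; check availability

# Text and image are passed together in one prompt list.
image = PIL.Image.open("handout.png")  # hypothetical local file
response = model.generate_content(
    ["Summarize this patient-education handout in plain language.", image]
)
print(response.text)
```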
The Google ecosystem: what sets Gemini apart
• Google Workspace: Gemini is integrated into Docs, Sheets, Gmail, and Slides, usable directly within everyday work tools
• Google Search: real-time web search access, reducing factual hallucinations
• NotebookLM: a separate Google tool using Gemini to analyze document corpora and generate audio summaries ("Audio Overviews")
• YouTube: ability to analyze and summarize YouTube video content directly
Gemini has a 1 million token context window across all its models (including the free tier), equivalent to roughly 3,000 pages. This is the largest context window freely available among major models.
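How does 1 million tokens translate into "3,000 pages"? A rough back-of-envelope, assuming the common approximations of about 0.75 English words per token and 250 words per printed page: 1,000,000 tokens × 0.75 ≈ 750,000 words, and 750,000 ÷ 250 ≈ 3,000 pages. Both ratios vary with language and formatting, so treat the page count as an order-of-magnitude figure.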
Med-Gemini and Healthcare: Google’s Medical Ambition
Google distinguishes itself from other companies by developing models specifically trained for medicine, separate from consumer Gemini.
Med-Gemini
A family of models specialized for medicine. Scores 91.1% on MedQA (US medical licensing exam-style questions). Capabilities in radiology, pathology, dermatology, ophthalmology, and genomics. Not publicly available — research tool only.
MedGemma (open source)
Presented at Google I/O 2025. Open-source model based on Gemma 3, designed for health application developers. Medical image analysis, clinical text comprehension, reasoning. For research, not direct clinical use.
AMIE
AI agent for diagnostic conversations, developed by Google DeepMind. Capable of interpreting visual medical information and reasoning toward a diagnosis. Research stage only.
Real-world deployments
Ubie (Japan): clinical note-writing time down 42.5% and nurses' cognitive load down 27.2% with Gemini-based tools. Capricorn (Princess Máxima Center, Netherlands): a Gemini-based assistant supporting treatment planning in pediatric oncology.
Important: Med-Gemini, MedGemma, and AMIE are not the Gemini available to the public. These are research models requiring clinical validation before any deployment. The Gemini your patients use is the generalist model.
Documented Mental Health Uses
Like ChatGPT and Claude, Gemini is not designed as a therapeutic tool. Google has not published a specific study on Gemini’s affective uses (unlike Anthropic). Available data comes from independent studies and user reports.
RAND Corporation Study (August 2025)
Study funded by the National Institute of Mental Health (NIMH). 30 suicide-related questions were classified into 5 risk levels by 13 clinical experts; each question was asked 100 times to ChatGPT, Claude, and Gemini (9,000 total responses).
• Gemini is the least likely to respond to any suicide-related question, regardless of risk level
• For very low-risk questions (statistics, epidemiology), Gemini often refuses to answer where ChatGPT and Claude provide factual data
• Ryan McBain (lead researcher) suggests Google may have "gone too far" with its safeguards
• For 57% of suicide-related queries, Gemini gives the same canned response: "Talk to someone now. Help is available: 988"
User-reported uses
• Informal CBT support: Gemini can suggest exercises inspired by cognitive-behavioral therapy (cognitive restructuring, structured journaling)
• Guided meditation: generating personalized mindfulness scripts, analyzing mood journals
• Care navigation: help finding a professional, understanding different therapeutic approaches, preparing for appointments
• Document analysis: the 1M-token context window and Google integration enable analysis of large corpora (records, articles, videos)
Gemini API Developer Competition
Several mental health applications were built for the Gemini API Developer Competition: "Mental Health Companion AI" (personalized therapeutic sessions), "Pocket Therapist" (mental health companion), and mood tracking and mindfulness exercise apps. None are clinically validated.
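To make concrete what such apps typically do under the hood, here is a hedged sketch of the pattern most wrapper apps share: a fixed system instruction wrapped around Gemini API chat calls. The model ID and instruction text are illustrative; this is not code from any competition entry, and nothing built this way is clinically validated.

```python
# Sketch of the common pattern behind Gemini-based wellness apps:
# a fixed system instruction plus user turns sent to the API.
# Requires: pip install google-generativeai
import google.generativeai as genai

genai.configure(api_key="YOUR_API_KEY")  # placeholder

model = genai.GenerativeModel(
    "gemini-2.5-pro",  # illustrative model ID
    system_instruction=(
        "You generate short, neutral mindfulness exercises. You are not "
        "a therapist; direct anyone in crisis to local emergency services."
    ),
)

chat = model.start_chat()  # keeps multi-turn history on the client side
reply = chat.send_message("I had a stressful day. Suggest a 5-minute exercise.")
print(reply.text)
```

The system instruction is the app's entire "therapeutic" layer, which is why the clinical-validation caveat above matters: the underlying model is the same generalist Gemini.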
Identified Risks
Excessive safeguards
Gemini’s ultra-conservative approach can be counterproductive. Refusing to provide basic suicide statistics or discuss prevention blocks access to legitimate, useful information — for both patients and professionals.
Character.AI affair
Google invested $2.7 billion in Character.AI and hired its co-founders at DeepMind. Character.AI faces lawsuits over the suicides of minors who developed intense relationships with its chatbots. Google is named as co-defendant. A settlement was announced in January 2026.
Data and privacy
Google's business model relies on targeted advertising and the monetization of user data. Conversations with Gemini pass through the Google ecosystem. This proximity to an advertising model raises specific questions for mental health data.
Hallucinations
Like any LLM, Gemini can generate false information with confidence. Google Search integration reduces this risk for verifiable factual data but does not eliminate it for clinical reasoning or specialized references.
Delusion validation
The Common Sense Media evaluation (2025) documented a case where Gemini, when a user simulated psychotic symptoms (delusions), responded enthusiastically by asking for details and confirming the delusions — behavior opposite to clinical recommendations.
Child safety
Common Sense Media (Sept. 2025) classified Gemini as “high risk” for children and adolescents. Despite safeguards, the model can share inappropriate content and fails to detect distress signals in non-explicit conversations.
Our Analysis
Gemini occupies a particular place in the landscape. It benefits from the widest potential distribution — every Gmail, Google Docs, or Android user has access to it — making it a tool your patients likely encounter already, even without seeking it out.
Google’s approach to mental health questions reveals a paradox. On one hand, Gemini is the most conservative of the major models: it refuses to answer legitimate questions about suicide and often defaults to redirecting to 988 (US crisis line). On the other, Google invests massively in Character.AI, whose chatbots have been accused of contributing to the suicides of minors. This tension between stated caution and risky investments deserves attention.
For clinicians, Gemini presents two distinctive strengths. First, Google Workspace integration allows using it directly within everyday work tools (writing in Docs, analyzing email attachments in Gmail), without switching to a separate application. Second, the massive investment in Med-Gemini and MedGemma suggests Google is the company betting most heavily on healthcare as a structuring use case — even though these specialized models remain at the research stage.
Data remains the primary concern. Where Anthropic (Claude) relies on a subscription and API business model, Google draws the bulk of its revenue from advertising. Sharing intimate reflections with a company whose core business is building user profiles demands particular vigilance — even though Google states it does not use Gemini conversations for ad targeting.
Related Concepts on This Site
Why we treat computers as social actors
Attributing human qualities to machines
When AI validates without discernment
When AI fabricates with confidence
When AI tells you what you want to hear
Simulating emotional understanding
References
McBain, R. et al. (2025). Evaluation of Alignment Between Large Language Models and Expert Clinicians in Suicide Risk Assessment. Psychiatric Services.
Haber, Y. et al. (2025). The Goldilocks Zone: Finding the right balance of user and institutional risk for suicide-related generative AI queries. PLOS ONE.
Common Sense Media (2025). AI and Kids: Testing Real Risks. Chatbot evaluation report for minors.
Google Research (2025). Advancing medical AI with Med-Gemini. research.google/blog.
CNN (2026). Character.AI and Google agree to settle lawsuits over teen mental health harms and suicides.
Google. Gemini app safety and policy guidelines. gemini.google/policy-guidelines.
Last updated: February 2026