ChatGPT is no longer just helping people write emails or debug code—it’s increasingly being used as a healthcare guide. According to a new OpenAI report shared with Axios, more than 40 million people worldwide now turn to ChatGPT with medical questions every day. That number alone raises a critical question: is using AI for healthcare actually safe?
The short answer: it depends on how it’s used. The longer answer reveals a complex mix of accessibility, affordability pressures, and real risks that are reshaping how people seek medical help in 2025.
How People Are Using ChatGPT for Healthcare
The OpenAI report is based on anonymized ChatGPT interactions and user surveys, offering a rare look into how generative AI fits into people’s health decisions.
Some of the most common use cases include:
- Asking about symptoms and possible causes
- Seeking explanations of medical terms or diagnoses
- Drafting insurance denial appeals
- Identifying potential medical billing errors or overcharges
In other words, ChatGPT is acting as a mix of medical explainer, administrative assistant, and sometimes—even if unintentionally—a virtual second opinion.
This trend isn’t entirely new. A Harvard Business Review analysis last year found that psychological therapy was the single most common use of generative AI. The latest data simply confirms that AI tools have become trusted confidants for sensitive, personal issues.
The Scale Is What’s Alarming
Here’s where things get striking. According to Axios:
- More than 5% of all ChatGPT messages globally are healthcare-related
- As of mid-2025, ChatGPT processed about 2.5 billion prompts per day
- That translates to at least 125 million health-related questions every single day
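The 125 million figure is straightforward multiplication of the two reported numbers. A quick sketch (using the report's figures; 5% is treated as a lower bound, so the result is a floor):

```python
# Back-of-the-envelope check of the daily health-question estimate.
# Both inputs are the figures reported above, not measured here.
health_share = 0.05      # "more than 5%" of messages -> lower bound
daily_prompts = 2.5e9    # ~2.5 billion prompts per day overall

health_daily = health_share * daily_prompts
print(f"{health_daily:,.0f} health-related questions per day")
# 125,000,000 health-related questions per day
```

Because the 5% share is a minimum, the true daily volume is at least this large.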
Roughly 70% of these conversations happen outside normal clinic hours, highlighting one of AI’s biggest advantages: it’s always available.
For people dealing with confusing symptoms at 2 a.m. or battling insurance bureaucracy, that availability can feel invaluable.
Why This Is Happening Now
The surge in AI-assisted healthcare use is colliding with a tough reality—especially in the United States.
In early 2025, millions of Americans saw sharp increases in healthcare costs after pandemic-era Affordable Care Act subsidies expired. Reports suggest:
- Over 20 million ACA enrollees were affected
- Average monthly premiums jumped by 114%
For younger, healthier, and more cash-strapped individuals, the math is simple: skip insurance and turn to cheaper alternatives—including AI tools like ChatGPT.
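To make the 114% average jump concrete, here is a hypothetical illustration (the $400 baseline premium below is an assumed example, not a figure from the report):

```python
# Hypothetical illustration of a 114% premium increase.
old_premium = 400.00     # assumed monthly premium (illustrative only)
increase = 1.14          # reported average jump: 114%

new_premium = old_premium * (1 + increase)
print(f"${old_premium:.2f}/mo -> ${new_premium:.2f}/mo")
# $400.00/mo -> $856.00/mo
```

A more-than-doubled monthly bill is exactly the kind of shock that pushes people toward free alternatives.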
This helps explain why AI healthcare usage isn’t just growing—it’s accelerating.
The Risks: AI Can Sound Confident—and Be Wrong
Despite its convenience, ChatGPT is not a doctor. And that distinction matters.
Generative AI systems are known to hallucinate—confidently generating information that sounds correct but is factually wrong. In healthcare, that can be dangerous.
A July study posted on arXiv by a group of physicians found that leading AI models, including OpenAI’s GPT-4o and Meta’s Llama, produced medically unsafe responses about 13% of the time.
The authors warned:
“Millions of patients could be receiving unsafe medical advice from publicly available chatbots.”
That doesn’t mean AI is useless—but it does mean blind trust is risky.
Where AI Can Help—and Where It Shouldn’t
OpenAI says it’s actively working to improve how its models handle health-related questions safely. Still, experts agree that generative AI should be treated more like WebMD on steroids—not a replacement for professional care.
Reasonable uses include:
- Understanding medical terminology
- Preparing questions for a doctor
- Navigating insurance paperwork
- Learning about general health topics
Less advisable uses include:
- Diagnosing serious or chronic conditions
- Deciding treatment plans
- Handling medical emergencies
If anything, AI answers should be taken with more skepticism than a Google search—not less.
The Bigger Picture: AI as a Healthcare Pressure Valve
The popularity of ChatGPT in healthcare says less about AI perfection and more about systemic gaps in access, affordability, and trust.
When people can’t afford care—or can’t get timely answers—they turn to what’s available. AI fills that gap, even if imperfectly.
Long-term, this trend may push regulators, healthcare providers, and AI companies to define clearer boundaries for safe AI use in medicine.
Final Takeaway
ChatGPT is becoming a go-to health companion for tens of millions of people—not because it’s flawless, but because it’s accessible.
The key takeaway: AI can help you understand healthcare, but it shouldn’t replace healthcare.
As AI tools continue to evolve, the real challenge will be ensuring they empower patients without putting them at risk.
Would you trust an AI with your health questions—or do you see it as a last resort?