There’s a new health influencer in town
July 11, 2025
How GenAI Is Reshaping the Health Information Landscape, and Why Credible Content Matters More Than Ever
There’s a new health influencer in town—and it doesn’t post selfies, run webinars, or promote supplements. It answers questions instantly, never sleeps, and has read (almost) everything on the internet. This influencer goes by many names: ChatGPT, Gemini, Perplexity, Claude, and others. These Large Language Models (LLMs) are not just redefining the digital health experience—they’re quietly becoming the go-to confidant for millions of people trying to make sense of their health. And they’re doing it with an understanding of a fundamental truth: what we ask in private often reveals more than what we say out loud.
The Hidden Truth About Health Information Seekers
For years, people have turned to “Dr. Google” when they’ve felt anxious about an unexplained symptom or had that taboo question they weren’t quite ready to bring up in the exam room. And it makes sense. Search engines allow for anonymous, judgment-free exploration. You can ask anything (e.g., “Is my hair falling out from stress?” or “How long does chemo fatigue last?”) without fear of stigma. What we type into that search bar can sometimes say more about our worries and behaviors than what we say in a doctor’s office or even in a support group.
However, traditional search engines were never built to handle the emotional complexity or cognitive load of health queries. They respond with an overwhelming variety of content: medical journals, news articles, PDFs, Reddit threads, advocacy sites, pharma ads, and even fake news articles. While that breadth is impressive, it’s not curated for clarity—and certainly not for comprehension. The result is often confusion, overload, and misinterpretation.
Why People Turn to Forums, Groups, and “Real” Influencers
Over the last decade, a patchwork ecosystem of health communities has sprung up to fill that comprehension gap. Forums, Facebook groups, Reddit threads, and patient influencers offered something that traditional medical and online search sources didn’t: plain language, lived experience, emotional resonance, and importantly—validation. For example, a mother of a child with epilepsy might find more actionable comfort and insight from another parent on Instagram than from a CDC fact sheet.
These “health influencers” may not hold medical degrees, but they hold trust – and trust is everything when you’re scared, vulnerable, or newly diagnosed. They speak like humans. They share personal journeys. And most importantly, they validate feelings, not just symptoms.
But this network has its limits. The anonymity preserved in search engines often vanishes in forums and social media. Many people are reluctant to post or comment publicly about stigmatized issues like STDs, fertility challenges, mental health concerns, or chronic pain—especially in smaller communities or under their real names. We know this because even today, when we compare social media and forum conversations about certain diseases to related queries in Google Trends, we usually see a gap between what people search for privately and what they are willing to discuss publicly.
LLMs Bridge the Gap Between Anonymous Search and Human Conversation
This is where LLMs come in. Unlike a static website or disjointed Google results page, tools like ChatGPT allow users to ask open-ended, conversational questions—and receive answers in real time, in natural language. You can explain your situation in your own words (“I’ve been feeling foggy for weeks and my legs hurt when I wake up”) and get a synthesized response that, ideally, connects relevant knowledge across medical, experiential, and emotional domains.
In essence, LLMs offer the best of both worlds: the privacy of anonymous search and the clarity of human-like conversation.
Even more powerful? They’re adaptive. You can ask a follow-up question. Challenge an answer. Ask for clarification. The model won’t get frustrated or impatient. It will keep answering—immediately, 24/7, in multiple languages and tones—on your terms.
Managing the Drawbacks (There’s Always a Catch)
Despite their promise, LLMs are not without serious flaws – especially in healthcare.
First: hallucinations. These are fabricated facts or misleading interpretations that seem plausible but aren’t grounded in credible evidence. In an everyday context, that might be annoying. In a healthcare setting, it could be dangerous—particularly for rare diseases, where limited data can make it harder for the model to separate fact from fiction.
Second: misinformation amplification. If an LLM is trained on biased, pseudoscientific, or outdated content, it may unintentionally reinforce those inaccuracies. That’s especially troubling when myths are widespread online—like miracle cures, anti-vaccine rhetoric, or discredited treatments. If those narratives are more common in the data set than validated clinical information, they can rise to the top in generated answers.
Third: lack of source transparency. Many LLMs still struggle to clearly cite where their information comes from, making it difficult for users to evaluate credibility—or to distinguish between a response drawn from Mayo Clinic versus one pulled from an unmoderated blog.
What’s Next: Feeding the Machine with Credibility
So how do we strike a balance between a powerful, empathetic search companion that helps people find answers and sources of information, and an imperfect machine that can inadvertently lead those same people astray?
If we want LLMs to provide better answers, we need to feed them better data. As communicators, that means:
- Doubling down on SEO for trustworthy, medically reviewed content
- Partnering with advocacy organizations to publish more evidence-based resources that reflect patient experience and scientific rigor
- Empowering patient influencers with accurate content and media literacy skills to help guide their communities safely
- Working with AI developers to build guardrails, transparency tools, and bias mitigation into health-focused language models
We also need to educate users about what LLMs are—and aren’t. These tools are powerful allies in the information journey, but they are not replacements for medical professionals or clinical decision-making. They are first stops, not final destinations.
The good news? Much of the foundational work we’ve been doing as PR pros—building credible websites, collaborating with advocacy organizations, elevating patient voices, optimizing content for search—is exactly what today’s LLMs need to generate better, more accurate health responses.
Our Mission Hasn’t Changed—But Our Tools Have
At the heart of healthcare communications is a core responsibility: to help patients, caregivers, healthcare professionals, and the public find clear, trustworthy, and accessible information. That hasn’t changed. What has changed is how, and where, people look for that information.
When Google became the primary destination for health queries, we evolved. SEO and content strategy became essential tools to guide users toward accurate, user-friendly sources—whether branded therapy sites, advocacy content, or healthcare provider resources.
When forums and influencers became dominant voices, we adapted again. We worked to amplify those voices responsibly, providing accurate talking points, content partnerships, and patient ambassador programs that prioritized transparency and compliance.
Now, in the GenAI era, we must once again evolve.
The Bottom Line: In the GenAI Era, Health Communication Must Once Again Evolve
LLMs are already changing how people find and digest health information, and that genie isn’t going back in the bottle. But that doesn’t mean we’re powerless. In fact, it’s just the opposite.
By focusing on credibility, clarity, and compassion, health communicators can shape the future of AI-driven health conversations for patient and HCP communities alike. We can make sure that when someone sits down, alone and anxious, to ask a question of the “new health influencer in town,” they get an answer that helps—not harms.
In a world where knowledge is power, the greatest power we have is to make sure that knowledge is truthful, empathetic, and accessible to all.