Generative AI for Health Information: A Guide to Safe Use
“Dr. Google” is the first place many people turn when looking for answers to a new itch, bump, or pain. But with the emergence of generative artificial intelligence (AI) platforms, many are wondering whether these tools can perform the same tasks, and perhaps even do a better job.
AI is already being used in health care. In radiology, for example, AI tools can help detect lung nodules on CT scans. AI-produced algorithms have also helped bridge the gap between complex data and clinical decision-making. In this context, AI creates algorithms based on patterns in raw data to find connections (such as between a genetic mutation and a medical condition, or between a cluster of symptoms and a particular disease) that would be very hard, if not impossible, for a person to identify.
However, that is not the same as someone opening their laptop and using generative AI to answer a medical question.
Below, Yale doctors answer questions about generative AI and how to use it safely when seeking medical information online.
What is generative AI?
Generative AI is a type of artificial intelligence that can produce content, including text, music, art, and images. OpenAI’s ChatGPT, Google’s Bard, and Microsoft’s Copilot are a few examples of generative AI platforms that let users enter prompts (or questions) to receive humanlike answers within seconds. (These platforms are separate from a standard web search.)
For example, you could use generative AI to summarize the plot of a book or movie, or to draft a letter of a specified length and tone. Most of these platforms are free, but many require you to enter a telephone number and/or email address to use them.
Are people really using AI for health information?
F. Perry Wilson, MD, a Yale Medicine nephrologist, says that many patients still seem to be more comfortable using Google for health questions.
“There has been a lot in the news about the inaccuracies of generative AI in health care, and that might have scared some people off,” he says. “However, I think this will change as search engines start to incorporate generative AI into routine searches.”
“I am floored by the speed at which all of this has moved,” Dr. Wilson adds. “A future in which people start to triage their medical problems through generative AI is not far away.”
What is the best way to use AI to answer health questions?
Do not use generative AI for personal medical advice, such as deciding whether you should go to the emergency room for chest pain, the doctors say.
“Currently, the chatbot cannot create a risk profile on an individual patient at a particular point in time, so it’s better to avoid those types of questions,” says Andrew Taylor, MD, MHS, a Yale Medicine emergency department (ED) physician, who is also leading Yale’s 2024 AI in Medicine Symposium.
Instead, here are some tips for trying generative AI:
1. Use it to provide context or education.
For example, try the prompt: “I was told to take these medications; please explain them to me.” Or “How is [insert condition] diagnosed?”
Generative AI can also explain medical terminology you find on a lab report or imaging results, Dr. Taylor adds. “From a patient education standpoint, AI has the potential to be a great tool,” he says.
2. Know that some AI platforms are not updated in real time.
Some AI platforms reportedly provide up-to-date information to users with premium (paid) subscriptions, but for others, the data the AI relies on to answer questions may not have been updated for a few years.
Because medical information is always changing, that lag may mean AI responses do not capture the latest medical knowledge about conditions or treatments.
3. Consider the source.
One of the advantages of doing a standard search through Google is transparency, Dr. Wilson explains. “If I see that the top link [in the search results] is from a trusted source, such as the American Medical Association, I can be sure they vetted it and that the information will be accurate,” he says. “But if I use generative AI, it might not tell me where the information is coming from.”
4. Maintain some skepticism.
AI is known for sometimes “hallucinating,” or providing information that is not true. For example, Dr. Taylor says he asked a chatbot to create a scientific paper summarizing opioid use disorder and to provide references. “It was a nice summary with information in the body of the text that was, for the most part, correct, but the references were made up,” he says. “Although the references listed names of scientists and titles that seemed plausible and they were associated with legitimate journals, a closer inspection using search engines revealed they were fictitious.”
There are other potential source issues, too. Some users report that, at times, the information AI provides is correct, but the cited sources don’t include answers to the questions they asked. Other times, users say that AI provides source links that don’t exist or that give them a “page not found” result—all of which call into question the accuracy of the answer. “These models aren’t pulling information from one particular resource or site, and it might not necessarily be evidence-based,” Dr. Taylor says.
Some platforms now allow users to customize searches by asking that information come only from specified sources, such as the medical literature, Dr. Wilson adds.
Dr. Wilson compares the way AI gathers information to playing a video game. “Its goal is to get the highest score it can, and the score is based on how humanlike it sounds and its readability,” he says. “When it sounds so human and confident, it can be hard to distinguish between what is accurate and what is not. But this is an active area that is being refined as greater restrictions are being imposed on AI.”
Ultimately, patients should keep in mind that just because something sounds correct does not mean it is.
“It’s fun to try generative AI, but you should always be skeptical of the source. In the end, trust your doctors, as we are the ones who have the responsibility to look out for your best interest,” Dr. Wilson says.
However, this is the beginning of a new technological era, and people should be aware that the technology is out there, he adds.