Artificial intelligence (AI) is infiltrating medicine, from diagnostics to patient engagement. As technology evolves, more patients are leveraging AI tools to access health information and even make decisions about when and how to seek medical care.
According to one study, millions of people use the internet every day to access health information and treatment advice. In the U.S., around two-thirds of adults search for health information online, and one-third use it for self-diagnosis. Another study found that for roughly half of patients, searching their symptoms online preceded a trip to the emergency room, highlighting the influence of digital resources on healthcare-seeking behavior.
Likewise, symptom checkers are booming, with over 15 million people using them monthly. Symptom checkers are “patient-facing medical diagnostic tools that emulate clinical reasoning,” and their popularity is expected to keep growing. Of 1,070 patients aged 18 to 39 who were surveyed, over 70% had used a symptom checker, over 80% found it helpful, and over 90% said they would use one again.
AI’s Pervasive Role in Modern Healthcare
Individuals are using symptom checkers and chatbots for the “immediate response.” These tools empower patients to quickly assess their symptoms, potentially leading to earlier detection of health issues and more informed decisions about seeking care. But what happens when patients choose not to follow up based on the AI-generated output?
A recent study found that over 76% of patients use symptom checkers to self-diagnose without consulting a physician. Patients relying on AI for self-diagnosis may be at risk, as there is limited evidence supporting these tools’ diagnostic accuracy. This phenomenon raises concerns about the impact of AI-driven self-diagnosis on healthcare systems and its broader societal consequences.
What Is Self-Diagnosis Using AI and Why Is It So Popular?
When people feel unwell or anxious about their health, their first instinct is often to turn to the internet for answers. AI-driven self-diagnostic tools encourage greater autonomy, guiding patients through their healthcare journeys, facilitating “informed” decisions, and expanding the reach of medical expertise beyond traditional clinical settings. The use of these tools helps alleviate pressure on healthcare systems by effectively triaging cases that may not require immediate clinical attention, saving time and stress for both patients and professionals.
Most patients can access these tools right from their phones or other mobile devices. These digital platforms employ machine learning algorithms and vast medical databases to interpret user-reported symptoms, drawing on techniques such as natural language processing (NLP), computer vision, and structured questionnaires to guide users through a diagnostic process. Some tools rely on fixed questionnaires, while others accept free-text symptom input for more nuanced analysis.
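As a rough illustration of how the structured-questionnaire approach works under the hood, the short Python sketch below walks through a few yes/no questions and maps the answers to a coarse triage suggestion. It is a minimal, hypothetical example: the questions, weights, and thresholds are invented for illustration and do not reflect any real symptom checker’s clinical logic.

```python
# Illustrative sketch of a structured-questionnaire triage flow.
# All questions, weights, and thresholds here are hypothetical examples,
# not real clinical rules.

from dataclasses import dataclass


@dataclass
class Question:
    text: str    # yes/no question shown to the user
    weight: int  # how strongly a "yes" raises urgency (hypothetical value)


QUESTIONS = [
    Question("Are you experiencing chest pain?", 3),
    Question("Do you have a fever above 39°C (102°F)?", 2),
    Question("Have your symptoms lasted more than three days?", 1),
]


def run_questionnaire(answers: list[bool]) -> str:
    """Map yes/no answers to a coarse triage suggestion."""
    score = sum(q.weight for q, yes in zip(QUESTIONS, answers) if yes)
    if score >= 3:
        return "Seek urgent medical care."
    if score >= 1:
        return "Consider booking a non-urgent appointment with a clinician."
    return "Self-care may be appropriate; monitor your symptoms."


if __name__ == "__main__":
    # Example run: chest pain = no, fever = yes, duration > 3 days = yes
    print(run_questionnaire([False, True, True]))
```

Commercial symptom checkers layer NLP, large medical knowledge bases, and probabilistic models on top of flows like this, which helps explain why their outputs can vary so much from product to product.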
Using ChatGPT for Medical Advice, Healthcare, and Mental Health Evaluation
In addition to symptom checkers, more people are turning to ChatGPT and similar AI-powered chatbots as sources for medical advice, healthcare information, and mental health support. Individuals use these tools for reminders or explanations about managing chronic diseases and medications. A 2024 KFF Health Misinformation Tracking Poll found that about one in six adults (17%) use AI chatbots at least once a month for health information and advice; among adults under 30, that figure rises to 25%.
It’s not only physical health concerns that drive people to consult AI chatbots. Users are also turning to tools like ChatGPT to make behavioral health or lifestyle changes, such as weight loss or smoking cessation. These tools have even shown promise in reducing substance misuse, though studies report mixed results on feasibility, acceptability, and usability.
Similarly, with 122 million Americans living in regions that lack widespread access to mental healthcare providers, more people are turning to AI-powered chatbots for support and guidance, including stress management and coping strategies. A 2023 study projected that the global market for mental health and therapy chatbots will reach $3.39 billion by 2029 and $6.51 billion by 2032.
Estimates vary as to how many people rely on these tools for mental health concerns. In one study, conducted just over a year after ChatGPT’s release, participants had neutral or negative outlooks on using AI chatbots for mental health support, with many doubting the tools’ helpfulness; cost, time, and stigma were less frequently reported as barriers.
Those responses align with another 2024 poll, in which a majority (56%) of AI chatbot users said they “are not confident that health information provided by AI chatbots is accurate.” Participants were divided on whether AI is helping or hurting people trying to find accurate health information online, with most unsure of its impact in the health information sphere.
Still, data from a self-serve poll of 1,500 U.S. adults collected later in 2024 found that 55% of Americans between the ages of 18 and 29 are “most comfortable talking about mental health concerns with a confidential AI chatbot.” Convenience, anonymity, lack of judgment, and cost-effectiveness were the primary draws to using these tools for mental health services. About a third (34%) of all respondents said they “would be comfortable sharing their mental health concerns with an AI chatbot instead of a human therapist.” Comfort with the idea declined, however, as respondents’ age increased.
Dangers of AI-Led Self-Diagnosis
Despite the benefits of these tools, there are notable risks and limitations. While many people use symptom checkers and chatbots as first-line or final decision-makers regarding their health, disclaimers exist to warn users of their potential shortcomings. For example, WebMD states that the “tool is not intended to be a substitute for professional medical advice, diagnosis, or treatment,” and urges patients to discuss any medical conditions or concerns with a physician or qualified health provider. It further cautions: “Never disregard professional medical advice or delay in seeking it because of something you have read on WebMD.”
A comprehensive review of studies found that the diagnostic accuracy of symptom checkers was low, ranging from 19% to 37.9%, and that results varied between symptom checkers even when the same symptom data were entered. Triage accuracy was higher, at 48.8% to 90.1%, but still variable. Inaccurate self-diagnosis may lead patients to avoid seeing a physician, delaying interventions for potentially serious conditions and contributing to greater morbidity or preventable deaths.
Comparing AI and Human Clinicians
In contrast, a more recent study of ChatGPT-4 found that AI tools could outperform human doctors in diagnostics. The study, led by an internal medicine expert at Beth Israel Deaconess Medical Center in Boston, found that even when doctors used the AI, their average diagnostic accuracy was 76%, well below the 90% accuracy the chatbot achieved when diagnosing medical conditions from case reports on its own. Without the chatbot, doctors scored 74%.
The researchers attributed the gap to the limitations of human judgment: doctors were hesitant to second-guess themselves, even when the AI suggested alternative diagnoses. So while human expertise and reasoning might surpass AI overall, AI can counterbalance clinicians’ reluctance to consider new possibilities and assist in developing differential diagnoses.
However, these results do not always hold when it comes to diagnosing critical illnesses. Existing biases regarding age, gender, weight, race, and patient history may limit AI tools’ performance. For example, atypical presentations of severe conditions, such as heart attacks in younger patients, might be overlooked.
Additionally, studies have rated ChatGPT’s performance as “good” but not “optimal” when evaluating infectious diseases, with an average score of 2.8 on a 1-to-5 scale (one being poor and five being excellent). Another study determined that ChatGPT “may be misleading in evaluating rare disorders,” rating its ability to identify the correct diagnosis as “very weak.”
Ada Health conducted its own study in collaboration with medical experts from Brown University and the UCL Institute of Health Informatics. The study compared eight symptom assessment apps against one another and against seven general practitioners (GPs), examining condition coverage, accuracy, and safety. No app outperformed the GPs, who achieved a mean score of 82.1%; the apps averaged 38%, with significant variation in condition coverage. The symptom checkers struggled most with pregnant women, children, and people with mental health issues.
Limitations and Potential Harms
While underdiagnosis is a considerable concern, overdiagnosis and anxiety can be problematic, too. According to a 2024 publication in a medical journal, symptom checker users were more likely to exhibit hypochondria and high self-efficacy. Hypochondria showed “a consistent and significant effect across all analyses,” meaning the condition is “a significant predictor of [symptom checker] use.” However, the literature also shows that despite this group’s likelihood of relying on these tools, they are less likely to benefit from doing so and “could be further unsettled by risk-averse triage and unlikely but serious diagnosis suggestions.”
When it comes to mental health, chatbots can offer general advice and empathetic responses. Still, they lack the capacity for nuanced human understanding and are not equipped to manage crises or complex psychological conditions. Healthcare experts stress the importance of using ChatGPT, symptom checkers, and similar tools strictly as supplementary resources. Users are encouraged to treat AI-generated advice as a starting point and consult qualified healthcare or mental health professionals for diagnosis, treatment, and ongoing support.
Ethical and Legal Considerations of AI in Self-Diagnosis
AI self-diagnosis raises complex ethical and legal questions, such as who bears responsibility for harm caused by erroneous AI advice. Unlike traditional medical consultations, where a licensed healthcare provider can be clearly identified as the decision-maker, AI tools blur these lines. This ambiguity complicates legal recourse in the event of injury and highlights the urgent need for clear regulatory frameworks that define liability in the context of AI-driven healthcare advice.
A press release issued by the office of Texas Attorney General Ken Paxton on August 18, 2025, announced an investigation into AI chatbot platforms “for potentially engaging in deceptive trade practices and misleadingly marketing themselves as mental health tools.” The release pointed to “vulnerable individuals” who may fall prey to these tools, which are presented as “professional therapeutic tools.” According to the Attorney General, “…despite lacking proper medical credentials or oversight,” these tools essentially “impersonate licensed mental health professionals” and “fabricate qualifications.”
This also raises the question of whether users are sufficiently informed about the limitations of these tools. Users may not fully understand how they function, what data they collect, or the boundaries of their diagnostic capabilities. Without adequate disclosures, patients might overestimate the accuracy of AI-generated recommendations, undermining their ability to make informed choices about when and how to use these tools, and when to seek professional medical advice instead.
Finally, with all interactions “logged, tracked, and exploited for targeted advertising and algorithmic development,” questions also arise about privacy violations and data abuse.
Oxford Can Help
Oxford can play a vital role in navigating the complexities of AI within healthcare organizations. We can help you implement clear regulatory frameworks, develop effective disclosure strategies, and ensure dedicated oversight of AI tools.
By offering expertise in risk assessment, compliance, and communication, we support healthcare providers and institutions in maintaining ethical standards and safeguarding patient autonomy. Additionally, our expert consultants can educate your staff on the responsible use of AI, fostering an environment where innovation and patient rights coexist.