40 Million Use ChatGPT for Health: Is It Safe?

Summary
– Over 40 million people globally use ChatGPT daily for medical advice, with healthcare queries making up more than 5% of all its messages.
– Users commonly ask the AI for help with symptoms, diagnoses, insurance appeals, and spotting medical billing errors.
– A key reported benefit is the chatbot’s 24/7 availability, with about 70% of these health conversations occurring outside normal clinic hours.
– Major risks exist: studies show leading chatbots like GPT-4o can provide dangerously inaccurate medical information in roughly 13% of responses.
– Experts advise treating AI health advice with extreme caution, viewing it as a preliminary resource like WebMD, not a substitute for professional medical care.
A staggering number of people are now turning to artificial intelligence for guidance on their health. New data reveals that more than 40 million individuals globally use ChatGPT daily for medical advice, highlighting a profound shift in how the public seeks healthcare information. This reliance on chatbots spans a wide range of needs, from deciphering complex insurance bills and appealing coverage denials to describing symptoms in hopes of receiving a diagnosis or treatment suggestions.
The scale of this trend is particularly striking. The report indicates that over 5% of all messages sent to ChatGPT are healthcare-related. Given the platform’s massive daily volume of queries, this translates to well over 125 million health questions being processed every single day. A significant portion of these interactions, roughly 70%, occur outside standard clinic hours, pointing to a key driver of this behavior: the constant, on-demand availability of AI, which human doctors simply cannot match.
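For readers who want to sanity-check that figure, the arithmetic implied by the article's own numbers is straightforward. Note that the total daily message volume below is back-solved from the 5% share and the 125-million figure; it is not stated explicitly in the piece:

$$
H \approx 0.05 \times T, \qquad H \approx 1.25\times10^{8}
\;\Rightarrow\; T \approx \frac{1.25\times10^{8}}{0.05} = 2.5\times10^{9}\ \text{messages per day}.
$$

In other words, the 125-million health-question estimate presumes ChatGPT handles on the order of 2.5 billion messages a day in total.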
This surge in treating AI as a medical confidant arrives amid heightened anxiety about healthcare access and affordability. Recent changes have driven sharp increases in insurance premiums for millions, potentially pushing some people, especially younger or financially strained individuals, to forgo traditional coverage. In such circumstances, a free, always-available chatbot can look like a viable alternative for initial guidance.
However, this convenience comes with considerable and well-documented risks. AI chatbots are prone to “hallucination,” generating dangerously inaccurate information presented with unwavering confidence. Research has shown that leading models can produce unsafe medical advice at alarming rates. One study found that models like GPT-4o and Llama responded to medical questions with incorrect and potentially harmful information approximately 13% of the time. Experts caution that this means millions could be receiving unreliable guidance from these tools.
The companies behind these technologies acknowledge the challenge. OpenAI has stated it is actively working to enhance its models’ safety and accuracy when handling sensitive health queries. Yet, the fundamental limitation remains: these systems are not trained medical professionals and lack the nuanced judgment, ethical responsibility, and clinical experience of a licensed doctor.
For now, the most prudent approach is to view generative AI in healthcare similarly to how one might use a general medical website. It can be a useful starting point for understanding basic conditions or navigating bureaucratic systems, but it is not a substitute for professional medical diagnosis or treatment. Any information provided by an AI should be treated with extreme caution and verified with a qualified healthcare provider, especially for serious symptoms, chronic conditions, or mental health concerns. The allure of an instant, free consultation is powerful, but when it comes to personal health, the potential cost of inaccuracy is simply too high.
(Source: ZDNET)