Abi's experiments with ChatGPT and other AI chatbots reveal a troubling gap between the technology's appeal and its reliability for medical advice. When she tested various bots with real health scenarios, responses ranged from reasonable to dangerously incomplete.

The core problem: AI chatbots lack context. They cannot examine you, order tests, or access your medical history. They generate responses based on pattern-matching from training data, not clinical judgment. A bot might offer textbook information about a symptom while missing critical red flags that would prompt a doctor to escalate care.

One case demonstrated the stakes clearly. When Abi described symptoms that warranted immediate attention, a chatbot provided generic reassurance instead of flagging urgency. A healthcare provider would have recognized the red flags and escalated care. These gaps matter because people increasingly turn to AI as a first-line resource, particularly in countries with strained health systems or limited doctor availability.

Chatbots also struggle with rare conditions and drug interactions. They excel at summarizing common knowledge but falter on the edge cases that a real patient population inevitably presents. Their training cutoffs mean they can miss recent treatment advances or withdrawn medications. Most carry disclaimers that understate these limitations.

The responsible use case exists. Chatbots work best as information supplements after a doctor's diagnosis, or to help patients formulate questions before appointments. They function poorly as primary decision-makers. Medical societies increasingly warn against relying on AI for diagnostic work or treatment recommendations without professional oversight.

The technology improves constantly, and future versions may integrate real medical databases and practitioner feedback. Today, however, patients should approach AI health advice with skepticism. A chatbot cannot replace the physical exam, the medical history, or the accountability a licensed provider carries. Efficiency and accessibility matter, but not at the cost of safety.