
A mother used ChatGPT to identify her four-year-old son Alex's mysterious illness after at least 17 doctors had failed to diagnose it.
Four-year-old Alex underwent a lifesaving spinal surgery. (AI-generated image for representation)
In a remarkable intersection of parental determination and artificial intelligence, a mother’s desperate search for answers about her four-year-old son’s mysterious illness led her not to a hospital, but to a chatbot. And it may have saved his life.
The child, Alex, had been showing unusual symptoms of tooth pain, slowed growth and difficulty with balance since the COVID-19 pandemic. His mother, Courtney, consulted at least 17 doctors across multiple specialties but none of those appointments delivered a diagnosis.
As her son's condition continued to deteriorate, Courtney turned to an unconventional option: ChatGPT, an AI-powered language model. She entered Alex's symptoms and MRI findings, line by line, into the chatbot. Within moments, ChatGPT returned a chilling but clear possibility: Tethered Cord Syndrome, a rare neurological disorder in which tissue attachments restrict the movement of the spinal cord.
With this clue in hand, Courtney reached out to online support communities and found parents of children with similar symptoms. A neurosurgeon eventually confirmed the diagnosis. Alex soon underwent spinal surgery, and today, he is on the road to recovery after years of unanswered questions and declining health.
Despite their best efforts, the doctors had been unable to pinpoint the condition. It was the AI model, trained on vast datasets, that finally surfaced the crucial lead.
Courtney’s experience resonated widely, with social media users applauding both her persistence and the life-saving potential of artificial intelligence. Her case added fresh fuel to a global conversation about the future of AI in medicine.
However, medical experts were quick to emphasise that AI tools like ChatGPT, while promising, are no substitute for qualified medical professionals. "AI can be helpful as a supplementary tool, but it is not infallible. There is always a risk of misinterpretation or incorrect suggestions," said one neurologist.