A recent case study published in the Annals of Internal Medicine describes a disturbing incident in which a 60-year-old man suffered adverse effects after following health advice obtained from an AI chatbot. The man sought dietary guidance from ChatGPT and began taking sodium bromide in place of table salt, believing it to be a healthier alternative. He was admitted through the emergency room after exhibiting severe paranoia and hallucinations.
On arrival, he expressed fears that his neighbor was poisoning him. His vital signs were stable, and medical staff noted no immediate neurological deficits. He initially denied taking any medications or supplements, but later disclosed that he had self-prescribed sodium bromide, drawing on his background studying nutrition.
Sodium bromide, the substance the AI reportedly suggested, is not intended for human consumption, and the man's use of it led to a diagnosis of bromide toxicity, also known as bromism. Symptoms of the condition include tremors, memory loss, and confusion; in more serious cases, patients may experience seizures, kidney damage, or even respiratory failure. Doctors treated the man with risperidone, an antipsychotic typically prescribed for schizophrenia, and he was discharged three weeks later.
This case underscores a critical flaw in relying on AI chatbots for medical advice. The study's authors noted, “While it is a tool with much potential to provide a bridge between scientists and the nonacademic population, AI also carries the risk for promulgating decontextualized information.” A further complication is that a chatbot's outputs depend on the conversation that precedes them; without access to the patient's chat logs, it is impossible to determine exactly what advice he was given.
The immense popularity of ChatGPT makes the stakes hard to overlook. As of July, it ranked among the most-visited websites globally, drawing approximately 4.61 billion visits per month, and more than 45 percent of its users are under the age of 25. That skew toward younger users raises further questions about how heavily this demographic relies on such tools.
This incident makes a clear case for caution when using AI-generated advice, particularly in sensitive areas such as health and nutrition. Medical professionals have the expertise to provide safe, accurate guidance, whereas AI tools can lead users astray, as this case demonstrates. As the technology continues to integrate into daily life, the boundaries of its application, especially in health-related contexts, must be navigated with care and consideration.