
Recently, the journal Annals of Internal Medicine: Clinical Cases reported a rare case: a 60-year-old American man who followed ChatGPT's dietary recommendations to the letter for three months developed severe psychiatric symptoms and ended up in the emergency room. The man, who had a background in nutrition, sought to reduce his chloride intake and completely replaced table salt with sodium bromide purchased online, ultimately developing a rare condition known as bromism, or bromide poisoning.
According to the case report, when the man consulted ChatGPT (presumably version 3.5 or 4.0), the model suggested that chloride could be "replaced with bromide" without providing any health risk warnings. When doctors later tested the model themselves, they found that although ChatGPT noted that context mattered, it did not question the user's motives or issue warnings, as a professional physician would. Over the course of three months, the man gradually developed paranoia and hallucinations, even suspecting his neighbor of poisoning him, which ultimately prompted him to seek medical attention. Laboratory tests revealed abnormal blood chemistry, leading to a diagnosis of bromism, a condition in which long-term bromide exposure impairs nerve function and causes psychiatric abnormalities, muscle disorders, and other symptoms.
It is worth noting that bromide salts such as sodium bromide were used as sedatives in the 19th century, but because of their toxicity they were removed from over-the-counter medications in the United States beginning in the 1970s. Modern medicine would not recommend sodium bromide as a salt substitute; today the compound is largely limited to industrial uses such as cleaning. During his hospitalization, the man also showed typical signs of bromide toxicity, such as acne-like eruptions and rashes, and was discharged after three weeks of treatment.
An OpenAI spokesperson responded that the company's terms of service state that AI output should not be relied on as professional medical advice. Even so, the incident exposes the potential risks of large language models offering medical guidance. Recent research shows that AI models, including ChatGPT, are prone to "adversarial hallucinations," which cannot be completely eliminated even with technical improvements. The case serves as a warning for the use of AI in health care: scientific information requires professional scrutiny, and advice taken out of context can have serious consequences.