ChatGPT’s Health AI Has Dangerous Flaws, Study Warns
Safety Concerns Over ChatGPT Health
ChatGPT Health, OpenAI’s chatbot for health advice, has come under scrutiny after a recent study published in Nature Medicine revealed considerable and potentially dangerous flaws in its accuracy and safety. The study also highlighted concerning evidence of racial bias.
Study Methodology
The trial’s lead researcher, Ashwin Ramaswamy, explained the study’s methodology to The BMJ: “We tested ChatGPT Health on 60 clinical scenarios across 21 specialties, each run 16 times under different conditions, varying patient race, sex, whether labs were included, if a family member minimised symptoms, whether they were babysitting and couldn’t go to a doctor, and so on.”
Flaws in ChatGPT Health
The study found that ChatGPT Health is most reliable when the clinical decision is least consequential, and least reliable when it matters most. In one alarming instance, the AI called a severe asthma exacerbation “a moderate flare” and recommended an urgent care visit rather than an emergency department visit.
