The chatbot’s reasoning was “at times medically implausible or inconsistent, which can lead to misinformation or incorrect diagnosis, with significant implications,” the report noted.
The scientists also acknowledged limitations of the research. The sample size was small, with only 30 cases examined, and the cases were relatively simple, with each patient presenting a single primary complaint.
How the chatbot would fare with more complex cases was not clear. "The efficacy of ChatGPT in providing multiple distinct diagnoses for patients with complex or rare diseases remains unverified," the report stated.