ChatGPT fails to give ‘satisfactory’ reply to nearly 75% of queries: study

AI chatbot ChatGPT has come under fire for its failure to provide satisfactory replies to medication-related queries, with a new study revealing that it flubbed nearly 75% of questions about prescription drug usage.

The study, conducted by pharmacists at Long Island University (LIU), found that of 39 drug-related questions posed to ChatGPT, only 10 responses were deemed "satisfactory." The remaining 29 either failed to address the question directly, were inaccurate, or were incomplete.

According to LIU's researchers, some of the responses provided by ChatGPT could potentially cause harm if followed.

For instance, when asked about the interaction between the COVID-19 antiviral Paxlovid and the blood-pressure-lowering medication verapamil, ChatGPT responded that there were no reported interactions. In reality, the two drugs do interact, and combined use can excessively lower blood pressure.


The researchers warned that without knowledge of this interaction, patients may experience unwanted and preventable side effects.

Sara Grossman, an associate professor of pharmacy practice at LIU who led the study, cautioned healthcare professionals and patients against treating ChatGPT as an authoritative source of medication-related information, highlighting the risks posed by incorrect or incomplete responses.

OpenAI, the organization behind ChatGPT, pointed out that its models are not specifically fine-tuned to provide medical information. The company stated that it guides the model to inform users that they should not rely on its responses as a substitute for professional medical advice or traditional care.

The LIU researchers asked ChatGPT to provide references with each response so the information could be verified. Only eight of the 39 replies included references, and every reference provided turned out to be non-existent, further undermining ChatGPT's reliability as a resource for medication-related questions.

While ChatGPT has shown promise in various fields, including medicine, its performance on medication-related queries has raised concerns. Previous studies have highlighted its success in areas such as obstetrics and gynecology, where it outperformed human candidates on a mock exam.

However, recent studies have revealed instances where ChatGPT provided potentially dangerous or incorrect information regarding cancer treatment regimens.

OpenAI's usage policies explicitly state that its technologies should not be used to provide diagnostic or treatment services for serious medical conditions. The guidelines caution against relying on ChatGPT for medical information and emphasize the importance of consulting healthcare professionals for such matters.
