Oct 10, 2024
(KRON) — Would you trust artificial intelligence with your visit to the emergency room? Researchers at the University of California, San Francisco, addressed that question on Tuesday with a newly published study. In short, the researchers do not believe AI should be performing surgery or prescribing medication to patients just yet. However, the study suggests AI could be a promising addition to hospitals in the future, and it is already outperforming clinicians in at least one part of a patient's trip to the ER.

Experiment

ER departments nationwide are overcrowded, according to the National Library of Medicine. As a result, researchers wanted to see whether a large language model (LLM) could help with at least the basic task of deciding whether to admit patients to the ER.

UCSF researchers gathered more than 10,000 real adult ER visit records and put ChatGPT-4, one of the world's most popular LLMs, to the test. According to the study, the AI software was tested on three decisions made during a patient's visit to the ER:

Admission status
Radiological investigation(s) request status
Antibiotic prescription status

The study is one of only a few to evaluate an LLM using real-world clinical data rather than simulated scenarios, and it is the first to use more than 1,000 clinical cases for this purpose.

Results

Across all three clinical recommendation tasks, ChatGPT performed "poorly," with accuracy scores lower on average than the physician's. According to the study, the AI software was "overly cautious in its recommendations, with high sensitivity at the cost of specificity" (see the sketch at the end of this article).

"This is a valuable message to clinicians not to blindly trust these models," said postdoctoral scholar Chris Williams, lead author of the study. "ChatGPT can answer medical exam questions and help draft clinical notes, but it's not currently designed for situations that call for multiple considerations, like the situations in an emergency department."

Although it was significantly outperformed on both the radiology and antibiotic prescription recommendations, ChatGPT was better at determining a patient's initial admission status upon entering the ER. ChatGPT and clinicians were asked to rank patients by severity, choosing between a "serious condition" and "less urgent conditions such as a broken wrist." According to the study, in a smaller sub-sample of 500 pairs, the AI was correct 89% of the time, compared with 86% for the physician. Even so, clinicians said that having AI help weigh a patient's needs against the hospital's resources could free up "critical time to treat patients with the most serious conditions while offering backup decision-making tools for clinicians who are juggling multiple urgent requests."

Issues moving forward

According to the study, bias is one of the main issues with incorporating AI into the ER. More specifically, "due to biases within the data to train them," the models can show both racial and gender biases.

"It's great to show that AI can do cool stuff, but it's most important to consider who is being helped and who is being hindered by this technology," said Williams. "Is just being able to do something the bar for using AI, or is it being able to do something well, for all types of patients?"

In earlier research, GPT-3.5-turbo provided "largely appropriate responses" when asked to give simple cardiovascular disease prevention recommendations.
And when members of the public were asked on a public social media forum how they felt about such recommendations, they both preferred the AI interaction and rated it as having higher empathy than physician responses.

"There's no perfect solution, but knowing that models like ChatGPT have these tendencies, we're charged with thinking through how we want them to perform in clinical practice," Williams said. "Upcoming work will address how best to deploy this technology in a clinical setting."
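The study's "high sensitivity at the cost of specificity" finding can be made concrete with a small example. The sketch below is not taken from the study; the counts and the admit/don't-admit framing are illustrative assumptions, meant only to show how an overly cautious model scores well on one metric and poorly on the other.

```python
# Minimal sketch of sensitivity vs. specificity for a binary recommendation
# (e.g., admit vs. don't admit). The counts are hypothetical, NOT from the UCSF study.

def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # share of truly positive cases the model flags
    specificity = tn / (tn + fp)   # share of truly negative cases the model clears
    return sensitivity, specificity

# An "overly cautious" model recommends admission for almost everyone:
# it catches nearly every patient who needed admission (high sensitivity),
# but it also flags many who did not (low specificity).
sens, spec = sensitivity_specificity(tp=95, fn=5, tn=40, fp=60)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")  # sensitivity=0.95, specificity=0.40
```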