- cross-posted to:
- medical_professionals@midwest.social
My thoughts are summarized by this line:
> Casey Fiesler, Associate Professor of Information Science at the University of Colorado Boulder, told me in a call that while it’s good for physicians to be discouraged from putting patient data into the open-web version of ChatGPT, how the Northwell network implements privacy safeguards is important—as is education for users. “I would hope that if hospital staff is being encouraged to use these tools, that there is some significant education about how they work and how it’s appropriate and not appropriate,” she said. “I would be uncomfortable with medical providers using this technology without understanding the limitations and risks.”
It’s good to have an AI model running on the internal network to help with emails and such. A model like Perplexity could be useful for parsing research articles, as long as the user actually clicks through and follows up on the cited sources.
It’s not good to use it for tasks that traditional “AI” was already doing, because traditional AI doesn’t hallucinate and doesn’t require nearly as much processing power.
It absolutely should not be used for diagnosis or insurance claims.
I blame my tech background for making me intensely suspicious of pretty much all AI. The model MIT developed for early detection of breast cancer on mammograms, trained on an extremely rigorously vetted and sanitized data set, is probably the only breed of AI I would actually trust in medicine.
I once had ideas about creating a learning algorithm (not quite as complex as AI, and not a black box) that uses input from medical professionals to generate suggestions for triage and protocols in emergency medicine. My idea was to feed it the triage notes, vitals, labs, diagnosis, and disposition, along with patient demographics (and NO PII), to build a statistical model that looks at the triage notes and intake vitals and suggests a triage level plus empiric labs/testing to expedite care. A rough sketch of what I mean is below.
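Something like this minimal scikit-learn sketch is the shape of what I’m picturing. To be clear, every column name, the ESI 1-5 target, and the feature choices here are placeholders I made up for illustration; a real system would need a properly de-identified data pipeline and rigorous clinical validation before anyone trusted its suggestions.

```python
# Minimal sketch only: all column names (triage_note, heart_rate, esi_level,
# etc.) are hypothetical placeholders, not a real schema.
from sklearn.compose import ColumnTransformer
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

VITALS = ["age", "heart_rate", "resp_rate", "systolic_bp", "temp_c", "spo2"]

def build_triage_model() -> Pipeline:
    """TF-IDF over the free-text triage note plus standardized intake
    vitals, feeding a multinomial logistic regression over triage levels.
    Deliberately a transparent statistical model, not a black box."""
    features = ColumnTransformer([
        # Free-text triage note -> sparse bag-of-words features.
        ("note", TfidfVectorizer(max_features=5000, ngram_range=(1, 2)), "triage_note"),
        # Numeric intake vitals, standardized.
        ("vitals", StandardScaler(), VITALS),
    ])
    return Pipeline([("features", features),
                     ("clf", LogisticRegression(max_iter=1000))])

# Training on a de-identified DataFrame of past visits:
#   model = build_triage_model()
#   model.fit(visits[["triage_note"] + VITALS], visits["esi_level"])
# The output is only a suggestion; staff can always override it upward.
```

The nice thing about a linear model like this is that you can read the coefficients off and see exactly which words and vitals are driving any given suggestion, which matters a lot if clinicians are supposed to trust (or overrule) it.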
Obviously, the triage nurse (or any other staff member, really) could override it and enter a higher triage level, because there’s no good way to reliably teach a machine gestalt or heuristics. A really experienced healthcare provider will almost always have a good sense for which patients are just barely compensating and will be crumping shortly. I just think having a statistical model that puts in empiric orders to get things started while the patient is still waiting to be brought back could expedite care a lot.
The thing that made me think of this is that every time I have seen a kiddo come through the ER with vision changes that were not fixed by glasses, they had some kind of intracranial mass, and things would go so much faster if the head CT were already done by the time the physician could actually see the patient. (Or patients on the border of meeting SIRS criteria having a bunch of labs already drawn; a toy version of that rule is sketched below.)
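For the SIRS case, even a dumb rule over the vital-sign components of the criteria (temp above 38 °C or below 36 °C, heart rate above 90, respiratory rate above 20) could queue up labs; the WBC criterion isn’t knowable at triage, which is exactly why you’d want the blood draw started early. The order sets below are made up for illustration and are not an actual protocol.

```python
# Toy illustration only: the order sets here are invented for the example,
# not an actual clinical protocol.
from dataclasses import dataclass

@dataclass
class IntakeVitals:
    temp_c: float
    heart_rate: int
    resp_rate: int

def sirs_vital_flags(v: IntakeVitals) -> int:
    """Count the vital-sign SIRS criteria met at intake.
    (The WBC criterion needs labs, which is the whole point.)"""
    return sum([
        v.temp_c > 38.0 or v.temp_c < 36.0,  # fever or hypothermia
        v.heart_rate > 90,                   # tachycardia
        v.resp_rate > 20,                    # tachypnea
    ])

def suggested_empiric_orders(v: IntakeVitals) -> list[str]:
    """Two or more flags meets SIRS on vitals alone; a single flag is
    borderline but still worth getting labs cooking in the waiting room."""
    flags = sirs_vital_flags(v)
    if flags >= 2:
        return ["CBC with diff", "CMP", "lactate", "blood cultures x2"]
    if flags == 1:
        return ["CBC with diff", "CMP"]
    return []

# suggested_empiric_orders(IntakeVitals(temp_c=38.6, heart_rate=112, resp_rate=22))
# -> full sepsis-adjacent workup queued before the patient is even roomed
```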