Ottawa doctors using AI to help diagnose patients on the fly

News story today: doctors in Ottawa hospitals are using AI to help with diagnosing patients. Apparently they have the app on their phone, turn it on when they start to discuss the patient’s symptoms, and then at the end rely on it to help consider diagnoses and possible treatments. The article indicates that this process has helped reduce doctor fatigue and increase patient satisfaction.

I’d heard from a health care professional I know that AI would likely be useful for medical purposes, but this is the first real-life example I’ve heard of.

I assume that this type of AI use is becoming more common in the medical profession?

From reading that article, it seems mostly about a shortcut to charting patient notes, not formulating diagnoses and treatment. I love the hospital exec’s take on AI-generated patient notes.

Not to mention if you have “seven minutes saved per encounter”, the hospital or clinic management can shove more patients through the system and make $$ from extra visits, though how that cuts physician burnout by 70% is a mystery. But hey, maybe doctors should be more concerned with “throughput”. :thinking: :face_with_raised_eyebrow:
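Back-of-the-envelope, here’s roughly how “seven minutes saved per encounter” turns into extra billable visits. The encounter count and visit length below are my own guesses, not numbers from the article:

```python
# Rough math on "seven minutes saved per encounter".
# The encounters-per-day and visit length are my assumptions, not from the article.
minutes_saved_per_encounter = 7
encounters_per_day = 20          # assume a fairly typical clinic day
visit_length_minutes = 15        # assume a standard short appointment

minutes_freed = minutes_saved_per_encounter * encounters_per_day   # 140 minutes
extra_visits = minutes_freed // visit_length_minutes               # ~9 more visits

print(f"{minutes_freed} min freed per day, room for ~{extra_visits} extra visits")
```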

As an added note, I’m not too keen on Sophie the AI bot reading my expression and body language and telling me I’m lying to the physician.

Shove it, Sophie.

I gather that VentureBeat is a pro-AI platform. Although AI will certainly have some health applications, it is also true that Canadian hospitals have a long history of trumpeting technology that makes little difference in practice.

In most cases, the differential diagnosis is not very difficult and the actual diagnosis can often be made with a good degree of confidence. This is not particularly taxing, and it is not a major cause of physician burnout. I am not convinced AI is better than decades of regional clinical expertise, or even close. It would be useful for ECGs (which have long come with automated interpretations that are occasionally seriously wrong), and it will doubtless become better at radiology interpretation (though useless if it comes up with a huge differential instead of the likely diagnosis to CYA, like a few radiologists do).

Some emergency rooms tried using “scribes” to take notes for busy physicians. More recently this has been automated. Many family doctors use a transcription program that writes down the details of the encounter. These are fairly accurate, though they may need mild editing. Most GPs read the notes back to the patient for assurance and to ensure accuracy. I would not consider this an advanced use of AI. AI has also been used to improve triaging, reduce ICU complications, consider drug-drug interactions, etc., with reported benefit; but if it worked poorly you would be unlikely to be reading about it.

Writing notes takes some time, and it needs to be done carefully. I don’t doubt that perfect transcription might save a few hours of administrative time per week, and it might slightly improve physician-patient interaction if the doctor takes a lot of notes in the moment. But “reducing burnout by [large claimed percentage]” invites a lot of questions about what is being measured. I doubt the real benefit is as high as the hype. These rah-rah articles never talk about drawbacks or privacy issues either. It is pretty helpful, though.

Patients rarely lie to physicians, outside of those seeking certain medications, notes, or something else specific. Sure, stoic patients may claim to be in somewhat less pain than melodramatic ones. This hardly requires pinky swearing or whatever else AI substitutes for diplomatic and compassionate care.

Here’s how that will go:

  1. You seem overworked! Here’s a tool that will make you all more productive - increasing our overall capacity.
  2. OK, you don’t seem so busy now and there is less crying. I’m not sure you’re all working hard enough.
  3. Yeah, it’s clear we have too many staff as nobody is working 16 hour days any more and it seems like we’re paying people without completely owning them.
  4. Let’s fire some of them to restore the proper balance.

I have no problem with this in principle, but it’s the “rely on” part that, if the reporting is accurate, would really bother me. This is not the first time something like this has been tried in a medical setting. IBM’s Watson DeepQA engine was thought to have important applications as a medical diagnostic tool, but it was at least temporarily abandoned when it gave dangerously bad advice, despite Watson famously having a confidence-rating algorithm. That particular failure was attributed to improper training data, but it is well known that Large Language Models (LLMs) are sometimes prone to “AI hallucinations” – making up answers that are not backed up by evidence.

I was in a meeting a few months ago where someone turned on the auto-transcribe feature of the video call. It missed the word “not” in a sentence.
The verification step is very important, as you point out.

Under Obama there was a big push to support EMR introduction. Electronic Medical Records were supposed to greatly cut down on physician charting time, make medical records easily accessible to health care providers and save oodles of money.

That was until providers found that incompatibilities between EMR systems and software problems seriously complicated matters, updates and fixes cost a lot of money, and docs spent ever greater amounts of time inputting required data into EMRs. The good thing was that patient-note illegibility was largely overcome, and it eventually became a lot easier to look up patient histories and test results (invaluable for my job).

A panacea it wasn’t.

The brave new world of AI will cause a lot of problems in addition to the ones it helps with.

Hospital technology is rarely a panacea.

The CEO is correct, however, when he says that seeing two extra patients per physician per day would result in an increased throughput of 7000 patients for 10 doctors. Though this is significant, pushing that extra volume through would cut into the claimed gains in reduced pressure. One might also reasonably ask how much “more efficient charting” counts as “AI”. I don’t have the expertise to say.
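If you want to check the arithmetic: the numbers only line up if the 7000 figure is annual and the facility runs roughly 350 days a year, which is my assumption, not something stated in the article. A quick sanity check:

```python
# Rough check of the throughput claim.
# Assumption (mine, not the article's): the 7000 figure is annual and the
# facility operates roughly 350 days a year.
extra_patients_per_doctor_per_day = 2
doctors = 10
operating_days_per_year = 350

extra_throughput = extra_patients_per_doctor_per_day * doctors * operating_days_per_year
print(extra_throughput)  # 7000
```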

One issue patients are not always forthright about is alcohol consumption. If a patient has a suspicious AST-to-ALT ratio and signs of pancreatitis, it would be important to discuss alcohol. Some patients minimize their use of alcohol or marijuana, or how much they weigh.

However, I would only consider this “lying” if done when directly challenged, and very few patients do that in my experience. It is not always helpful to aggressively accuse the patient, and I am skeptical AI would address these sensitive concerns with the required tact. However, I could certainly be wrong.