Are You Prepared for an AI-Driven Doctor?

In reading up on AI, I quickly came across the idea of AI in medicine. I do not consider myself an AI zealot: I think the potential is amazing, but the possibility of developing SkyNet is very real. However, when I read through a few studies on AI models in healthcare, I started to wonder if the upsides outweighed the downsides.

Specifically, an AI can hold far more information than a human brain can. It is not uncommon to have a doctor who is unaware of new techniques or treatments, usually because of willingness, ability, or access to the newest approaches. AI could, in theory, be updated almost instantly. (Of course, vetting the “new” would be a potential risk.) I also understand AI well enough to know that updating a model isn’t really like applying a software patch; there is a training and validation process. But those timelines will shrink as the technology advances, so while updates aren’t immediate today, they likely will be tomorrow.

An AI also seems much more empathetic. That really isn’t a surprise, because doctors are human. Humans get bothered, overworked, and frustrated, and simply have bad days. An AI might slow down when overloaded, but it would not change the “empathy level” of its responses.

One major drawback that I’ve seen with ChatGPT is that it does get its channels crossed. This would be a HUGE problem in healthcare. Let me explain:

I’ve asked ChatGPT for an answer to something, and the response I got back was clearly for a different request. ChatGPT didn’t realize its mistake and didn’t have the ability to look back on the conversation and evaluate its own response. It assumed I was simply unhappy with the answer and reworded it (again answering the wrong request). While this is clearly a scary thing when talking about healthcare, it’s a fairly easy technical problem to address and is more a matter of coding than of AI. ChatGPT, after all, should not be expected to be perfect yet.

So, am I ready for an AI doctor? Maybe not yet but I totally see the upside potential here.

One interesting study on the matter

Want to know more about the study I looked at? It’s pretty interesting but not entirely surprising, if you think about it.

To conduct the study, the researchers sampled 195 question-and-answer exchanges from Reddit in which a verified physician had responded publicly to a patient question. They presented each original question to ChatGPT and asked it to generate a response. A panel of three licensed healthcare professionals then evaluated each question and its corresponding responses, unaware of whether a response came from the physician or from ChatGPT. They assessed the responses on information quality and empathy and expressed a preference for one over the other.

The panel of healthcare professionals preferred ChatGPT’s responses to the physicians’ responses 79% of the time. ChatGPT’s messages provided nuanced and accurate information, often addressing multiple aspects of the patient’s question, which impressed the evaluators. ChatGPT’s responses also received significantly higher ratings for both quality and empathy: good or very good quality responses were 3.6 times more common for ChatGPT, and empathetic or very empathetic responses were 9.8 times more common.

The study was clear that this was about an AI “assistant,” not a doctor replacement. Since the study was done in conjunction with the UC San Diego School of Medicine, this is no surprise. The best way to kill research is to make it threatening to a powerful group. Doctors are willing to accept “assistance” but not to be replaced! Being replaced leads immediately to a reactionary self-defense stance, which is never helpful.

Overall, the study demonstrates the potential of AI assistants like ChatGPT to address real-world healthcare challenges with accurate and empathetic responses. The technology may not be ready yet but these promising results suggest a future where AI-augmented care plays a significant role in improving healthcare delivery and enhancing patient outcomes.

A link to the study is here:

Paul Bergman runs a business strategy and cybersecurity consulting company in San Diego. He writes on cybersecurity and board management for both corporate and nonprofit boards.
