Terry Newell

Terry Newell is currently director of his own firm, Leadership for a Responsible Society. His work focuses on values-based leadership, ethics, and decision making. A former Air Force officer, Terry also previously served as Director of the Horace Mann Learning Center, the training arm of the U.S. Department of Education, and as Dean of Faculty at the Federal Executive Institute. Terry is co-editor and author of The Trusted Leader: Building the Relationships That Make Government Work (CQ Press, 2011). He also wrote Statesmanship, Character and Leadership in America (Palgrave Macmillan, 2013) and To Serve with Honor: Doing the Right Thing in Government (Loftlands Press, 2015).

Think Anew

Recent Blog Posts

"Dr. Chatbot is In"

"Dr. Chatbot is In"

Last week, when I got out of bed, I was dizzy. The room was spinning. When the dizziness passed, I wondered if I needed a doctor. So I used ChatGPT, a generative AI platform. I described what happened, and in less than a minute I got possible causes. The most likely seemed to be Benign Paroxysmal Positional Vertigo, caused by calcium crystals in my inner ear. ChatGPT offered a simple exercise to try to solve the problem. “Dr. Chatbot” also suggested other possible causes I could explore with a physician.

“Dr. Chatbot” is appealing. It’s available 24/7 and far faster than using a Google search engine and working through multiple “hits.” There’s no need to schedule an appointment; one can see the “doctor” immediately and have as much time as needed. The “appointment” ends with a transcript of the “conversation” available just by hitting the “print” key.  The bill: $0.  

But how good is Dr. Chatbot’s help?  Are there downsides?

A study by researchers at the University of Maine drew on a dataset of over 7,000 questions and the responses to them from both AI systems and human doctors in the U.S. and Australia. Responses were evaluated on criteria such as accuracy, professionalism, completeness, and clarity. The AI responses averaged about 8 on a 10-point scale across these criteria.

“Dr. Chatbot” is building a clientele.  A 2024 survey found that 17 percent of adults (25 percent of those under 30) regularly query an AI bot for medical information. Yet the same survey found that 56 percent lack confidence in the accuracy of the information they get.  There are, indeed, reasons to be wary.  So I decided to ask “Dr. Chatbot” what they were.  Here’s a summary of what it said.

Generative AI systems like ChatGPT give medical information, not definitive diagnoses. Nor do they give medical advice, since they lack key information such as a person’s vital signs and medical history. They can, however, suggest questions one might ask a physician and clarify, at a general level, some options for treatment once a confirmed diagnosis is available. “Dr. Chatbot” may appear more empathetic because AI bots are designed to use natural language, acknowledge emotions (e.g., “I’m sorry”), and mirror the user’s concerns. They are crafted to please and comfort. But their empathy is simulated. They are software, not humans.

As for accuracy, it’s important to know where they get the medical information they provide. Chatbots are not trained on real patient data, such as electronic health records. Nor is there any FDA oversight of what they say or any legal liability for it. Physicians don’t share these limitations; they can access more and better information. With a physician you get clinical judgment, accountability, and moral responsibility, which AI systems lack. With an empathetic doctor you can build a caring relationship, which can make a difference in whether you accept the diagnosis and follow the treatment. Your computer doesn’t really care about you, even if it seems like it does.

“Dr. Chatbot” thus can be a useful first step, and it offers three advantages. It will always “listen” and never dismiss your input, as some physicians, sadly, do. Nor will it make you uncomfortable about asking embarrassing or emotionally difficult questions. Its quick accessibility may also help calm you down before you get to see your physician. You just need to be wary about the information you get, especially for troubling or complex medical issues whose symptoms, causes, and treatments require a more skilled diagnosis. In short, be very careful before you say: “I don’t need a doctor. The chatbot told me the problem and what to do.” When you misuse “Dr. Chatbot,” you may delay or forgo the care you really need.

Physicians are often troubled by patients who come to the office telling the doctor their diagnosis and needed treatment because “the computer said.” This can be a hurdle for the physician to overcome in building a good doctor-patient relationship. That relationship is built through what some call the AIMS model: announce, inquire, mirror, and secure trust. For example, a doctor might announce that a patient’s smoking is causing her health issues and that she should consider a smoking cessation program, then inquire about the patient’s thoughts and concerns, mirror what the patient says in supportive language that reflects what the doctor heard, and thus secure the patient’s trust. More medical schools now pay attention to how doctors build such relationship skills, and more need to do so. Some researchers have proposed that a chatbot, by simulating a patient, can help doctors practice AIMS skills through conversations with it.

“Dr. Chatbot” and the real doctor can serve patients best in a complementary relationship. Physicians can accept that some patients will come to them with information and questions generated by an AI system, then apply their experience and deeper medical knowledge while also improving their relationship skills with their patients, perhaps having practiced with chatbot “patients.”

Photo Credit: technology-innovators.com

Democracy and Moral Injury
