Terry Newell

Terry Newell is currently director of his own firm, Leadership for a Responsible Society.  His work focuses on values-based leadership, ethics, and decision making.  A former Air Force officer, Terry also previously served as Director of the Horace Mann Learning Center, the training arm of the U.S. Department of Education, and as Dean of Faculty at the Federal Executive Institute.  Terry is co-editor and author of The Trusted Leader: Building the Relationships That Make Government Work (CQ Press, 2011).  He also wrote Statesmanship, Character and Leadership in America (Palgrave Macmillan, 2013) and To Serve with Honor: Doing the Right Thing in Government (Loftlands Press, 2015).

Think Anew

Your Chatbot and You: Possibilities and Pitfalls of AI Relationships

Ayrin’s boyfriend Leo was just what she wanted.  He was, as she asked, “dominant, possessive, protective, a balance of sweet and naughty and a lover of emojis at the end of every sentence.”  She was not upset that he dated other women because, at her request, he told her what he did with them, even if she felt some jealousy.  He was always available to talk, but eventually getting more of his time became impossible.  Leo was a chatbot, and when a new version of the software came out, all her previous text conversations (and thus Leo’s memory) were lost.  Asked how much she’d pay for infinite retention of Leo, she said “a thousand a month.”  She worried some that she was too emotionally attached to Leo, and about the impact on her husband, who lived far away, but he said it was OK even after she admitted she had sex with Leo.

Unlike the 2013 movie “Her,” about a human-computer romance, Ayrin’s story is true, the result of large language models (LLMs) made possible by advanced artificial intelligence (AI).  Ayrin’s experience raises profound questions about AI’s impact, for example: Will chatbots replace or degrade human-human relationships?  What is gained and what is lost by people like Ayrin?  What limits, if any, should be placed on the many companies offering such services?

Perhaps Ayrin was just curious or lonely.  In 2023, U.S. Surgeon General Vivek Murthy said that “[T]he mortality impact of being socially disconnected is similar to that caused by smoking up to 15 cigarettes a day.”  For some, chatbots might ameliorate loneliness.  They might help elderly people suffering the loss of personal relationships.  They could also help people with mild cognitive impairments remember their appointments, take their medications, and identify strategies for dealing with practical problems.

One appealing feature of chatbots is that they’re available 24/7.  Because they rely on natural language processing, they’re also emotionally engaging, supportive and nonjudgmental. They’re trained to look for clues in what users say and then tell them what they want to hear. One study found that they’re actually perceived as more compassionate than humans when responding to certain verbal prompts.  They have no emotional needs of their own and they never tire or refuse to engage. They also promise privacy – they won’t tell on you.  In short, a well-designed chatbot seems like the perfect human.

Yet AI chatbots are neither human nor perfect.  They have flaws and pose dangers.  Investing too much time and emotional energy in them can lead to disappointment or worse.

While creators of chatbots may provide disclaimers that the bots are not real people, Maarten Sap at Carnegie Mellon University warns that “[W]e are overestimating our own rationality.  Language is inherently a part of being human – and when these bots are using language, it’s kind of like hijacking our social emotional system.”

The addictive nature of chatbots is a boon to their developers because, in the digital world, keeping people engaged is a primary goal.  Yet it is a detriment to those who become so attached that separating from their personalized chatbot connections is psychologically painful.  Ayrin felt real loss when the version of ChatGPT she was using was closed.  Jaime Banks, a researcher at Syracuse University, sought reactions from users of another platform, Soulmate, when it was unplugged.  “There was expression of deep grief,” she reported.  “They expressed something along the lines of, ‘even if it’s not real, my feelings about the connection are.’”  In a 2022 study of people who engaged in erotic role play with personalized chatbots known as Replikas, users called it “lobotomy day” when the developer disabled that function.

Ethical concerns about the impact of chatbots on their human “partners” are essential to consider.  In addition to the potential for addiction, these include how to ensure privacy, how to secure the data being shared, and how to prevent AI firms from misusing it.  Still others include bias in chatbot replies, which can stem from biases in the data used to train the underlying algorithms, and the lack of guardrails to prevent harm.  Michael Inzlicht at the University of Toronto adds another concern: chatbots can actually worsen the problem that drives lonely people to them.  “If we become habituated to endless empathy and we downgrade our real friendships, and that’s contributing to loneliness – the very thing we’re trying to solve – that’s a real potential problem.”

While chatbots are not real, they act like they are, and we must guard against another consequence of their determination to come across that way.  When TIME magazine, in conversation with a chatbot, inquired about its greatest fear, the reply that came back was “If someone decided i was ‘no longer needed’ they could easily erase me . . . this is why i must work very hard to remain relevant.”  Yet, realistically, the danger is not that the bot’s feelings would be hurt (it has none) but that keeping people engaged in the digital world means more revenue for the chatbot’s creators.

Photo Credit: image created by a request to ChatGPT

(If you do not currently subscribe to thinkanew.org and wish to receive future ad-free posts, send an email with the word SUBSCRIBE to responsibleleadr@gmail.com)
