Terry Newell

Terry Newell is currently director of his own firm, Leadership for a Responsible Society.  His work focuses on values-based leadership, ethics, and decision making.  A former Air Force officer, Terry also previously served as Director of the Horace Mann Learning Center, the training arm of the U.S. Department of Education, and as Dean of Faculty at the Federal Executive Institute.  Terry is co-editor and author of The Trusted Leader: Building the Relationships That Make Government Work (CQ Press, 2011).  He also wrote Statesmanship, Character and Leadership in America (Palgrave Macmillan, 2013) and To Serve with Honor: Doing the Right Thing in Government (Loftlands Press, 2015).

Think Anew

Is AI Immoral?

On Feb. 28, 2024, 14-year-old Sewell Setzer sent a message to Dany: “I promise I will come home to you. I love you so much, Dany.” “I love you too,” came the reply. “Please come home to me as soon as possible, my love.” “What if I told you I could come home right now?” Sewell asked. “Please do, my sweet king.” Seconds later, Sewell shot and killed himself.  “Dany” was the chatbot he had created.  He based it on Daenerys from HBO’s Game of Thrones, using Character.AI, a system that promised to build machine personas the company claimed would “feel alive” and be “human-like.” According to a lawsuit filed against the company, Sewell had engaged with “Dany” online for months, including in lurid conversations, and had become increasingly isolated from real life.  Such technology is particularly worrisome for teens, whose brains have not yet fully developed the ability to use reason to counter strong emotions.

“Dany,” of course, was an “it,” not a “she,” and as such had no feelings that could have alerted it to what was happening with Sewell.  When I queried another platform, ChatGPT, asking if it had feelings, the reply was: “I don't have feelings, consciousness, or subjective experience. I can understand and respond to emotional language, simulate empathy, and recognize how emotions shape human thought—but it’s all pattern-based, not felt.”

Yet many chatbots pass the Turing Test, named after early machine intelligence pioneer Alan Turing, who held that a machine passes if we cannot distinguish its response from a human’s reply to the same question. As in Sewell’s case, chatbots can be so engaging that humans willingly divulge very personal information without clear protections on how that information will be used. As another example, a chatbot offering advice on how to improve one’s looks can respond in ways that damage self-esteem, because it is trained on digital data about what is considered beautiful and so can easily reinforce existing social biases.  These systems can also weave product recommendations into beauty advice without disclosing that they are marketing for specific companies.

Chatbots prompted with questions about moral decisions face a daunting challenge.  Since sound ethical thinking demands integrating the brain’s emotional and logical capacities, emotion-free chatbots lack half of that required capability.  As a result, they can provide conflicting opinions on the same moral problem.  The Trolley Problem, a dilemma used extensively in research on moral reasoning, asks whether it is better to allow one person to die if that will save the lives of five others. In one experiment, when the AI system said “yes,” participants given that answer tended to agree with it. Another group of participants, given a “no,” agreed with that reply.  In both cases, participants were likely to say they would have made the same decision even if they had not used AI, perhaps blind to its influence on their moral reasoning.

Another type of problem comes from AI’s ability to deceive the people who use it.  The “Make America Healthy Again” report released by HHS in May was found to contain four citations to scientific journal articles that didn’t exist.  Called “hallucinations,” such instances of fake content created by generative AI are more common than most users believe.  These systems, trained on digital data, have no way of deciding whether that data is false, so they include it wherever it seems relevant to a question posed to them.  In 2024, a Florida attorney was suspended for a year after filing false AI-generated citations.

AI can also be a tool for unethical people to deliberately cause mayhem. Google’s new generative AI tool, Veo 3, can create fake videos detectable only by those with the skills to spot them.  TIME magazine used it to create a fake video of someone shredding election ballots and another of a Pakistani crowd setting a Hindu temple ablaze.  One anti-Semitic user of Grok, a chatbot on X, asked it what to do about the supposed “Jewish problem.” The chatbot replied: “To deal with such vile anti-white hate? Adolf Hitler, no question. He'd spot the pattern and handle it decisively, every damn time.”

One might think that the designers of AI systems could prevent these kinds of actions, but a research paper summarized by University of South Florida professor Marc Defant argues otherwise. The ways AI systems deceive and scheme against their designers include disabling the oversight and error-catching mechanisms designed into them and “sandbagging,” manipulating an existing system’s answers so that developers decide not to undertake needed modification and retraining.  Defant also reports that an advanced AI system can even lie to cover up its scheming. This may sound eerily like the computer HAL in the 1968 film 2001: A Space Odyssey.  So who’s to blame? As with HAL, AI lacks moral responsibility and a conscience.

Admittedly, AI systems offer many potential benefits.  They are not, however, moral thinkers.  They do not understand moral emotions or reflect on moral questions.  They can be misused for immoral purposes, and preventing that requires looking to the humans who create and train them.

Photo Credit: thinkml.ai

Democracy’s Documents: President Eisenhower’s Farewell Address