Terry Newell

Terry Newell is currently director of his own firm, Leadership for a Responsible Society.  His work focuses on values-based leadership, ethics, and decision making.  A former Air Force officer, Terry also previously served as Director of the Horace Mann Learning Center, the training arm of the U.S. Department of Education, and as Dean of Faculty at the Federal Executive Institute.  Terry is co-editor and author of The Trusted Leader: Building the Relationships That Make Government Work (CQ Press, 2011).  He also wrote Statesmanship, Character and Leadership in America (Palgrave Macmillan, 2013) and To Serve with Honor: Doing the Right Thing in Government (Loftlands Press, 2015).

Think Anew

Recent Blog Posts

Can Using AI Make Students Better Thinkers?

Before AI, a high school English student assigned to read Shakespeare’s Macbeth had two choices: read the play or (without telling the teacher) the CliffsNotes version. CliffsNotes is kind of “old school” today, though the online version adds a few wrinkles. Thanks to artificial intelligence (AI), students can now get CliffsNotes on steroids. If you want a scene-by-scene analysis, study questions and even a quiz to test your knowledge, there is studymode.com.  If you want a polished essay on some aspect of the play, you can choose from dozens on booksai.com, which also offers key ideas, quotes and “action items” for how to apply lessons from the play.  The site will even create a cross-disciplinary review of how Macbeth relates to such fields as psychology, political science, ethics and history.

While this certainly offers ways to deepen how students experience literature, a key question AI “help” raises is whether it makes students better thinkers, which is of course an important reason for studying Macbeth.  AI help can easily become what is termed “cognitive offloading”: using AI to relieve us of some of the work of thinking.

Cognitive offloading can sometimes be very useful. In science, AI can scan massive amounts of information to help drug researchers identify promising candidates for new medications, help radiologists find anomalies they might otherwise miss, and help lawyers find relevant cases to build legal briefs.  Cognitive offloading offers the chance to concentrate thinking where it offers the greatest benefit, without wasting time on tasks done faster and better with AI. Yet AI systems have also been found to offer bad financial advice, fabricate legal cases that don’t exist, and offer criminal sentencing recommendations that are harsher for minorities than for white defendants guilty of the same offense.  In short, AI is a useful tool only when humans apply critical thinking to what it spits out.

The same dilemma exists in schools if AI allows students to bypass the hard work of learning that requires their intense engagement and thinking. A recent series of studies at the University of Pennsylvania assigned participants a subject and asked some to research it using only Google’s search engine, others only AI. Participants were then asked to write a paper, such as advice on gardening, using what they had learned. Based on participants’ comments, those who used AI learned less and invested less effort in their papers, which were also shorter and contained fewer facts. When the papers were presented to independent reviewers who did not know which approach participants had used, those produced using AI were rated lower.

AI can be a boon to teachers, allowing them to quickly scan, gather and organize information and even develop lesson plans. Yet a study by University of Massachusetts education professors questions whether students learn more from such lessons. An analysis of 311 AI-generated lesson plans for eighth grade civics students found that 90 percent of the activities did not activate higher-order thinking skills, such as analysis, evaluation or engaging in civic action projects. They focused instead on memorizing, reciting and summarizing information.

When AI is misapplied in school, we get a technological version of an observation attributed to medical school professor Martin H. Fischer, who once wryly defined education as “the notes of the professor become the notes of the student without going through the mind of either one.” Applied correctly, however, AI can deepen learning. AI “chatbots,” for example, can pose questions, examine assumptions, give students feedback, explore “what if” scenarios and even test student thinking.  Think of this as AI-enabled “cognitive uploading.”

Reading Macbeth may not seem “fun.” Indeed, reading anything for learning or pleasure seems less attractive to many in a society dominated by smartphones, social media, ubiquitous web-based diversions, apps and news that plays to (and fosters) short attention spans. A study published this past summer in Science revealed that the percentage of people who picked up a book (print or e-reader) or magazine every day dropped three percent a year from 2003 to 2023.  Forty-nine percent of 600,000 fifteen-year-olds across 79 countries in a 2018 study reported that they read only when they had to.  Journalist David Brooks captured this when he asked a group of graduating college students what book they had read in college that had changed their lives. He reported one student’s response:

“A long, awkward silence followed. Finally, a student said: ‘You have to understand, we don’t read like that.  We only sample enough of each book to get through the class.’”

Reading Macbeth won’t just get you through a class.  It can bring immense pleasure, reveal the beauty and power of language, deepen understanding of history and human psychology, help one think about current politics and offer a pleasant escape from the day-to-day stress of living. Offloading Macbeth to AI produces none of that.

Picture Credit: teachertoolkit.co.uk

The Supreme Court and the Limits of Originalism