When I look back on more than four decades of teaching at the university, I often ask myself what it was about a student’s exam, a term paper, or a thesis that impressed me most, or made me say, “Wow! This is exceptional!” It is such moments that a teacher remembers best, and that make the encounter with young minds a truly rewarding experience.
I think that I can now formulate more precisely what I was looking for in my students, and what I sought to bring out and nurture in every one of them. It is the quality of mind that one sees in a student’s patient attempt to grasp concepts, to sort out their meanings and implications, and to apply them in such a way that they become the student’s personal tools for illuminating how the world works and how one should live. Such achievement typically shines through a piece of work with piercing lucidity, unadorned by formulaic words piled upon words.
It is what ChatGPT lacks, tries hard to feign (for example, through the clever pauses of the blinking cursor, which suggest deliberate thought), and compensates for by deploying an astonishing amount of information, some of it fabricated. ChatGPT is the pathbreaking artificial intelligence (AI) machine that has been trained on an unimaginably large body of text and can communicate in a natural language like a real person.
What ChatGPT does not have, and will never have, is the quality of mind that every human being uniquely possesses and that schools unwittingly suppress in the name of efficient learning and standardized measurable competencies. This gift, which lies at the very core of human judgment, is the complex product of an individual’s life history and circumstances. A person may go through life without grasping it in any conscious way, and yet be capable of using it to solve practical problems in daily life.
A teacher’s task, I think, is to encourage students to trust their minds in searching for answers, to remind them not to be afraid of committing mistakes in perception and understanding, and, most of all, to help them make sense of their own modes of reasoning and the judgments that unconsciously shape these. In a word, the primary purpose of education is to enlarge a person’s insight into oneself.
ChatGPT shook the world with its human-like capacity to understand language, communicate large amounts of information, and perform a broad range of specialized tasks requested by users. Here is a machine that seems capable of doing almost anything: writing poems, essays, scripts, syllabi, project plans, novels, homilies, and more. When it was publicly introduced late last year as a free resource, its developers invited users in various fields to explore its seemingly limitless applications to human tasks.
Its millions of avid users at once put it to work, casting it in the role of a resource person with an encyclopedic grasp of everything that has ever been published, or of an all-round executive assistant or ghostwriter who can produce a speech or talking points for any occasion. The seeming ease with which ChatGPT could assume these and many other roles was mind-boggling. The cognitive and intellectual capabilities it demonstrated threatened to put many people in creative lines of work out of a job.
It didn’t take long for users to believe that this was more than a machine that could process and communicate information at astounding speed. They began treating it as a person, a sentient being with beliefs and feelings, capable of giving advice and offering options when prompted to do so. This tendency to attribute human characteristics to animals or inanimate things is called “anthropomorphism.” It is how Amazon’s voice assistant “Alexa” is often treated in some households: at first as a device that remembers shopping lists, switches off lights, and supplies information, but later as someone to turn to for opinions and advice.
Machines are trained to perform the most complicated calculations in processing data, which accounts for their ability to predict and offer reliable representations of reality. But, unlike human beings, they are not capable of judgment. The reason is simple: they have no values of their own, other than the preferences that have been, knowingly or unknowingly, programmed into them by their engineers. These value preferences tend to prioritize maximizing efficiency and productivity over, say, equity or fairness, or other nonquantifiable goals like ecological sustainability, human flourishing, or quality of life.
This danger is compounded when decision-makers whose actions affect the lives of millions allow machines to substitute algorithms for human judgment, in the mistaken belief that this is how the world works and that all we can do is abide by it.
----------
public.lives@gmail.com