Terminator and the rise of ChatGPT-800
Technology comes double-edged. On one hand, it has the power to make our lives easier, more efficient, and more enjoyable. With Skype, FaceTime, and the like, the globe seems to have grown much smaller. Gone are the days of clunky typewriters and snail mail. Now, even families oceans apart can stay connected at the click of a button.
Yet, on the other hand, technology also has the power to exacerbate old problems and create new ones. Hollywood has been anything but shy in reminding us of this, often taking it to extremes.
We're no strangers to the Skynet scenario. For over four decades now, we've watched Arnold Schwarzenegger reprise on the silver screen his breakout role in "The Terminator" as the Cyberdyne Systems Model 101 Series 800 Terminator (T-800). In the T-800, sent back in time to kill (and, later, to save) Sarah and John Connor, the Terminator universe captures, albeit in its most drastic, dire, and dollar-milking form, how technology may end up our victor as well as our vanquished.
Now, at the risk of stating the obvious, the Terminator's Judgment Day is far from fact, which is a true relief considering that it was originally scheduled for seven years past, on Aug. 29, 2016. Today, we find no T-800s in sight. Instead, we find a much smaller friend (or foe) in one of the most recent developments in artificial intelligence (AI) technology: ChatGPT.
ChatGPT is a variant of a Generative Pre-trained Transformer. It is a “language model developed by OpenAI that uses machine learning techniques to generate human-like text. It is trained on a large dataset of human conversations, and can be used for a variety of natural language processing tasks, such as text generation, conversation simulation, and language translation.”
Now, dear readers, if you're anything like me, you probably have little to no idea what all this gizmo techie talk means. And, personally, I am never quite satisfied with not knowing, so I decided to figure this out by going straight to the source itself. I visited the ChatGPT webpage, opened an account, and typed in the following task: "Explain what ChatGPT is to a five-year-old."
Its response: "ChatGPT is like a robot that knows how to talk and write like a person. It can make up stories, have a conversation and even write letters for you. Think of it like a talking computer that can understand and respond to what you say. It's like having a friend who can talk to you all the time and never get tired! It's similar to how we learn new words and how to talk. This robot is also learning, but with the help of a lot of information from books and conversations that people have had before."
Much simpler, eh? Comparing these two descriptions (both of which were generated by ChatGPT), the chatbot clearly has its uses in presenting data and information tailored to the prompt. Put this way, ChatGPT sounds like the farthest thing from a ChatGPT-800! But, as we already know, technology is a double-edged sword.
Last week, an extensive debate was sparked on the ethical use of AI in academia. Prof. Francisco Guiang, a faculty member of the University of the Philippines Department of History, and a contemporary of mine and fellow student leader from our UP days of yore, observed how AI might have been used to generate answers to his final exam.
Now, much has been said, and will be said, about the impact of AI on academic integrity among students. Indeed, for a student to simply pass off work not his or her own, AI-generated or otherwise, would easily tick the box of academic dishonesty and plagiarism.
Academics and pseudoacademics alike have flooded social media to voice their protest against ChatGPT. They ask: How can we now ensure that our students do not cheat if they can simply prompt an AI to give them the answers? To this, I say: If a chatbot can give students the right answers to their exams, perhaps we have simply been asking them the wrong questions.
You see, in the DIKW hierarchy of things, there is a breadth of difference among data, information, knowledge, and wisdom. In gist, "data is raw facts and figures; information is data that has been organized and processed; knowledge is information that has been understood and internalized; and wisdom is the application of knowledge and experience to make sound judgments and decisions."
The problem with Philippine education is that it has focused too narrowly on data and information, and has failed to account for, much less aspire to, knowledge and wisdom. No discipline is more guilty of this error than Philippine legal education, which over the past decades has been diminished into nothing more than the regurgitation of doctrine and the rote recitation of black-letter law. I recall many instances where a grade of "uno" (the highest possible grade in UP) would be awarded to students able to recite codal provisions verbatim, while questions of justice, ethics, and the spirit of the law would be received with scorn and dismissed as lofty mumbo jumbo.
With the advent of Ctrl+F, the market value of memory work has sorely depreciated. In the age of ChatGPT, the premium is no longer simply on knowing, but on understanding. The challenge, therefore, is not simply to monitor and deter students from relying on AI. On the contrary, we have already seen how AI technology can simplify the otherwise complex. Rather, what we should strive for is to raise our academic standards from the tier of data and information to that of knowledge and wisdom.
ChatGPT may be prompted for the right answer. What remains to be seen is whether the right answer will be prompted from us.