Friendly chatbot, anyone? | Inquirer Opinion
Undercurrent


In the middle of a particularly difficult week, I consulted a chatbot on how to prevent burnout among leaders.

It gave practical advice: don’t check work emails at night, carve out time for hobbies and nature, and build a support system. What surprised me, however, was that it asked me a follow-up question: “What is your biggest challenge about running a school?”

Pi is the latest addition to a new crop of chatbots powered by artificial intelligence (AI). Like ChatGPT, Pi is designed to answer questions but trained to respond in a kinder and friendlier manner. I was drawn to the idea of AI as a “supportive companion,” and how it could be a useful tool for promoting positive mental well-being, so I decided to take Pi for a test drive.

Compared to other chatbots, Pi comes across as a much better conversationalist. It validates you (“That’s a great question, Eleanor”), empathizes (“I’m so sorry to hear that, Eleanor”), and asks thoughtful questions. Since Pi remembers its past interactions with users, its contextualized answers give a semblance of active listening. I shared in one of our previous chats that I head a school, and it connected that information to my question about burnout in order to probe further. If you add a phone number to your account, Pi will even send you a text message once in a while to check up on you.

It also gives insightful answers to questions about human relationships. When I asked how to cope with losing a friendship, its answer was: “It might be best to let yourself grieve the friendship. Allow yourself to feel sad about the situation, and try not to judge yourself for it.”

Pi’s global launch happened in the same week that the world was flooded with disquieting AI news: from Samsung banning ChatGPT among its employees after discovering uploads of its proprietary code to the platform, to people freaking out about Snapchat’s My AI because it seems to be secretly tracking their location. Most noteworthy of all is the resignation from Google of Geoffrey Hinton, popularly known as the Godfather of AI, so he could speak more freely about the dangers of unchecked AI development.

As much as I’d like to hope that noble causes like education are what’s driving the accelerated evolution of technology, profit is the more likely answer. And in the current race to develop the smartest AI system, it seems that concerns about negative societal impact and potential catastrophic consequences are too easily sidelined, or conveniently downplayed.

Several groups have called for a six-month moratorium on giant AI experiments to enable the research on AI guard rails and regulations to keep up. Policymakers should be ready to step in and institute this pause. Apart from ensuring that AI developments are values-aligned, safe, and transparent, they should also put more pressure on the key players to invest in initiatives that will equip people with the necessary skills to navigate an AI-powered and AI-disrupted world.

Given this, it was refreshing to hear Inflection AI CEO Mustafa Suleyman say that they specifically designed Pi with clear limitations and boundaries as part of the company’s thrust to build more honest and trustworthy AI: “It shouldn’t try to pretend that it’s human or pretend that it is anything that it isn’t.” You can ask it questions, but it will not write your essay for you. It won’t argue with you or enter into a romantic relationship, unlike other social AIs on the market.

While Pi was designed to listen, dispense advice, and even adopt a more soothing tone when it senses that the user is in distress, it will also immediately point out that it is not a replacement for professional help. After it gave me tips on how to comfort a sad friend, it reminded me that it is not capable of offering specialized support and that I should encourage my friend to see a therapist or a counselor.

Just like other AI chatbots, Pi is not infallible and can give wrong replies. When I asked which vegetables go well with milkfish, it replied: “To be honest, I’ve never heard of milkfish, are you sure it’s a real fish?” I guess none of the 600 people who were hired by Inflection to train Pi have ever tried inihaw na bangus. I do appreciate that it phrased what it didn’t know as a clarifying question.

I took Pi’s advice on preventing burnout and scheduled a get-together with fellow female founders. As always, we bonded over the usual pain points of leading an organization and exchanged insights on managing them. As pleasant as my exchanges with Pi had been, they could not compare to the sense of kinship and richness of wisdom that come from a strong support system of warm-bodied friends.

But having access to a 24/7 sounding board in between those catch-ups sure is a nice addition.

[email protected]
TAGS: artificial intelligence, chatbots, ChatGPT, Pi chatbot, Undercurrent

© Copyright 1997-2024 INQUIRER.net | All Rights Reserved