Public Lives

AI: Between morality and money

05:02 AM November 26, 2023


It has been exactly one year now since the nonprofit company OpenAI released ChatGPT, the large language model (LLM) program that has triggered a worldwide fascination with artificial intelligence (AI). Like an early Christmas gift, ChatGPT 3.5 was offered to users at no cost—in the same spirit of experimentation that had inspired its creation. So much has happened since then. Within a short span of time, the free AI tool acquired a legion of subscribers, serving more than a hundred million users every week. A more versatile version, ChatGPT 4.0, trained on more recent and larger amounts of text, has been made available to a limited number of subscribers for a monthly fee of $20.

Students and researchers reveled in its ability to condense incredible amounts of information into coherent responses to questions and problems. Critics and skeptics tested the limits of its capabilities, pointing to the dangers inherent in using unverified information. They have seen that ChatGPT is not above fabricating information, even as it is quick to admit error when told it has provided wrong information. But enthusiastic users dismiss these shortcomings as birth pains. For its part, OpenAI has placed this caveat below the message box: “ChatGPT can make mistakes. Consider checking important information.” But, on the whole, AI, as embodied by ChatGPT, has been positively received as an awesome breakthrough in digital technology, on a par with the internet. As a consumer product, it has barely been monetized. All this, however, may change in the coming months.


Beneath the software’s success, a problem of a different but related sort has been brewing within OpenAI itself. It first appeared as a crisis in the company’s governance structure. On the surface, it could be read as no more than confirmation that scientists and tech geniuses do not always make the best managers. But at the heart of OpenAI’s recent crisis is a deeper issue that is not so easy to manage.


OpenAI was originally set up by tech entrepreneurs (one of them was Elon Musk) and academics as a laboratory for AI, with the freedom to explore its vast potential for improving the human condition without the pressure to make a monetizable product. In OpenAI’s 2015 founding blog post, this idealism was effusively expressed thus: “OpenAI is a nonprofit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return. Since our research is free from financial obligations, we can better focus on a positive human impact.”

A lot of money, however, is needed to fund scientific research. Whatever seed money OpenAI had at the start quickly ran out. In 2018, Musk, one of its initial contributors, left the company and stopped giving money for its operations, allegedly after a failed attempt to take it over. It was then decided to set up a subsidiary that would earn money from OpenAI’s products, to fund the expenses of the laboratory and pay the salaries of its employees, many of whom are scientists. The tech giant Microsoft poured $10 billion into the business venture, named OpenAI Global, LLC, much of it in the form of access to Microsoft’s supercomputers to process immense amounts of data. As minority partner, Microsoft was granted early access and a perpetual license to all OpenAI technology.

To ensure that the business side would not set the agenda for the scientific laboratory, OpenAI’s original board of directors kept tight control over the affairs of the business subsidiary by appointing themselves as its directors. Some observers say this was a mistake. Perhaps a separate board would have eased the problem, but it would not have entirely avoided it. For what is really at issue here is how to preserve the autonomy of science (while keeping it aligned with broader human values) vis-à-vis a business side that is answerable only to its shareholders.


At the center of this organizational conundrum are two of the founders of OpenAI—Sam Altman, a well-connected Silicon Valley entrepreneur and the organization’s CEO, and Ilya Sutskever, its chief scientist. Together, they represent two ideological poles that are not easy to reconcile: business and science. A week ago, the nonprofit’s board fired Altman for being secretive about his plans for the business. Sutskever, a member of the board, was assigned to inform him of his sacking. Microsoft, the main partner in the business, offered to take in Altman and anyone else who wanted to leave OpenAI. Days later, Altman was reinstated as CEO of OpenAI, after more than 700 of its nearly 800 employees signed a petition demanding his return. Acknowledging that their coup had failed, most of the directors who deposed Altman promptly resigned. Sutskever, the chief scientist, bowed to the will of the majority and stayed. The subsidiary has prevailed over the parent company. Not surprisingly, business has, once more, trumped science. It would be interesting to see what happens next to ChatGPT.

—————-

[email protected]


TAGS: AI issues, artificial intelligence, Public Lives


