
Should we be worried about superintelligence?

06:44 PM August 04, 2019

DHAKA — Many of us have heard of Artificial Intelligence, but do we know about Superintelligence?

Alan Turing, the pioneer of computation, imagined a machine that could solve any kind of computable problem, and argued that a machine capable of convincing human judges that it, too, is human would thereby have demonstrated human-level intelligence.

Things have come a long way since Turing’s day. Artificial Intelligence research has seen many ups and downs over the past decades: from Good Old-Fashioned Artificial Intelligence (GOFAI) to neural networks, the field has cycled through seasons of hope and despairing AI winters. The result is that the Siri in your iPhone can recognize your voice, Facebook’s algorithms can detect your face in photos, and Google Translate can render your language into many foreign languages with near-flawless precision. AIs are also being used to prove theorems, dispose of bombs, and schedule tasks intelligently.


At first, very few suspected that machine intelligence could reach human level even in narrow domains. Now, AIs are replacing us in areas that require basic skills, such as typing or preparing spreadsheet reports. If machines can learn quickly enough to reach human-level competence in narrow domains, there is reason to worry that they might not stop learning there.


Humans are said to have general intelligence: we perform reasonably well across many narrow domains and can connect them to one another. A calculator can do arithmetic, but it cannot play Go. A Go AI can beat humans at Go, but it cannot recognize your voice. You, on the other hand, can do all of these with surprising ease. This is the difference between general and narrow intelligence.

Today’s AIs are becoming so competent in narrow domains that they are replacing humans in jobs. It is therefore reasonable to worry that a machine might, at some point, become competent in general intelligence as well, and outperform us there too.

In his book “Superintelligence: Paths, Dangers, Strategies,” philosopher Nick Bostrom considers several pathways to superintelligence. He believes the development of artificial intelligence could, at some point, lead to the emergence of a superintelligent entity. Such an entity, he posits, might think vastly faster than we do, might be able to extend its own powers, or might possess qualities of intelligence that exceed ours altogether. Bostrom further argues that once a machine gains human-level intelligence, the time it needs to reach superintelligent capability could be very short.

Plotted on a graph, this growth in capability would ascend slowly at first and then, within a short time, shoot upward; the point where it escalates is therefore often termed the “take-off.” According to Bostrom, a slow take-off, in which we would have time to watch a superintelligent entity taking over the world, is unlikely. He argues there are good reasons to believe that a fast take-off, in which we would have no time to grasp what is happening, or a medium take-off, in which geopolitical chaos could descend as people try to take advantage of the impending crisis, is more likely.
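To make the shape of such a curve concrete, here is a toy numerical sketch in Python. It is purely illustrative and not taken from Bostrom’s book; the starting value and growth rate are arbitrary assumptions. The idea is simply that when each gain in capability speeds up the next gain, the numbers stay small for a long stretch and then explode.

    # Toy model of a capability "take-off" curve (illustrative only).
    # Assumption: growth is proportional to current capability,
    # i.e. a discretized dC/dt = k * C. All constants are arbitrary.
    capability = 1.0   # starting capability, in arbitrary units
    rate = 0.5         # assumed self-improvement feedback strength
    for year in range(20):
        capability += rate * capability  # each gain speeds the next
        print(f"year {year:2d}: capability = {capability:12.1f}")
    # The printout stays in the tens for years, then climbs into the
    # thousands: the curve looks flat, then suddenly "takes off".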

So, what can we do to ensure that we are safe in case a superintelligent entity arises?

Some philosophers have proposed that we give it a goal aligned with our interests from the start, since it is unlikely to listen to us after becoming superintelligent. However, even a superintelligence whose goals are aligned with ours, which AI theorist Eliezer Yudkowsky terms “Friendly AI,” is risky. If we give a Friendly AI a narrowly envisioned goal, it can cause serious trouble for humanity. Instructing it to make everyone smile, for example, could end with the whole planet immersed in laughing gas. Machine ethicists therefore emphasize giving a superintelligence a well-thought-out goal, so that it does not accidentally harm or even destroy us.
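A contrived sketch of this failure mode, invented here for illustration and not describing any real system: an optimizer that maximizes a naively specified “smiles” score will happily pick a degenerate action if that action scores highest. The candidate actions and their scores below are made up.

    # Contrived sketch of a mis-specified objective (no real AI here).
    # The actions and their scores are invented for illustration.
    actions = {
        "tell a good joke":      {"smiles": 3,     "humans_ok": True},
        "improve public health": {"smiles": 5,     "humans_ok": True},
        "flood planet with N2O": {"smiles": 10**9, "humans_ok": False},
    }
    # A naive objective counts only smiles...
    best = max(actions, key=lambda a: actions[a]["smiles"])
    print(best)  # -> flood planet with N2O
    # ...and optimizes exactly what we wrote down, not what we meant:
    # anything left out of the objective is fair game to sacrifice.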


What can we do to save ourselves from a superintelligence destroying us, whether deliberately or inadvertently?

Yudkowsky offers an approach that honors our collective will and dignity. He argues that the goal we give a Friendly AI should be grounded in our common morality and should benefit all of us in the long run. He further argues that no single person or community should decide this alone: the whole of civilization should have a say in it.

The hardest part of this solution is finding common moral ground. Recent events suggest that morality is somewhat going out of fashion, or perhaps that we have each formed our own diverse sense of it. Reaching a common moral understanding and codifying it will therefore be difficult. Yet if we are to create a superintelligent entity far more capable than we are, the goal we give it deserves the most careful consideration. That goal must contain the proper moral ingredients; otherwise, the superintelligent entity might not treat us the way we would like to be treated.

While scientists work relentlessly on improving computational power to make our lives better, we must not rule out the possibility that a superintelligent entity will emerge. Expert surveys cited by Bostrom place the arrival of human-level machine intelligence somewhere between 2022 and 2075, within many of our lifetimes, with the take-off to superintelligence following within roughly 2 to 30 years after that. This means we do not have much time at our disposal. It is high time we started preparing and worked toward a consensus on what forms the basis of our moral values. Failing to do so before a superintelligence arrives could lead to very unfortunate conditions for our civilization.


(The author, Muhammad Mustafa Monowar, is currently studying Philosophy of Mind and Cognitive Science at the University of Birmingham.)

