Superintelligence by Nick Bostrom

One day, machines will surpass humans in intelligence. The question is not if but when. We already have machines that outperform humans at narrowly defined tasks such as chess, Scrabble, and checkers; this is known as Artificial Narrow Intelligence (ANI). There will come a time when machines surpass humans across a broad range of general intelligence tasks, known as Artificial General Intelligence (AGI). What happens after that? The machines will keep getting better until they become superintelligent, reaching Artificial Superintelligence (ASI). At that point they will be far more intelligent than the most intelligent human being. Imagine the gap between the intelligence of a human and that of an ant, and you may begin to grasp the gravity of the situation. The period of getting from AGI to ASI is known as the takeoff.

Superintelligence is a book about the journey from ANI to AGI to ASI. We learn how we can get there via a variety of routes: artificial intelligence / machine learning, whole brain emulation, biological cognitive enhancement, brain-computer interfaces, and so on. We also learn how long it might take to reach ASI. The really interesting part is how long it will take to get to ASI once we have reached AGI. The takeoff can be fast (hours or days), moderate (months or years), or slow (decades), and the book explores the implications for human beings in each of these scenarios.

From the perspective of human beings, will this be a good thing? When we are confronted with superintelligent machines, what will our relationship with them be? What will their values and motivations be, and will they be on the same side as humans? Will there be one superintelligence or many? Who will control them? Will it be possible for humans to control them at all?

A superintelligent AI will have it within its power to take over humanity and subjugate or destroy us. So how can we prevent this from happening? We can do so by being prepared and taking steps now; this is known as the control problem. There are two broad approaches: capability control methods and motivation selection methods. As you may have guessed, capability control methods try to limit what the AI is able to do, while motivation selection methods aim to give the AI motivations or goals that are compatible with what we want it to do.

After centuries of anaemic growth rates, global GDP has been on a steady upward march since the Industrial Revolution, and we have experienced unprecedented growth and prosperity during this period. Yet it will pale in comparison to what is coming once we achieve AGI. There will be a huge increase in wealth after the transition, but that wealth will not be distributed evenly. Unemployment will rise with increasing automation as we approach AGI, leading to greater inequality and a highly uneven distribution of wealth.

Superintelligence covers all of the above and more. The book deals with many esoteric and complicated concepts and is not easy to read by any means, especially the last part, which deals with the values of the AI: how will an AI acquire values, and how can we instil values consistent with our own? The good thing is that it is a thought-provoking book that will make you stop and think as you read. The bad thing is that I found many concepts difficult to understand; at several places you have to take a leap of faith and take the author's word for it. But I suppose that is to be expected from any book on AI, a complicated and evolving subject, and some concepts will be hard for a layman with little prior background. The author has simplified things as much as possible, but no further.
