When Artificial Intelligence Exceeds Human Intelligence

In astrophysics, a singularity is a point at which gravity becomes infinite, as in a black hole or at the moment of the Big Bang. That is not the kind of singularity discussed in this book. Rather, the singularity here refers to a potential future phenomenon in which artificial intelligence (AI) reaches the level of human intelligence and then "explodes" into a process of ever-increasing intelligence, far exceeding that of humans.

If and when such a singularity is reached, it would have profound implications for the future of Homo sapiens. It calls into question the essence of what it means to be human, the future of our species, and even our very existence. Clearly I needed to sort out science from science fiction in reviewing the literature on this possible phenomenon.

The first person to apply the term singularity to a serious possible future human condition was the great mathematician John von Neumann. In the 1950s he described the singularity as the moment beyond which "technological progress will become incomprehensibly rapid and complicated."

Irving J. Good was a British mathematician who worked with Alan Turing at Bletchley Park during World War II to decrypt German codes. In 1965, he wrote, "The survival of man depends on the early construction of an ultraintelligent machine." He continued, "Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion' and the intelligence of man would be left far behind… Thus the first ultraintelligent machine is the last invention man need ever make, provided the machine is docile enough to tell us how to keep it under control."

Vernor Vinge is a mathematician and computer scientist. In 1993, at a NASA conference, he described the singularity as the point at which we have created a computer with intelligence greater than human intelligence. At that point, he said, "the human era will be ended."

Ray Kurzweil

Ray Kurzweil has taken on the mantle of popularizing the discussion of the singularity and bringing it to near cult-like status. He is a graduate of MIT and has pioneered the development of numerous technologies, including voice recognition, optical character recognition and electronic keyboards. His books The Singularity is Near and How to Create a Mind are must-reads for anyone interested in understanding the science and technology of the singularity.

Kurzweil's premise is that technology is advancing at an exponential pace and that we are reaching a critical "knee" in that curve where our knowledge and technology explode. This applies particularly to our ability to understand the human brain and to simulate it entirely in a computer. This, he predicts, will happen by the year 2030. At that point, we will be unable to distinguish computer intelligence from human intelligence; that is, a computer will be able to pass the Turing Test.

Further, by using nanobots, tiny computerized robots that can be injected into the bloodstream and populate the brain, we will greatly enhance our understanding of the brain and its capability. These nanobots will allow us to "download" wirelessly all the information in our brains into a computer, so that a computer will be able to "think" like a specific individual human. In essence, a person's mind will exist in silicon. Since these electronic technologies do not have the physical limitations of our biological brains, they will process information far faster than our slow neurons can. The computer emulations will learn and change their own software accordingly; thus they will evolve far faster than natural selection allows our biological brains to evolve. These computer enhancements can be continually uploaded back into the human brain's nanobots. At that point, the biological and silicon brains will be interchangeable and we will have reached the singularity. That, Kurzweil predicts, will happen by 2045.
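The "knee" in an exponential curve can be made concrete with simple arithmetic. The sketch below is my own toy illustration, not Kurzweil's actual model; the 2-year doubling time and 40-year horizon are hypothetical numbers chosen only to show the effect:

```python
# Toy illustration of why exponential growth appears to have a "knee":
# with a fixed doubling time, half of all the growth ever achieved
# arrives in the final doubling period alone.
def capability(years, doubling_time=2.0, start=1.0):
    """Capability after `years` of growth that doubles every `doubling_time` years."""
    return start * 2 ** (years / doubling_time)

# With a hypothetical 2-year doubling time, 40 years of growth yields
# 2**20 (about a million-fold) improvement.
total = capability(40)             # 2**20 = 1048576
late = total - capability(38)      # growth occurring in just the last 2 years
print(total, late / total)         # the final period contributes half the total
```

The point is that an observer living through such a curve sees decades of apparently modest progress followed by an abrupt surge, even though the growth rate never changed.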
The question for me is whether this could be considered our successor species, or could lead to it.

This is not science fiction. Kurzweil has co-founded Singularity University, which sponsors legitimate research forums on artificial intelligence, nanotechnology and other related science and technology topics. People who promote the concept of the singularity are referred to as singularitarians. Kurzweil became Google's Director of Engineering in 2012 and now has the backing of one of our largest companies in furthering his concepts.

Many others besides Kurzweil have written extensively on the singularity and studied its possible consequences, and there are many visions of how it might play out. Nick Bostrom is a professor of philosophy at the University of Oxford and founding director of the Future of Humanity Institute. He has studied AI extensively and describes what he calls the "crossover point," at which the AI computer is able to begin reprogramming itself and making itself ever more intelligent. It then becomes a superintelligent computer. Bostrom warns that after the crossover point we will be unable to control the AI, and that it will become an existential threat to humanity. Therefore, we must plan now to embed software that will ensure that a future superintelligent computer will be "friendly" to humans. He warns, however, that such attempts are likely to fail.
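The compounding logic behind the crossover point can be sketched in a few lines. This is my own toy model, not Bostrom's; the starting level, 10% gain per cycle, and threshold are all hypothetical values chosen only to show how a small self-improvement compounds:

```python
# Toy model of recursive self-improvement: each design cycle, the
# system applies a fractional improvement to its own design, and the
# improvements compound.
def generations_to_threshold(start, gain_per_step, threshold):
    """Count design cycles until intelligence exceeds `threshold`.

    Each cycle multiplies the intelligence level by `gain_per_step`;
    a gain above 1.0 models a system able to improve its own design.
    """
    level, steps = start, 0
    while level < threshold:
        level *= gain_per_step
        steps += 1
    return steps

# A hypothetical 10% improvement per cycle reaches a 1000-fold
# intelligence gain in only 73 cycles.
print(generations_to_threshold(1.0, 1.10, 1000.0))
```

With any gain at or below 1.0 the loop never terminates, which is the intuition behind the crossover point: the qualitative change comes from the feedback loop existing at all, not from how large each individual improvement is.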

James Barrat, another writer on AI, describes the point at which AI equals human-level intelligence, called Artificial General Intelligence (AGI). Like Bostrom, he predicts that AGI will quickly cross over into Artificial Superintelligence (ASI), and he warns of the same dangers.

Francis Heylighen is a Belgian cyberneticist (a person who studies complex control systems). He has a somewhat different view of the singularity. Rather than a single entity, he sees the future ASI as the result of a gradual evolution toward a network of connected humans and computers that collectively have superintelligence. Their super ability will come from being networked through the Internet, so that they are virtually everywhere on Earth and are therefore all-seeing and all-knowing. In his view, this distributed model of superintelligence, called the Global Brain, will not be a threat to humanity.

There are now multiple organizations whose purpose, at least in part, is to study methods to prevent the existential threats from AI. These include the Machine Intelligence Research Institute, Humanity+, the Future of Life Institute, the Future of Humanity Institute, OpenAI and the Foresight Institute.

The singularity is discussed at length in the chapter on electronic evolution, one of four major paths considered for the answers. The other paths are catastrophe, natural selection, and genetic engineering.

The singularity surely has had an influence on my thinking about the answers. Its impact will be a surprise to you.
