There is an abundance of excellent work documenting the very real opportunities and dangers of ‘AI’ (i.e. machine learning) in today’s world (for more on terminology, see: ‘the term AI has a clear meaning’). Nevertheless, we continue to hear wild sci-fi stories about AI eventually writing award-winning literature, developing consciousness or superintelligence, and even taking over the world and enslaving humanity.
Although there’s nothing wrong with this type of speculation when confined to the world of science fiction, problems do arise when such speculations are presented as realistic predictions about where the technology is actually heading.
So do we need to worry about superintelligent AI taking over? Or, from a more techno-optimistic perspective, will superintelligent AI come soon and solve all our problems?
What is superintelligence?
The first question to ask is “what is superintelligence?” It’s sometimes defined as simply exceeding human intelligence, but according to that definition, a basic calculator is superintelligent: it far exceeds any human benchmark for performing basic arithmetic. Superintelligent computers and software already exist, then; it’s just that they have become a routine part of everyday life.
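To see just how mundane this kind of ‘narrow superintelligence’ is, consider a toy illustration (our own, not anyone’s serious benchmark): a couple of lines of Python compute an exact product of enormous integers instantly, a feat no unaided human can match, and yet nobody calls a calculator intelligent.

```python
# A 'narrow superintelligence' in three lines: ordinary hardware computes
# the exact product of two enormous integers in microseconds, something
# far beyond any unaided human.
a = 123456789 ** 12
b = 987654321 ** 12
print(a * b)
```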
If you extend the definition of ‘surpassing human benchmarks’ to a wide range of tasks, then you could even make the argument that dogs are superintelligent when it comes to tasks such as catching drug smugglers.
In his book, Superintelligence: Paths, Dangers, Strategies, Nick Bostrom notes precisely this issue, and proposes the following definition of superintelligence: “intellects that greatly outperform the best current human minds across many very general cognitive domains.” He further notes that this ‘outperformance’ could occur in a number of ways, such as through superior speed, by aggregating a large collective of smaller intelligences, or by having a much higher quality of intelligence (think of extremely fine sensor and error detection capabilities, for example).
It’s important to note here the distinction between general AI (or AGI) and narrow/tool AI. Broadly speaking, a machine could be considered to have achieved general AI when it could complete or learn to complete any task that a human can do. Human intelligence is the usual reference point here precisely because of this generality: we can learn new tasks, transfer skills between domains, and adapt to unfamiliar situations.
By contrast, progress in the field of AI has consisted of solving narrow tasks: playing chess or other rule-based games, identifying objects in images, etc. Such AI systems are essentially tools developed to accomplish individual narrow tasks; moreover, they're typically useless for any other task, and even a minor change in the rules (one to which a human could easily adapt) is enough to render them useless (this is usually referred to as the brittleness of AI systems).
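To make the brittleness point concrete, here is a deliberately simple sketch (our own toy example, not any real system): an ‘agent’ hard-coded for 3x3 tic-tac-toe that falls over the moment the board changes size, a change any human player would shrug off.

```python
# A toy illustration of brittleness (not a real system): an 'agent'
# hard-coded for 3x3 tic-tac-toe. It encodes no understanding of the
# game, only a fixed response to a fixed input format.

def narrow_agent(board):
    """Return the index of the move to play, given a list of 9 cells."""
    if len(board) != 9:
        raise ValueError("I only know 3x3 tic-tac-toe")
    if board[4] == " ":          # 'strategy': take the centre if it's free,
        return 4
    return board.index(" ")      # otherwise take the first empty cell

print(narrow_agent([" "] * 9))   # works fine on the board it was built for
try:
    narrow_agent([" "] * 16)     # a 4x4 board: a trivial change for a human
except ValueError as err:
    print("Agent breaks:", err)
```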
The key thing to point out here is that we only have narrow AI. General AI is an aspiration of some researchers, but nothing of the sort has been developed. It’s worth looking into the history of the term AI here to understand this aspiration.
The term ‘artificial intelligence’ was coined in the funding proposal for the 1956 Dartmouth Summer Research Project on Artificial Intelligence, which assumed that “every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” Not only that, they assumed that this could be accomplished in one summer. Unsurprisingly, the 2-month, 10-man study was unable to accomplish this aim.
One key assumption underlying the Dartmouth proposal was that all aspects of human intelligence are replicable by machines because they are, at bottom, forms of computation. This is a debatable assumption, but if we accept it then in principle human intelligence could be replicated, and surpassed, by machines.
Since 1956, there has been remarkable progress in the development of narrow AI. Tasks which were considered impossible for machines, such as beating human champions at chess or Go, have been accomplished. At the same time, most people are reluctant to take this as proof that such machines are truly intelligent.
This has been referred to as ‘the AI effect’: the phenomenon whereby, when a computer can’t do something, people tend to treat it as a marker of intelligence, but as soon as a computer can do it, they no longer see it as a significant benchmark. On this issue, Larry Tesler has been quoted as saying that "AI is whatever hasn't been done yet."
AI today
To come back to Bostrom’s definition of superintelligence, it should be clear that he is not talking about narrow AI, since calculators are already a form of narrow superintelligence, but rather general AI. When discussing superintelligence and artificial general intelligence (AGI), we are therefore not talking about anything we actually have, or are currently close to developing.
Despite his enthusiasm for discussing highly speculative scenarios, even Bostrom is quite modest about a timeline for developing anything like a superintelligent system. He refuses to give a concrete date, but the ‘paths to superintelligence’ that he outlines in the book all require incredible technological advances, such as whole brain emulation, that are nowhere near being realised.
Other pundits are far more willing to provide concrete timelines, however. Perhaps the most notorious AI-sci-fi speculator is Ray Kurzweil, the famous inventor, futurist, and director of engineering at Google.
In his 2005 book, The Singularity is Near: When Humans Transcend Biology, and in countless interviews and futurist-fireside-chats, Kurzweil has made two predictions that are worth analysing here: first, that AI will achieve human-level intelligence by 2029; second, that the fabled ‘Singularity’ will be achieved by 2045, “which is when humans will multiply our effective intelligence a billion fold, by merging with the intelligence we have created.” Let’s look at each of these in turn.
Human-level intelligence
Regarding the first prediction, it’s important to begin by asking what precisely Kurzweil means when he speculates that AI will achieve human-level intelligence by 2029. As already mentioned, a basic calculator vastly surpasses human-level intelligence already in terms of arithmetic, so clearly Kurzweil has something more expansive in mind.
He specifically says that “2029 is the consistent date I’ve predicted, when an artificial intelligence will pass a valid Turing test — achieving human levels of intelligence.” This prediction rests on the assumption that passing a Turing test equates to achieving human levels of intelligence, but is that assumption valid?
First of all, we need to understand what a Turing test is. The Turing test, also known as the Imitation Game, was introduced by Alan Turing in his 1950 paper Computing Machinery and Intelligence. The idea of the test is that a human tester communicates via text with two participants: one human, and one machine. Both participants try to convince the tester that they're human, and if the machine succeeds in doing so, it is deemed to have demonstrated intelligence.
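For readers who prefer code to prose, here is a minimal sketch of that setup (purely illustrative; the ‘machine’ below is just a canned responder, not a real chatbot): a judge exchanges text with two hidden participants and then has to guess which one is the machine.

```python
import random

def human_reply(question: str) -> str:
    # A real person types their answer here.
    return input(f"(human participant) {question}\n> ")

def machine_reply(question: str) -> str:
    # Stand-in for a chatbot; a real contender would generate text.
    return "That is an interesting question."

def imitation_game(questions):
    # Randomly decide whether the human is 'A' or 'B', so the judge
    # cannot tell the participants apart by position.
    labels = ["A", "B"]
    random.shuffle(labels)
    assignment = {labels[0]: human_reply, labels[1]: machine_reply}
    machine_label = labels[1]
    for q in questions:
        for label in ("A", "B"):
            print(f"{label}: {assignment[label](q)}")
    guess = input("Judge: which participant is the machine, A or B? ")
    print("Correct!" if guess.strip().upper() == machine_label
          else "The machine passed.")

# Example (interactive):
# imitation_game(["Do you enjoy poetry?", "What is 7 times 8?"])
```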
Even if an AI system could be designed to pass the Turing test, in whichever form we care to conceive of it, this by no means guarantees that such a system has ‘human level intelligence’ (and it’s far from clear what that phrase would even mean).
Kurzweil's claim that an AI system passing the Turing test would be proof of 'human level intelligence' is therefore nonsense. Beyond the inadequacy of the Turing test as a measure of intelligence, however, machine learning researcher François Chollet has pointed out that such predictions rest on a complete misunderstanding of what intelligence actually is.
According to Chollet, the conception of intelligence underlying speculations such as Kurzweil’s “considers 'intelligence' in a completely abstract way, disconnected from its context, and ignores available evidence about both intelligent systems and recursively self-improving systems.” He further adds that:
If intelligence is a problem-solving algorithm, then it can only be understood with respect to a specific problem. In a more concrete way, we can observe this empirically in that all intelligent systems we know are highly specialized. The intelligence of the AIs we build today is hyper specialized in extremely narrow tasks — like playing Go, or classifying images into 10,000 known categories. The intelligence of an octopus is specialized in the problem of being an octopus. The intelligence of a human is specialized in the problem of being human.
For Chollet, then, simply trying to increase the computational power of current AI systems will not lead to them becoming 'more intelligent,' as we have to consider what problem these intelligent systems are supposed to solve and how intelligence is embodied and situated in culture and an environment.
He notes that while human brains may be superior to octopus brains in terms of pure computation, a human brain transplanted into the body and environment of an octopus would likely fail miserably due to its lack of innate adaptation to that specific body and context. As Chollet further notes:
an AI with a superhuman brain, dropped into a human body in our modern world, would likely not develop greater capabilities than a smart contemporary human. If it could, then exceptionally high-IQ humans would already be displaying proportionally exceptional levels of personal attainment; they would achieve exceptional levels of control over their environment, and solve major outstanding problems — which they don’t in practice.
Chollet's point here is that even if we developed some machine with an IQ of 3,000, it would be unable to realise the potential of that computational power within the infrastructure provided by our world. It is also not clear what problem such a machine would be fit to solve. The great challenges faced by humanity are complex social, political and ultimately human questions that cannot be solved by simply throwing huge computational resources at them.
Merely increasing computational 'brain power' according to some abstract measure is therefore unlikely to result in 'superintelligence,' as it focuses on one abstract and contentious conception of intelligence and ignores other important factors, such as how our embodied nature and our environment condition the way that our intelligence is expressed and manifested.
The idea of AI achieving 'human-level intelligence' by the end of this decade, and from there outstripping humanity, is thus based on flawed conceptions of how intelligence really works. As Chollet notes, it's like thinking that "you can increase the throughput of a factory line by speeding up the conveyor belt."
Intelligence explosions
So what about Kurzweil’s second prediction, that 2045 will herald the coming of the Singularity? The basic idea of the Singularity is that technological growth and improvement will hit a point at which some kind of irreversible and revolutionary change occurs. For some thinkers, that means that humans will merge with AI, while others speculate that AI will simply overtake us to such an extent that our existence becomes irrelevant.
In addition to Kurzweil’s highly optimistic timeline, we also have a range of thinkers who believe that at some point, AI will reach a level of sophistication where it will become intelligent enough to design its own improvements, leading to a so-called ‘intelligence explosion,’ where AI gets exponentially more intelligent in a process of recursive self-improvement.
This idea originated in a paper by Irving John Good, entitled Speculations Concerning the First Ultraintelligent Machine, from 1965, in which Good made the now famous speculation:
Let an ultraintelligent machine be defined as a machine that can far surpass all the intellectual activities of any man however clever. Since the design of machines is one of these intellectual activities, an ultraintelligent machine could design even better machines; there would then unquestionably be an “intelligence explosion,” and the intelligence of man would be left far behind. Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.
The basic idea is that once we create an AI system smart enough to improve itself, this would spark off a process that would lead to an exponential increase in the system’s intelligence.
This idea has been criticised by a number of people, including François Chollet, whom we mentioned above, in his piece The implausibility of intelligence explosion. In this piece, he argues that "the notion of intelligence explosion comes from a profound misunderstanding of both the nature of intelligence and the behavior of recursively self-augmenting systems."
We've already seen above how Chollet criticises the abstract and disembodied conception of intelligence that underlies speculations about superintelligence. He further notes that speculations about intelligence explosions fail to account for how things outside of our mere computational ability impact, and slow down, our problem-solving abilities:
The basic premise of intelligence explosion — that a “seed AI” will arise, with greater-than-human problem solving ability, leading to a sudden, recursive, runaway intelligence improvement loop — is false. Our problem-solving abilities (in particular, our ability to design AI) are already constantly improving, because these abilities do not reside primarily in our biological brains, but in our external, collective tools. The recursive loop has been in action for a long time, and the rise of “better brains” will not qualitatively affect it — no more than any previous intelligence-enhancing technology. Our brains themselves were never a significant bottleneck in the AI-design process.
What Chollet means here is that the reason we haven't been able to solve all the problems facing us is not that we don't have high enough IQs: our lack of computational brain power is not stopping us from solving complex problems such as economic inequality or the climate crisis. There are many other sources of friction that get in the way of us tackling these problems, and just having super smart machines (or human-machine hybrids with astronomical IQ scores) would do little to remove these other sources of friction.
Chollet sums up his remarks with the following pithy phrase: "Exponential progress, meet exponential friction." What this captures so well, and what speculations like Kurzweil's miss, is that any increase in computational power will face increased friction from the environment in which it operates. Of course, this is not to say that advances in machine learning and other branches of 'AI' will not lead to huge improvements. They more than likely will, but those improvements will happen gradually.
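As a back-of-the-envelope illustration of that phrase (our own toy numbers, not Chollet's model), the sketch below lets a system attempt a 20% self-improvement each generation while the friction it faces grows along with its capability; the result is steady, roughly linear progress rather than an explosion.

```python
# Toy numerical sketch of 'exponential progress, meet exponential friction'
# (illustrative assumptions only). Each generation the system attempts a
# 20% recursive self-improvement, but friction from the surrounding
# environment grows with its capability, damping the realised gain.

capability = 1.0
for generation in range(1, 11):
    attempted_gain = 0.20 * capability       # naive recursive self-improvement
    friction = 1.0 / (1.0 + capability)      # harder problems push back harder
    capability += attempted_gain * friction  # realised gain is damped
    print(f"gen {generation:2d}: capability = {capability:.3f}")
```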
There are many more aspects to this problem which we could discuss here, so for those interested in exploring these ideas further, we've provided some resources (both critical, and more speculative) below.
Ultimately, speculations about superintelligent AI cannot be definitively refuted, as there is always the possibility that unprecedented technological leaps will surprise all the sceptics. It could also be that superintelligence already exists on earth, but is simply biding its time, playing the stock market, and waiting for its chance to strike. And we cannot definitively discount the possibility that aliens have already developed superintelligent AI, been overthrown by it, and that the superintelligent alien overlord is on its way to destroy us.
We can't say for sure that these speculations are false, of course, but hopefully we've managed to debunk some of the more obvious misconceptions around discussions of superintelligent AI.
Bibliography & Resources
Here are some resources on the topics of AGI & Superintelligence. We've provided some general discussions of the topics, some more speculative and optimistic pieces, and a list of critical pieces. We've also provided some resources on the Turing Test/Imitation game.
General discussions on AGI/Superintelligence
Everything you need to know about AGI
- Also check out the rest of Ben Dickson’s blog series on demystifying AI
CNBC - How Britain's oldest universities are trying to protect humanity from risky A.I.
A beginner’s guide to the AI apocalypse:
A classic paper on the idea of artificial general intelligence is: Pei Wang - On defining artificial intelligence
- For an excellent and clear-headed discussion of Wang's classic paper, and of artificial general intelligence more broadly, check out the Journal of Artificial General Intelligence's Special Issue “On Defining Artificial Intelligence”, Volume 11 (2020): Issue 2 (Feb 2020).
- This special issue contains many insightful discussions of the topic, free of the type of hype that we find in many other places.
Roman V. Yampolskiy - Human ≠ AGI
Terms Artificial General Intelligence (AGI) and Human-Level Artificial Intelligence (HLAI) have been used interchangeably to refer to the Holy Grail of Artificial Intelligence (AI) research, creation of a machine capable of achieving goals in a wide range of environments. However, widespread implicit assumption of equivalence between capabilities of AGI and HLAI appears to be unjustified, as humans are not general intelligences. In this paper, we will prove this distinction.
Arguments/speculations for AGI/Superintelligence
For Nick Bostrom's classic discussion of the paperclip maximizer thought experiment, see this paper:
- Nick Bostrom - Ethical Issues in Advanced Artificial Intelligence
- or his book: Superintelligence: Paths, Dangers, Strategies
For a philosophically rigorous discussion of superintelligence, the Singularity etc., check out David Chalmers' papers on the topic:
- Chalmers' papers on AI & computation
- In particular, his paper: The Singularity: A Philosophical Analysis
K. Eric Drexler also has an interesting and relatively sober take on how superintelligence could emerge in this paper: Reframing Superintelligence: Comprehensive AI Services as General Intelligence
- Here is a podcast appearance with Drexler where he discusses the ideas from the paper: https://futureoflife.org/2019/04/11/an-overview-of-technical-ai-alignment-with-rohin-shah-part-1/
- And a review of the paper: https://slatestarcodex.com/2019/08/27/book-review-reframing-superintelligence/
Stephen Hawking, Elon Musk, and Bill Gates Warn About Artificial Intelligence
I.J. Good - Speculations Concerning the First Ultraintelligent Machine
- This paper was the origin of the intelligence explosion idea
- Good example of the mixing of today's applications with sci-fi speculations leading to a confused article:
- "It is very difficult to predict how AI technology will develop in the future or whether it will ever become fully conscious like humans...Although artificial intelligence is becoming increasingly popular, no one knows how far it can actually evolve, and although some people believe that it will become fully aware of our super-species, others believe that this is not possible and that AI will only support humans in the future."
Daniel Dennett - Will AI achieve consciousness? Wrong question
The bitter lesson (it’s all compute)
- Response: A better lesson
Softbank CEO - The Singularity will happen by 2047
Kurzweil claims that the singularity will happen by 2045
AI Alignment Podcast - The Metaethics of Joy and Suffering
- An episode of the AI alignment podcast dealing with some of the more esoteric ideas about Superintelligence
Inside the first church of artificial intelligence
Three areas of research on the superintelligence control problem
Arguments against AGI/Superintelligence
Francois Chollet has a couple of excellent pieces critiquing the idea of Superintelligence and the intelligence explosion:
Artificial General Intelligence is Here, and it's Useless
The singularity isn’t here yet. Biased AI is.
Worry about present-day AI first, and far off AGI hypotheticals second
A Misdirected Application Of AI Ethics
- "The debate about robot rights diverts moral philosophy away from the pressing matter of the oppressive use of AI technology against vulnerable groups in society."
Why ML is not a path to singularity
7 deadly sins of predicting the future of AI
Superintelligence: the idea that eats smart people
AI Researchers Disagree With Elon Musk’s Warnings About Artificial Intelligence
Facebook's Head of AI Says the Field Will Soon ‘Hit the Wall’
The sheer stupidity of artificial intelligence
AI's struggle to reach understanding and meaning
“Blessed by the algorithm”: Theistic conceptions of artificial intelligence in online discourse
Resources on the Turing Test/Imitation Game:
Stanford Encyclopedia of Philosophy article on the Turing Test
The Great Pretender: Turing as a Philosopher of Imitation
A conjecture for a better Turing test
Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data
- Discusses "the octopus test"