A daunting task faces anyone who wants to talk or write about artificial intelligence: defining what it is, and what it isn’t. The vagueness of this term has reached such a state of absurdity that we have people using the term AI to talk about everything from a robot that assembles pretty mediocre-looking pizzas to sci-fi fantasies about superintelligent AI overlords colonizing the universe in an epic battle against entropy.
Of course, many terms are vague, and it is often pointless and self-defeating to try to nail them down with absolute precision. A recent example of how this vagueness can lead to problems, however, is the definition of AI provided in the European Union’s White Paper on Artificial Intelligence. In this document, the EU has put forward its thoughts on developing its AI strategy, including proposals on whether and how to regulate the technology.
However, some commentators noted that there is a bit of an issue with how the White Paper defines the technology it proposes to regulate: “AI is a collection of technologies that combine data, algorithms and computing power.” As members of the Dutch Alliance on Artificial Intelligence (ALLAI) have pointed out, this “definition, however, applies to any piece of software ever written, not just AI.” As we can see, this level of vagueness is untenable when we get down to the nitty-gritty process of designing regulations for the technology.
Although we won’t be able to finally settle the problem of defining artificial intelligence here, we can make some progress in unravelling the vagueness of the term, and avoid certain confusions. In what follows, we’ll guide you through the terminological maze of AI and demonstrate the dangers of vagueness. Then we’ll point you to some good resources to help with defining it well, and even offer a tool to play around with different terminological alternatives.
The problem of artificial objectivity
Beyond the aforementioned difficulties in formulating regulations for a hard-to-define technology, unravelling the vagueness around AI is important in a broad sense, as there is also something quite dangerous about the futuristic mystique of the term AI when it blinds us to the banal and oppressive realities of certain technologies. Let’s take a simple example to illustrate this.
Imagine yourself as the parent of a 6-month-old baby. You are starting to think about finding a babysitter, but you are, understandably, worried about putting your baby in the hands of a total stranger. While researching babysitting services, you come across a company, Predictim, that claims to use advanced artificial intelligence to screen babysitter candidates. You read an interview with their CEO who says that “current background checks parents generally use don’t uncover everything that is available about a person,” and notes that “a seemingly competent and loving caregiver with a ‘clean’ background could still be abusive, aggressive, a bully, or worse.” To solve this issue, they claim that their AI system can analyse a candidate’s social media profiles and generate a risk score for categories such as drug abuse, bullying and harassment, explicit content, and attitude.
Now, there are a huge number of red flags here for anyone who understands how AI systems like this actually work. Indeed, there are so many red flags that the pushback generated against this company thankfully led to it folding. For the average parent desperately searching for a suitable babysitter, however, Predictim’s pitch might have sounded reasonable: cutting edge technology uses hard data to make an objective assessment about candidates. Let’s try unpacking and rephrasing the basic pitch to highlight what the busy parent might miss:
Predictim is a company that makes highly serious accusations about the character of babysitter candidates by using flawed and inaccurate tools to scan imperfect data from their social media profiles. Predictim uses natural language processing (NLP), a subfield of linguistics and computer science, to scan posts from babysitters’ social media profiles and make judgements about their character which will determine whether they are employed or not. However, NLP technology is far from perfect: it cannot actually understand the text it scans, appreciate nuances such as irony, or grasp the context in which we use words or phrases. Predictim claims to detect ‘bullying,’ ‘harassment,’ and ‘attitude,’ but these are not clear-cut, objective categories that can be detected with any reliability.
Predictim also uses computer vision technology (CV) to scan babysitters’ pictures to detect ‘explicit content’ in a manner which is also not well defined, and so the results are fundamentally problematic. In one particularly ridiculous recent example, a computer vision system used by British police to detect nudity was found to consistently mistake pictures of sand dunes for nude images. Such a mistake might seem funny at first, but as we see with the case of Predictim, mistakes like this can have serious consequences for people. Moreover, computer vision technology has been shown to reproduce racial and gender biases, performing worse on images of women and of people with darker skin, so its flaws are not even distributed evenly.
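To get a feel for how crude this kind of text scanning can be, here is a deliberately simplified, hypothetical sketch of a keyword-based ‘risk score.’ It is not Predictim’s actual system, which was proprietary and undisclosed, but it illustrates the basic problem: counting trigger words in text tells you nothing about irony, context, or character.

```python
# Hypothetical sketch of keyword-based "risk scoring" -- invented for
# illustration, not Predictim's actual (undisclosed) method.
BULLYING_TERMS = {"idiot", "loser", "stupid", "hate"}

def risk_score(posts: list[str]) -> float:
    """Return the fraction of posts containing a 'bullying' keyword."""
    flagged = 0
    for post in posts:
        words = {w.strip(".,!?").lower() for w in post.split()}
        if words & BULLYING_TERMS:
            flagged += 1
    return flagged / len(posts) if posts else 0.0

posts = [
    "Calling myself an idiot for locking my keys in the car again",  # self-deprecating joke
    "I hate Mondays, who's with me?",                                # harmless venting
]
print(risk_score(posts))  # 1.0 -- every post flagged, none of it bullying
```

Real systems use more sophisticated models than a keyword list, but the underlying issue is the same: the system matches surface patterns in text; it does not understand what anyone meant.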
Unpacking Predictim’s claims about using ‘advanced artificial intelligence’ highlights that there’s nothing especially futuristic or advanced about the techniques they use. Calling it ‘advanced artificial intelligence’ adds a gloss of technological sophistication to what is ultimately quite a crass and dangerous method of making judgements about people’s character. With this example in mind, let’s look at how we can demystify the term artificial intelligence.
The history of the term AI
To understand the confusion around defining the term ‘artificial intelligence,’ it’s useful to look at the origin of the term. Although considered a pioneer of the field of AI, Alan Turing did not use the term ‘artificial intelligence’ in his seminal papers such as the 1948 paper, Intelligent Machinery, or the famous paper from 1950 that introduced the ‘Imitation Game’ a.k.a. Turing Test, Computing Machinery and Intelligence. The reason that Turing did not use the term in either of these papers is that it was only coined in 1955.
The first use of the term ‘artificial intelligence’ appears in a funding proposal for a workshop at Dartmouth College, in Hanover, New Hampshire, United States. In the proposal for the Dartmouth Summer Research Project on Artificial Intelligence, we find the following wildly ambitious passage:
We propose that a 2-month, 10-man study of artificial intelligence be carried out during the summer of 1956 at Dartmouth College in Hanover, New Hampshire. The study is to proceed on the basis of the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it. An attempt will be made to find how to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves. We think that a significant advance can be made in one or more of these problems if a carefully selected group of scientists work on it together for a summer.
The original aim of the field of AI was clearly enormously ambitious: to simulate with a machine “every aspect of learning or any other feature of intelligence.” Unsurprisingly, the 2-month, 10-man study was unable to accomplish this aim in one summer, and it still hasn't been accomplished 65 years later.
Regarding the question of what the field of AI has accomplished, while it’s clear that we haven’t developed superintelligent shiny robots (despite what the illustrations accompanying most news articles would have you think, see: AI = shiny humanoid robots), we should also consider what is called “the AI effect.” In essence, this refers to the phenomenon that when a computer can’t do something, people tend to think of it as a marker of intelligence, but as soon as a computer can do it, they no longer see it as a significant benchmark. On this issue, Larry Tesler has been quoted as saying that "AI is whatever hasn't been done yet."
Chess is a good example of this. When the first proposals were made to build a machine that could play chess and would be capable of beating a grandmaster, they were considered impossible because chess playing was thought to be one of the pinnacles of human intelligence. Defeating a competent chess player, not to mention a grandmaster, would have been thought a sure sign of intelligence. Since Deep Blue’s defeat of Garry Kasparov, however, we no longer consider it remarkable that a powerful computer can defeat even the best human chess player. The same applies to a plethora of tasks that we now see as routine operations, but that in the past seemed like almost impossible achievements for machines and which certainly would have been taken as signs of intelligence.
What the field of AI has been successful at is building machines that can accomplish a variety of narrow tasks. These are sometimes tasks that are difficult for humans, but that are relatively easy for powerful computers. For example, a basic calculator that you can buy in any shop has ‘superhuman intelligence’ when it comes to arithmetic, but it’s unlikely that anyone would see it as an example of artificial intelligence. It has also been a constant that many tasks which are quite easy for humans, such as recognising that multiple pictures all show the same person, have been quite tough for machines (although machine learning systems have recently excelled at this particular task).
It’s important to note, however, that progress in such narrow tasks (what is often referred to as narrow AI), gives no guarantee that we are progressing to some sort of broad artificial intelligence (often called general AI or AGI) that could mimic or even surpass human intelligence across multiple or all domains. As Gary Marcus has pointed out, thinking that such incremental progress on narrow tasks will eventually ‘solve intelligence’ is like thinking that one can build a ladder to the moon.
An interesting consideration for our problem of defining AI is that even at the Dartmouth workshop in 1956 there was significant disagreement about the term ‘artificial intelligence.’ In fact, two of the participants, Allen Newell and Herb Simon, disagreed with the term, and proposed instead to call the field ‘complex information processing.’ Ultimately the term ‘artificial intelligence’ won out, but Newell and Simon continued to use the term complex information processing for a number of years.
Complex information processing certainly sounds a lot more sober and scientific than artificial intelligence, and David Leslie even suggests that the proponents of the latter term favoured it precisely because of its marketing appeal. Leslie also speculates about “what the fate of AI research might have looked like had Simon and Newell’s handle prevailed. Would Nick Bostrom’s best-selling 2014 book Superintelligence have had as much play had it been called Super Complex Information Processing Systems?”
It’s hard to say, of course, but it seems certain that the ‘marketing appeal’ of artificial intelligence contributes to the hype more than the term complex information processing would. If all of these ‘achievements of AI research’ were rather called ‘achievements of complex information processing,’ we might have a more sober and accurate opinion of what these technologies can do. On the other hand, the field might not have attracted the same amount of interest with a more prosaic name.
There are, however, other options beyond the choice between complex information processing and artificial intelligence. To explore these, let’s now look at where AI is today.
AI today: the hype of AI-enabled everything
Everyone seems to be talking about AI today, and we seem to be right around the peak of the hype cycle. Since the Dartmouth workshop in 1956, the field of AI has gone through a number of boom and bust cycles, where periods of enthusiasm were followed by AI winters when the technology failed to deliver on its promises. The current boom, or AI spring, is essentially a boom in machine learning, an approach to AI that relies on training algorithms on large datasets so that they develop their own rules. This is an alternative to rule-based systems where rules have to be hand-coded in.
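The difference between the two approaches can be made concrete with a toy example. The following sketch (assuming the scikit-learn library, with a spam-filtering scenario and data invented purely for illustration) contrasts a hand-written rule with a ‘rule’ that a model derives from labelled examples:

```python
# A toy contrast between rule-based and machine learning approaches.
# The spam example and training data are invented for illustration.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Rule-based: a human writes the rule by hand.
def is_spam_rule_based(message: str) -> bool:
    return "free money" in message.lower()

# Machine learning: the 'rule' is inferred from labelled examples.
messages = [
    "free money, click now",
    "lunch at noon?",
    "claim your free prize today",
    "meeting moved to 3pm",
]
labels = [1, 0, 1, 0]  # 1 = spam, 0 = not spam

vectorizer = CountVectorizer()
features = vectorizer.fit_transform(messages)        # word counts become features
model = LogisticRegression().fit(features, labels)   # weights are learned, not hand-coded

print(model.predict(vectorizer.transform(["win free cash now"])))  # likely [1]: 'free' was associated with spam
```

Scale the handful of example messages up to millions, and the word counts up to pixels, audio samples or word sequences, and you have the basic recipe behind most of the systems currently marketed as AI.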
The current ‘AI spring’ is usually considered to have started in 2012, although it relied on two earlier developments: in terms of data, the creation of ImageNet, a dataset of more than 14 million hand-annotated images; and in terms of hardware, the availability of powerful graphics processing units (GPUs), originally designed for video games, that provided the computing power necessary for complex algorithms, such as neural networks, to analyse large data sets.
The event that is generally considered to have sparked the current boom was when Geoffrey Hinton and his team won the ILSVRC image classification competition (based on ImageNet) by using a deep convolutional neural network. This victory put machine learning, and particularly ‘deep learning’ and neural networks, on the map in a massive way.
It’s important to recognise that the current AI boom is actually a machine learning boom, because machine learning has some very particular advantages and some very particular limitations. What machine learning is good at is analysing very large datasets and spotting patterns in that data. This explains why advances in machine learning since 2012 have led to amazing improvements in predictive text, translation, image recognition, voice recognition, and other domains where finding patterns in huge amounts of data can solve problems.
However, machine learning approaches also have significant limitations. A machine learning system can only find patterns in the data it is trained on: it has no understanding of what those patterns mean, it reproduces whatever biases and gaps the data contains, and it tends to fail when faced with situations that differ from its training examples. Spotting correlations in past data is not the same as understanding a problem, let alone solving a complex social one.
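To make that limitation concrete, here is another hypothetical sketch (again assuming scikit-learn, with an invented hiring dataset) of a model that faithfully learns the pattern in past decisions, bias included:

```python
# Hypothetical sketch: a model trained on past hiring decisions simply
# reproduces the patterns -- including the biases -- in that data.
from sklearn.linear_model import LogisticRegression

# Features: [years_of_experience, attended_elite_school]
past_candidates = [[3, 1], [2, 1], [4, 1], [8, 0], [7, 0], [9, 0]]
past_decisions  = [1, 1, 1, 0, 0, 0]  # 1 = hired, 0 = rejected

model = LogisticRegression().fit(past_candidates, past_decisions)

# The model has 'learned' that the elite-school feature predicts hiring,
# because that is what past decision-makers rewarded.
print(model.predict([[10, 0], [1, 1]]))  # [0 1]: experienced outsider rejected, novice insider hired
```

The model is not wrong about the data; the data is a record of biased human judgements, and pattern-matching cannot tell the difference.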
If we are aware of the limitations of machine learning, then when we hear claims about an ML system that proposes to solve some complex social problem, we are likely to be sceptical. Having an understanding of ML allows us to ask the right questions to find out how the system works. However, if we are presented instead with claims that ‘advanced artificial intelligence’ is being used, we don’t know where to start with our questions.
To help you navigate through the dangerous vagueness of this problem, we’ve prepared some resources. First, we’ve created a tool that allows you to interact with some text and swap the term ‘artificial intelligence (AI)’ for other terms, such as ‘complex information processing,’ ‘proprietary software,’ or just plain old terms such as ‘computer program.’ You can even choose your own term to replace it with. As you will see, swapping out hype-laden terms for more sober ones already goes a long way to giving a more realistic picture of what’s going on. Once you’ve finished playing around with that, check out our curated list of resources that do some great work in defining what artificial intelligence means.
Alternative terms
We've already mentioned that the term 'complex information processing' was proposed as an alternative at the workshop that gave birth to the field of AI. There are, however, many other possibilities when it comes to naming this field. One clear option is always to simply move down a level or two of specificity. Instead of saying "we use artificial intelligence to determine personality type," it is more accurate and honest to say "we use natural language processing to determine personality type." Anyone writing the latter sentence would probably feel the need to explain more about how natural language processing works, which is a good thing for readers, whereas the vagueness of the term 'artificial intelligence' somehow lets writers off the hook.
Without getting into technical specifics, there are other options to replace the term AI with something more general but less loaded. On a simple level, we can usually just insert the term 'machine learning' wherever we see 'artificial intelligence.' If the sentence no longer makes sense, there's a high chance it didn't really make sense before the substitution either. We can also use terms such as 'computational statistics,' 'cognitive automation,' 'applied optimization,' or even say, plainly and simply, 'a computer program.' A number of people also suggest speaking of 'automated decision (-making/-support) systems,' which AI Now define as:
a system that uses automated reasoning to aid or replace a decision-making process that would otherwise be performed by humans. Oftentimes an automated decision system refers to a particular piece of software: an example would be a computer program that takes as its input the school choice preferences of students and outputs school placements. All automated decision systems are designed by humans and involve some degree of human involvement in their operation. Humans are ultimately responsible for how a system receives its inputs (e.g. who collects the data that feeds into a system), how the system is used, and how a system’s outputs are interpreted and acted on
A term like automated decision system is far less prone to misunderstanding than the term artificial intelligence, and alerts us to certain dangers. If we hear about a company that has developed artificial intelligence that can predict some complex social outcome, such as whether two people would form a good couple, we might lend it some credence. But if we instead call it an automated decision system, we realise that it is automating human decision making, and humans are often not so good at predicting social outcomes, so why would an automated version be any better?
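To see how mundane an ‘automated decision system’ can be, consider a bare-bones, hypothetical version of the school-placement example from the AI Now definition above. The names and allocation logic here are invented for illustration; real allocation systems are far more elaborate, but they are still just programs automating a human decision process.

```python
# Hypothetical, bare-bones 'automated decision system': assign each student
# to their highest-ranked school that still has a free seat.
def place_students(preferences: dict[str, list[str]],
                   capacity: dict[str, int]) -> dict[str, str]:
    placements = {}
    seats = dict(capacity)
    for student, ranked_schools in preferences.items():
        for school in ranked_schools:
            if seats.get(school, 0) > 0:
                placements[student] = school
                seats[school] -= 1
                break
        else:
            placements[student] = "unplaced"
    return placements

prefs = {"Ada": ["North High", "South High"], "Ben": ["North High", "South High"]}
print(place_students(prefs, {"North High": 1, "South High": 1}))
# {'Ada': 'North High', 'Ben': 'South High'} -- the processing order decided Ben's fate
```

Every line embodies a human choice: who gets processed first, what counts as a valid preference, what happens to the unplaced. That is exactly why the definition above insists that humans remain responsible for how such systems receive their inputs, how they are used, and how their outputs are acted on.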
Swapping the term AI for other terms offers a handy shortcut to cut through some of the most obvious hype, so we've developed a little tool for you to play around with. The tool generates a sentence using the term AI, and lets you pick from a list of alternatives, or insert your own. Give it a go!
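Under the hood, the swap itself is nothing more exotic than a text substitution. Here is a minimal sketch of the idea (hypothetical, and not the code behind the actual tool):

```python
import re

# Replace 'artificial intelligence (AI)', 'artificial intelligence' or 'AI'
# with a more sober alternative.
def swap_term(text: str, replacement: str = "complex information processing") -> str:
    pattern = r"artificial intelligence \(AI\)|artificial intelligence|\bAI\b"
    return re.sub(pattern, replacement, text, flags=re.IGNORECASE)

print(swap_term("Our advanced artificial intelligence screens babysitter candidates."))
# Our advanced complex information processing screens babysitter candidates.
```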
Terminology swap tool
Bibliography & Resources
Knowing your N-grams from your Eigenvectors might seem daunting at first, but we’ve compiled a list of useful resources here to guide you through this maze of definitions and technical terms. In addition to some articles and glossaries for beginners, we’ve listed some more technical resources, clear explainers, interactive courses and interesting articles on the topic of defining AI.
If we’ve missed out on a great resource, please drop us a line and we’ll check it out!
Introductory
The Atlantic - ‘Artificial Intelligence’ Has Become Meaningless
- Free online course that provides an excellent introduction to AI
Google Machine Learning glossary
The A-Z of AI and Machine Learning: Comprehensive Glossary
Royal Society - Machine learning: the power and promise of computers that learn by example
New York Times - An AI Glossary
A People’s Guide to AI - by Mimi Onuoha and Mother Cyborg (Diana Nucera)
AI 101 - What is AI and where is it going?
Hackernoon article: Are you using the term ‘AI’ incorrectly?
What is Machine Learning? MIT Technology Review
Flowchart on ‘Is it AI?’ from MIT Technology Review
Automation is not intelligence
List of AI Glossaries by Alexa Steinbrück
Technical
Machine Learning for everybody
- Excellent explainer from Vas3k’s blog
Some Key Machine Learning Definitions - Joydeep Bhattacharjee
- Nice, clear definitions. Check out Joydeep’s Medium page for more technical explainers, with code, on a number of other fundamental issues such as Linear Regression, Unsupervised Learning, and an overview of Popular Machine Learning Algorithms.
Pei Wang: Four Basic Questions on AI
Dagmar Monett & Colin Lewis - Getting Clarity by Defining Artificial Intelligence—A Survey
- See also this talk from Dagmar Monett: Defining and Agreeing on Intelligence
Journal of Artificial Intelligence: Special Edition on Defining AI
Universal Intelligence: A Definition of Machine Intelligence
- p.52-62 has a good section with definitions of data, statistical inference, machine learning, artificial intelligence, algorithmic systems etc
DeepAI’s glossary is very comprehensive, if quite technical for absolute beginners.
The UK’s Information Commissioner’s Office (ICO) has an incredibly detailed and informative 3-part guidance on Explaining Decisions Made with AI. In part 1 of the guidance, there is a very helpful overview of different AI techniques that discusses their explainability.
History of AI:
Alan Turing and the beginnings of AI
Turing’s original paper “Intelligent Machinery”
Computing Machinery and Intelligence (the Turing test paper)
Wikipedia article on the AI effect
On ImageNet:
The data that transformed AI research—and possibly the world
Fei Fei Li et al. - ImageNet: A Large-Scale Hierarchical Image Database
- The original paper that launched ImageNet
ImageNet Classification with Deep Convolutional Neural Networks
- The original paper from Hinton et al. on the ILSVRC-2010 competition
The viral selfie app ImageNet Roulette seemed fun – until it called me a racist slur
- The article that accompanied the ImageNet Roulette project