What Is AI?
Where did it all begin? The term AI was coined not by Alan Turing but by John McCarthy at the pivotal Dartmouth Workshop (1956). This workshop is considered the birthplace of AI as a research field. It lasted about eight weeks at Dartmouth College (New Hampshire, United States) and was essentially an extended brainstorming session on machine thinking involving eleven scientists (e.g., Marvin Minsky, John McCarthy).
The immodest goal of this workshop was to study the conjecture that every aspect of learning, or any other feature of any kind of intelligence, can in principle be so precisely described that a computer system can be made to simulate it. This workshop formally established AI as a sub-field of computer science. It’s important to note that the name AI for this workshop was mainly picked because of its neutrality. The organizers wanted this group of leading researchers of that time to avoid focusing too narrowly on specific topics (e.g., automata theory, complex information processing, cybernetics). That’s how AI got its name.
Definition
How is AI defined? Based on the core idea of the workshop, we can easily infer that the field of AI assumes that intelligence can be mechanized and thus simulated by computer systems. Therefore, the field of AI aims to build thinking machines with an intelligence (at least) comparable to that of the human mind. The degree to which an AI matches the actual functioning of the human brain is called biological plausibility. Note, however, that biological plausibility is more a guide for AI research than a strict requirement. In short, the purpose of AI is to mechanize intelligence. That’s our mantra, our sacred group of words to reflect on the purpose of AI.
Bottom Line
AI is a concept, and this concept is implemented by techniques (e.g., deep machine learning) to perform certain tasks (e.g., image recognition).
In other words, AI can be understood as the science behind these techniques, and these techniques are used to perform certain tasks. Don’t confuse the techniques (e.g., deep machine learning) with the concept of AI; there are subtle but significant differences between them. Also, stop thinking of robots as AIs. A robot is a container for AI; the AI itself is inside the robot. AI is the brain, and the robot is its body (Tim Urban). That’s all you need to know.
Intelligence Space
It’s worth noting that we deliberately speak of “intelligence” rather than “human intelligence”: although the human mind is the most advanced thinking system we know, human intelligence and general AI (defined below) need not be anything alike. For example, a fighter jet is not similar to a bird. Both have wings and both fly, but they achieve flight in very different ways, and a fighter jet is a threat to you in a way that no bird is, so the comparison is of limited use.
According to AI researcher Robert Miles (University of Nottingham), a computer system can be far more alien (e.g., selfish) than anything we can imagine, because the space of all possible intelligences is vast. Imagine the space of all intelligences that biological evolution can produce; within that, the space of intelligences that actually exist (much smaller but still huge); and within that, human intelligence, and so on. Human intelligence is a minuscule dot on a minuscule dot on a minuscule dot in the space of all possible intelligences. As such, the space of all possible intelligences is vast, and so is the space of all possible behaving systems. This means that an AI need not think the way a human being thinks, nor behave the way a human being behaves. Artificial intelligence might therefore be completely different from human-type intelligence.
Bottom Line. AI research doesn’t just explore human intelligence; it explores the entire space of all possible intelligences.
Intelligence Definition
How do we define intelligence? The truth is that AI is hard to define because intelligence is hard to define in the first place. So, let’s tackle this, since just putting the term “intelligence” into play without any further explanation doesn’t help. We should (at least) try to define it. The problem is that we simply don’t have a single, agreed-upon formal definition of intelligence. This truth may hurt for a little while, but please bear in mind that a lie hurts forever.
According to one research paper (Jun. 2007), the AI research community itself is divided on this issue. In this paper, about 70 informal definitions of intelligence were collected from various disciplines (e.g., psychology, computer science). There seem to be almost as many definitions of intelligence as there were experts asked to define it (Robert Sternberg). Nevertheless, the authors of this paper also concluded that, in many cases, different definitions (suitably interpreted) actually say the same thing in different words. That is why the authors put the key attributes of all the collected definitions together and argued that intelligence measures an agent’s ability to achieve goals in a wide range of environments. Features such as the ability to learn and adapt or to understand are implicit in this definition, because these capacities enable an agent to succeed in a wide range of environments.
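For readers who like formulas: in related work, AI researchers Shane Legg and Marcus Hutter proposed a formal “universal intelligence” measure built on exactly this idea. The block below is a rough sketch of it, using their notation; nothing later in this text depends on it.

```latex
% Rough sketch of Legg & Hutter's universal intelligence measure.
% The agent \pi is scored by the expected reward V^{\pi}_{\mu} it earns in
% every computable environment \mu in the set E, with simpler environments
% (low Kolmogorov complexity K(\mu)) weighted more heavily.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
```

In plain words: the better an agent performs across a wide range of environments, the higher its score, which is exactly the informal definition above.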
Bottom Line. There is still no standard definition of intelligence, and there might not be one anytime soon. It’s still a great debate, since it’s a definitional problem at its core.
Broadly speaking, the field of AI is split into two groups: narrow AI and general AI.
Narrow AI
What is currently labeled as AI is largely narrow AI (or weak AI, specialized AI). Narrow AI is focused on performing one task (e.g., playing chess) or working in one domain (e.g., playing board games) and becoming increasingly good at it. As such, a narrow AI can be understood as a computer system with the ability to apply intelligence to a specific problem.
General AI
General AI (or true AI, strong AI) can be understood as a computer system with the ability to apply intelligence to any problem. That’s the holy grail of AI research, the real new black. Although it’s been said for decades that it’s just around the corner, it is likely to be a while before we come face to face with general AI. According to Demis Hassabis (CEO, DeepMind), transfer learning (a sub-field of machine learning) might be the key to general AI. Transfer learning occurs when knowledge gained from solving a problem in one domain (e.g., playing chess) is reused to solve a problem in a totally unrelated domain (e.g., weather forecasting).
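To make the mechanism a bit more tangible, here is a minimal sketch of the most common (and still narrow) form of transfer learning: reusing an image model pretrained on one dataset for a new image task. It assumes PyTorch and a recent torchvision are installed; the 10-class task is hypothetical, and this is an illustration only, not the kind of cross-domain transfer Hassabis is pointing at.

```python
import torch.nn as nn
import torch.optim as optim
from torchvision import models

# Load a network whose weights were already learned on another task
# (ImageNet classification).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the pretrained layers: the knowledge gained on the old task is kept.
for param in model.parameters():
    param.requires_grad = False

# Replace only the final layer so the reused features can serve the new task
# (a hypothetical 10-class problem).
model.fc = nn.Linear(model.fc.in_features, 10)

# Train just the new head on data from the new domain.
optimizer = optim.Adam(model.fc.parameters(), lr=1e-3)
```

The pretrained layers act as the transferred knowledge; only the small new head has to be learned from scratch.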
François Chollet (AI Researcher, Google) has argued that you can’t reach general AI simply by scaling up today’s techniques (e.g., deep machine learning). Geoffrey Hinton (University of Toronto), often called the godfather of AI, has suggested that we may need to throw the current techniques away and start again to get there.
Understanding
Roger Penrose looked at general AI from a much broader perspective by exploring the difference between meaning, understanding, and computation. Based on the nature of mathematical computation, he argued that computer systems don’t understand and will never be able to understand, because understanding doesn’t arise from mathematical (and computational) rules. Drawing on various examples (one is outlined below), Penrose concluded that understanding is not encapsulated by computational rules. As such, understanding is not rule-driven; it’s not algorithmic, and it’s not machine decidable. No worries; we won’t dive deeper into these philosophical considerations. Let’s just explore one example to clear things up.
Mathematical Truth
Roger Penrose asked how computer systems might ever be able to deal with the concept of infinity. Think of this statement: “Add any two odd numbers together and you always get an even number.” That’s a statement about an infinite number of things, and you don’t have to think too hard to realize that it is true for all numbers – and that’s an infinite number of numbers. Note that this statement is mathematically proven; it’s a mathematical truth. Now, imagine you hand this statement (adding two odd numbers gives an even number) to a computer system.
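To see why no case-by-case checking is needed, here is the one-line argument, written out for two arbitrary odd numbers (every odd number can be written as twice some natural number plus one):

```latex
% Two arbitrary odd numbers: 2m + 1 and 2n + 1, for natural numbers m and n.
(2m + 1) + (2n + 1) = 2m + 2n + 2 = 2(m + n + 1)
```

The sum is twice a natural number, hence even, no matter which odd numbers you started with.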
The question then is, “Can a computer system prove this mathematical truth?” To prove this statement, we need a set of rules (axioms). These rules will then give you a proof. So, if you want to use these rules, you need to believe that each rule is true and that the set of rules you want to use is consistent. Consistency (in rough terms) means that you can’t derive nonsense (contradictions) from these rules (e.g., 2 equals 3).
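For this particular statement, a machine really can derive a proof from such rules. Below is a rough sketch in the Lean proof assistant (assuming the Mathlib library is available), just to show what “proving within a set of rules” looks like in practice; Gödel’s point, coming next, is that this does not work for every true statement.

```lean
import Mathlib

-- A machine-checked proof, built from Lean's rules, that the sum of two
-- odd natural numbers is even.
example (m n : ℕ) (hm : Odd m) (hn : Odd n) : Even (m + n) := by
  obtain ⟨a, ha⟩ := hm  -- unpack the witness behind "m is odd"
  obtain ⟨b, hb⟩ := hn  -- unpack the witness behind "n is odd"
  exact ⟨a + b + 1, by omega⟩  -- a + b + 1 witnesses "m + n is even"
```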
In mathematics, these rules are incredibly simple. For example, take two natural numbers x and y. One of the Peano axioms says, “If x equals y, then y equals x.” That’s easy, so we believe that this rule is true. What does that mean? The Austrian mathematician Kurt Gödel showed (in rough terms) that if you have such a set of rules, you can construct a statement about numbers within it (a carefully built, self-referential one, not a simple fact like our odd-numbers example) that has the strange property that it is true (on the one hand) but has no proof within that set of rules (no matter how big that set of rules is).
This is what is known as Gödel’s incompleteness theorems. These theorems place a hard limit on what any fixed set of rules can prove in mathematics. In simple terms, this means that mathematical truths cannot (always) be reduced to mechanical rules. This (in turn) implies that mathematical truths cannot (always) be checked by computer systems. As a result, there is a gap between truth and proof, since not all true mathematical statements can be proved. For this reason, Penrose (among others) concluded that understanding is not rule-driven. He claimed that what goes on in our brains is not rule-driven; it’s not algorithmic, so it’s not machine decidable. Astounding.
Not convinced yet? Then watch this explanation by Marcus du Sautoy.
Conclusion
From all this, it becomes crystal clear that narrow AI is the only form of AI that humanity has achieved so far. We’ve built extremely sophisticated narrow AIs that outperform humans at very specific tasks, such as playing chess, playing Go, making purchase suggestions, making sales predictions, making weather forecasts, recognizing patterns in images and in heaps of data, and driving cars. Nevertheless, we haven’t yet built one general AI that can do all of this (at least) as well as humans (general human-level AI) or even better (general artificial superintelligence). From that perspective, general AI (to overstate the case) can be understood as the science of how to get computer systems to do the things they do in the movies (Astro Teller).
So, if you are living on planet Earth, you have probably already experienced some sort of AI. Unfortunately, it was only narrow AI (all too often, it’s just fake AI).
Bottom Line. There are two kinds of AI, and the difference is important.
Where do we draw the line between what’s AI and what’s not AI? The short answer is, “It depends!”
- Observation. AI is hard to define because intelligence is hard to define in the first place.
- Problem. There is no standard definition of intelligence yet (see this paper).
- Conclusion. There is no universal border; it depends on how you define intelligence.
Finally, we conclude that AI is still an evolving term: AI is defined differently by different communities, and its definition will continue to change as the field advances.