
AI Approaches Compared: Rule-Based Testing vs. Learning

How can AI be put into practice? Broadly speaking, the field of AI distinguishes between rule-based techniques and machine learning techniques; the entire universe of AI can be split into these two groups. A computer system that achieves AI through a rule-based technique is called a rule-based system. A computer system that achieves AI through a machine learning technique is called a learning system.

Let’s start with the bird’s-eye view by asking a question: Must a computer system necessarily have the ability to learn in order to simulate intelligence? Opinions are deeply divided on this issue. Monica Anderson, for example, stated that if it’s not learning, it’s not AI. For others, this is a deep misinterpretation of AI: they claim that rule-based systems (not to be confused with rule-based machine learning) simulate intelligence (at least to some degree) without having the ability to learn. No worries, we will define below what we actually mean by learning.

Rule-Based Systems

A rule-based system (e.g., a production system or an expert system) uses rules as its knowledge representation. These rules are coded into the system in the form of if-then-else statements. The main idea of a rule-based system is to capture the knowledge of a human expert in a specialized domain and embody it within a computer system. That’s it. No more, no less. Knowledge is encoded as rules.
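To make this concrete, here is a minimal sketch of what “knowledge encoded as rules” can look like in code. The loan-approval domain, the rules, and all thresholds are invented for illustration; a real expert system would contain hundreds or thousands of such rules.

```python
# A minimal rule-based sketch: expert knowledge about loan approval,
# hand-coded as if-then-else rules. Domain and thresholds are invented.
def approve_loan(income: float, credit_score: int, has_defaults: bool) -> bool:
    if has_defaults:
        return False            # rule 1: past defaults disqualify
    elif credit_score >= 700:
        return True             # rule 2: good credit is enough
    elif income > 80_000 and credit_score >= 600:
        return True             # rule 3: high income compensates
    else:
        return False            # fallback: no rule fired in favor

print(approve_loan(income=90_000, credit_score=650, has_defaults=False))  # True
```

Note that the system’s “knowledge” is exactly these hand-written rules and nothing more; nothing in it changes unless a human edits the code.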

So, in this case, one could say that rule-based systems just fake intelligence because of their missing learning capability, in the same way that humans sometimes fake sympathy for others. It’s not really real. Whether one regards this as a type of AI is a definitional question, and AI research is (unsurprisingly) divided on the issue. So, let’s regard rule-based systems as the simplest form of AI. A rule-based system is limited in its ability to simulate intelligence: it’s always bounded by the size of its underlying rule base (knowledge base). It is said to have rigid intelligence. For this reason, rule-based systems can implement narrow AI at best.

A rule-based system is like a human being born with fixed knowledge that never changes over time. When this person encounters a problem for which no rules have been designed, they get stuck and can’t solve it. In a sense, they don’t even understand the problem.

That’s the dilemma of rule-based systems, and they cause other problems as well. For example, it’s difficult (to nearly impossible) to add rules to an already large knowledge base without introducing contradictory rules. The maintenance of these systems then often becomes too time-consuming and expensive. As such, rule-based systems aren’t very useful for solving problems in complex domains or across multiple different but simple domains. Apart from that, in some situations (e.g., cancer detection in medical images), it’s not even possible to explicitly define rules in a programmatic or declarative way. That’s typically the showstopper for rule-based systems, and (usually) the point where learning systems come into play.

Learning Systems

In contrast to rule-based systems, learning systems have a very ambitious goal. The vision of AI research, which turns out to be more a hope than a concrete vision, is to implement general AI through the learning capability of these systems. Hence, the hope is that a learning system is in principle unlimited in its ability to simulate intelligence. It’s said to have adaptive intelligence: existing knowledge can be changed or discarded, and new knowledge can be acquired. In effect, these systems build their rules on the fly. That is what makes learning systems so different from rule-based systems. A neural network is an instance of a learning system.

Bottom Line. Rule-based systems rely on explicitly stated and static models of a domain. Learning systems create their own models.

This sounds like learning systems do some black magic. They don’t. Let’s have a look at the following analogy by Kasper Fredenslund to demystify learning systems by figuring out what we actually mean by learning.

The difference between rule-based systems and learning systems boils down to who (a computer system or a human being) does the learning. For example, imagine a group of human real estate agents. Think of this group as our rule-based system.

This group tries to predict house prices based on some given information about the houses (e.g., location, build year, size). Furthermore, imagine that the agents already know the prices of some example houses upfront, along with the location, build year, and size of each house. In other words, the agents have already been trained. The agents then try to predict the prices of new houses based on the given information. To do that, they would most probably start by building a model; e.g., imagine they developed a model in the form of a simple linear equation:

price = location · w₁ + build year · w₂ + size · w₃

We can easily imagine that the agents would use a combination of intuition and experience (their knowledge base) to come up with values for the weights to approximate the price of a house they have never seen before. By doing this for more and more houses, the agents would gather more and more data, and they would then (most probably) start tweaking the model itself (e.g., the form of the equation) or the parameters (weights) of the model to minimize the difference between the predicted price and the actual price. So, the key to learning is feedback; it is nearly impossible to learn anything without it (Steve Levitt). That’s (in rough terms) how rule-based systems work.
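As a rough sketch, the agents’ hand-built model might look like this in code. All weights and numbers here are made up for illustration; the point is that a human picks the weights and a human interprets the feedback.

```python
# The agents' hand-built model: a human picks the weights based on
# intuition and experience, then compares the prediction with reality.
# All numbers are invented for illustration.
w1, w2, w3 = 1500.0, 120.0, 900.0        # hand-picked weights

def predict_price(location_score, build_year, size):
    return location_score * w1 + build_year * w2 + size * w3

actual_price = 420_000
predicted = predict_price(location_score=8, build_year=1995, size=120)
error = actual_price - predicted          # the feedback the agents use
print(predicted, error)                   # a human tweaks w1..w3 and retries
```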

In contrast, a learning system says, “Screw the agents; we don’t need human beings!” A learning system (in a nutshell) just automates the process of tweaking the weights to minimize the difference between the actual price and the predicted price, just like the human agents would do. A measure of this difference between the actual price and the predicted price is what is known as the utility function (cost function) of a learning system.

The goal of a learning system is to minimize that function, and the system does this by tweaking the weights in such a way that the function is minimized. Hence, learning just means finding the right weights to minimize the utility function. In learning systems (e.g., deep neural networks), the gradients needed to optimize (minimize or maximize) the utility function are computed through a process called backpropagation, and the optimization itself is achieved through traditional optimization techniques.

These optimization techniques (e.g., gradient descent, stochastic gradient descent) are indeed rule-based techniques: they follow fixed mathematical rules, using the gradient to adjust the weights and biases of a neural network so that its utility function is optimized. How all this is done in practice varies greatly (e.g., supervised and unsupervised learning), but this example isn’t too far from reality.
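Here is a minimal sketch of what the learning system automates, using plain gradient descent on the same linear model. The training data, learning rate, and step count are all invented for illustration; a real system would normalize the features and use far more data.

```python
# A minimal sketch of automated learning: gradient descent tweaks the
# weights to minimize the mean squared difference between predicted and
# actual prices. Data, learning rate, and step count are invented.
# Features per house: (location score, build year, size); target: price.
data = [
    ((8.0, 1995.0, 120.0), 420_000.0),
    ((5.0, 1980.0,  90.0), 260_000.0),
    ((9.0, 2010.0, 150.0), 550_000.0),
]
weights = [0.0, 0.0, 0.0]
lr = 1e-8   # tiny learning rate because the raw features are not normalized

for step in range(100_000):
    grads = [0.0, 0.0, 0.0]
    for features, price in data:
        predicted = sum(w * x for w, x in zip(weights, features))
        error = predicted - price           # feedback, now measured by code
        for i, x in enumerate(features):
            grads[i] += 2 * error * x / len(data)
    weights = [w - lr * g for w, g in zip(weights, grads)]   # the "tweak"

print(weights)   # the learned weights that (approximately) minimize the cost
```

No human ever sets or adjusts the weights here; the update rule does all the tweaking, which is exactly the point of the analogy.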

Bottom Line. Optimization is at the heart of learning systems. Learning systems heavily rely on rule-based techniques (e.g., mathematical optimization).

Problems

What’s the problem with these systems? Although the learning process itself is deterministic (even when it includes statistical and probabilistic methods), it is almost impossible, from a practical perspective, to extract the learned model from the internal workings of a learning system because of its sheer complexity, caused by gazillions of dynamic parameters (e.g., weights, biases).

As a natural consequence, the models that have been learned can’t be interpreted, explained, or understood well enough by humans. For this reason, learning systems are often referred to as black boxes: it isn’t entirely clear how these systems make their decisions. That’s the dark secret at the heart of learning systems (Will Knight). According to Tommi Jaakkola (MIT, Computer Science), this is already a major problem for many applications: whether it’s an investment decision, a medical decision, or a military decision, you don’t want to just rely on a black box.

Bottom Line. There is no black magic related to learning systems. There is only knowledge (more or less) hidden in these learning systems (Gene Wolfe). For this reason, most AIs are not compliance-ready yet.

GDPR

It is worth noting that the European Union will probably create a right to explanation through GDPR, whereby a user can ask for an explanation of an algorithmic decision made about them. Is GDPR a game-changer for AI technologies? It’s not abundantly clear yet what it really means, and it remains to be seen whether such a law is legally enforceable. It’s not even clear whether that law is more a right to be informed than a right to explanation. Therefore, the impact of GDPR on AI is still heavily debated.

Black Box

In 2015, a research group at Mount Sinai Hospital in New York applied a learning system (implemented through deep machine learning) to the hospital’s vast database of patient records. This data set features hundreds of variables on patients, drawn from their test results, doctor visits, and so on. The resulting learning system, called Deep Patient, was trained using data from about 700,000 individuals, and when tested on new records, it proved incredibly good at predicting disease. Without any expert instruction, Deep Patient discovered patterns hidden in the hospital data that seemed to indicate when people were on the way to a wide range of ailments, including cancer of the liver.

There are a lot of methods that are pretty good at predicting disease from a patient’s records, according to Joel Dudley, who led the Mount Sinai team. But, he added, “this was just way better.” At the same time, Deep Patient is a bit puzzling. It appears to anticipate the onset of psychiatric disorders like schizophrenia surprisingly well. But since schizophrenia is notoriously difficult for physicians to predict, Dudley wondered how this was possible. He still doesn’t know. Deep Patient offers no clue as to how it does this. If something like Deep Patient is actually going to help doctors, it will ideally give them the rationale for its prediction, to reassure them that it is accurate and to justify, e.g., a change in the drugs someone is prescribed. “We can build these models,” Dudley said, “but we don’t know how they work” (Will Knight, The Dark Secret of AI).

Decision-Making Bias

In a recent TEDx Talk (Dec. 2017), Peter Haas (Brown University) described some problems related to state-of-the-art AI technologies, especially in the field of image recognition. For example, Haas discussed a simple example where a learning system was asked to distinguish between dogs and wolves in images. In one case, a dog was misidentified as a wolf, and the researchers who developed the learning system had no clue why this happened.

They rewrote the entire algorithm so that the learning system could show them the parts of the picture it was paying attention to when making this decision. It turned out that the learning system paid attention mostly to the snow in the background of the image. This is a form of bias in the data set that was fed into the learning system.

It was found that most of the images of wolves that were used to train the learning system pictured wolves in snow. Therefore, the learning system conflated the presence or absence of snow with the presence or absence of a wolf. The scary thing about this is that the researchers had no idea that this was happening until they rewrote the learning algorithm to explain itself.
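As a toy illustration of this kind of dataset bias (with invented data, not the actual study’s), consider a naive learner that simply picks the single feature that best separates the training labels. Because snow correlates perfectly with the wolf label in the biased training set, the learner latches onto snow:

```python
# Toy illustration of dataset bias (invented data, not the actual study):
# in this training set, "snow" happens to correlate perfectly with "wolf",
# so a naive learner latches onto snow instead of the animal itself.
# Features per image: (has_pointed_ears, is_in_snow); label: wolf or dog.
train = [
    ((1, 1), "wolf"), ((1, 1), "wolf"), ((0, 1), "wolf"),
    ((1, 0), "dog"),  ((0, 0), "dog"),  ((1, 0), "dog"),
]

# pick the single feature that best predicts the label (a decision stump)
def accuracy(feature_index):
    return sum((x[feature_index] == 1) == (label == "wolf")
               for x, label in train) / len(train)

best = max(range(2), key=accuracy)
print(best)  # 1 -> the learner "decides" based on snow, not the animal

# a dog photographed in snow is now misclassified as a wolf
dog_in_snow = (1, 1)
print("wolf" if dog_in_snow[best] == 1 else "dog")  # wolf
```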

You may not care about distinguishing between dogs and wolves in images, but you should. The bottom line is that learning systems are also used in real life by real judges to make real decisions about real people’s lives. Learning systems are used to decide whether or not you get a home loan, a job interview, or Medicaid, and the list goes on. So, you should (at least) care.

Bottom Line. If you don’t need to understand the model that has been learned and you have enough training data and enough computing power available, then a learning system will provide great value at the speed of ideas.

Selecting an Approach

The decision whether to go for a rule-based system or a learning system depends on the problem you want to solve, and it’s always a trade-off among efficiency, training costs, and understanding. As stated above, rule-based systems as well as learning systems are implemented by concrete techniques (algorithms). These terms are just umbrella terms, each representing a set of various specific techniques.

For example, learning systems are implemented by machine learning techniques, whereas the term “machine learning” itself is again a collective title for a variety of techniques, such as deep learning (based on neural networks), reinforcement learning, genetic algorithms, decision tree learning, support vector machines, and many (many) more. So, there is no single machine learning technique. The rule-based category is likewise just a suitcase word for a bunch of techniques (e.g., optimization techniques).

We will skip the specifics of all these techniques so we don’t get lost in the details. By no means is this article meant to be an exhaustive resource on these techniques. Understanding the relation between rule-based and learning systems, and how they interact with each other, is sufficient for discussing their applications in software testing.

Suggestion. If you aren’t familiar with all these terms and techniques, we recommend some videos to learn about neural networks, gradient descent, and backpropagation.

Next: The Business Impact of AI

Back to AI in Testing overview

Author:

Tricentis Staff

Various contributors

Date: Jan. 08, 2019
