
Testing AI-based apps? Think like a human

Jess Ingrassellino Engineering Manager, InfluxData

Your testing of software that includes artificial intelligence (AI) components will be more sophisticated and robust if you just think in human terms. 

If you want to understand the testing requirements for things such as predictive analytics, you need to think about how AI "learns its world." For example, you'll want to know where—and how—predictions fall apart, as well as the potential weaknesses of an algorithm and how to find them.

Like people, machines have past experiences. But those experiences are provided by the programmers who create the training sets of historical data against which the system can learn.

So how can you work to select and create test datasets that are robust? How can you test for the reality of dirty data that you may encounter in the real world? And how can you avoid training the AI with your test datasets in a way that accidentally generates false predictions?
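
The dirty-data question, in particular, lends itself to a concrete check. As a minimal sketch (every name here is hypothetical, not a real library's API), you can corrupt a clean test set and verify that the system under test degrades gracefully, flagging unusable input instead of crashing or silently guessing:

```python
import random

def corrupt(records, rate=0.3, seed=42):
    """Return a copy of `records` with some fields blanked or set to a
    sentinel value, simulating the dirty data a deployed model actually sees."""
    rng = random.Random(seed)
    dirty = []
    for rec in records:
        rec = dict(rec)
        if rng.random() < rate:
            field = rng.choice(sorted(rec))      # pick a field to damage
            rec[field] = rng.choice([None, "", -999])
        dirty.append(rec)
    return dirty

def predict(rec):
    """Hypothetical system under test: it should report 'unknown' on
    unusable input rather than crash or silently guess."""
    age = rec.get("age")
    if age in (None, "", -999):
        return "unknown"
    return "adult" if age >= 18 else "minor"

clean = [{"age": 25}, {"age": 12}, {"age": 40}]
print([predict(r) for r in clean])                     # predictions on clean data
print([predict(r) for r in corrupt(clean, rate=1.0)])  # every record corrupted
```

The point is not this toy classifier but the pattern: keep a pristine dataset, derive deliberately damaged variants from it, and assert on how the model behaves at the seams.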

Before you can answer those questions—and start testing—you need a working knowledge of AI technologies and the three basic theories of learning. 

AI, machine learning, and deep learning in a nutshell

The terms "artificial intelligence," "machine learning," and "deep learning" are frequently used—and frequently confused. To understand how learning theories apply to these technologies, you must first understand what they are.

  • Artificial intelligence is the study and creation of intelligent machines that are designed to replicate human thinking as closely as possible. There are many sub-domains of AI, including machine learning, deep learning, and natural-language processing, each with its own complexity. 
  • Machine learning occurs when systems are created that can learn to perform in ways that are not directly programmed. They are self-updating, improving without direct human intervention. Recommendation engines on entertainment websites are a good example.
  • Deep-learning systems more closely model human neural networks, learning by processing extremely large datasets in nonlinear ways. By using multiple points of reference at different points along massive amounts of data, deep-learning systems can detect more nuanced relationships among various layered data points. Deep-learning networks keep semi-autonomous cars on the road and help home assistants seem more responsive to their users. 

AI-powered technologies are integrating rapidly with people's lives, becoming substantially more sophisticated with each passing day. Understanding their complexity from a technical or mathematical perspective can be difficult. 

Learning theories you should know

Human learning theory can greatly enhance your ability to understand what learning in machines looks like, and how to determine what to consider when testing complex, AI-powered hardware and software.

Many psychologists, educators, and sociologists have studied human learning over the past 150 years and have come to understand that human learning is a complex endeavor. At the same time, there are areas of human learning that we still don't understand.

To understand how the study of learning has evolved, you need a basic knowledge of behaviorist, cognitive, and constructivist learning theories.

Behaviorism

Behaviorist learning theory is based on the idea that knowledge is external and that learning occurs through repeated interactions that produce a changed outcome. Knowledge is an external, observable truth, as opposed to internal thoughts or emotions.

One of the most famous behaviorism studies was conducted by Ivan Pavlov, who conditioned dogs to salivate when he rang a bell. Behaviorist learning theories are most useful for understanding lower-level learning in humans; most behavioral studies are conducted on animals before they are attempted on humans.

Cognitive learning theory

Cognitive learning theory says that to understand and encourage learning, you should look beyond behavior and focus on its cause. In this theory, behavior, environment, and personal influences are all factors that affect a person's ability to learn and make decisions.

Cognitive theory is most observable as a state change. For example, someone becomes better at driving a car by thinking through a process, responding to their environment, and making decisions based upon all of those factors.

Both cognitive and behaviorist models tend to emphasize the more rote, memorized, isolated, and abstract forms of knowledge.

Constructivism

In constructivist learning theory, knowledge is constructed from how humans interpret their past experiences, their current state, and the context in which they find themselves. In the constructivist view, knowledge is not a single "truth," but rather a truth for the individual within a given context, relative to that person's space, time, and experience.

The expectation here is that learners' knowledge will become more nuanced over time, based upon how they use their knowledge to interact with the world, and how they incorporate that experience into their overall knowledge story.

How to use learning theories to ask better questions

Much of what currently qualifies as AI lives in the human tradition of behaviorism. Aaron Schumacher, senior data scientist and software engineer at Deep Learning Analytics, wrote an excellent article explaining the deep connections between human and artificial learning.

Since 2017, when Schumacher wrote that article, there have been continued advances in robots learning to do things like use tools, but the very complexities that are so difficult to pin down in constructivist learning theory are the ones that also vex auto companies striving for a fully autonomous car.

If you want to understand the most complex possibilities that AI can throw your way, you need to anticipate all of the diverse randomness that is human behavior. While you can't predict every possible situation, you can ask questions that are informed by how humans construct knowledge.

Here are just a few questions that come to mind when thinking about how data is created and used to train machine-learning algorithms:

  • What assumptions were made when creating the test dataset?
  • What assumptions were made when creating the algorithm?
  • Who created the test dataset and algorithm?
  • Did the creators of the data or algorithm have any potential biases based on their experiences or lack of experiences in the world?
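
Some of those assumptions leave fingerprints you can check mechanically. As a deliberately crude sketch (the function and threshold are my own invention, not an established tool), you can audit the class balance of a labeled dataset before anyone trains on it; a heavily skewed split is one cheap, visible symptom of sampling bias worth raising with the data science team:

```python
from collections import Counter

def audit_balance(labels, tolerance=0.2):
    """Flag any class whose share of the data deviates from a uniform
    split by more than `tolerance` -- a crude first check for sampling
    bias in a training or test set."""
    counts = Counter(labels)
    expected = 1 / len(counts)
    report = {}
    for label, n in counts.items():
        share = n / len(labels)
        report[label] = (share, abs(share - expected) > tolerance)
    return report

labels = ["cat"] * 80 + ["dog"] * 20
for label, (share, flagged) in audit_balance(labels).items():
    print(f"{label}: {share:.0%}" + ("  <-- check sampling" if flagged else ""))
```

A balanced dataset is not automatically an unbiased one, of course; a check like this only tells you where to start asking questions.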

To learn more about the assumptions or biases that might exist in a dataset, testers need to talk with the data science team. Some of the answers to these questions may not be obvious, since people are frequently unaware of their own biases. However, testers can learn a lot by having conversations with the data scientists responsible for creating datasets.

Other issues to pursue

Beyond behaviorism, you want to know if and how machines are learning. What are the kinds of information that help the machine learn? What are the kinds of information that "break" an algorithm or cause it to respond unpredictably?

Are there ways you can force these conditions and learn from them? Are boundary-breaking conditions accounted for in the algorithm? What are the biases of the machine makers, and are these implicit biases tested or addressed? These are the kinds of questions that cognitive learning theory can help us address.
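
One way to force those conditions is to probe the model near its decision boundaries. The sketch below is hypothetical: the "model" is a toy linear scorer with made-up weights, standing in for whatever system you are testing. The idea is to nudge each input feature slightly and measure how much the output swings; inputs that produce large swings sit on a boundary and are exactly the cases worth pinning down in a regression suite:

```python
import math

def score(features):
    """Toy stand-in for a trained model: a linear score squashed to [0, 1]."""
    weights = [0.8, -0.5, 0.3]  # hypothetical "learned" weights
    z = sum(w * x for w, x in zip(weights, features))
    return 1 / (1 + math.exp(-z))

def boundary_probe(features, eps=1e-3):
    """Nudge each feature by +/-eps and report the largest swing in the
    model's score. A big swing flags an input sitting near a decision
    boundary -- a good candidate for the regression suite."""
    base = score(features)
    swing = 0.0
    for i in range(len(features)):
        for delta in (-eps, eps):
            probe = list(features)
            probe[i] += delta
            swing = max(swing, abs(score(probe) - base))
    return swing

# A smooth model should barely move under tiny nudges.
print(boundary_probe([0.0, 0.0, 0.0]))
```

Run across a sample of real inputs, a probe like this turns "does the algorithm respond unpredictably?" from a philosophical question into a ranked list of suspicious cases.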

The human experience is perhaps most reflected in the constructivist learning theory. The idea of a sentient machine is widespread. But still, we wonder, when does a machine learn on its own? When does a machine need help learning? When does a machine know what it needs to continue learning, and how does the machine acquire that information? 

The answers to these questions are not available just yet. However, we can use them to reflect on the impact of increasingly complex technology in our lives; we can use them to think about the impact of these technologies on our customers.

Running while learning 

The difficulty of writing and testing AI is that the work you do every day is brand new. You are forging new paths and, as a responsible world citizen, you need to consider the deeper impact of your work. You are writing the "how to" manual while you're doing the work. 

Software developers and testers armed with a better understanding of learning, in both machines and humans, can ask critical questions throughout the software development life cycle to ensure that AI-powered technologies operate effectively and responsibly.

Come to my TSQA 2020 conference presentation, "Black Holes and Revelations: Explore Applied Learning Theories," where I'll demonstrate how you can apply learning theory to software testing. The conference runs February 26-27, 2020, in Durham, N.C.
