
Choose the right approach: 4 competing testing techniques compared

Matthew Heusser, Managing Consultant, Excelon Development

Confusion reigns when it comes to the various approaches to testing. 

The context-driven school advocates an approach that de-emphasizes pre-planning, in favor of just-in-time response, flexibility, and creativity. Where context-driven testing is a set of principles about product development and testing, exploratory testing is an approach to software testing. And session-based testing and scenario-based testing are two ways to scope the work.

With that basic comparison out of the way, here's a deeper dive into these four approaches, plus when to consider a management framework such as session- or scenario-based testing and how to decide which one to use.

Context-driven testing

Created in 1999, the context-driven school of thought is arguably a less-known parallel to the Agile Manifesto and actually predates it. You can think of the “school” as akin to a school of psychology, one that holds on to certain key concepts that might differentiate it from others. Two of the four founders of the context-driven school were invited to contribute to the Agile Manifesto; Brian Marick attended the Snowbird conference and became a co-author.

Here are the seven basic principles of the context-driven school:

  1. The value of any practice depends on its context.
  2. There are good practices in context, but there are no best practices.
  3. People, working together, are the most important part of any project’s context.
  4. Projects unfold over time in ways that are often not predictable.
  5. The product is a solution. If the problem isn’t solved, the product doesn’t work.
  6. Good software testing is a challenging, intellectual process.
  7. Only through judgment and skill, exercised cooperatively throughout the entire project, are we able to do the right things at the right times to effectively test our products.

As you can tell, these principles do not tell the tester what to do. Instead, they define a preference to focus more on skill development and less on following predefined practices. This preference puts testers in the driver's seat, deciding for themselves the best approach for a project, a story, or this morning.

Context-driven testing is more about “what” than “how.”

That said, context-driven testers tend to have certain “exemplars”—examples of how to do testing. For agile testers, this might be test-driven development (TDD). For context-driven testers, it has historically been exploratory testing.

Exploratory testing

Dr. Cem Kaner, co-author of Testing Computer Software, the "bestselling book on software testing of all time," defines exploratory testing this way:

Exploratory Testing is a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the value of her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project.

There’s quite a bit to that definition. In my words, the basic idea is that a human being jumping into the software and exploring it will yield more, and better, test ideas than a “test designer” reading a document and planning the test activities up front. By exploring the product you learn about it, and the results of the last test inform what to do next.

Exploratory testers tend to talk in terms of skill, not best practices, and learn heuristics—imperfect guidelines to help drive testing. One common approach is to try quick attacks, combined with walking the happy path, to uncover defects quickly.
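To make the idea concrete, here is a minimal sketch of a quick-attack pass in Python. The `parse_quantity` function and its attack strings are hypothetical, invented for illustration; the point is the shape of the technique: throw hostile inputs at an entry point, let clean rejections pass silently, and flag anything that is accepted suspiciously or crashes outright.

```python
# A hypothetical input parser standing in for whatever entry point
# the tester is attacking.
def parse_quantity(text: str) -> int:
    """Parse a user-supplied quantity; raise ValueError on bad input."""
    value = int(text.strip())
    if not 1 <= value <= 999:
        raise ValueError(f"quantity out of range: {value}")
    return value

# Classic quick-attack inputs: empty, whitespace, boundaries, junk, Unicode.
QUICK_ATTACKS = ["", "   ", "0", "007", "-1", "1000", "NaN", "1e9", "💥"]

def run_quick_attacks():
    findings = []
    for attack in QUICK_ATTACKS:
        try:
            parse_quantity(attack)
            findings.append((attack, "accepted"))  # accepted: worth a closer look
        except ValueError:
            pass  # rejected cleanly; the guard held
        except Exception as exc:  # any other crash is a defect candidate
            findings.append((attack, f"crashed: {exc!r}"))
    return findings
```

Running this flags only `"007"` as "accepted" (leading zeros slip through), which is exactly the kind of observation a tester would pause on before deciding what to try next.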

Once the first round of problems occurs, people can step back and talk about what to do next. While “dive in and quit,” described in Lessons Learned in Software Testing, provides one way to get started, focusing and defocusing provides a way to keep going when the initial energy wears off.

The more software to be tested, the harder it will be to manage with an exploratory approach. Sessions and scenarios are two ways to provide some management to an exploratory test process.

Session-based exploratory testing

Jonathan Bach began a presentation at STARWest in 2000 with this question:

Ad hoc testing (AKA exploratory testing) relies on tester intuition. It is unscripted, unrehearsed, and improvisational. How do I, as test manager, understand what’s happening, so I can direct the work and explain it to my clients?

His answer was to organize the work into sessions. A session is an uninterrupted block of test time with a particular mission. Testing that occurs within a session has a "scope," which is defined in a charter. Charters tie into risk management. The process of tracking session progress, producing reviewable results, and debriefing on those results is called session-based test management.

In her book Explore It!, Elisabeth Hendrickson suggests this format for defining charters:

Explore [target] with [resources, techniques] to discover [information valuable to the product]

So, for example, a charter might look something like this:

“Explore the front-end application while bringing down the database, web services, and third-party connections to discover how the web application behaves when dependencies are down.”

The process of creating charters is iterative. In a 30-to-60-minute session (called a “timebox”), testers will probably have notes on areas they would like to explore if they had more time, or ideas that were out of scope. Those will likely become new charters. It’s also likely that the charters will have a relationship to stories or requirements, providing traceability. Some charters will address cross-cutting concerns—issues that only exist because of the interaction of multiple requirements.
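The cycle above can be sketched as a small data model. This is an illustrative in-memory sketch, not the format of any real session-based test management tool; the field names and the `debrief` logic are assumptions for the example.

```python
from dataclasses import dataclass, field

@dataclass
class Charter:
    """One mission, in Hendrickson's explore/with/to-discover format."""
    target: str
    resources: str
    information: str

    def __str__(self):
        return (f"Explore {self.target} with {self.resources} "
                f"to discover {self.information}")

@dataclass
class Session:
    """An uninterrupted, timeboxed block of testing against one charter."""
    charter: Charter
    minutes: int = 60                                 # the timebox
    notes: list = field(default_factory=list)         # reviewable results
    out_of_scope: list = field(default_factory=list)  # seeds for new charters

def debrief(session: Session) -> list:
    """Turn out-of-scope observations into follow-up charters."""
    return [Charter(target=item, resources="a follow-up session",
                    information="risks noted but not yet explored")
            for item in session.out_of_scope]
```

With this model, each session's debrief seeds the next round of charters, which is what makes the chartering process iterative rather than a one-time planning step.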

The core idea here is to put some structure and direction around what to test in a timebox. This is beneficial for management and coordination. It also allows managers to generate something like the traditional metrics for testing while giving maximum freedom to the people doing the work.

Scenario-based testing

Where charters are based on the tester’s expertise, scenarios are the opposite—they represent the common workflow that comes from the business. Organizations that produce data flow diagrams, use cases, true user stories, or happy path scenarios might give these flows directly to testers as the place to start with testing.

Kaner defines scenario-based testing as a world where “Tests are complex stories that capture how the program will be used in real-life situations.”

Scenario-based testing can be as simple as taking the scenarios as given and running them. Usually it's not so simple, because the scenarios will be written at a high level, with room for different interpretations, variation of input, and significant setup and comparison. That interpretation work turns the scenario into a guide for testing, while the scenario itself remains a tool to prevent overlap, report progress, and provide visibility into who is testing what, and how.
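One way to picture that interpretation work: a single high-level scenario expressed as data, expanded into several concrete input variations. The checkout scenario, its steps, and the variations below are hypothetical examples; a real runner would drive the application at each step rather than just recording it.

```python
# One business scenario, stated at a high level.
SCENARIO = {
    "name": "customer places an order",
    "steps": ["add item to cart", "enter shipping address", "pay", "confirm"],
}

# Several concrete interpretations of the same scenario's inputs.
VARIATIONS = [
    {"item": "book", "qty": 1, "payment": "credit card"},
    {"item": "book", "qty": 3, "payment": "gift card"},
    {"item": "oversized rug", "qty": 1, "payment": "credit card"},
]

def run_scenario(scenario, variation):
    """Walk the scenario's steps with one concrete set of inputs.

    This sketch only records what it would do, to show the structure;
    the real work of driving the application is elided.
    """
    return [f"{step} ({variation['item']} x{variation['qty']}, "
            f"{variation['payment']})" for step in scenario["steps"]]

def run_all():
    return {i: run_scenario(SCENARIO, v) for i, v in enumerate(VARIATIONS)}
```

Keeping the scenario and its variations separate is the point: the scenario stays a stable, business-readable artifact, while the variations capture the tester's interpretation of it.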

Choosing between scoping methods

The core issues with switching between “pure” exploration, charters, and scenarios are issues of coverage, trust, and management. If the application is small and a single tester has a good handle on the risks, then that tester can probably test using the pure exploratory approach with no problems as long as he can test the application in a reasonable period of time.

Sometimes, though, there are too many risks to keep in mind at one time. Sometimes there will be more than one tester, and the group has a goal to test as fast as possible, so it wants to minimize overlap. Often there is simply too much to do, and the group would like to create a list of risks (charters), prioritize, and decide what to do and what to skip. Publishing this list provides transparency and helps management understand what the risks are and what is being skipped. It explains why testing is taking so long and can help build trust.

Where charters are developed by testers as a census of risk, scenarios generally come from the business and tie directly to requirements and use cases. Scenarios test the happy path and tend to be easily explained, with little variation. Scenario testing demonstrates that the software can work under the 20% of conditions that will be used 80% of the time.

James Bach, another founder of the context-driven school, claims that all good testing is exploratory to some extent. Testers who find a bug need to stop, look around, and decide what to do in the moment, going “off script.” After describing the defect, they will find the test environment is in a different state than it should have been at that point, requiring another decision.

All those decisions are based on what the tester has learned to this point. In his recent work, Bach has dropped the term “exploratory,” arguing that it is redundant and that all testing is exploratory. He adds that following a perfectly defined checklist to the letter is checking, not testing, and, perhaps, best left to a machine (i.e., test automation).

Use the methods you need

These choices are not either/or. Exploratory testing is an approach where the tester is in charge of the process, choosing where to go next. The core questions for exploratory testing are “What do I know now?” and “Where should I go next?”

Context-driven testing is a wider concept that encourages testers to own their process. Sessions and scenarios are two ways to manage exploratory testing for large groups. Teams that have well-defined use cases with minimal risk should consider scenario testing. Those with more general concerns will want to manage the work with session-based testing. But be flexible and use what works for you.

