
A tester's guide to overcoming painful dependencies

Ash Winter, Consulting Tester and Speaker, Diagram Industries

We did it: We had built a testable system. We achieved high observability through monitoring, logging, and alerting; instituted controllability with feature flags; understood what we were building through pairing and acceptance test automation generated across disciplines; and aimed for decomposability by using small, well-formed services.

But our pipeline was still failing—and failing often.

As it happened, we had a horrible external dependency. The service suffered from frequent downtime and slow recovery times. Most painful of all, creating test data required testers to raise a manual request through a ticketing system.

We were dependent on a system with low testability, which undermined our own testability. And this had consequences for our flow of work to our customers. 

Here are three ways to address such dependencies and engage with the teams that maintain them.

How testability affects flow

Testability has a tangible relationship with the flow of work. If a system and its dependencies are easy to test, work usually flows smoothly through the pipeline, in large part because every discipline is more likely to get involved in testing. But if a system and its dependencies are hard to test, you're likely to see a queue of tickets in a "ready to test" column, or even a testing crunch just before release.

To achieve smooth flow, treat your dependencies as equals when it comes to testability.

Adjacent testability: How testable are your dependencies?

But what is adjacent testability? The term refers to the testability of the systems you depend on to provide customer value. For example, consider the systems you need to integrate with to complete a customer journey. If your system relies on a payment gateway that suffers from low testability, your end-to-end tests may fail often, making release decisions problematic for stakeholders. Most systems integrate with other systems, both internal and external, and value is generated when those systems work in concert, allowing customers to achieve their goals.

In his Theory of Constraints (ToC), Eli Goldratt describes two types of optimization that apply equally to testability and the flow of work:

  • Local—Changes that optimize one part of the system, without improving the whole system
  • Global—Changes that improve the flow of the entire system

If you optimize your own testability but neglect your dependencies, you have only local optimization. That means you can only achieve a flow rate as fast as your biggest bottleneck.

With the horrible dependency I described above, new test data took a week to create, which led to further challenges. For example, we had to schedule the creation of the data well in advance of the work, and when that work was no longer the highest priority, we found that we had wasted our time and energy.

How to improve adjacent testability

Establishing that you may have an adjacent testability challenge is one thing; determining what to do about it is another. You could argue that if a dependency is hard to test, it's not your problem; external dependencies often come with contractual reliability constraints, such as service-level agreements. But contracts and reality can be far apart, and service-level agreements are not very effective agents of change. Instead, try engaging in the following ways:

Enhance observability and information flow

Provide feedback about your interactions with dependencies, rather than logging only your own system events. Interactions with dependencies are part of a journey through your system, so both internal events and dependency interactions should be written to your application logs, exposing the full journey.

Replicate this pattern for both production and test environments. The key benefit: You'll provide context-rich information that the people who maintain that dependency can act upon.

For example, after integrating an internal application with an external content delivery API, we kept hitting the API's request rate limit. We believed the block was being triggered too early, since it should have applied only to requests that missed the cache. So we added the external interactions to our internal application logs, noticed that certain frequent requests needed a longer cache expiry, and worked with the external team to solve the problem.
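Here is a minimal sketch of that logging pattern in Python, assuming a hypothetical content API client, the standard logging module, and an illustrative cache-status header; the names are not those of the actual service we integrated with.

```python
import logging

import requests  # assumed HTTP client; any client that exposes response headers works

logger = logging.getLogger("app")  # the same logger used for internal events

def fetch_content(session: requests.Session, url: str, correlation_id: str) -> requests.Response:
    """Call the external content API and log the interaction alongside internal events."""
    response = session.get(url, timeout=5)
    # Record the dependency interaction with the same correlation ID used for
    # internal events, so the full journey is visible in one log stream.
    logger.info(
        "external_call dependency=content_api url=%s status=%s cache=%s correlation_id=%s",
        url,
        response.status_code,
        response.headers.get("X-Cache", "unknown"),  # hypothetical cache-status header
        correlation_id,
    )
    return response
```

With the dependency's responses captured next to your internal events, it becomes much easier to show the external team which requests are missing the cache and tripping the rate limit.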

Emphasize controllability and collaboration

Controllability is at its best when it is a shared resource; sharing it encourages early integration between services and an ongoing dialog between teams. Feature toggles for new or changed services allow early consumption of new features without threatening current functionality. Integration testing between systems earlier in the cycle addresses risks sooner and builds trust.

For example, when upgrading a large-scale web service by two major versions of PHP, our test approach included a feature toggle that redirected requests to a small pool of servers running the latest PHP version for that service. Normal traffic continued to go to the old version, while our clients tested their integrations against the new one. We provided an early, transparent view of a major change: our clients integrated against it while we also tested the changes internally.
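A minimal sketch of that routing idea is below, written in Python with hypothetical pool names and an in-process toggle; in practice this kind of redirection usually lives in the load balancer or proxy layer rather than in application code.

```python
import random

# Hypothetical server pools; in a real setup these would be load balancer targets.
CURRENT_POOL = ["web-php-old-1.internal", "web-php-old-2.internal"]
CANARY_POOL = ["web-php-new-1.internal"]

# Clients that have opted in to test their integrations against the new version.
OPTED_IN_CLIENTS = {"client-a", "client-b"}

def choose_backend(client_id: str, new_version_enabled: bool) -> str:
    """Send opted-in clients to the new-version pool when the toggle is on;
    all other traffic continues to hit the current version."""
    if new_version_enabled and client_id in OPTED_IN_CLIENTS:
        return random.choice(CANARY_POOL)
    return random.choice(CURRENT_POOL)

# With the toggle on, client-a exercises the upgraded service early,
# while everyone else is unaffected.
print(choose_backend("client-a", new_version_enabled=True))
print(choose_backend("client-z", new_version_enabled=True))
```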

Bring empathy and understanding

Systems are not the only interfaces that need improvement if you want better adjacent testability; how you empathize with the other teams you depend on needs attention as well. Bringing the monitoring, alerting, and logging they receive into your own monitoring, alerting, and logging setup helps a great deal.

For example, during a platform migration project I worked on, a database administrator was obstructive, insisting that we raise tickets for every action. In response, I added myself to that team's service disruption alerts email list. It turned out that batch jobs we had set up often failed because we had not accounted for the disk space their temporary files consumed, and the resulting alerts woke the database administrator at night. Accounting for that disk space was a small fix for us, but it eliminated a huge annoyance for him. After that, we never had a problem getting data moved or created.
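As an illustration of that small fix, here is a sketch in Python using shutil.disk_usage to check free space before a batch job writes its temporary files; the path and headroom figure are assumptions, not values from the project.

```python
import shutil

# Hypothetical temp directory and headroom requirement for the batch job.
TEMP_DIR = "/var/tmp"
REQUIRED_FREE_BYTES = 10 * 1024**3  # assume the job needs 10 GiB of headroom

def enough_disk_space(path: str = TEMP_DIR, required: int = REQUIRED_FREE_BYTES) -> bool:
    """Return True only if the volume has enough free space for temporary files,
    so the job fails fast instead of filling the disk and paging the DBA."""
    usage = shutil.disk_usage(path)
    return usage.free >= required

if __name__ == "__main__":
    if not enough_disk_space():
        raise SystemExit("Not enough free disk space for batch temp files; aborting the run.")
    # ... run the batch job here ...
```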

Get started: Follow these three principles

So what are you waiting for? Take a collaborative approach to improving the testability of your dependencies, and you'll see a significant improvement in the testability of your own systems. As you move forward, remember to follow these three principles:

  • Observability and information flow: The whole journey is your aim—including dependencies.
  • Controllability and collaboration: Encourage early integration and risk mitigation.
  • Empathy: Try to understand the problems and pain of those who maintain your dependencies.

As a first step, reach out to build a relationship with the teams that you depend upon. When you truly understand their challenges and how you might be able to assist them, you'll unlock large testability gains for your own team.
