

The No. 1 unit testing best practice: Stop doing it

Vitaliy Pisarev, Senior System Architect, HPE

It always happens the same way: You write code and then run the unit tests, only to have them fail. Upon closer inspection, you realize that you added a collaborator to the production code but forgot to configure a mock object for it in the unit tests. The result: NullPointerExceptions everywhere.

We've all been there. At this point you slap your head, curse at your own stupidity, and mock out the new collaborator. Tests are green, and all is good. 
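The cycle looks something like this. The class names and the collaborator below are hypothetical, and the hand-rolled stub stands in for whatever mocking library you use; this is a minimal sketch of the coupling, not anyone's production code:

```java
public class MockBurdenSketch {
    // A newly added collaborator in the production code.
    interface AuditLog { void record(String msg); }

    static class OrderService {
        private final AuditLog audit;
        OrderService(AuditLog audit) { this.audit = audit; }
        int place(int qty) {
            audit.record("qty=" + qty); // NPE if the test forgot to stub this
            return qty * 10;
        }
    }

    // The old test setup, unaware of the new collaborator, blows up.
    static boolean failsWithoutStub() {
        try {
            new OrderService(null).place(3);
            return false;
        } catch (NullPointerException e) {
            return true;
        }
    }

    // The "fix": add yet another stub, in lockstep with the production change.
    static int passesWithStub() {
        return new OrderService(msg -> {}).place(3);
    }

    public static void main(String[] args) {
        System.out.println(failsWithoutStub());
        System.out.println(passesWithStub());
    }
}
```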

There's got to be a better way, I thought. And there is.

 

Looking for a better way

As time passed, my feelings of shame and stupidity at this state of affairs slowly gave way to annoyance. I felt like a slave. Every line of code that involved another collaborator required me to fix the unit tests in lockstep: each time I changed the code, I had to change the tests.

I figured that I must have been doing something wrong. So I looked at other unit tests in my team, in the group, in other products, in other companies, only to find the same situation everywhere. 

This got me thinking. My workplace had a policy requiring developers to write system tests in addition to maintaining good unit test coverage. Without diving into the testing taxonomy, the term "system tests" here refers to tests that are almost end-to-end: they start at the service API level (a REST request, in my case) and go all the way through to the database. There was little to no mocking involved.
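As a rough sketch of the shape of such a test: exercise the service through its HTTP API and assert on observable behavior, with no mocks of internals. The endpoint, payload, and in-process server below are invented for illustration; a real system test would hit the actual service and its database:

```java
import com.sun.net.httpserver.HttpServer;
import java.io.OutputStream;
import java.net.InetSocketAddress;
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class SystemTestSketch {
    static int run() throws Exception {
        // Hypothetical stand-in for the deployed service under test.
        HttpServer server = HttpServer.create(new InetSocketAddress(0), 0);
        server.createContext("/api/defects", exchange -> {
            byte[] body = "{\"id\":1,\"status\":\"open\"}".getBytes();
            exchange.sendResponseHeaders(200, body.length);
            try (OutputStream os = exchange.getResponseBody()) { os.write(body); }
        });
        server.start();
        try {
            // The test itself: a real REST request, no knowledge of internals.
            URI uri = URI.create("http://localhost:"
                    + server.getAddress().getPort() + "/api/defects");
            HttpResponse<String> resp = HttpClient.newHttpClient().send(
                    HttpRequest.newBuilder(uri).GET().build(),
                    HttpResponse.BodyHandlers.ofString());
            return resp.statusCode();
        } finally {
            server.stop(0);
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println(run());
    }
}
```

Because the test only talks to the API, the implementation behind it can be refactored freely without touching the test.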

While I was enslaved by unit tests, I couldn't help but notice that the system tests treated me with more respect. They didn't care about implementation details. As long as behavior remained intact, they didn't complain.

I could refactor and enhance as I pleased and never change a single test. When a test failed, it always happened because of an actual regression or because the behavior was supposed to change. And as a bonus, I did not have to mess up the test code base with thousands of cryptic mocking expectations. 

No unit tests? Blasphemy!

This set me thinking some rather blasphemous thoughts. 

If you're like me, you were trained to believe in and embrace the testing pyramid. The first layer is lightning-fast unit tests. The second is fast integration tests, the third slow system tests, and finally, those very slow user interface tests. 

But my experience revealed a different picture altogether. I found that not only were unit tests extremely brittle due to their coupling to volatile implementation details, but they also formed the wider base of the regression pyramid. In other words, they were a pain to maintain, and developers were encouraged to write as many of them as possible.

 

Later, when working on a new product as the system architect, I decided to take the leap. I put forward a simple guideline for the developers: always give preference to integration/system tests over unit tests. I didn't ban unit tests (that would be stupid); I reserved them for special cases.

To be sure, brittleness is far from the biggest problem with unit tests; it was merely what triggered me to question the ROI of the practice and to consider alternatives. A thousand words are not enough to cover every aspect, nor is it my intention to try. But I highly recommend reading James O. Coplien's article, "Why Most Unit Testing Is Waste," and its follow-up on the topic.

 

Coplien's first article has been analyzed and discussed extensively. I agree with every word in it, but after you read it you'll probably ask yourself, "If I cut back on unit tests, doesn't that leave me with slow integration and system tests that run forever?"

 

That's a fair question, and one that I had to address in my own software, an enterprise-scale product with 150 developers, 20 automation engineers, ridiculous amounts of money involved, and high-profile customers.

Our general approach rested on two understandings: the continuous integration (CI) pipeline, where all the tests execute, requires ongoing optimization and grooming; and automation code is a first-class citizen that should be treated exactly like production code. In practice, this meant that:
  • Tests ran on strong, stable hardware, including a very powerful database server. We use Jenkins for CI, and most of its slave nodes are physical machines.
  • Developers and testers were guided to write integration/system tests in a way that allowed them to run in a "dirty" environment without interfering with one another. That meant the team spent no time on cleanup, and we could easily run the tests in parallel using the Maven Failsafe plugin. Parallelism alone can take an existing suite from 60 minutes of execution time down to 15.
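One common way to make tests safe in a dirty, shared environment is to have each test create uniquely named data and assert only on what it created. This is a minimal sketch with a hypothetical in-memory store standing in for the database; the entity names are invented:

```java
import java.util.HashMap;
import java.util.Map;
import java.util.UUID;

public class DirtyEnvSketch {
    // Hypothetical shared store standing in for a database that is never cleaned up.
    static final Map<String, Integer> store = new HashMap<>();

    // Unique per invocation: parallel tests cannot collide, and no cleanup is needed.
    static String uniqueName(String base) {
        return base + "-" + UUID.randomUUID();
    }

    public static void main(String[] args) {
        store.put("leftover-row", 99);      // dirt left behind by a previous run
        String mine = uniqueName("project"); // this test's own uniquely named entity
        store.put(mine, 1);
        // Query scoped to this test's data: the leftover dirt doesn't interfere.
        System.out.println(store.containsKey(mine));
    }
}
```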

The combination of the two points encouraged us to:

  1. Conduct profiling sessions directed at finding bottlenecks in test execution, including changes to production code if needed to speed up tests.
  2. Redesign existing tests to consume expensive resources more efficiently, without compromising quality.
  3. Carefully plan the automation tactics for each feature by deciding the optimal mix of UI, system, and integration tests.
  4. Tame combinatorial test input by pruning irrelevant parameter values in parameterized tests—that is, tests that iterate over a large set of data points.
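Point 4 can be illustrated with a simple pruning heuristic: keep only the minimum, a middle value, and the maximum of each parameter axis instead of iterating every value. This rule is one hypothetical strategy for illustration, not the team's actual tactic:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.TreeSet;

public class ParamPruningSketch {
    // Collapse a parameter axis to boundary values plus one representative
    // interior value, shrinking the cartesian product a parameterized test runs.
    static List<Integer> prune(List<Integer> values) {
        List<Integer> sorted = new ArrayList<>(new TreeSet<>(values)); // dedup + sort
        if (sorted.size() <= 3) return sorted;
        return List.of(sorted.get(0),
                       sorted.get(sorted.size() / 2),
                       sorted.get(sorted.size() - 1));
    }

    public static void main(String[] args) {
        // Ten data points reduced to three representative cases.
        System.out.println(prune(List.of(1, 2, 3, 4, 5, 6, 7, 8, 9, 10)));
    }
}
```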

Give up unit tests and get results

We are now three years into our product and have been getting tremendous value from this automation approach. Try it yourself. Giving up on unit tests will not hurt the development experience or product quality; on the contrary, it will let your team focus its efforts on the system- and integration-level tests that deliver the biggest bang for the buck.

 
