10 best practices for QA teams to deliver quality software, fast

Karim Fanadka, QA Team Leader, HP Software

As a quality assurance (QA) team leader, I have to sign off on the quality of a major release every six weeks. Each major release normally includes two new big features and three smaller features, such as a change in user interface (UI) or a new report, as well as stability issues and bug fixes. I have eight QA engineers working on code developed by 30 developers.

That's a tall order to manage. So, to avoid having to spend nights and weekends at work, our team adopted these 10 best practices to make the workload manageable while ensuring that the releases we approve maintain the highest standards of quality.

1. Break free from the classical roles and responsibilities of QA

We have stretched the boundaries in both directions. We are a customer-facing unit: we hear directly from our customers about the issues they experience and the features they would like to see in our product. At the other end, we actively participate in design discussions, bringing that customer input to the table.

In addition, our code-testing knowledge and experience help us identify design flaws before anyone spends time coding, which significantly shortens development cycles and helps us meet customer expectations as we adaptively release new versions.

2. Choose your release criteria carefully

You can't test everything in an enterprise product for every release, and fortunately, you don't need to. You can still be confident in the product you approve if you focus on areas of your code where the most significant changes were made. Before a new release cycle begins, our team sits with all the stakeholders to understand which parts of the product will be touched by new or updated code. We use that information to prioritize our testing efforts. We focus on those parts of the code and use existing automation tests to handle other parts. If you know something worked in the last release and you're not touching it in this release, then you don't need to spend too much time testing. So base your release criteria on new code that is being added.
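To make this concrete, here is a minimal sketch of change-based test selection in Python. The area-to-suite mapping and the changed-file list are illustrative placeholders, not our actual setup; in practice, the file list would come from your diff tool (for example, `git diff --name-only`).

```python
# Hypothetical mapping from product areas to the full test suites
# that should run when code in that area changes.
AREA_SUITES = {
    "reports/": ["tests/reports_full"],
    "ui/": ["tests/ui_full"],
    "billing/": ["tests/billing_full"],
}

def select_suites(changed_files):
    """Run full suites only for areas touched in this release;
    untouched areas are covered by the existing automation baseline."""
    selected = set()
    for path in changed_files:
        for area, suites in AREA_SUITES.items():
            if path.startswith(area):
                selected.update(suites)
    return sorted(selected) or ["tests/smoke_only"]

if __name__ == "__main__":
    changed = ["ui/dashboard.js", "reports/export.py"]  # assumed diff output
    print(select_suites(changed))  # ['tests/reports_full', 'tests/ui_full']
```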

3. Prioritize bug fixes based on usage

Fixing bugs is an integral part of any new release, but on which bugs should you focus your efforts? Our answer is usage data. We use Google Analytics to see how end users interact with our load testing tools. This gives us a wealth of vital information. For example, if we know that one area of an application is rarely used, a bug in that part of the code gets lower priority. If fewer than one percent of our users are on a particular browser, issues specific to that browser get less attention. But we also listen to our customers. The last thing we want is for our users to experience bugs. If something did get past us and users discover bugs, those bugs get priority for fixes in the next release.
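A simple way to operationalize this is to weight each bug's raw severity by the usage share of the area it lives in. The sketch below uses invented numbers standing in for what an analytics export would provide:

```python
# Fraction of sessions touching each product area (assumed values).
usage_share = {
    "dashboard": 0.62,
    "scheduler": 0.25,
    "legacy_export": 0.008,
}

# Hypothetical bug backlog with raw severity scores.
bugs = [
    {"id": "BUG-101", "area": "dashboard", "severity": 3},
    {"id": "BUG-102", "area": "legacy_export", "severity": 5},
    {"id": "BUG-103", "area": "scheduler", "severity": 4},
]

def priority(bug):
    # Weight severity by how many users would actually hit the bug.
    return bug["severity"] * usage_share.get(bug["area"], 0.0)

for bug in sorted(bugs, key=priority, reverse=True):
    print(bug["id"], round(priority(bug), 3))
# A severity-5 bug in a rarely used area ends up below severity-3
# and severity-4 bugs in heavily used areas.
```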

4. Adopt a two-tier approach to test automation

If a commit that a developer makes to the main trunk breaks the build in any way, we inform them as quickly as possible. That said, we can't run exhaustive system tests for every commit. That would take too long, and by the time an issue could be found, the developer might have moved on to something else. So, we adopted a two-tier approach to test automation. Tier one is triggered by every commit to the code base and provides rapid validation of developer changes, with sanity tests that complete within several minutes. Tier two runs more exhaustive regression testing and runs automatically at night, when we have more time to test changes. Deciding how light or exhaustive each tier should be is an art. But once you start working like this, you quickly learn how to balance between daytime sanity testing and nighttime regression testing.
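In a pytest-based project, the two tiers can be expressed as markers, with the per-commit job running one marker and the nightly scheduler running the other. This is a minimal sketch with stubbed-out tests, not our actual suite; register the markers in pytest.ini to avoid warnings.

```python
# Per-commit tier:  pytest -m sanity        # completes in minutes
# Nightly tier:     pytest -m regression    # exhaustive, runs overnight
import pytest

def fetch_status(path):
    return 200  # stub standing in for a real HTTP call against the build

@pytest.mark.sanity
def test_login_page_loads():
    # Fast validation that a commit did not break the build outright.
    assert fetch_status("/login") == 200

@pytest.mark.regression
def test_report_export_all_formats():
    # Slow, exhaustive coverage reserved for the nightly run.
    for fmt in ("csv", "pdf", "xlsx"):
        assert fetch_status(f"/reports/export?fmt={fmt}") == 200
```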

5. Stay close to the relevant environment

Every QA team has heard the developer comment, "...but it works on my machine." How do you avoid that situation?

Our QA and our development teams run exactly the same environment. As our builds move through the development pipeline, however, we must test the code under production conditions, so we build our staging environment to simulate our customers' production environments.
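One way to keep environments honest is an automated parity check that compares component versions across environments and flags drift. A minimal sketch, with hard-coded manifests standing in for data you would pull from each environment (a version endpoint, package list, or image digest):

```python
# Assumed manifests; in practice, query each environment for these.
dev_env = {"os": "ubuntu-22.04", "python": "3.11.8", "postgres": "15.4"}
staging_env = {"os": "ubuntu-22.04", "python": "3.11.8", "postgres": "15.6"}

def diff_envs(a, b):
    """Return components whose versions differ between two environments."""
    return {k: (a.get(k), b.get(k)) for k in set(a) | set(b) if a.get(k) != b.get(k)}

drift = diff_envs(dev_env, staging_env)
if drift:
    # Any drift is a "works on my machine" bug waiting to happen.
    print("environment drift:", drift)  # {'postgres': ('15.4', '15.6')}
```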

6. Form a dedicated security testing team

Because customers consume our products as a software as a service (SaaS) offering, we store all data on our servers, and we need to perform security testing before each release. Security vulnerabilities on SaaS platforms tend to be discovered by users, and those issues can quickly drive away customers. To prevent that, we formed a dedicated testing team that performs a full week of penetration testing on stable versions of soon-to-be-released products and updates. Before they begin testing, we brief the team about new features in upcoming releases and about the product environments. The team uses that information to probe for security vulnerabilities and attempt to penetrate the system. These team members undergo rigorous security training and are familiar with relevant corporate and ISO security standards, with a specialization in cloud apps.

With their help, our team recently discovered a security vulnerability, created by one of the top cloud environment providers, that would have allowed malicious hackers to obtain valuable information. We quickly updated our infrastructure on Amazon's cloud to prevent a breach.
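Penetration testing is largely a manual discipline, but baseline checks can be automated between the full testing weeks. As one illustrative example (the URL and header list are placeholders, not a complete checklist), a short script can verify that hardening headers are present on a staging deployment:

```python
import requests

# Headers a hardened web app is commonly expected to send.
REQUIRED_HEADERS = {
    "Strict-Transport-Security",   # force HTTPS
    "X-Content-Type-Options",      # block MIME sniffing
    "Content-Security-Policy",     # restrict script sources
}

def missing_headers(url):
    """Return the required headers the response did not include."""
    resp = requests.get(url, timeout=10)
    # resp.headers is case-insensitive, so membership checks are safe.
    return {h for h in REQUIRED_HEADERS if h not in resp.headers}

if __name__ == "__main__":
    missing = missing_headers("https://staging.example.com")  # placeholder URL
    if missing:
        print("missing security headers:", sorted(missing))
```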

7. Form a dedicated performance testing team

Have a dedicated performance team run tests as soon as a product is stable, and brief the team about new versions and features so that they can assess the performance risks. When the developers introduce a new feature that has no effect on performance, such as a button on the screen, we only run our regression tests. But if we suspect that a feature might affect performance, we also write and execute new performance tests.

Always update your security and performance teams with all pertinent information and provide them with an environment as close to production as you can. In one of our recent releases, the performance engineers discovered a significant bottleneck in an internal, third-party SaaS environment because of a new configuration in that provider's database. If the performance team hadn't tested the environment, a crash would have resulted. This step is vital. If you don't have the means to form your own dedicated performance team, train a few QA team members to take on performance testing.
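A performance gate can be as simple as a script that issues concurrent requests against a production-like environment and fails the release when a latency percentile exceeds its budget. A sketch, with an assumed endpoint and an assumed budget:

```python
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://staging.example.com/api/health"  # placeholder endpoint
P95_BUDGET_SECONDS = 0.5                        # assumed latency budget

def timed_request(_):
    # Measure wall-clock latency of a single request.
    start = time.perf_counter()
    requests.get(URL, timeout=10)
    return time.perf_counter() - start

def p95_latency(samples=100, workers=10):
    # Issue `samples` requests across `workers` concurrent threads.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        latencies = sorted(pool.map(timed_request, range(samples)))
    # quantiles(n=20) yields 19 cut points; index 18 is the 95th percentile.
    return statistics.quantiles(latencies, n=20)[18]

if __name__ == "__main__":
    p95 = p95_latency()
    print(f"p95 = {p95:.3f}s", "OK" if p95 <= P95_BUDGET_SECONDS else "REGRESSION")
```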

8. Run a regression cycle

We run our regression cycle in the final phase of product stabilization, and it is that process that triggers the green light to go to production. Since very little changes in development at this point, you have an opportunity to validate the entire product. We conceptually model our product as a tree with a hierarchy of module and component branches to help us understand the product from the customer's perspective. When any branch is modified, the hierarchy shows what branches below it will be affected and will need additional QA testing.

Our regression cycle uses the traffic light method. If every branch receives a green light (passes all tests), the product is considered ready for delivery. If a branch receives a yellow light (all tests passed but with one or more reported warnings), we discuss the issue with our stakeholders. Finally, if a branch receives a red light (one or more tests failed), we stop and address the issue. We also automate our regression cycle, so it only takes a few days to run.
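The tree model and the traffic light aggregation are straightforward to express in code. This sketch, with invented module names, shows how a modification propagates "needs retesting" down the tree and how the worst light in the tree gates the release:

```python
GREEN, YELLOW, RED = 0, 1, 2  # ordered so max() picks the worst light

class Branch:
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)
        self.modified = False
        self.light = GREEN

    def needs_testing(self, parent_modified=False):
        """Yield every branch at or below a modification."""
        affected = self.modified or parent_modified
        if affected:
            yield self
        for child in self.children:
            yield from child.needs_testing(affected)

    def worst_light(self):
        return max([self.light] + [c.worst_light() for c in self.children])

product = Branch("product", [
    Branch("reports", [Branch("export"), Branch("charts")]),
    Branch("ui", [Branch("dashboard")]),
])
product.children[0].modified = True  # the reports module changed

for b in product.needs_testing():
    print("retest:", b.name)  # reports, export, charts
verdict = {GREEN: "ship", YELLOW: "discuss", RED: "stop"}[product.worst_light()]
print("release decision:", verdict)
```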

9. Simulate customer accounts on production

Since we maintain customer data in our databases, we must ensure that it remains compatible with any new versions that we release. Eating our own dog food is crucial, so when the QA team runs data migration testing, we create a test account that's managed on our production systems. We use this account to continuously generate data and populate our databases.

When we release a new version, we run updates to check that no data was harmed, and if we find any data-corrupting bugs, those become our highest priority. We also spend a day or two on manual backward compatibility testing while we work toward a more efficient, automated approach. Some manual testing is still necessary, though, as this is one of the last phases before production.
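One path toward automating those backward compatibility checks is to fingerprint each table before the upgrade, rerun the fingerprint afterward, and flag any drift. A sketch using SQLite as a stand-in for the production database (the table names and hashing scheme are illustrative):

```python
import hashlib
import sqlite3  # stand-in for the production database

def table_fingerprint(conn, table):
    """Row count plus a deterministic checksum of the sorted rows."""
    rows = conn.execute(f"SELECT * FROM {table} ORDER BY 1").fetchall()
    digest = hashlib.sha256(repr(rows).encode()).hexdigest()
    return len(rows), digest

def snapshot(conn, tables):
    return {t: table_fingerprint(conn, t) for t in tables}

def check_migration(before, after):
    """Return tables whose data changed across the upgrade."""
    return [t for t in before if before[t] != after.get(t)]

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE accounts (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO accounts VALUES (1, 'test-account')")
    before = snapshot(conn, ["accounts"])
    # ... run the version upgrade against this database here ...
    after = snapshot(conn, ["accounts"])
    print("corrupted tables:", check_migration(before, after))  # [] if clean
```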

10. Perform sanity tests on production

We perform post-release sanity tests on our production account to validate that everything works as expected, including all third-party systems. We first perform tests using our existing production account but then create a new account to validate that the process will continue to work correctly as new customers sign up. We conduct sanity testing for half a day, where part of the team tests the old account and the other part tests the newly created one. Finally, we test third-party components, such as the billing system, to ensure version compatibility.
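A post-release sanity sweep lends itself to a short script that hits the critical endpoints, including third-party integrations, and reports failures. The endpoint list and URLs below are placeholders for the real checklist:

```python
import requests

# Hypothetical checklist of endpoints that must respond after a release.
SANITY_CHECKS = [
    ("login", "https://app.example.com/login"),
    ("new signup", "https://app.example.com/signup"),
    ("billing (third party)", "https://app.example.com/billing/status"),
]

def run_sanity():
    failures = []
    for name, url in SANITY_CHECKS:
        try:
            resp = requests.get(url, timeout=10)
            if resp.status_code != 200:
                failures.append((name, resp.status_code))
        except requests.RequestException as exc:
            failures.append((name, str(exc)))
    return failures

if __name__ == "__main__":
    for name, problem in run_sanity():
        print(f"FAIL {name}: {problem}")
```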

Performance engineering has changed the traditional roles and processes of QA engineers. Today, you must have highly specialized and dedicated teams, as well as a continuing QA process through production and beyond. In addition, to perform your role thoroughly and satisfy your customers, you have to be willing to be a customer yourself.

To maintain product quality while keeping up with the demand for frequent product releases, QA testers must break traditional molds. You must develop new skills, such as software design and development, so you can be more involved in different stages of the development process. Following these 10 best practices is a win-win for your team and the business. Do it right, and you will shorten development cycles and make the work of your QA professionals more engaging.
