
How to leverage user feedback to improve software quality

Yoav Weiss, QA Manager, Hewlett Packard Enterprise

Many software products—possibly including yours—have a small checkbox somewhere asking users to help make the software better by sending data about how they use the application. But are you making the most of that user activity data? As a quality assurance manager, ask yourself these three questions:

  • Are you handling the most common exceptions?

  • Are you updating your application's appearance based on the most popular user flows?

  • Are you measuring the average execution time of various operations so you can improve them?
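
To make the exception and timing questions concrete, here is a minimal sketch of client-side instrumentation in Python. The operation name and the record_metric() sink are hypothetical; you would wire them into whatever telemetry pipeline your product already has.

    import functools
    import time

    _metrics = []  # placeholder sink; a real client would queue these for upload

    def record_metric(kind, operation, value):
        _metrics.append({"kind": kind, "operation": operation, "value": value})

    def instrumented(operation_name):
        """Record execution time and uncaught exceptions for an operation."""
        def decorator(func):
            @functools.wraps(func)
            def wrapper(*args, **kwargs):
                start = time.perf_counter()
                try:
                    return func(*args, **kwargs)
                except Exception as exc:
                    # Report only the exception type, never the message,
                    # so nothing private leaks into the telemetry.
                    record_metric("exception", operation_name, type(exc).__name__)
                    raise
                finally:
                    elapsed_ms = (time.perf_counter() - start) * 1000
                    record_metric("duration_ms", operation_name, elapsed_ms)
            return wrapper
        return decorator

    @instrumented("project_open")
    def open_project(path):
        ...  # the real operation being measured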

By taking full advantage of the data you've collected, you can improve the user experience of your product and make your R&D investment more efficient.

How we exploit user data

Here's what my QA team does with the user data we collect, how we use it to improve our decisions, and how we concentrate on what matters to our users. These are our best practices for handling user feedback and incorporating it into your software development lifecycle.

First, we use our anonymous user (big) data to handle all the points above. But we also: 

  • Gradually fade out obsolete and unused features.

  • Learn how our users work with the tool, and use that information to decide which areas deserve more of our attention.

  • Better understand the different profiles of our customers, from those seeing the product for the first time to those with many years of experience using it.

The bottom line is that leveraging user data allows us to make products that better suit our customers, and to do so with higher precision and much less guesswork.

As a developer, consider implementing this practice if you're not already doing so. Participating users will find that their preferences reach the R&D team's planning board and can influence it in their favor.

Coming up with a strategy

As we began researching how to implement these practices, we soon discovered that there's not much information out there on user data gathering. While there are some discussions about using customer data, no one seems to want to share their own processes. 

We had to come up with a way to do this on our own, so we started by approaching our colleagues on the SaaS applications team. But we soon found that our challenge is quite different from theirs: my team is in charge of a desktop application (an IDE), and we release less frequently, every two to four months.

We also wanted the data to be anonymous, because our customers are typically large enterprises with users who are tech-savvy but not keen to share private information.

So we decided to assign each user/installation a universally unique identifier, or UUID, with which to gather usage statistics and patterns. The UUIDs are anonymous (we don't know which user or company is associated with each one), but they let us learn how people use our product's features.
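
Here is a minimal sketch of how a per-installation ID can be generated and persisted, assuming a writable per-user configuration directory; the directory and file names are illustrative. Because uuid4 values are random, the ID encodes nothing about the user or the machine.

    import uuid
    from pathlib import Path

    # Illustrative location; a real product would use its own config directory.
    ID_FILE = Path.home() / ".myproduct" / "installation_id"

    def get_installation_id() -> str:
        """Return a stable, anonymous UUID for this installation.

        Generated once on first run and reused afterward, so usage can be
        grouped per installation without identifying anyone.
        """
        if ID_FILE.exists():
            return ID_FILE.read_text().strip()
        new_id = str(uuid.uuid4())
        ID_FILE.parent.mkdir(parents=True, exist_ok=True)
        ID_FILE.write_text(new_id)
        return new_id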

We felt we had reached the right balance with this approach, but before moving forward, we ran it by our legal team.

Our next challenge was reporting. Each installation of our product reports small, incremental chunks of data to a cloud-hosted server we maintain for this task. The server stores the data in SQLite format, and we aggregate the data acquired over time into one big data set. As for data analysis, as of today we are still evaluating various tools.
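
For illustration, here is a minimal sketch of what such server-side ingestion could look like using Python's built-in sqlite3 module; the table layout and field names are assumptions made up for this example, not our production schema.

    import sqlite3

    conn = sqlite3.connect("usage.db")
    conn.execute("""
        CREATE TABLE IF NOT EXISTS events (
            installation_id TEXT,
            app_version     TEXT,
            recorded_at     TEXT,  -- ISO-8601 timestamp
            kind            TEXT,  -- e.g. 'duration_ms', 'exception', 'invocation'
            operation       TEXT,
            value           TEXT
        )
    """)

    def ingest_batch(installation_id, app_version, events):
        """Append one small, incremental chunk of events from one installation."""
        conn.executemany(
            "INSERT INTO events VALUES (?, ?, ?, ?, ?, ?)",
            [
                (installation_id, app_version, e["recorded_at"],
                 e["kind"], e["operation"], str(e["value"]))
                for e in events
            ],
        )
        conn.commit()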

Some examples of the data we gather include (an illustrative payload follows the list):

  • Operating system

  • Machine hardware specification

  • General application usage metrics including:

    • Dialogs

    • Menus

    • Project type in use

  • Invocation methods (Do they use shortcuts? Menus?)

  • Various features' loading times
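
As an illustration only (the field names below are invented for this example, not our actual schema), a single report might look something like this:

    example_report = {
        "installation_id": "0f8fad5b-d9cb-469f-a165-70867728950e",
        "os": "Windows 10 (build 19045)",
        "hardware": {"cpu_cores": 8, "ram_gb": 16},
        "usage": {
            "dialogs_opened": {"find_replace": 42, "settings": 7},
            "menus_used": {"file": 120, "refactor": 15},
            "project_type": "java",
        },
        "invocation": {"shortcut": 310, "menu": 95},  # shortcuts vs. menus
        "load_times_ms": {"startup": 5400, "project_open": 2100},
    }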

How not to drown in your own data

Once we unlocked access to the data, we faced the risk of data overflow: being overwhelmed by too much of it. We had to remain very focused and carefully prepare our questions before approaching the data, to avoid drowning in it.

Here are some of the questions we asked (example queries follow the list):

  • What usability features should be included in our next product version?

  • What are the most used features?

  • Has execution time degraded between versions?

  • Should we do X feature? (Ad hoc questions when we research new functionality)
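
Each of these questions usually maps to a small, focused query. Against the illustrative SQLite schema sketched earlier (again, the names are assumptions for the example), two of the questions above might translate to something like:

    import sqlite3

    conn = sqlite3.connect("usage.db")  # the database from the earlier sketch

    # What are the most used features?
    top_features = conn.execute("""
        SELECT operation, COUNT(*) AS uses
        FROM events
        WHERE kind = 'invocation'
        GROUP BY operation
        ORDER BY uses DESC
        LIMIT 20
    """).fetchall()

    # Has execution time degraded between versions?
    duration_by_version = conn.execute("""
        SELECT app_version, operation, AVG(CAST(value AS REAL)) AS avg_ms
        FROM events
        WHERE kind = 'duration_ms'
        GROUP BY app_version, operation
        ORDER BY operation, app_version
    """).fetchall()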

Listening to the silent majority

Finally, the user activity data you collect can't provide all of the answers. To fill that gap, we continue to use our annual customer survey, cross-referencing the results with our collected user data.

This approach reduces the risk that our data-driven decisions might fail to take into account the needs of customers who chose not to share what they do with our product.

For example, before we decide to drop an operating system version from our support matrix, we want to make sure that such a decision won't harm our "silent" customers.
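
Here is a toy sketch of that kind of cross-reference; every number in it is made up for illustration. The idea: when the survey shows noticeably more usage of something than the telemetry does, the gap probably comes from silent customers who opted out of data collection.

    # Share of installations per OS, from telemetry vs. the annual survey.
    telemetry_share = {"windows_11": 0.62, "windows_10": 0.33, "windows_8": 0.05}
    survey_share = {"windows_11": 0.55, "windows_10": 0.30, "windows_8": 0.15}

    for os_name, measured in telemetry_share.items():
        gap = survey_share.get(os_name, 0.0) - measured
        if gap > 0.05:
            # The survey sees users the telemetry doesn't: don't drop support
            # based on the telemetry numbers alone.
            print(f"{os_name}: survey share exceeds telemetry by {gap:.0%}")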

Better data, better decisions

This new approach has changed how I manage my day-to-day work and make decisions that affect customers. I also discovered that my love of data and data analysis is shared by many of my colleagues, and this has had a positive effect on the product by making our collaboration even better.

So come up with your strategy, carefully define what questions you want to answer when analyzing your data so that you don't get lost in it, seek out feedback from users who don't participate in your user feedback program, and you'll be well on your way to providing the best possible product to meet your customers' needs.

