

How to build performance requirements into your user stories

Todd DeCapua, Executive Director, JP Morgan

Too often, organizations treat performance testing as an afterthought. The work that’s needed comes too late in the game to improve results for the customer or end user. Performance engineering changes all that.

So how soon should performance issues be addressed? Performance engineering best practices say you should start at the earliest phase of development, when you are first planning and describing features and functions. Here's how you can incorporate performance engineering into your user stories so that each sprint delivers quality that goes far beyond functional requirements.

Performance at the user-story level

How do your user stories get described? Mike Cohn and others who write about agile development have long advocated the use of an index card, or a sticky note, for capturing user stories—the idea being that if you can’t fit your user story on a 3-by-5 index card, you haven’t honed your story down to its essentials.

Writing a good story in agile development involves things as simple as defining who the actors are and how they will interact to get expected results. But for performance engineering, there are additional considerations.

For example, if you’re designing an online game, performance considerations include:

  • “How many people will be entering a new name and optional description concurrently?”
  • “How many people will be inviting others?”
  • “What types of devices will they be using?”
  • “What are the network conditions, and in what distributed world geographies?”

When teams start breaking down epics into stories, they also need to think about the acceptance test criteria, which are just as important as the story itself.

That's why you should also use the back of the card to capture performance acceptance criteria. This is what performance engineers mean when they talk about “building performance in” at a very early stage.
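If your team tracks stories in a backlog tool or in code as well as on cards, you can make the two-sided card explicit. Here is a minimal sketch in Python (the card text, criteria, and names are illustrative, not a prescribed schema) of how back-of-card performance criteria might travel with the story and feed the definition of done:

    from dataclasses import dataclass, field

    @dataclass
    class StoryCard:
        """Front: the functional story. Back: performance acceptance criteria."""
        front: str
        back: list[str] = field(default_factory=list)

        def is_done(self, passing_checks: set[str]) -> bool:
            # "Done" means every back-of-card criterion has a passing check.
            return all(criterion in passing_checks for criterion in self.back)

    card = StoryCard(
        front="As a driver, I want to remotely start my car so it is warm when I reach it.",
        back=[
            "Remote-start command acknowledged within 5 seconds",
            "App launch time of 1 second or less",
            "Handles 100,000 concurrent users",
        ],
    )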

Here's how I implement performance at the user-story level.

Example 1: Vehicle remote start … or not! 

On a cold day in Detroit, I visited the CTO of a major auto manufacturer, who was upset with the auto-start feature his company had installed on new cars. “Watch,” he said. He launched the app on his phone, and tapped an icon to remote-start his car. “Now follow me,” he said. We proceeded down the hallway, down three stories in the elevator, then about 200 yards to where his car was parked. The trip from his office to his vehicle took exactly 12 minutes. I timed it.

When we got to his car, his vehicle wasn’t running. “Okay, just wait…” he said. We stood in the cold for another few minutes, and then the engine turned on, a full 14 to 15 minutes after the button had been pressed! “I’m afraid this is our customers’ everyday reality,” he said.

In fairness, this unacceptable delay wasn’t the fault of the software itself. The app had actually been built exactly to its user story, which I’ve captured below.

 

The user story reads: "As an automobile driver, I want to be able to remotely start my car so that it will be warmed up by the time I get to it."

Okay, but this description captures several very subjective ideas. What does “by the time I get to it” mean?

Does it mean “user should walk two to three minutes more slowly than usual” to ensure that the vehicle is on when they arrive? Or does it mean “user should press auto-start 15 minutes prior to arrival” to ensure that the vehicle is actually “warmed up” upon user arrival?

None of the above. Most users would expect a car to start within five seconds so that it would be warming for at least five or ten minutes before they’re ready to drive.

What if you captured those subjective, human requirements that describe the desired outcome on the back of the same user story card? Here’s what that might look like:

 

The back of the card reads: 

  • Users connecting over networks: 45% 4G, 25% 2.5G, 20% 3G, 10% WiFi
  • Instrumentation of app to capture flows
  • App launch time of 1 second or less
  • Must handle 100,000 concurrent users
  • Plan for peak (4x) across time zones in the US at 8 am and 6 pm

That afternoon, as we hustled back into the warmth of his office, he asked, “So, how do we fix that app?” I started explaining how to capture performance-related acceptance criteria early in the development cycle, and I sketched out something close to what you see above on his whiteboard. Next thing you know, I had a team of six tech leads sitting in front of me. 

Performance focus: Get the team to understand the impact on your users

His team has now adopted several new performance engineering practices, that glitch has been fixed, and user satisfaction has improved.

Here are some of their new performance-related practices:

  • Use the back of a user story card to capture the performance acceptance criteria.
  • Retool the organization’s “definition of done” to include both front- and back-of-card details.
  • Leverage capabilities to gather automated performance results from the initial build, and automatically run these checks with each deploy into the first environments (see the sketch after this list).
  • Establish performance accountability and ownership for all stakeholders of a user story, factored in from the beginning.
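For the third practice above, even a very small automated check, run against each build as it deploys, makes the back-of-card criteria enforceable rather than aspirational. Here is a minimal sketch in Python using pytest conventions and the requests library; the endpoint, environment variable, and one-second budget are illustrative stand-ins, not the team's actual values:

    import os
    import time

    import requests  # third-party: pip install requests

    BASE_URL = os.environ.get("APP_BASE_URL", "https://staging.example.com")
    LAUNCH_BUDGET_SECONDS = 1.0  # back-of-card criterion: app launch in 1 second or less

    def test_launch_call_within_budget():
        # Time the request the mobile app makes when it launches.
        start = time.perf_counter()
        response = requests.get(f"{BASE_URL}/api/launch-config", timeout=10)
        elapsed = time.perf_counter() - start

        assert response.status_code == 200
        assert elapsed <= LAUNCH_BUDGET_SECONDS, (
            f"Launch call took {elapsed:.2f}s; budget is {LAUNCH_BUDGET_SECONDS}s")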

The adoption of these capabilities did not require a large purchase of software or hardware, but rather a change in thinking and focus, along with some minor adjustments in the culture of the organization. 

Example 2: Will you be on time for your next airline departure?

When we’re on our way to catch a flight, we need to rely on mobile devices to stay updated regarding the actual wheels-up time. If you can receive reliable, up-to-the-minute flight status on your mobile device, you might be able to avoid additional waiting time in the airport if you're notified about delays far enough in advance.

Here’s a simple user story for this need.

 

The user story reads: “I want to be able to check flight status on my mobile app so I can make my flight.”

When flights are running on time, these apps tend to be worthwhile. But when things go wrong—say, when one flight is delayed and causes further delays for a network of interconnecting flights—these apps are frequently inaccurate or altogether wrong. A huge part of the problem is the massive influx of frustrated customers, all hitting the app at the same time to learn the status of their flights.

Compared to the first example, this user story now includes requirements around another important aspect of performance—user load. Here’s what I suggest for the back of the card for this story.

 

The back of the card reads:

  • Must handle 10,000 concurrent users
  • Users connecting over networks: 60% 4G, 20% 2.5G, 15% 3G, 5% WiFi
  • Use mobile data encryption to ensure secure data transmissions
  • App launch time of 2 seconds or less
  • Screen-to-screen response time of 3 seconds or less

This example shows that there is a variety of performance information you need to think about in the feature design phase of development. How do you plan for concurrent user requirements in a user story? Load testing.

Performance focus: Use load testing to understand sudden bursts in usage

Getting to the level of 10,000 concurrent users is a challenge for many organizations. If your typical performance scenario runs 1,000 virtual users, you'll need to scale up both the virtual-user count and the transactions per second (TPS) to see how the system responds at 10,000 users.

To prepare for this scenario, you'll need a solid load testing tool with the ability to quickly perform burst tests. I write about load testing tools in further detail here.
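One open-source option is Locust (locust.io), which lets you script a custom load shape for exactly this kind of burst. Here is a minimal sketch; the host, endpoint, user counts, and timings are placeholders you would replace with your own story's criteria:

    from locust import HttpUser, LoadTestShape, task, between

    class FlightStatusUser(HttpUser):
        host = "https://flights.example.com"
        wait_time = between(1, 3)  # think time between status checks

        @task
        def check_status(self):
            self.client.get("/api/flights/UA123/status")

    class BurstShape(LoadTestShape):
        # Hold a 1,000-user baseline, then burst to 10,000 users:
        # the delayed-flight scenario where everyone checks at once.
        def tick(self):
            run_time = self.get_run_time()
            if run_time < 120:
                return (1_000, 100)   # (user count, spawn rate per second)
            if run_time < 300:
                return (10_000, 500)  # the burst
            return None               # end the test

Run it headless (locust -f burst_test.py --headless) and watch how response times and error rates behave when the burst hits; that tells you whether the back-of-card concurrency criterion actually holds.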

Example 3: Online banking info for deployed soldiers

At one time, my cousin deployed with the US Army at Camp Buehring in Kuwait. Among the things he talked about was the difficulty soldiers have using mobile banking in the field. “What has been your experience?” I asked. “What should developers and business folks take into consideration when designing apps for people in your situation?”

“Money is an international commodity,” he told me. “If two banks are competing, the app that can connect with a representative via cellular voice, data voice, or text, and have solid encryption is the one I'll choose, especially if it has VPN built in. But I have yet to see all those options in a banking app.” Then he told me he often had trouble simply seeing his checking account balance, so he had no idea whether his family had the money to pay the monthly bills on time.

The front of the card for a user story addressing this concern would look something like this.

 

This user story reads: As a deployed soldier, I want to be able to get my checking account balance so I can ensure my loved ones have enough money to pay the bills back home.

At one point in our discussion, my cousin texted me: “No network = No communications = Cannot survive.” So, given everything I learned in our conversation, here are the kinds of performance requirements that should be taken into consideration on the back of the card.


The back of the card reads:

  • Nearly 100% of target proof-of-concept users in the Middle East region
  • Users connecting on networks: 80% SATCOM, 20% WiFi
  • All SATCOM over secured links with a maximum one-way latency of 1,800 msec
  • App launch time of 5 seconds or less
  • Screen-to-screen response time of 8 seconds or less

This last example shows how performance criteria in user stories also need to address the diverse network conditions of your customers. There are always going to be glitches and exceptions, and perhaps the US Army is already working to ensure that soldiers get the performance they need. If not, I have a few suggestions.

Performance focus: Use network virtualization to understand various conditions

The solution for this example is to use network virtualization in your testing suite. Network virtualization tools let development teams see how their app or website performs under different network conditions. Some tools provide automated optimization recommendations, so teams can understand what to fix at the code level to improve performance.
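If a commercial tool isn't available, you can approximate degraded links on Linux with tc and netem. Here is a minimal sketch in Python, assuming a Linux host, root privileges, and an illustrative interface name; the 1,800 ms delay matches the SATCOM criterion on the back of the card above:

    import subprocess
    from contextlib import contextmanager

    @contextmanager
    def network_conditions(interface="eth0", delay_ms=1800, loss_pct=1.0):
        # Add egress delay and packet loss to emulate a degraded link.
        subprocess.run(
            ["tc", "qdisc", "add", "dev", interface, "root", "netem",
             "delay", f"{delay_ms}ms", "loss", f"{loss_pct}%"],
            check=True)
        try:
            yield
        finally:
            # Always restore the interface, even if the tests fail.
            subprocess.run(
                ["tc", "qdisc", "del", "dev", interface, "root"],
                check=True)

    # Usage: run the app's functional or load tests inside the impaired link.
    # with network_conditions(delay_ms=1800):
    #     run_mobile_banking_tests()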

Inject performance into your DNA

The user story and other agile methods have helped development teams stay on track as they complete sprints and work toward product delivery. I’ve seen the integration of performance into these user stories work in the trenches. Before you know it, your extended team members and stakeholders start thinking like performance engineers.

