The 3 categories of performance testing metrics you need to measure

Posted by Brian Borg

QA performance test engineers like categories. They might like them a bit too much, in fact. When they start talking about metrics, you tend to get a laundry list of everything from thread counts to private bytes.

You need to be able to make sense of what has and hasn't been tested and report on that work to stakeholders, who may not all be technologically inclined. Some people love monstrous spreadsheets. Most, however, want summarized or visualized data that answers a few key questions.

So, think about how to plan, execute and report on your performance testing efforts in a meaningful way. Ask yourself these three questions:

1. Can it go faster?

The efficiency of any software application is key to its success. According to Neil Patel and Kissmetrics: 'If an e-commerce site is making $100,000 per day, a one second page delay could potentially cost you $2.5 million in lost sales every year.'
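That headline number follows from Kissmetrics' often-quoted finding that a one-second delay cuts conversions by roughly 7 percent. Here's a quick back-of-the-envelope check, with the inputs taken from the quote above:

```python
# Back-of-the-envelope check of the Kissmetrics figure.
daily_revenue = 100_000    # dollars per day, from the quote above
conversion_drop = 0.07     # ~7% fewer conversions per one-second delay (Kissmetrics)

daily_loss = daily_revenue * conversion_drop   # $7,000 per day
annual_loss = daily_loss * 365                 # ~$2.56 million per year
print(f"Estimated annual loss: ${annual_loss:,.0f}")
```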

These performance testing metrics include:

  • Average load times
  • Response times
  • Hits/connections per second
  • Network bytes total per second
  • Network output queue length
  • Throughput for received requests
  • Garbage collection

Remember: don't just focus on averages. An average is only useful alongside the standard deviation across data points, or better yet, a percentile. For example, 'page load time is under 0.1 seconds for 99 percent of requests' tells you far more than a bare average does.
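As a minimal sketch of that kind of reporting, here's one way to collect response times and summarize them as a mean, a standard deviation, and a 99th percentile using only the Python standard library. The URL and sample count are placeholders, not a real benchmark setup:

```python
import statistics
import time
import urllib.request

URL = "https://example.com/"   # placeholder endpoint
SAMPLES = 100                  # placeholder sample size

timings = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    urllib.request.urlopen(URL, timeout=10).read()   # one full request/response cycle
    timings.append(time.perf_counter() - start)

mean = statistics.mean(timings)
stdev = statistics.stdev(timings)
# quantiles(n=100) returns the 1st..99th percentile cut points; index 98 is p99.
p99 = statistics.quantiles(timings, n=100)[98]

print(f"mean={mean:.3f}s  stdev={stdev:.3f}s  p99={p99:.3f}s")
```

In a real test you'd point this at your own endpoint and drive far more samples, but the reporting principle is the same: lead with the percentile, not the average.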


2. Can it go farther?

How many resources does a certain function use? Can the number of users scale up or down? Think of it as buying a kid new shoes: does the application have room to grow, and at what point do the seams burst into catastrophic failure?

These performance testing metrics include:

  • Top wait times for retrieving data from memory
  • Bandwidth (bits per second)
  • Memory/disk space/CPU usage
  • Amount of latency/lag
  • Concurrent users
  • Private bytes

You're looking for bottlenecks that slow or halt performance. These are usually caused by coding errors or poor database design, though hardware can contribute as well. Running endurance tests reveals how these issues emerge over time, even when they don't show up initially.
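Here's a minimal sketch of the monitoring side of an endurance test, assuming the third-party psutil library and placeholder interval and duration values. Sampling resource usage at a fixed interval while the load runs makes a slow climb in memory, the classic signature of a leak, visible in the log:

```python
import time
import psutil   # third-party: pip install psutil

SAMPLE_INTERVAL = 60       # seconds between samples (placeholder)
DURATION = 8 * 60 * 60     # eight-hour soak test (placeholder)

end = time.time() + DURATION
while time.time() < end:
    cpu = psutil.cpu_percent(interval=1)      # % CPU averaged over a 1s window
    mem = psutil.virtual_memory().percent     # % physical memory in use
    disk = psutil.disk_usage("/").percent     # % disk space used
    print(f"{time.strftime('%H:%M:%S')}  cpu={cpu}%  mem={mem}%  disk={disk}%")
    time.sleep(SAMPLE_INTERVAL)
```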

3. Can it go forever?

Consistency is key. A software application should work the same way every time. You want to measure how stable or reliable it is over time, especially in the face of spikes in usage or other unexpected events.

These performance testing metrics include:

  • Page faults (per second)
  • Committed memory
  • Maximum active sessions
  • Thread counts
  • Transactions passed or failed
  • Error rate

All types of software testing are, really, about finding breaking points. This matters most when demand surges, such as a sudden spike in popularity. Will your software cope with a flood of new users, like the ones Netflix and TikTok saw in 2020? If not, you risk missing out on a big opportunity.
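As a rough sketch of a spike test that reports transactions passed and failed along with an error rate, here's a simulated burst of concurrent users using Python's standard library. The URL and user count are placeholders:

```python
import urllib.request
from concurrent.futures import ThreadPoolExecutor

URL = "https://example.com/"   # placeholder endpoint
USERS = 200                    # simulated concurrent users (placeholder)

def transaction(_):
    """One user transaction: True on success, False on any failure."""
    try:
        return urllib.request.urlopen(URL, timeout=10).status == 200
    except Exception:
        return False

with ThreadPoolExecutor(max_workers=USERS) as pool:
    results = list(pool.map(transaction, range(USERS)))

passed = sum(results)
failed = len(results) - passed
print(f"passed={passed}  failed={failed}  error rate={failed / len(results):.1%}")
```

Dedicated tools like JMeter or Locust do this at far greater scale, but the metrics they report come down to the same counts: transactions passed, transactions failed, and the error rate between them.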

Start going faster, farther, forever... earlier?

Performance testing should not be the last stage in development. By anticipating issues early on and planning ahead, you save yourself the headache of fixing performance problems uncovered at the end of the cycle. That's why you'll want to engage a quality assurance engineer at the planning phase of your next build or feature.

'Only conducting performance testing at the conclusion of system or functional testing is like conducting a diagnostic blood test on a patient who is already dead.'

Scott Barber, Performance Architect
