Can your web app cope with a surge in users? 11 QA metrics for testing adaptability

Posted by Brian Borg

If you’re running a small business, a minute of software downtime can cost anywhere between $137 and $427. Multiply that by an hour, and a buggy application could set you back over $25,000.

Software needs to be adaptable, or it’s going to end up costing you more than it makes. While some have identified human error as the top cause of downtime, performance-related issues are another key contributor – especially if you consider one such human error to be 'forgetting essential tests'.

Avoid outages and ensure that your web app can cope with a surge in users by tracking a prioritized group of QA metrics.

The essential adaptability-tracking QA metrics

Priority metrics will depend on your application and what it’s designed to do, but it’s a good idea to cover a few general bases. Load testing your software with expected usage levels, for example, is non-negotiable.
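
If you’re not sure where to start, an open-source tool like Locust makes this kind of test straightforward to script. Here’s a minimal sketch in Python; the endpoints are placeholders you’d swap for your own:

```python
# A minimal Locust load test simulating typical users at expected usage
# levels. The endpoints below are placeholders - swap in your own.
from locust import HttpUser, task, between


class TypicalUser(HttpUser):
    # Each simulated user waits 1-3 seconds between actions,
    # roughly mimicking normal browsing behaviour.
    wait_time = between(1, 3)

    @task(3)
    def view_homepage(self):
        self.client.get("/")

    @task(1)
    def view_products(self):
        self.client.get("/products")  # hypothetical endpoint
```

Point it at your app with `locust -f loadtest.py --host https://your-app.example.com`, then dial the number of simulated users up to your expected traffic levels in the Locust web UI.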

Track baseline throughput, response times, memory usage, latency and thread counts. This will give you a 'business-as-usual' look at how your application performs under normal circumstances. If it’s not meeting basic expectations, it’s certainly not going to respond positively to an increase in usage.
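
As a rough illustration, here’s a client-side sketch that samples response times to establish that baseline. The URL is a placeholder and the pacing is only indicative:

```python
# Sketch: capture a 'business-as-usual' response-time baseline from the
# client side. The URL is a placeholder; swap in a representative page.
import statistics
import time

import requests

URL = "https://your-app.example.com/"  # placeholder


def sample_response_times(samples: int = 50) -> list[float]:
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        requests.get(URL, timeout=10)
        timings.append(time.perf_counter() - start)
        time.sleep(0.5)  # pace requests to mimic normal traffic
    return timings


if __name__ == "__main__":
    times = sample_response_times()
    print(f"mean response time:   {statistics.mean(times):.3f}s")
    print(f"median response time: {statistics.median(times):.3f}s")
    print(f"95th percentile:      {statistics.quantiles(times, n=20)[-1]:.3f}s")
```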

Averages are a must-measure, but for the purposes of testing adaptability it’s more about *deviation* from the average. For many, stress testing for things like peak response times and maximum active sessions is a valuable way to find out how their web app will perform in the event of a usage spike. This is especially important before the application’s release into the wild.
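
A full stress test is a job for a dedicated load-testing tool, but the sketch below shows the idea: fire a burst of concurrent requests and see how far the peak response time deviates from the average. The URL and concurrency level are placeholders:

```python
# Sketch: a deliberately crude usage spike. Fire a burst of concurrent
# requests and compare peak response time against the average.
# URL and concurrency are placeholders.
import statistics
import time
from concurrent.futures import ThreadPoolExecutor

import requests

URL = "https://your-app.example.com/"  # placeholder
CONCURRENT_USERS = 100


def timed_request(_: int) -> float:
    start = time.perf_counter()
    try:
        requests.get(URL, timeout=30)
    except requests.RequestException:
        pass  # failures under load are informative too - time them anyway
    return time.perf_counter() - start


with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    timings = list(pool.map(timed_request, range(CONCURRENT_USERS)))

average = statistics.mean(timings)
peak = max(timings)
print(f"average response: {average:.3f}s")
print(f"peak response:    {peak:.3f}s ({peak / average:.1f}x the average)")
```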

When performance testing, the most insightful metrics are often the ones that tell you how badly you’re failing. It's time to face the music: look at things like page faults per second and error rate to find out exactly where your application is falling short, and you’ll get a clear picture of how much work needs to be done on the code.
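
Error rate, at least, is easy to approximate during a load run: count the requests that fail outright or come back with an error status. A minimal sketch, again with a placeholder URL:

```python
# Sketch: approximate error rate during a load run - the share of requests
# that fail outright or return an error status. URL is a placeholder.
import requests

URL = "https://your-app.example.com/"  # placeholder
TOTAL_REQUESTS = 200

errors = 0
for _ in range(TOTAL_REQUESTS):
    try:
        response = requests.get(URL, timeout=10)
        if response.status_code >= 400:
            errors += 1
    except requests.RequestException:
        errors += 1  # timeouts and connection failures count too

error_rate = errors / TOTAL_REQUESTS
print(f"error rate: {error_rate:.1%} ({errors}/{TOTAL_REQUESTS})")
```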

Tailoring your focus

While testing for those more extreme metrics is always going to be a necessity, you may be at a later stage in your application’s development and confident in its resilience under pressure. No one is exempt from performance testing, though, just because their application is well architected and perhaps even weathering heavy loads today.

Instead, determine the metrics that align most closely with your goals for the software. You’ll be using your testing budget far more wisely if you move beyond the 'one-size-fits-all' approach.

For more mature software, metrics that identify areas for refinement are just as valuable. Look at things like private bytes to spot memory inefficiencies in your web app’s processes. And track hit ratios to flag caching issues that will only be exacerbated when there’s a surge in users.
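
The hit-ratio calculation itself is just hits divided by total lookups. Here’s a minimal illustration using Python’s built-in functools.lru_cache counters; in production you’d pull the equivalent hit/miss counters from your caching layer instead:

```python
# Sketch: the hit-ratio calculation, illustrated with Python's built-in
# lru_cache counters. In a real web app you'd read equivalent hit/miss
# counters from your caching layer instead.
from functools import lru_cache


@lru_cache(maxsize=256)
def load_product(product_id: int) -> dict:
    # Placeholder for an expensive lookup (database query, API call, ...)
    return {"id": product_id}


# Simulate a mix of repeat and first-time lookups.
for product_id in [1, 2, 1, 3, 1, 2, 4]:
    load_product(product_id)

info = load_product.cache_info()
hit_ratio = info.hits / (info.hits + info.misses)
print(f"cache hit ratio: {hit_ratio:.0%}")  # a low ratio flags caching issues
```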

Investing in performance

Whether you’re just discovering the limits of your web app’s capability or hunting for microscopic refinements (also known as 'marginal gains'), testing for adaptability is a direct contribution to your bottom line. As more people than ever rely on 'the digital workspace', more major applications are experiencing surge-related outages. Those outages are expensive, but thoroughly tracked QA metrics can ensure they’re avoided.
