STOP! 15 things you should not do when running functional tests

Posted by Brian Borg

'Testing is an infinite process of comparing the invisible to the ambiguous in order to avoid the unthinkable happening to the anonymous.'

James Bach, software testing consultant

Functional testing is a type of software testing that checks whether an application works correctly for the end user. After all, their opinion is the only one that matters. And when running these tests, you absolutely do not want to:

1. Do it all manually

Instead, you want to find opportunities to automate your tests, and make sure you have dedicated resources for developing automation testing.
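An automated functional check can be as small as a function plus an assertion. A minimal sketch with plain Python below; `checkout_total` and its discount rule are hypothetical stand-ins for a real feature under test:

```python
# A hypothetical feature under test: sum a basket and apply a discount.
def checkout_total(prices, discount=0.0):
    """Sum item prices and apply a fractional discount, rounded to cents."""
    return round(sum(prices) * (1 - discount), 2)

# The automated check: once written, it runs on every build for free.
def test_checkout_applies_discount():
    # 10% off a 100.00 basket should come to 90.00
    assert checkout_total([40.0, 60.0], discount=0.10) == 90.0

test_checkout_applies_discount()
```

In practice you would put checks like this under a runner such as pytest so they execute automatically in CI, instead of a tester re-verifying the discount by hand each release.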

2. Automate everything

When dealing with disorganized legacy systems, or with features that are especially time-intensive and complex to automate, automation can cost more than it saves. Don't waste your efforts there; keep those tests manual.

3. Have a single-use mindset

Improve your efficiency by building simple test scripts that can be reused. Make sure to use a test case management tool and keep test maintenance in mind. Templates help, too.
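One common way to make a script reusable is to drive a single test body from a table of cases, so adding coverage means adding a row, not a new script. A minimal data-driven sketch; `validate_username` and its rules are hypothetical:

```python
# Hypothetical function under test.
def validate_username(name):
    return 3 <= len(name) <= 20 and name.isalnum()

# One reusable test body, many cases: extend coverage by adding rows.
CASES = [
    ("alice", True),       # typical valid name
    ("ab", False),         # too short
    ("a" * 21, False),     # too long
    ("bad name!", False),  # disallowed characters
]

def run_username_cases():
    for name, expected in CASES:
        assert validate_username(name) is expected, f"failed on {name!r}"

run_username_cases()
```

Test frameworks formalize this pattern (e.g. parameterized tests), which also plays well with test case management tools since each row maps to a tracked case.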

4. Bother with tests that take too long

Ideally, you're looking for a rapid feedback loop where tests can be turned around quickly to inform the development team's work. This is vital to a healthy continuous integration and continuous delivery (CI/CD) cycle.

5. Run tests sequentially (if you can avoid it!)

Wherever it makes sense, run tests in parallel to reduce the overall time spent in QA and prevent queuing.
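The payoff is easy to see with a sketch: independent checks that would queue up sequentially finish in a fraction of the time when fanned out across workers. This uses only the standard library; `slow_check` is a stand-in for a real test:

```python
import time
from concurrent.futures import ThreadPoolExecutor

def slow_check(case_id):
    time.sleep(0.1)  # stand-in for real test work (I/O, page loads, etc.)
    return (case_id, "pass")

cases = range(8)

start = time.monotonic()
with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(slow_check, cases))
elapsed = time.monotonic() - start

# 8 checks x 0.1s each: roughly 0.2s with 4 workers vs ~0.8s sequentially.
```

Real runners offer the same idea out of the box (e.g. pytest-xdist, Selenium Grid); the caveat is that tests must be independent, with no shared mutable state.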

6. Just run positive tests

Negative tests check for the unexpected. If you only test how something is meant to function, you may miss some unexpected user pathways and behaviors.
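A negative test deliberately feeds the system invalid input and asserts that it fails loudly rather than silently accepting garbage. A small sketch; `parse_age` and its validation rules are hypothetical:

```python
# Hypothetical input parser with validation.
def parse_age(text):
    age = int(text)  # raises ValueError on non-numeric input
    if not 0 <= age <= 130:
        raise ValueError(f"age out of range: {age}")
    return age

# Negative test: every bad input must be rejected with an error.
def test_rejects_bad_input():
    for bad in ("-5", "abc", "999"):
        try:
            parse_age(bad)
        except ValueError:
            continue  # expected failure path
        raise AssertionError(f"accepted invalid input: {bad!r}")

test_rejects_bad_input()
```

Positive tests alone would never reveal that `parse_age` happily returned a negative age; the negative cases are where that class of bug lives.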

7. Forget to test across browsers/devices

Does it work on mobile? Does it work on Internet Explorer? For example, in the case of web apps, don't just test on the latest versions of browsers.
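Cross-browser coverage is usually expressed as a matrix of browser/version (or device) combinations that the whole suite runs against. A minimal sketch of the idea; `run_suite` is a hypothetical hook where a real runner (such as Selenium Grid or a device cloud) would be called:

```python
# Browser/device matrix: don't test only the latest versions.
MATRIX = [
    ("chrome", "latest"),
    ("firefox", "latest"),
    ("chrome", "latest-2"),  # older versions still in the wild
    ("safari", "ios"),       # mobile coverage
]

def run_suite(browser, version):
    # Placeholder: dispatch to your real cross-browser test runner here.
    return {"browser": browser, "version": version, "status": "pass"}

results = [run_suite(browser, version) for browser, version in MATRIX]
```

Keeping the matrix as data makes it cheap to add or retire a browser as your users' analytics change.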

8. Prioritize the wrong tests

Focus on 'does it work?' over 'does it work well?' - you can always make improvements, but fix what's broken first. Then work out which tests are best suited for automation.


9. Test without goals and timeframes

Never start testing without an idea of what you're trying to achieve and by when. Exploratory tests in particular can be a rabbit hole, so set your parameters in advance.

10. Overlook testing tools

Testing tools, like Selenium or Cypress, empower QA engineers to test efficiently, avoid user errors and minimize 'busywork'.

11. Ignore project management tools

We like JIRA for test management. It helps keep track of progress, statuses and priorities, and it gives multiple stakeholders a way to collaborate in real time.

12. Track results manually

Set up an alert for test completion or bug-finding; use a webhook that works with your testing platform or try a Zapier integration into your messenger of choice.
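A results alert can be as simple as POSTing a summary to an incoming webhook when the run finishes. A sketch using only the standard library; the URL is a placeholder, and the `{"text": ...}` payload shape follows common incoming-webhook conventions (e.g. Slack's):

```python
import json
from urllib import request

WEBHOOK_URL = "https://hooks.example.com/tests"  # placeholder URL

def build_alert(passed, failed):
    """Summarize a test run as a chat-webhook payload."""
    status = "all green" if failed == 0 else f"{failed} FAILING"
    return {"text": f"Functional tests finished: {passed} passed, {status}"}

def send_alert(payload, url=WEBHOOK_URL):
    """POST the summary to the webhook (fires the notification)."""
    req = request.Request(
        url,
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    return request.urlopen(req)

payload = build_alert(passed=120, failed=0)
# send_alert(payload)  # uncomment once WEBHOOK_URL points somewhere real
```

Hosted testing platforms and Zapier wrap this same mechanism, so the team hears about a red build without anyone polling a dashboard.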

13. Omit documentation

The first 90 percent of testing is testing; the other 90 percent is reporting (yes, that's the joke). Without clear documentation you won't have a record of what you've already checked or a roadmap for future tests.

14. Leave testing until late in the process

The 'shift left' mentality is gaining momentum: test early and often, bringing QA into the software development cycle as soon as possible, even before code is written.

15. Fail to collaborate with stakeholders and users

Keep internal stakeholders and users front of mind. Think about how different personas will interact with the product. Where possible, have user groups test the applications themselves.

BONUS: Don't keep all your testing in-house

Yes, test as much as you can, but getting an outsider opinion somewhere along the way will save you time and expense. Doing all your own functional testing is like DIY plumbing: you'll spend hours on it, break something and have to rely on experts in the end. So if your development pipeline's blocked, give us a call.
