OnPath Testing Blog

The QA lens on developing software with diverse personas in mind

Written by Brian Borg | Oct 14, 2021

“Factors used in the scoring process of an algorithm . . . are mere stand-ins for protected groups, such as zip code as proxies for race, or height and weight as proxies for gender.” - Barocas and Selbst

Algorithmic bias. It's when software takes on very human prejudices that are downright offensive and often harmful to users.

But that doesn't mean the coders behind it are in any way racist or sexist - bias can easily creep in through incomplete word and image datasets, or through data that reflects historical inequalities.

When the data is incomplete or incorrect, so is the model the algorithm learns from it. The result can be non-inclusive software that makes damaging assumptions.

So, how can you avoid this prejudice?

Avoiding prejudices in software

With AI and deep learning, such mistakes can become systemic: biased outputs feed back into the training data, the errors replicate, and the algorithm grows more biased over time.
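
To make that feedback loop concrete, here's a toy sketch in Python - every number in it is invented - of a ranking model that learns a group preference from skewed historical hires and then feeds its own selections back into the data:

    import random

    random.seed(0)

    # Invented starting point: 80 past hires from group "a", 20 from group "b".
    history = ["a"] * 80 + ["b"] * 20

    def learned_preference(history):
        """The 'model': each group's share of past hires."""
        return {g: history.count(g) / len(history) for g in ("a", "b")}

    for round_no in range(5):
        prefs = learned_preference(history)
        # 100 new candidates, with groups and true skill distributed equally...
        candidates = [(random.choice("ab"), random.random()) for _ in range(100)]
        # ...but the model ranks them by skill PLUS the learned group preference.
        ranked = sorted(candidates, key=lambda c: c[1] + prefs[c[0]], reverse=True)
        history += [group for group, _ in ranked[:10]]  # top 10 get "hired"
        print(f"round {round_no}: group-a share of past hires = {prefs['a']:.2f}")

Even though each round's candidates are equally skilled on average, the skew compounds: every batch of biased selections becomes tomorrow's training data.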

Some of the biggest names in tech have alienated their diverse base due to biased algorithms:

  • Google Photos' image-recognition software offensively mislabelled Black users.
  • Amazon's AI recruiting program downgraded applicants whose resumes included the word 'woman' or 'women'. It had learned from Amazon's own hiring data that male candidates were preferable.
  • And as much as we would like to forget Tay, Microsoft's Twitter bot, we cannot. Her foul-mouthed Holocaust denials are forever seared into our minds as an example of how deep learning can go so very wrong.

So, how can you avoid these blunders in your own software development?

A QA provider will identify possible issues long before your software reaches users. Catching these issues early helps you protect your customers and your brand image - and it saves you the time and cost of correcting mistakes after release.

Exposing biases with QA testing

Software developers have a responsibility to their users and their brand. Ensuring the end product succeeds means making certain it isn't biased. The right QA provider will do this by being:

  • Empathetic towards diverse user bases
  • Aware of the latest trends and issues in AI prejudice
  • Up to date with a range of software testing tools
  • Thorough in the test approach and execution

Manual testing runs your software through usability tests designed with diverse customers in mind.

But good manual testing won't just identify a bias. It'll dig deeper to provide the full picture, whether the root cause is unrepresentative data or flawed data collection.

Setting up the right CI/CD pipeline means you can run code through automated tests for immediate, ongoing feedback at every stage of development. And when a test flags a bias, an experienced QA engineer can capture the defect in detail for developers to eliminate.
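
As one illustration, a bias check can be written as an ordinary automated test. The sketch below - with made-up predictions standing in for a real model's output on a labelled test set - asserts that the rate of favourable outcomes doesn't differ too much between demographic groups, a simple demographic-parity check a CI pipeline could run on every build:

    def selection_rates(predictions, groups):
        """Favourable-outcome rate (prediction == 1) per demographic group."""
        rates = {}
        for g in set(groups):
            outcomes = [p for p, grp in zip(predictions, groups) if grp == g]
            rates[g] = sum(outcomes) / len(outcomes)
        return rates

    def test_demographic_parity():
        # Stand-in data: in a real pipeline, predictions come from your model
        # and groups from the protected attribute recorded for each test case.
        predictions = [1, 0, 1, 1, 0, 1, 0, 0, 1, 0]
        groups = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]
        rates = selection_rates(predictions, groups)
        gap = max(rates.values()) - min(rates.values())
        # This toy data trips the check (0.60 vs 0.40) - exactly the signal
        # the pipeline should surface to the QA engineer.
        assert gap <= 0.10, f"parity gap {gap:.2f} across groups: {rates}"

The 0.10 threshold and the parity metric itself are choices to make with your stakeholders; other fairness metrics slot into the same test-and-assert pattern.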

Developing software for diversity

“The broader one's understanding of the human experience, the better design we will have.” - Steve Jobs

The right QA provider will become a core part of your team during the development process, with dedicated testers on hand to flag any worrying results.

They won't just identify biases; they'll dig into why those biases are occurring in the first place and recommend processes to mitigate them.

This approach allows you to care for a diverse user base by combating algorithmic bias - making your brand more inclusive, responsible, and superior in the best possible way.