What Is Algorithmic Bias?

 


 


Modern algorithms can be unfair and are often biased. In the US, two people who commit very similar crimes can receive starkly different sentences because an algorithm says so. What’s more, they have no right of appeal, and nobody can explain how the machine works. It’s a case of “computer says no”.

You might be surprised by this, but it can be difficult to ascertain how an algorithm reached a particular decision. Modern machine learning algorithms identify patterns in data and use those patterns as rules - as heuristics - so rather than being explicitly programmed, the algorithm gains experience, learns, and changes its behaviour in the future.
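To make that concrete, here is a minimal sketch (using Python and scikit-learn, with entirely invented data and feature names) of a model deriving its own decision rules from examples rather than being given them by a programmer:

```python
# A minimal sketch: the algorithm derives its own rules from examples.
# The data and feature names below are invented purely for illustration.
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical training examples: [age, prior_convictions] -> reoffended (1) or not (0)
X = [[19, 2], [45, 0], [23, 3], [52, 1], [31, 0], [27, 4]]
y = [1, 0, 1, 0, 0, 1]

model = DecisionTreeClassifier(max_depth=2, random_state=0)
model.fit(X, y)

# The learned rules are inferred from the data, not written by a person.
print(export_text(model, feature_names=["age", "prior_convictions"]))
```

Nobody writes the if/else rules here; the model infers them from whatever patterns, fair or not, the training data happens to contain.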

Heuristics are techniques the human brain uses as imperfect but fast ways of finding a solution to a problem - mental shortcuts that make it easier to reach a decision.

Other ways to describe these heuristics include rules of thumb, educated guesses, stereotypes or just common sense. We need these shortcuts because the amount of data we are producing is, for want of a better word, ‘insane’ - there are now 327 new megabytes of data produced per day, per person!

The problem with using heuristics like this in artificial intelligence is that humans have a set of moral values, and those values can’t yet easily be encoded in an algorithm. Even if they could be, values change relatively quickly and vary across cultures.

Another thing about machine learning algorithms is that they infer attributes about your identity without being explicitly told them.

For example, Harvard professor Latanya Sweeney showed that Google adverts for services you would need if you had been arrested or had a criminal record were significantly more likely to be shown after searches for black-sounding names.

Nobody at Google sat down and decided to build it that way; the algorithm simply learnt some unjustified associations.
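As a rough illustration of how this happens, the sketch below (all names and data are invented) shows that a protected attribute which is never shown to a model can still be recovered from an innocent-looking ‘proxy’ feature that happens to correlate with it:

```python
# A rough sketch of how a model can infer a protected attribute it was never given,
# via a correlated "proxy" feature. All data here is synthetic and illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000

protected = rng.integers(0, 2, n)            # e.g. a demographic group, never shown to the model
proxy = protected + rng.normal(0, 0.3, n)    # e.g. postcode, strongly correlated with the group
noise = rng.normal(0, 1, n)                  # an unrelated feature

X = np.column_stack([proxy, noise])

# Train a model to predict the protected attribute from the "neutral" features only.
clf = LogisticRegression().fit(X, protected)
print("Protected attribute recoverable from the proxy, accuracy:",
      round(clf.score(X, protected), 2))
```

Drop the sensitive column from your dataset and a model can often reconstruct it anyway, which is why simply removing protected attributes rarely removes bias.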

In 2011, the European Court of Justice ruled that taking gender into account when pricing insurance and pension products was unlawful. The industry argued that life expectancy was linked to gender, but the court determined that gender mattered far less than personal lifestyle choices and economic background. Effectively, the insurance industry was using gender as a proxy for data it should have been collecting instead.

So bias can emerge when datasets inaccurately reflect society, but it can also emerge when datasets accurately reflect unfair aspects of society.  

This risks repackaging existing bias with the appearance of neutrality. It has been referred to as the equivalent of money-laundering for prejudice.
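One simple way a tester might surface this kind of repackaged bias is to compare outcome rates across groups, sometimes called a demographic parity check. The sketch below is illustrative only; the decisions, group labels and thresholds are assumptions made for the example:

```python
# An illustrative check: compare the rate of favourable outcomes per group.
# The decisions and group labels below are invented for the example.
def selection_rates(outcomes, groups):
    """Return the share of favourable (1) outcomes for each group."""
    rates = {}
    for g in set(groups):
        members = [o for o, grp in zip(outcomes, groups) if grp == g]
        rates[g] = sum(members) / len(members)
    return rates

decisions = [1, 0, 1, 1, 0, 0, 1, 0, 0, 0]   # 1 = favourable decision
groups    = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

print(selection_rates(decisions, groups))    # e.g. {'A': 0.6, 'B': 0.2}
```

A large gap between groups does not prove the algorithm is unjust - the difference may be explainable - but it is exactly the kind of signal that should prompt a human to investigate.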

This challenge is particularly important where data privacy meets algorithms. Machine learning combined with personal data creates some unique risks in terms of unjustified algorithmic bias.

Until we can figure out a way of building an ever-changing set of moral values into artificial intelligence, we need to be careful about the way we set up machines, and about the information and power we give them to act on the world.

At Piccadilly Labs, we have started a project that looks at how we can give UK testers the tools to detect unjustified bias. If you are interested in contributing, please follow me on Twitter and drop me a note to introduce yourself!

Adam Smith is Piccadilly’s chief technology officer and leads the company’s technology innovation. Adam also has extensive experience leading and delivering solutions across a range of testing disciplines, including test automation, performance and penetration testing, as well as traditional functional testing.

  
