Are we teaching AI to be prejudiced?

Credit: Unsplash

Jack Mitchell
Writer

Jack Mitchell discusses the ways in which artificial intelligence inherits our societal biases.

Artificial Intelligence (AI) is becoming ubiquitous. Social media platforms use AI to quickly dissect your personality, likes and interests, and to decide what you see on your discover feed. Archaeologists use AI to examine aerial footage to look for ancient settlements in inaccessible areas. Soon, AI may even be driving you to work. But it seems that in our haste to embrace these clever little programmes, we may have overlooked some of the systemic flaws these algorithms inherit from their flawed creators. 

One of the most prominent of these issues, and rightly so, is racial bias. Artificial intelligence works by using datasets to discern the differences and statistical links between individual data points. For example, the facial recognition AI on smartphones learns from a dataset of pictures of people. It uses the similarities between people’s faces to tell when it is seeing a face, and their differences to tell when it is seeing the right face, in order to unlock the phone. This seems simple enough, but in practice there is a serious problem: research has shown that facial recognition AI more frequently misidentifies the faces of those with darker skin, predominantly because the original datasets contain a disproportionate number of white participants. This problem then manifests itself in any application that uses facial recognition — in July, VICE reported on a smartphone app promising to turn your face into a Renaissance painting, which chillingly whitewashed people of colour. 
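To see how an unrepresentative dataset produces this kind of skew, here is a minimal, hypothetical sketch in Python (using scikit-learn on synthetic data, not any real face recognition system): a single classifier is trained on a pooled dataset in which one group supplies ninety per cent of the examples, and it ends up far more accurate for that group than for the under-represented one.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n_per_class, centres):
    """Toy 2-D 'face embedding' data: one Gaussian cluster per class."""
    X = np.vstack([rng.normal(c, 0.6, size=(n_per_class, 2)) for c in centres])
    y = np.repeat([0, 1], n_per_class)
    return X, y

# Majority group: 900 training samples, classes separated along one axis.
X_maj, y_maj = make_group(450, centres=[(0, 0), (2, 2)])
# Under-represented group: only 100 samples, separated along a different axis.
X_min, y_min = make_group(50, centres=[(2, 0), (0, 2)])

# One model trained on the pooled, imbalanced dataset.
model = LogisticRegression().fit(np.vstack([X_maj, X_min]),
                                 np.hstack([y_maj, y_min]))

# Fresh test samples drawn from the same distributions as each group.
Xt_maj, yt_maj = make_group(500, centres=[(0, 0), (2, 2)])
Xt_min, yt_min = make_group(500, centres=[(2, 0), (0, 2)])

print(f"error rate, majority group:          {1 - model.score(Xt_maj, yt_maj):.2f}")
print(f"error rate, under-represented group: {1 - model.score(Xt_min, yt_min):.2f}")
```

On this toy data the model typically makes almost no mistakes on the well-represented group and performs close to chance on the other. The exact numbers are not the point; the point is that nothing in the algorithm itself is prejudiced: the skew comes entirely from what it was shown.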

However, this problem goes far beyond gimmicky smartphone apps. In fact, when the BBC initially reported on this issue earlier this year, they gave the example of CCTV cameras with facial recognition potentially misidentifying an innocent civilian as a terrorist, whom counter-terrorism police might then “neutralise” in a misguided attempt to prevent a terror incident. This technology is not just some far-fetched “What if?” from a potential dystopian future — it was trialled this year by police forces across England and Wales. And in case you are trying to console yourself with the thought that cases of misidentification would be statistical anomalies, the facial recognition technology in these initial police trials misidentified faces a (quite terrifying) 80% of the time. 

Facial recognition isn’t the only branch of AI technology that has been found to contain racial bias. A study reported in Nature in October found that an algorithm used by US hospitals and health insurers to “help” manage patient care was referring black patients to higher-level care facilities less frequently than equally sick white patients. This is not a hypothetical scenario: it is happening now, with real consequences for real people. The difference was not marginal either; the researchers found that a mere 17.7% of patients selected to receive extra care were black. Had the algorithm been unbiased, the researchers estimated that figure would have been 46.5%. Worryingly, Nature reports that studies such as this one are rare, as researchers struggle to gain access to the sensitive health records the algorithm uses to make its decisions, limiting our ability to create public discourse around this troubling trend.

If you are reading this article and starting to wonder what is being done to address this issue, the news is depressingly grim. Google has hired a subcontractor, Randstad, to gather more data on the faces of those with darker skin tones, in an attempt to reduce the algorithmic bias in their facial recognition technology. However, Randstad employees have alleged that they were advised to specifically target vulnerable sections of society, such as the homeless, who are more likely to be tempted by the measly reward of a $5 Google gift card. The subcontracted employees also allege that they were urged to rush the “participants” through the consent forms and to obscure the true purpose of the data collection, characterising it as a “selfie game” when in reality they were gathering and storing in-depth data about the subjects’ facial features. 

In light of these allegations, and the media reports on the matter, Google has suspended the programme. But it is too little, too late. The solution to systemic racial bias in the new technological infrastructure we are building as a society is not to take advantage of vulnerable people of colour. One of the often-touted advantages of AI is that it does not contain human biases, and so makes for a fairer system to employ in societal infrastructure than one based on human judgement. However, because it is built on biased datasets, AI is simply holding a mirror up to us and reflecting back our systemic flaws. If we are to truly make society fairer for all, we need to be aware of these problems and address them adequately. Otherwise, through our own complacency, we risk transferring these biases into the new age.
