With artificial intelligence now being used to interview and select job applicants, what’s next for us in the face of technology?
Last month, multinational corporations (the Frankenstein-like conglomerations of all your favourite brands that have brought us such wonderful societal developments as climate change, sweatshops, and CEOs who can afford a fleet of yachts) took us yet another step closer to living inside Blade Runner.
The Telegraph reports that, for the first time in the UK, artificial intelligence (AI) has been used to screen potential candidates for jobs. Unilever, the parent company to a plethora of brands from Dove to Lynx to Pot Noodle, has started using the technology to analyse the facial expressions and speech patterns of job applicants via videos filmed and submitted by the applicants themselves.
Now don’t get me wrong, I’m as excited as the next guy to have a soulless cog in the corporate machine (really, is this any different to the current process?) decide whether I’m worthy enough to work 40 hours a week to be able to afford food. But I must admit, I have a couple of doubts about this new system that even years of pro-tech “Woo! Look at this shiny new smartphone!” propaganda can’t fully suppress.
First off, there’s a serious flaw in any AI that analyses our faces: racial bias. You would think that one of the benefits of having a robot with no emotions decide between job candidates would be the chance to eliminate racism from the application process, but research by computer scientists at MIT Media Lab has found that facial recognition technology misidentifies people with darker skin tones far more frequently than those with lighter skin. Although this specific form of AI does not employ full facial recognition in the same way that your creepy iPhone 11 does, it does still need to read our facial expressions. If people with darker skin tones have their facial expressions misread more often by an imperfect algorithm, then they are experiencing an unfair application process. Considering that one of the main purposes of introducing AI into this field is to treat applicants more fairly, this seems to me a pretty glaring issue. As if BAME individuals don’t face enough hardship with human racism, soon they’ll need to cope with artificially intelligent racism as well.
However, technological racism isn’t the only way in which this technology will fail to create a fairer application process. Which applicant gets the job will be decided by a set of parameters dictated by the company. If this technology becomes widespread, however, then, just as with applications to elite universities, a few crafty and astute individuals will cotton on to which attributes make a successful applicant for which jobs. Whether through trial and error, interviewing successful applicants, or some other means, this information will eventually get out. I believe this could create a market similar to the tutoring services that promise to help applicants perform well in Oxbridge interviews, and I fail to see how that makes the application process any fairer. Instead of the job going to the rich kid whose uncle used to work in banking with the interviewer, the job will go to the rich kid whose dad paid for an expensive coach to help him make his interview video. Different process, same outcome: those from a less favourable socioeconomic position are at a significant disadvantage.
But beyond the practical issues outlined above, I think any incorporation of AI that analyses our faces is a step in the wrong direction at the societal level. Although this technology does not utilise full facial recognition, each little piece of our lives it is incorporated into is another metaphorical link in the virtual chain by which we are imprisoned. Soon 5G and its uber-fast connectivity will allow this technology to be linked to everything from your smartphone to CCTV. Do you trust the government to use this wealth of data for purely benevolent purposes?
The infrastructure that could one day allow an authoritarian government to oppress us in a way previously reserved for science fiction is being built today by the so-called “tech bros” of Silicon Valley, who live in a socially insulated environment and do not necessarily know what is best for different communities, cultures, and individuals. What we need is an intergovernmental cooperative organisation, in a similar vein to the World Health Organisation, to monitor and analyse the development of new technologies and the ethical, physical, and societal problems they might create. Instead, we have the almighty Facebook, selling our data to morally bankrupt companies such as Cambridge Analytica. If there’s an afterlife, George Orwell is laughing and telling his doubters “I told you so”.