This article appears in the Witness section of the Spring 2019 issue of the New Humanist.

If you have one of the latest iPhone models, you will probably be familiar with facial recognition technology: rather than entering a PIN, there is an option to have the phone scan your face in order to unlock it. Although facial recognition is less accurate than other biometric technologies, such as iris or fingerprint recognition, it is already being widely adopted because it is contactless and non-invasive. Another highly visible use of the technology is on Facebook, which uses facial recognition to suggest which friends to tag in your photos.

Facial recognition technology is now a major commercial enterprise – and its uses go far beyond social media. A recent report by the Center on Privacy & Technology at Georgetown Law found that half of all American adults – some 117 million people – are enrolled in unregulated facial recognition networks used by state and local law enforcement agencies.

But is this technology fit for purpose? There is growing concern that many of the algorithms that make decisions about our lives – from the ads we see on the internet to how likely we are to become victims or perpetrators of crime – are trained on data sets that do not include a diverse range of people. This can result in an inbuilt racial bias.

A study published in January by the Massachusetts Institute of Technology (MIT) compared tools from five companies, including Microsoft, Amazon and IBM. The study found that Amazon’s tool Rekognition had an error rate of 31 per cent when identifying the gender of images of dark-skinned women. This compared with a 22.5 per cent rate from Kairos, which offers a rival commercial product, and a 17 per cent rate from IBM. By contrast, Amazon, Microsoft and Kairos all successfully identified images of light-skinned men 100 per cent of the time.
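An audit of this kind works by comparing a tool's predictions against known labels, separately for each demographic group, so that an overall accuracy figure cannot hide poor performance on one group. The sketch below illustrates the idea only; the data are hypothetical and this is not the study's actual code or results.

```python
# Illustrative sketch of a per-group bias audit, in the spirit of the
# MIT study. All data here are hypothetical.

def error_rate(predictions, labels):
    """Fraction of predictions that disagree with the true labels."""
    errors = sum(p != t for p, t in zip(predictions, labels))
    return errors / len(labels)

# Hypothetical gender-classification outputs, split by demographic group:
# (what the tool predicted, what the correct answer was).
results = {
    "darker-skinned women": (["M", "F", "M", "F"], ["F", "F", "F", "F"]),
    "lighter-skinned men":  (["M", "M", "M", "M"], ["M", "M", "M", "M"]),
}

for group, (predicted, actual) in results.items():
    print(f"{group}: {error_rate(predicted, actual):.0%} error rate")
```

Reporting the error rate per group, rather than averaged over everyone, is what allowed the researchers to show that tools scoring well overall could still fail disproportionately on dark-skinned women.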

“Keep in mind that our benchmark is not very challenging. We have profile images of people looking straight into a camera. Real-world conditions are much harder,” said MIT researcher Joy Buolamwini.

The poor results from Amazon’s tool are particularly notable: the company has aggressively marketed its facial recognition technology to police departments and federal agencies as a service to help law enforcement identify suspects more quickly. (Microsoft, by contrast, has called for regulation, saying the technology is too risky for companies to oversee on their own.) Given the well-documented racial bias in many countries’ criminal justice systems, it is crucial that facial recognition tools are kept out of such use for as long as they cannot be trusted.