Unchecked facial recognition tech points to a terrifying future we'll likely have to live with

A recent report has highlighted that a number of retail chains are using unchecked facial recognition technology in their stores. The stores are utilising the technology to spot people previously apprehended for shoplifting.

It’s one thing to use facial recognition technology at all, but to do so without alerting people to its use is unethical. Not only does it mean the AI will probably store each person’s facial information, but it is doing so without consent.

The report by The Verge revealed just part of a wider trend that sees security firms, private companies and even governments relying on AI facial recognition technology. As the population continues to increase and our reliance on AI grows, the technology will only become more commonplace.

There are a few problems, though. The technology is not very good, and in its current state it could lead to results that are both illegal and unethical.

Facial recognition is crap

Today's facial recognition technology is terrible. That’s not just me being dramatic; there are tests that proved how dreadful it is in action. The Metropolitan Police tested facial recognition technology back in 2016, deploying it during the Notting Hill Carnival. In theory, the trial was supposed to show how the technology could aid the police. In practice, the results were not great.

The resulting report is a detailed indictment of how dreadful the technology is. Firstly, the facial recognition tech was flagging people who had already served their sentences or settled their cases, which meant the police were essentially harassing people who had done nothing wrong. It also wasted police time, time that should have been spent tracking genuine criminals.

Police officers were also unclear in warning the local population about the trials, and people had little real choice about taking part. Police fined one man £90 after he became aggressive when officers questioned him about covering his face. Arguably, the man was well within his rights to cover his face if he was uncomfortable taking part in the trial. If he wasn’t on the run for a crime, he was simply exercising his right to privacy.
Racial bias 

Along with being underdeveloped and legally questionable, facial recognition technology has a track record of racism. The error rates are already pretty high for white people, but they increase further when scanning people of colour, and the technology is particularly poor at identifying women of colour. Only last week, a facial recognition system's failure barred a black teen in America from going ice skating. Before we use a technology that has legal implications, we should probably make sure it isn’t racist.

This tweet actually contains images of both Mitch McConnell and Barack Obama. However, Twitter's racially biased auto-cropping algorithm consistently made the white face the focus of the tweet.

The likes of IBM and Microsoft have committed to improving data collection, and both want to solve the issue. The problem is that the institutions that will use facial recognition technology already don’t have the best track record when it comes to the treatment of people of colour. A National Academy of Sciences study found that African American men are far more likely than white Americans to be killed by police during their lifetime. A racially biased technology in the hands of institutions steeped in systemic racism can’t be good.

Facial recognition technology's failure rate could place innocent people in harm's way. We’ve seen time and time again how a simple stop-and-search can turn fatal. The last thing we need is even more people being wrongly flagged by an AI system that is fundamentally racist.  

Murky legality

AI quality aside, the legality surrounding facial recognition technology is iffy at best. Many people don't realise their faces are already sitting in a database. Taking a picture of a person’s face without their permission is morally ambiguous at best. Private companies have rushed the product to market, and governments have not been quick enough to create any real legislation regarding the tech.

There's also the debate around data protection. Plenty of governments across the world have leaked data through negligence and hacking. How long until a hacker steals a database containing millions of faces? Our mobile phones, banking details, contacts and more are all unlocked with facial recognition these days, so our digital identities are going to be incredibly valuable to thieves. It may not be a great idea, then, for governments to store millions of faces.

Facial recognition technology is supposedly the future, but it could erode our freedoms and do more harm than good. Ultimately, we need to decide where the line is.