What is Facial Recognition and Why is it Dangerous?

Surveillance technology has become increasingly common in public spaces, but is facial recognition software safe? Let’s explore what facial recognition is, how it works, and why you should be aware of its use.


What is facial recognition?

Facial recognition software uses deep learning to analyze a person’s facial features and create a faceprint. The objective of facial recognition software is to confirm a person’s identity, such as when you scan your face to unlock your smartphone (Gillis, 2022). 


How does facial recognition work? 

To create a faceprint of an individual, facial recognition software measures various features on a person’s face, such as the shape of their lips or the distance from the tip of their nose to their browbone. By mapping out these facial features, the software can create a holistic scan of the individual’s facial characteristics — a faceprint — and then compare them to images or videos of a person to determine their identity. 
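The matching step described above can be thought of as a nearest-neighbor search over numeric faceprints: each face is reduced to a vector of measurements, and two faces "match" when their vectors are close enough. Here is a minimal, illustrative sketch of that idea using NumPy. The tiny 4-dimensional vectors, the `match_faceprint` function, and the 0.6 threshold are all hypothetical stand-ins; real systems derive embeddings of roughly 128–512 dimensions from a deep network and tune the threshold empirically.

```python
import numpy as np

def match_faceprint(probe, gallery, threshold=0.6):
    """Return the identity whose stored faceprint is closest to the probe,
    or None if no stored faceprint is within the distance threshold."""
    best_name, best_dist = None, float("inf")
    for name, faceprint in gallery.items():
        # Euclidean distance between the two measurement vectors
        dist = np.linalg.norm(probe - faceprint)
        if dist < best_dist:
            best_name, best_dist = name, dist
    return best_name if best_dist <= threshold else None

# Toy "faceprints" -- real embeddings are far higher-dimensional
gallery = {
    "alice": np.array([0.1, 0.8, 0.3, 0.5]),
    "bob":   np.array([0.9, 0.2, 0.7, 0.1]),
}
probe = np.array([0.12, 0.79, 0.31, 0.48])  # a new scan, close to alice's
print(match_faceprint(probe, gallery))
```

The threshold is what makes the system fallible: set it too loose and strangers are "recognized" as each other, which is exactly the failure mode behind the false arrests discussed later in this article.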


This technology has a variety of applications: Facebook has used facial recognition to tag users in photos (a feature it retired in 2021), smart advertisements in airports use facial recognition to identify the demographics of passersby and serve them tailored ads, and MasterCard uses facial recognition in its biometric authentication app, known as "selfie pay" (Gillis, 2022). 


Why is facial recognition dangerous? 

Although facial recognition software can make our everyday lives easier, it carries serious risks. Here are some of the dangers to be wary of: 


  • It enables stalking and abuse

In 2020, the American Civil Liberties Union (ACLU) sued facial recognition company Clearview AI for unlawful surveillance of Illinois citizens. The ACLU took action in particular to protect vulnerable populations, such as domestic violence survivors and undocumented immigrants, who could face severe harm from facial recognition technology. By misusing biometric data, stalkers, abusers, and predators can track their victims. This kind of surveillance poses real-world harm to vulnerable communities, especially because Clearview AI built its massive database of biometric information without the consent or knowledge of the individuals whose data it collected (ACLU, 2020). 


  • It leads to the targeting of marginalized groups

In the United Kingdom, privacy organization Big Brother Watch is working to end surveillance via facial recognition technology through their “Stop Facial Recognition” campaign. Recently, the organization brought attention to the use of live facial recognition cameras in Southern Co-op supermarkets. While the majority of the chain’s locations are in wealthy areas, Big Brother Watch found that facial recognition cameras were most often installed at locations in poorer neighborhoods. Thus, the technology is being used to unfairly monitor and track marginalized groups, which is a massive violation of privacy (Das, 2024). 


  • It isn’t always reliable

Though it is widely used, facial recognition technology can be inaccurate, leading to faulty identifications. Harvey Eugene Murphy Jr. was wrongfully arrested and jailed after a facial recognition program misidentified him as a suspect in the 2022 armed robbery of a Sunglass Hut store. Authorities eventually confirmed that Murphy had no connection to the crime, but only after he had been assaulted in jail, leaving him with permanent injuries (Speakman, 2024). 


How to protect your facial rights


  • Support facial rights and biometric data initiatives

Globally, human rights organizations like the ACLU and Big Brother Watch are fighting to end the unjust surveillance of citizens through facial recognition technology. Visit their websites to learn more about how you can support their initiatives and protect your biometric data. 


When sharing photos of your family online, you may not realize how harmful this seemingly mundane action can be. Once a photo makes it online, anyone can share it elsewhere and even manipulate the content. ImageShield is a photo monitoring service that allows you to track whether the photos you’ve shared on Facebook, Instagram, and elsewhere are being misused.


Get your free ImageShield report today on the security of the photos you’ve shared on Facebook and Instagram. Visit our blogs for more information on media literacy and how to protect yourself and your family from photo abuse.



References

ACLU (2020). ACLU Sues Clearview AI.

Das (2024). Facial recognition cameras in supermarkets 'targeted at poor areas' in England.

Gillis (2022). What is Facial Recognition?

Speakman (2024). Faulty Facial Recognition Technology Led to Man's False Arrest. Then He Was Sexually Assaulted in Jail: Suit.

Photo by cottonbro studio from Pexels

