How AI Deepfakes Are Targeting Children

As artificial intelligence (AI) technology continues to improve, so does the ability to create realistic synthetic photos, videos, and audio. While this technology can be used in harmless and fun ways, it can also cause real damage. One of the most harmful applications of this technology is the creation of deepfakes.

 

While AI deepfakes are a relatively new phenomenon, they are becoming more widespread every day; the proportion of deepfakes created in North America more than doubled from 2022 to 2023 (Sumsub, 2023). Unfortunately, deepfakes pose a danger to anyone who posts content online, as well as to children whose parents post content about them.

 

What are deepfakes? 

Deepfakes are synthetic photos or videos created using machine learning and AI. Deepfake technology manipulates existing photos, videos, or audio recordings of a person to produce fake media that mimics that person’s appearance or voice (University of Virginia, n.d.).

 

How are deepfakes made?

Deepfakes utilize deep learning, a form of machine learning in which an algorithm studies a set of examples and learns to create outputs that resemble them (Coursera, 2023; University of Virginia, 2023). For instance, if the algorithm is provided with photos or videos of the singer Taylor Swift, it can learn to produce artificial photos, videos, and audio clips that mimic Taylor Swift’s physical appearance and voice. The more data the algorithm is fed, the more accurately it can recreate the person it is prompted to imitate (University of Virginia, 2023).
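For readers curious about what “learning from examples” actually looks like, here is a minimal, purely illustrative sketch in Python using the PyTorch library. It trains a tiny autoencoder to reproduce its training images; random tensors stand in for real face photos, and real deepfake systems apply the same principle with far larger models and vastly more data.

# A toy illustration of "learning from examples": a tiny autoencoder that
# learns to reproduce its training images. Random tensors stand in for photos.
import torch
import torch.nn as nn

photos = torch.rand(64, 3, 32, 32)  # 64 fake 3-channel, 32x32 "photos"

# A small encoder/decoder: compress each image, then try to rebuild it.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 128),
    nn.ReLU(),
    nn.Linear(128, 3 * 32 * 32),
    nn.Sigmoid(),
)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Each pass over the data nudges the model's outputs closer to the originals,
# the same principle that lets deepfake models mimic a specific person.
for epoch in range(200):
    reconstructed = model(photos).view(64, 3, 32, 32)
    loss = loss_fn(reconstructed, photos)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

print(f"final reconstruction error: {loss.item():.4f}")

The more examples a model like this sees, the more convincing its outputs become, which is exactly why photos posted publicly online are valuable raw material for deepfake creators.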

 

Why are deepfakes dangerous? 

Photo manipulation technology has existed for years, and you may even encounter it daily through Snapchat filters that alter your face or apps that show what you would look like at 80 years old. These applications are made for lighthearted amusement, and it’s typically easy to tell that the photos have been altered (University of Virginia, 2023).

 

Deepfakes, on the other hand, are far more dangerous: AI technology can produce photos and videos so lifelike that people cannot tell whether they are real or fake. Deepfake technology has been used to create synthetic pornographic content, including revenge porn, sexually explicit videos of both celebrities and ordinary people, and child pornography (University of Virginia, 2023).

 

It can also be used to spread misinformation through audio or video clips in which politicians, celebrities, or other public figures appear to say or do things they never said or did. Deepfake technology thus has the potential to seriously damage a person’s reputation and to inflict lasting mental and emotional harm on its victims (University of Virginia, 2023).

 

How can deepfakes be used to harm children? 

Deepfake technology has also been used to create synthetic child sexual abuse material (CSAM). Cybercriminals can collect photos of your child that you’ve posted to social media and use them to generate realistic pornographic photos or videos of your child. Though deepfake technology is relatively new, many cases of deepfake CSAM have already been reported. In a 2023 public service announcement, the FBI warned that malicious actors were creating deepfake CSAM to extort victims for money and sexual content.

 

Deepfakes can harm children of all ages. In October 2023, several female students at Westfield High School in New Jersey became the victims of sexually explicit deepfakes. Reportedly, nude photos bearing the girls’ faces were shared among a group of their peers on Snapchat (Hadero, 2023). This is only one of many such incidents, and it highlights the need for parents to be aware of the issue.

 

How to Protect Your Children Against Deepfakes

 

  • Limit what you post online

Once you post photos, videos, or other information about your children online, that content is available forever, even if you later take down the posts. Be wary about what you share on the internet, even if it seems harmless. When in doubt, don’t post it at all.

 

  • Restrict public access to your social media accounts

If you don’t want to leave social media entirely, take advantage of the privacy settings offered by the platforms you post on. Most social media platforms let you set your account to private, meaning only approved followers can view your profile and posts.

 

  • Exercise caution when communicating with strangers online

If you don’t know someone in real life, it’s safest not to interact with them online. People can hide behind made-up identities on social media to pry information out of you, so always think twice before engaging with a stranger online.

 

The evolution of AI is making it easier every day to create realistic-looking manipulated photos. Once you put photos of yourself or your child on the internet, it’s difficult to know whether cybercriminals are taking those photos and using them for malicious purposes.

 

With ImageShield, a photo monitoring service that lets you track the photos you share online, you can find out whether the photos you’ve posted on Facebook, Instagram, or elsewhere are being misused.

 

Get your free ImageShield report today to see how secure the photos you’ve shared on Facebook and Instagram are. Visit our blog for more information on media literacy and how to protect yourself and your family from photo abuse.



Resources:

Coursera. Deep Learning vs. Machine Learning: A Beginner’s Guide

Hadero. Teen girls are being victimized by deepfake nudes. One N.J. family is pushing for more protections

FBI. Malicious Actors Manipulating Photos and Videos to Create Explicit Content and Sextortion Schemes

Sumsub. New North America fraud statistics: forced verification and AI/deepfake cases multiply at alarming rates

University of Virginia. What the heck is a deepfake?

Photo by Phil Nguyen from Pexels

 
