Photo abuse runs rampant in our world today, yet it is not a new phenomenon by any means. Media manipulation dates back at least to ancient Rome, where portraits and names were chiseled off stone records in an attempt to erase a person's identity from history altogether. The practice continued through the centuries in different forms, from the censorship and photo retouching ordered by government officials like Joseph Stalin in the 20th century, to our modern-day technology that puts Photoshop-level capabilities at our fingertips (Somers, 2020).
With just a few taps on a smartphone, the average person can now remove acne spots, change the shape of their body, or erase people from photos entirely, and that is just the basics. Photos, videos, and audio can all be doctored to reflect a false perception of reality; in other words, something that never happened can be edited to seem like it did, and the results can be alarmingly difficult to detect. One type of media manipulation that creates this false perception of reality is the deepfake.
What are deepfakes?
A deepfake is a form of synthetic media, meaning media that has been altered or fabricated to reflect something that is not real (Somers, 2020). Typically, a deepfake is a doctored video or a piece of standalone audio created with artificial intelligence (AI) software (Cook, 2019). The term is believed to have been coined in 2017 after a Reddit user by the name of "deepfakes" shared fake pornographic videos using face-swapping technology (Somers, 2020). It is a blend of "deep learning" and "fake," a nod to the deep learning algorithms used to create deepfakes (Johnson, D. & Johnson, A., 2023).
Deep learning algorithms are a form of technology that can teach itself to solve problems when presented with a large amount of data. With this capability, the technology can be trained "to create fake content of real people" using content or data that already exists (Johnson, D. & Johnson, A., 2023).
The process of creating a deepfake sounds complicated, but the program does most of the work. To create a deepfake, the AI software needs a target (base) video and a collection of videos of the person to analyze. The program studies how the person looks across the provided footage to predict how they would appear under different conditions and from different angles. It then inserts the person into the target video by matching common features (Johnson, D. & Johnson, A., 2023).
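To make that process concrete, the classic face-swap setup pairs one shared encoder (which learns pose, expression, and lighting features common to both people) with a separate decoder per person. Below is a toy sketch of that data flow in Python: the weights are random and untrained, and every name and dimension is an illustrative assumption, not the API of any real deepfake tool.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(n_in, n_out):
    """A single dense layer with random, UNTRAINED weights (illustration only)."""
    W = rng.normal(0.0, 0.1, (n_in, n_out))
    b = np.zeros(n_out)
    return lambda x: np.tanh(x @ W + b)

# Classic deepfake autoencoder layout: one shared encoder compresses any
# face into a "pose" representation; each person has their own decoder
# that reconstructs that person's face from the shared representation.
FACE_PIXELS = 64 * 64   # a flattened 64x64 grayscale face crop
LATENT = 128            # size of the compressed pose/expression code

encoder   = layer(FACE_PIXELS, LATENT)
decoder_a = layer(LATENT, FACE_PIXELS)   # trained (in reality) on person A
decoder_b = layer(LATENT, FACE_PIXELS)   # trained (in reality) on person B

def face_swap(frame_of_a):
    """Encode a frame of person A, then decode with person B's decoder.
    After real training, this renders B's face in A's pose and lighting."""
    latent = encoder(frame_of_a)
    return decoder_b(latent)

frame = rng.random(FACE_PIXELS)   # stand-in for one real video frame
swapped = face_swap(frame)
print(swapped.shape)              # (4096,) -- one swapped face crop
```

Real tools repeat this per frame across an entire video and train both decoders on thousands of face crops; the sketch only shows why a shared encoder is what lets one person's expressions drive another person's face.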
Deepfakes vs Cheapfakes
It’s important to note that not all media manipulation is a deepfake. Another type is the cheapfake: content altered with conventional editing by a human, as opposed to AI. This can be something as simple as slowing down the playback speed of a video, which happened to U.S. Speaker of the House Nancy Pelosi in an attempt to make her appear inebriated.
What separates a cheapfake from a deepfake is how the final product was made. If the content was AI-generated with no further human manipulation, it is a deepfake; if it was altered by hand, it is a cheapfake. Either way, both have the potential to rapidly spread misinformation before they are flagged and removed from social media platforms.
How are deepfakes used and what are the dangers of deepfakes?
Now, you may be wondering how any of this relates to you. After all, the most common targets of deepfake incidents are the rich and famous. Yet as deepfakes become more prevalent in our media, they increasingly affect the average person as well. Common uses for deepfakes include revenge pornography, blackmail, reputational harm, false evidence, fraud, misinformation, and political manipulation.
- AI-generated Pornography: The use of AI programs to create non-consensual and revenge pornography is a growing danger to the public, so much so that the FBI released a public service announcement in June 2023 on the matter, warning of the increasing use of AI-generated content for "Explicit Content Creation," "Sextortion," and "Harassment." Since the Reddit user "deepfakes" posted deepfake pornography featuring the faces of prominent celebrities in 2017, AI-generated, non-consensual pornography has skyrocketed. A 2019 report by Deeptrace estimated that 96% of deepfake videos online were pornographic, with the majority of those videos targeting women (Johnson, D. & Johnson, A., 2023). The same report counted more than 14,000 deepfake videos online, roughly double the number found in 2018 (Somers, 2020).
- Blackmail and Reputational Harm: Deepfakes can seriously damage a person's reputation and be used as a tool for blackmail. For example, a deepfake can portray a person lying, saying something inappropriate, or participating in illegal activity (Barney, 2020). A deepfake could mimic the voice of a child to threaten and extort a parent, which happened to at least one family in early 2023. Or it could portray a person making racist and threatening statements they never made, which happened to one principal in New York in a deepfake created by students at the school (Johnson, D. & Johnson, A., 2023).
- False Evidence: Deepfakes can be created with the intent to fabricate videos and audio during legal proceedings to portray a false reality. This fabrication of evidence can significantly impact a legal case as it alters factual information used to determine a court’s ruling on guilt, innocence, and the severity of punishment (Barney, 2020).
- Fraud: Deepfakes are also used as an impersonation tactic to commit fraud. With a convincing deepfake, criminals can obtain personal information such as credit card numbers, bank account details, Social Security numbers, and more. This can directly impact your finances by draining your accounts, ruining your credit, or filing your federal and state taxes in your name to steal your refund (Barney, 2020).
- Misinformation and Political Manipulation: All other uses aside, misinformation and political manipulation are likely how deepfakes affect the average person most on a day-to-day basis. Deepfake videos are commonly used to rapidly spread misinformation that sways public opinion of politicians and trusted news sources, and are sometimes lumped in with "fake news." Deepfakes have the potential to meddle in elections and even warfare by spreading misinformation and propaganda (Barney, 2020).
In 2018, a Belgian political party circulated a video of Donald Trump giving a speech calling on "Belgium to withdraw from the Paris Climate Agreement" (Johnson, D. & Johnson, A., 2023). Neither the video nor the speech was real; it was a deepfake. More recently, a deepfake of Ukrainian President Volodymyr Zelenskyy circulated online in 2022, portraying the country's leader asking his troops to surrender. The video was later revealed to be a deepfake, yet it still caused confusion and put many lives at risk during wartime in Ukraine (Barney, 2020). Deepfake videos can also cause non-political chaos, such as portraying the CEO of a major company announcing company-wide layoffs, which could lead to a stock market crash (Somers, 2020).
Unfortunately, deepfakes don’t even have to be well made to cause harm. Footage that is just convincing enough for viewers to recognize a person saying or doing something can leave a lasting impression, whether it is fake or real. Beyond the dangers above, another concern is an increase in plausible deniability: when people know the internet is full of misinformation and deepfakes, it becomes easier to dismiss real events or factual information as fake (Somers, 2020).
Deepfakes are even beginning to affect live settings such as Zoom meetings and phone calls. Using deepfake technology, a person could mask their identity during remote job interviews, college exams, and even visa applications. Reporters at Insider have dealt with AI-generated scams disguised as real sources (Johnson, D. & Johnson, A., 2023). The potential dangers of deepfakes are wide-ranging and continue to grow every day.
Despite all the dangers surrounding deepfakes and media manipulation in general, deepfakes themselves are still largely legal. To be illegal under current laws, a deepfake would generally have to violate defamation or hate speech laws, or contain child pornography (Barney, 2020).
Ultimately, with very few legal protections against deepfakes or other forms of media manipulation, it is up to individuals to analyze the content they encounter. That is a tall order: the average person can struggle to keep their media literacy up to date as the technology rapidly advances.
How to spot deepfakes
Spotting deepfakes can be difficult if you don’t know what to look for. Here are some details to look out for when determining if a piece of media is a deepfake or not:
- Look for blurry patches and other irregularities in a photo or video: Deepfakes are good at editing, but they're not perfect. Watch for details that seem off in a person's face, body, or hair, or for excessively blurry areas that look unnatural.
- Check for unnatural lighting in a photo or video: One clear giveaway of a deepfake is the lighting. Deepfake algorithms often keep the lighting of the clips they were trained on, which may not match the lighting in the target video, and realistic lighting is difficult to fake.
- Determine whether the audio aligns with the visuals: A carelessly made deepfake will sometimes have audio that does not match the visuals, such as sound that doesn't line up with the movement of a person's mouth. If the lip-syncing is off, the media may have been manipulated.
- Check your sources: An easy way to determine whether a piece of media is reliable is to vet its source. Often a quick online search can confirm a source's validity or reveal whether a specific piece of media was manipulated. One tool researchers and journalists use to find the original source of a photo is reverse image search. Once you find the original, ask whether the source makes sense (Johnson, D. & Johnson, A., 2023).
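Reverse image search works by comparing compact "fingerprints" of images rather than the raw pixels. Below is a minimal sketch of one such fingerprint, an average hash, in plain Python: similar images produce similar hashes, so a large difference between an original photo and a circulating copy hints that the copy was altered. The synthetic gradient "photo" and the function names are illustrative assumptions; real services use far more robust fingerprints.

```python
def average_hash(pixels, size=8):
    """pixels: 2D list of grayscale values (0-255). Returns a 64-bit string.
    Downscale to size x size by block-averaging, then set each bit to 1
    if that cell is brighter than the overall mean."""
    h, w = len(pixels), len(pixels[0])
    cells = []
    for i in range(size):
        for j in range(size):
            block = [pixels[y][x]
                     for y in range(i * h // size, (i + 1) * h // size)
                     for x in range(j * w // size, (j + 1) * w // size)]
            cells.append(sum(block) / len(block))
    mean = sum(cells) / len(cells)
    return "".join("1" if c > mean else "0" for c in cells)

def hamming(a, b):
    """Number of differing bits between two hash strings."""
    return sum(x != y for x, y in zip(a, b))

# A synthetic 32x32 "photo" (a brightness gradient) and a tampered copy.
original = [[(x + y) * 4 for x in range(32)] for y in range(32)]
tampered = [row[:] for row in original]
for y in range(8):            # paste a bright 8x8 patch over one corner
    for x in range(8):
        tampered[y][x] = 255

print(hamming(average_hash(original), average_hash(original)))  # 0
print(hamming(average_hash(original), average_hash(tampered)))  # 4
```

An unmodified copy hashes identically (distance 0), while the pasted patch flips the four hash bits covering that corner; in practice a small distance means "same photo, possibly recompressed" and a large one means "different or heavily edited image."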
Safeguard your photos with ImageShield
Photo abuse is rampant and will only continue to negatively impact our lives if we don't take the necessary steps to protect our privacy. At ImageShield, we believe everyone has the fundamental right to protect their photos online from abuse or use without consent.
Sources

Barney, N. (2020, July 28). Deepfake AI (deep fake). WhatIs.com. Retrieved August 28, 2023, from https://www.techtarget.com/whatis/definition/deepfake
Cook, J. (2019, June 23). Here's What It's Like To See Yourself In A Deepfake Porn Video. HuffPost. Retrieved September 11, 2023, from https://www.huffingtonpost.co.uk/entry/deepfake-porn-heres-what-its-like-to-see-yourself_n_5d0d0faee4b0a3941861fced
Johnson, D. & Johnson, A. (2023, June 15). What are deepfakes? How fake AI-powered audio and video warps our perception of reality. Business Insider. Retrieved August 28, 2023, from https://www.businessinsider.com/guides/tech/what-is-deepfake
Somers, M. (2020, July 28). Deepfakes, explained. MIT Sloan School of Management. Retrieved August 28, 2023, from https://mitsloan.mit.edu/ideas-made-to-matter/deepfakes-explained
Photo by OTAVIO FONSECA: https://www.pexels.com/photo/photo-of-computer-setup-4665064/