As technology develops and becomes more sophisticated, the ability to manipulate our experience of the world becomes more layered, detailed, and believable.
Virtual reality, augmented reality, 3D, and computer-generated films serve to enhance the way we engage with art and entertainment, and largely in a positive way. The flipside of this coin occurs when the world we experience can be altered in a negative way via technology. When, for example, statements from political leaders can be altered or even completely invented, or footage of actual events can be doctored to reflect certain agendas. In these instances, it’s not hard to see the complications and questions that arise from using technology in this way.
Enter deepfakes, a relatively new way technology can manipulate the world around us and those in it for specific political, social, or cultural gain. Let’s first define what a deepfake is and then move onto the issue of how today’s tech professionals can help deter this malicious use of technology.
The dictionary definition of a deepfake is: “an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said.”
In other words, a deepfake is a digitally created person, place, or thing represented in a visual manner that either alters or completely invents a moment in time to alter the audience’s perception of reality.
Sounds heavy, right? Perhaps a better way to understand deepfakes is to compare them to caricature drawings: exaggerated, though accurate enough portraits that capture the essence of an individual in such a way that the drawing itself becomes believable enough to cement an understanding in the observer’s mind.
Perhaps the best way to truly understand and define a deepfake is to briefly look at a few popular examples in recent years:
- Tom Cruise deepfake: This viral video published on TikTok features Tom Cruise working as a custodian and shunning people with unsanitary behaviors and habits. Is this a deleted scene from an upcoming film? An online goof pulled by Cruise himself to promote a movie? No and no. It was in fact a deepfake created by the TikTok account deeptomcruise, engineered using a variety of imaging, sound-capture, and animation technology that appeared so genuine it sparked investigation and media reports.
- Jordan Peele as Barack Obama deepfake: In 2018, former President Barack Obama appeared to deliver a stern on-video warning about misinformation and misdirection in the news media and their potentially damaging ramifications. The video went viral, and it wasn’t long before technology and authentication professionals noticed a few oddities in how the Commander-in-Chief sounded and appeared on-screen. Soon after, comedian Jordan Peele revealed the video was in fact a deepfake, something of a PSA to educate the public on how easily such videos can be created and the power such misrepresentations can have in shaping public opinion.
- Everyone is Dr. Phil deepfake: This deepfake paints with broader strokes and is a bit sillier than the other two examples, but it demonstrates that the technology to create deepfakes is so powerful and pervasive that even tongue-in-cheek efforts can reach a wide audience. The video, published in May 2020, has 3 million views and signals that there is something wholly watchable, intriguing, and entertaining about deepfake technology.
Dangers of deepfake misuse
Our definitions and examples of deepfake technology were relatively benign in nature. They were used for entertainment purposes or to expose how misinformation is dangerous in informing public opinion. But what about the potential for deepfake technology to be used in a harmful or dangerous way? How can this technological rendering inflict real hurt on individuals, cultures, or societies?
A May 2020 article in Forbes addressed these exact questions. Its thesis is that deepfake technology (like many technologies) is evolving at a rate that surpasses the human ability to distinguish fact from fiction, the real from the unreal. In other words, it’s difficult for a person’s internal deciphering mechanisms to keep pace with how convincing a world or experience technology can create:
“While impressive, today’s deepfake technology is still not quite to parity with authentic video footage—by looking closely, it is typically possible to tell that a video is a deepfake. But the technology is improving at a breathtaking pace. Experts predict that deepfakes will be indistinguishable from real images before long.”
The article elaborates on several potentially damaging scenarios arising from the spread of deepfake technology. Perhaps the most dire stems from the fact that the ability to share information with a swipe or a click is at its apex compared to any other point in history. This enhanced level of communication and information sharing is largely ungated (unchecked), as it is often crowd-sourced: videos from witnesses, observers, or commentators responding in real time. For those looking to exploit deepfake technology in a negative way, this is essentially an open field, according to the Forbes article:
“It does not require much imagination to grasp the harm that could be done if entire populations can be shown fabricated videos that they believe are real. Imagine deepfake footage of a politician engaging in bribery or sexual assault right before an election; or of U.S. soldiers committing atrocities against civilians overseas; or of President Trump declaring the launch of nuclear weapons against North Korea…”
It’s not hard to recognize the catastrophic consequences that deepfakes such as those outlined above could bring about.
How deepfakes work
Before we can examine how today’s technology professionals can combat deepfakes, let’s take a quick look at how deepfakes are created.
Using a combination of AI and existing video and audio recordings, deepfake technology deploys generative adversarial networks (GANs), in which two machine learning models engage in a dueling-banjos competition: one model creates a digital forgery while the other attempts to identify and thwart it. The process continues until the second model can no longer detect that the first model’s output is a fake.
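To make the back-and-forth concrete, here is a deliberately tiny numerical sketch of that adversarial loop, not a real GAN: the “generator” is just a single number it nudges toward the real data, and the “discriminator” is a simple threshold test. All names and values are illustrative assumptions.

```python
# Toy sketch of the adversarial loop behind a GAN (illustrative only).
# The "generator" learns one number; the "discriminator" flags outputs
# that stray too far from the real data. Training stops when the
# discriminator can no longer tell forgery from reality.

REAL_VALUE = 10.0          # stand-in for the real data
DETECTION_THRESHOLD = 0.1  # discriminator's sensitivity

def discriminator(sample: float) -> bool:
    """Return True if the sample looks like a forgery."""
    return abs(sample - REAL_VALUE) > DETECTION_THRESHOLD

def train_generator(initial_guess: float, learning_rate: float = 0.5) -> float:
    """Nudge the generator's output until the discriminator is fooled."""
    guess = initial_guess
    while discriminator(guess):
        # Each round, move the forgery closer to the real data
        guess += learning_rate * (REAL_VALUE - guess)
    return guess

forgery = train_generator(initial_guess=0.0)
print(f"final forgery: {forgery:.3f}")  # ends within 0.1 of REAL_VALUE
```

In a real GAN both sides are neural networks trained jointly, and the discriminator improves alongside the generator rather than staying fixed, but the stopping condition is the same idea: the forger wins when the detector can no longer tell the difference.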
According to a January 2020 article in Towards Data Science, researchers at Stanford University, Princeton University, the Max Planck Institute for Informatics, and Adobe Research conducted a study demonstrating not only how accessible current deepfake technology is, but also how relatively simple it is for someone with audio, visual, and animation experience to create a fairly authentic video or image:
“First, they scanned the target video to isolate the sounds that make up words spoken by the subject. Then they match these sounds with the facial expressions that accompany each sound. Lastly, they create a 3D model of the lower half of the subject’s face.”
The final step toward creating a passable deepfake involved editing the text transcript of the video, then engaging the software to combine the information collected in each step and construct new footage in which the subject(s) appear to speak the newly entered text.
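The steps above can be sketched as a simple pipeline. Every function here is a hypothetical placeholder returning toy data; the real study used specialized audio, vision, and rendering models at each stage, and none of these names come from an actual library.

```python
# Structural sketch of the text-based video-editing pipeline described above.
# All functions are illustrative stubs, not a real API.

def isolate_phonemes(video: str) -> list[str]:
    # Step 1: scan the target video and isolate the sounds making up each word
    return [f"phoneme:{word}" for word in video.split()]

def match_expressions(phonemes: list[str]) -> dict[str, str]:
    # Step 2: pair each sound with the facial expression that accompanies it
    return {p: f"viseme-for-{p}" for p in phonemes}

def build_lower_face_model(video: str) -> str:
    # Step 3: build a 3D model of the lower half of the subject's face
    return f"3d-model({video})"

def synthesize(face_model: str, visemes: dict[str, str], new_transcript: str) -> str:
    # Final step: render new footage matching the edited transcript
    return f"footage[{face_model} speaking: {new_transcript}]"

video = "hello world"
visemes = match_expressions(isolate_phonemes(video))
fake = synthesize(build_lower_face_model(video), visemes, "goodbye world")
print(fake)
```

The point of the sketch is the data flow: audio is decomposed, mapped to facial motion, attached to a 3D face model, and only then is the transcript edit rendered into footage.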
Technology professionals fighting deepfakes
So, now the question becomes how can today’s programmers, coders, and developers combat the propagation of deepfakes? What can today’s motivated techies do to fight this potential scourge of the digital age? As it turns out, quite a lot.
A November 2019 article in Science Focus laid out a roadmap for how governments, large corporations, and other entities vulnerable to deepfake technology are exploring whether dedicated task forces, charged with identifying deepfakes and sniffing out deepfakers before their videos go viral, are a worthwhile proposition. Experts in digital forensics at some of the country’s most renowned universities are already laying the groundwork for such an eventuality.
Professor Hany Farid teaches digital forensics at the University of California, Berkeley, and according to the article in Science Focus, he’s betting big on technology professionals as important combatants against the malicious use of deepfake technology:
“Farid is so concerned about a world leader being deepfaked that he and his team are developing a system for recognizing deepfakes of specific politicians. They’re currently using automated software to analyze the head and face movements of a handful of leaders around the world, including Donald Trump, Theresa May, and Angela Merkel, to identify unique patterns. A suspected fake video of one of these leaders can then be analyzed to see whether it matches their real-life movements.”
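The core idea behind this kind of detector can be hedged into a short sketch: build a “motion signature” for a known speaker from genuine footage, then flag videos whose signature deviates too far from it. Real systems extract head-pose and facial-movement features from video; here the feature vectors are just made-up numbers, and the threshold is an assumption.

```python
# Illustrative sketch of signature-based deepfake detection: compare a
# suspect clip's motion features against a baseline learned from genuine
# footage of the same person. Feature values below are toy data.

import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def looks_authentic(baseline: list[float], suspect: list[float],
                    threshold: float = 0.95) -> bool:
    """Flag the clip as authentic only if its motion signature
    closely matches the subject's known baseline."""
    return cosine_similarity(baseline, suspect) >= threshold

leader_baseline = [0.8, 0.1, 0.3, 0.5]    # learned from genuine footage
genuine_clip    = [0.79, 0.12, 0.31, 0.49]
deepfake_clip   = [0.1, 0.9, 0.6, 0.05]

print(looks_authentic(leader_baseline, genuine_clip))   # True
print(looks_authentic(leader_baseline, deepfake_clip))  # False
```

A production system would learn these signatures with machine learning over many hours of footage per subject, but the decision rule is the same shape: match against how the real person actually moves.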
In Amsterdam, another team of technology professionals called Deeptrace is building a software system that essentially turns deepfake technology on itself to detect whether a video or image is fraudulent. Deeptrace’s software uses the same class of algorithms found in most deepfakes to determine whether a video or image has veered from reality and to identify the subtle flaws and variances in deepfake videos.
While the technology to create more authentic deepfakes is likely to evolve, so is the ability to spot and debunk them. For techies looking to get in on the ground floor of an exciting area of coding and development while making a positive impact on the world by sorting fact from fiction, combating deepfake technology shows immense promise.
Start Your New Tech Career Today
With our digital badge program, your path toward a software development career is more accessible than ever before. Designed for those who want to advance but can’t commit to a three-month, nonstop bootcamp, the digital badge program offers the same quality content and rigorous training in a flexible schedule designed to fit your life.