Are we ready for deepfakes? | Inquirer Opinion


04:20 AM October 25, 2019

A YouTube clip titled “You Won’t Believe What Obama Says In This Video!” has racked up over 6.8 million views since it was posted a year ago. In it, Barack Obama warns viewers about the dangers of misinformation on the internet. The message starts out innocuously enough until, a few sentences in, the former US president utters some astonishing words: “President Trump is a total and complete dipsh-t.”

Incredibly, before our eyes, these words are coming out of Obama’s mouth in his distinctive baritone, with his recognizable head tilts and hand gestures. But the truth, as the video quickly reveals, is that this uncanny Obama address is fake.


A “deepfake,” to be exact. The term refers to the “deep learning” artificial intelligence (AI) technology that can seamlessly superimpose one video image on top of another, creating an edit that is almost impossible to spot on the fly. It’s like photoshopping someone’s face onto another person—but on video. In the Obama clip, it is really actor-director Jordan Peele who is talking, but with deepfake software, Obama’s face is “applied” onto the original footage, creating a chillingly convincing imitation.

Deepfakes are mind-boggling and even entertaining at first glance. But since the technology has spread from visual effects studios to the hands of any amateur with a deepfake app, serious concerns about these manipulated media have surfaced.


For one, the vast majority of deepfakes are pornographic videos featuring women who were never involved in the original recordings. The cybersecurity company Deeptrace found 14,678 deepfake videos online as of October 2019, and 96 percent of them were pornographic. The most common are those in which the face of a female celebrity is placed on another woman’s body in the footage.

It’s not just famous people who get victimized by deepfake edits. Last June, HuffPost interviewed “women who have been digitally inserted into porn without their consent.” The report revealed that there are porn forums where users can pay to get fabricated videos featuring their “coworkers, friends and exes.”

Other articles from as early as 2017 raised the horrifying possibility of deepfakes being used for revenge porn and nonconsensual porn—and it is clearly happening now.

As hair-raising as it is to know that anyone can now be edited into pornography, this is not the end of what deepfakes can do. They are also ripe for political weaponization.

At a time when the voting public is highly influenced by what they see on social media, without even pausing to doubt its authenticity and context, realistic fake videos can be enough to sway or deceitfully solidify opinion. Just this May, US President Donald Trump tweeted a manipulated video of US House Speaker Nancy Pelosi in which she appears drunk and stammering. The video has been viewed 6.3 million times so far.

Technology experts also warn that deepfakes can be used to conduct highly effective phishing and other cyberattacks. In July, the BBC reported that deepfake technology was used in multiple cases to imitate the voice of chief executives, tricking senior financial managers into transferring money.

Deepfakes are like a sci-fi nightmare from dystopian movies, except these forgeries are very real now and proliferating by the minute. Is the Filipino public ready to combat them?


It’s hard to believe so, considering that so many of us are still fooled by badly photoshopped pictures and fake headlines that are too painfully obvious. Even in my age group—the generation that came of age just as Facebook was born—people are still quick to hit “share” on bizarre health claims and novelty stories from dubious websites.

But even technologists and governments across the globe are still scrambling to address the dangers of deepfakes. Tech author and NYU professor Kristina Libby posits that combating deepfakes “has to be driven by individuals until governments, technologists, or companies can find a solution.”

Government initiatives are needed to protect us from dodgy content creators and deepfake platforms. Such fraudsters need to be held accountable.

But at the same time, it’s our gullibility and lack of caution that feed these fakes. So once again, it comes down to that old adage: Think before you click. Or, more precisely, think before you believe. Question what you see, hear, and read. Verify with reliable sources. As humans with functioning brains, we’re only one Google search away from debunking false content—even a convincing video made by powerful AI.

[email protected]
