Deepfake era
Undercurrent

04:20 AM February 12, 2024

When actor Paul Walker died in a car crash in 2013, many wondered how his character in the long-running “Fast & Furious” franchise could be carried forward. Instead of canceling the movie, the production team scanned the faces of Walker’s brothers as a point of reference, asked them to perform his scenes, and then replaced their faces with computer-generated imagery (CGI). “Resurrecting” Paul Walker turned out to be a good idea: Fans felt they were given the chance to say a proper goodbye, and the movie became a box office hit.

The technology that made such a tribute possible has evolved, becoming more accessible and, in many ways, more problematic. In March 2023, Pablo Xavier, a 31-year-old construction worker from Chicago, made the news when an artificial intelligence (AI)-generated photo of Pope Francis wearing a white puffer jacket went viral on social media worldwide, with many people thinking the image was real. Xavier explained that he had just been playing with the AI tool Midjourney to create something funny. He shared his worry and shock at how easily people believed the image was real “without questioning it,” and how quickly it was appropriated for various agendas, including criticizing the Catholic Church’s supposed extravagance.

Widespread fake news has blurred our reality enough, but now we find ourselves contending with an even more challenging problem. Deepfakes are synthetic or falsified media created with AI and machine learning technologies to make it look like someone is saying or doing something they never actually said or did. These can be images, audio clips, or videos manipulated to carry the target person’s likeness: their appearance, voice, and mannerisms.

Currently, the biggest victims of deepfakes are women. Research shows that around 96 percent of deepfakes on the internet are nonconsensual pornography featuring celebrities and ordinary women. Just recently, Taylor Swift fans worldwide clamored to take down AI-generated lewd images of the singer circulating online. The explicit posts received more than 45 million views before X (formerly known as Twitter) was able to take them down.

Deepfakes can also pose serious security threats by impersonating public figures or key officials. Earlier this month, a finance employee in Hong Kong transferred over $25.6 million to fraudulent accounts after being instructed by their company’s supposed chief financial officer via video call. Hong Kong police confirmed that the employee had in fact been meeting with scammers using a sophisticated deepfake.

Convincing and hyperrealistic deepfakes are still quite difficult to make, but technology always improves at an accelerated pace. Experts warn that in as little as two to 10 years, everyone might have the capacity to create a believable deepfake from their smartphone. This rising threat has spurred calls for legal, technological, and societal solutions, including better detection techniques, legal guidelines for using synthetic media, and increased public awareness. Recently, the Department of Information and Communications Technology warned of a potential surge in deepfakes ahead of the 2025 midterm elections, urging lawmakers to pass laws regulating misleading AI-generated videos.

This is an urgent and necessary step. However, there seems to be a lack of clarity on how to go about it. In the United States, for example, congressional leaders had proposed mandating watermarks to identify which videos are real and which are AI-generated. Unfortunately, Professor Soheil Feizi, a leading figure in AI detection from the University of Maryland, debunked this suggestion, saying there is no effective technology to do it. Experts have instead called on social media companies to keep investing in their content moderation capabilities and to change the way their algorithms incentivize the spread of sensationalized fake content.

It is also worth noting that while deepfakes could be used to falsely incriminate a politician, they could just as easily serve as a convenient excuse to cast doubt on real incriminating videos. In a world where everything could be faked, everyone gains plausible deniability. For instance, a politician caught on video buying votes during campaign season could simply claim the clip was a deepfake. This presents another challenge for the media and election watchdog groups in addressing disinformation.

It is crucial to educate both ourselves and others about deepfakes. This should happen alongside training initiatives focused on basic digital literacy practices, like verifying sources and refraining from sharing a post unless we are certain of its authenticity. More importantly, we need to keep reminding the public not to be passive receivers of online information. Critical thinking is an active skill that must be developed and constantly strengthened. At a time when the line between fact and fiction is not easily discernible, it is our most effective weapon.
