Taking deepfakes seriously
Early this year, I wrote about how the world has entered the deepfake era, but I struggled to find local examples that would make the issue relatable to Philippine readers. Just seven months later, I find myself revisiting the topic, this time with an abundance of local cases that illustrate how quickly deepfakes are infiltrating the online space.
For context, deepfakes are images, audio clips, or videos that have been manipulated using artificial intelligence (AI) to misrepresent people or events. Just last week, a doctor friend of mine discovered advertisements featuring falsified videos of her selling bee venom products as a cure for gout. This kind of deceptive marketing is not only unethical but also dangerous, given the unregulated nature of these “snake oil” treatments. Unsuspecting individuals could face serious health risks from consuming these unvetted products.
Even President Marcos has had his share of deepfake-related challenges in recent months. The most notable incident was a video clip supposedly featuring the voice of Mr. Marcos urging military action against China in response to the ongoing tensions in the South China Sea. The Presidential Communications Office immediately debunked the clip, stating that the President had issued no such directive. After investigating its origins, the government said it was most likely the work of a “foreign actor” seeking to undermine the President’s foreign policy.
Women, however, continue to be the most vulnerable victims of deepfakes. In my previous article, I shared that 96 percent of deepfakes being created were non-consensual pornographic content featuring female celebrities and ordinary women. What is even more disturbing is how some producers of deepfake content show no remorse at all and are bragging publicly about what they have done. One X user boasted about having made lewd content featuring himself and a member of the BINI girl group. Star Magic has since released a statement saying it will take legal action against these malicious content creators.
But a woman who does not have the same influence and powerful backing may not be so lucky. In a Reddit chat group, a disgruntled ex-boyfriend shared how he had used AI to manipulate pictures of his former girlfriend into revenge porn, saying no one would question its authenticity because the images came from him.
With the growing accessibility of the technology for creating deepfakes, experts have also warned individuals and businesses about a surge in identity theft, noting that certain facial recognition systems can be easily fooled by deepfakes into granting access to someone’s personal and financial information.
For those who find themselves featured in deepfake content, civil society groups fighting digital disinformation recommend some key steps: 1) Document all evidence, including screenshots of the manipulated content and any takedown requests; 2) Report the post to the social media platform administrators (and ask people in your network to do the same) as well as to the authorities. Another friend who has been victimized recommends getting in touch with the National Bureau of Investigation’s Cybercrime Division through the hotline and email indicated on its website; 3) Consult a lawyer and/or an advocacy group to see what civil or criminal actions you can pursue. The Department of Information and Communications Technology (DICT) has acknowledged that the Philippines currently lacks specific regulations addressing deepfakes, but that Article 154 of the Revised Penal Code, which penalizes the publication of false news, can be applied to these cases for now.
However, band-aid solutions will not last long. As AI-driven technologies evolve and become more accessible, their use for unethical ends will also become more sophisticated. Consider for a second the speed at which AI is growing: Instagram took about two and a half years to reach 100 million users. In contrast, ChatGPT reached 100 million monthly active users in just two months.
While the DICT’s AI road map is a good start, we need more proactive government directives that foster the conditions for urgent, coordinated action among policymakers, industry leaders, academia, and civil society, so that we have policies, ethical frameworks, and proper guardrails that balance innovation with responsible use. A good source of best practices is Estonia, which appointed a chief data officer to drive data-informed AI adoption and policy. Under the leadership of data governance and data science expert Dr. Ott Velsberg, Estonia introduced key initiatives such as integrating data literacy from kindergarten through higher education and building the capacity of every public sector organization to develop its own AI and data strategy.
The question is not whether we will adapt to AI, but how well we will manage its risks while harnessing its potential. The proliferation of deepfake cases underscores the urgency of action.