In mobilizing for the COVID-19 vaccination campaign, Indonesia’s health workers faced not only the logistical challenge of reaching 270 million people spread across the world’s largest archipelagic nation, but also vaccine hesitancy driven by a torrent of misinformation, known as the “infodemic,” which spread in parallel with, and as fast as, the coronavirus itself. The more than 2,000 COVID-19-related hoaxes that Indonesia’s government detected between 2020 and 2022 make up just a small portion of the dangerous content now flooding online spaces in Indonesia and around the world.
Hate speech — which undermines social cohesion and tolerance — is a growing problem as malicious actors seek to exploit digital tools to spread racism, violent misogyny, anti-Semitism, anti-Muslim or anti-Chinese sentiments, and other forms of intolerance.
Increased screen time during the pandemic means that more of us are exposed to these currents than ever before. The proliferation of hate speech online — and the complexities involved in policing it — is one of the factors behind the adoption of the United Nations General Assembly resolution 75/309 in 2021, proclaiming June 18 as International Day for Countering Hate Speech, to be observed annually.
Stopping the spread of hate speech is not only the responsibility of governments, regulators, and the tech companies behind social media giants. It falls on all of us to confront the scourge of online and offline hate.
“Hatred is a danger to everyone,” UN Secretary-General António Guterres said in a recent press release, “so fighting it must be a job for everyone.” Hate speech represents a rising threat to the Sustainable Development Goals (SDGs), particularly SDG 16 on peace, justice, and strong institutions. It is an extreme form of pejorative and discriminatory language or behavior that attacks a person or a group of people based on their identity.
Historically, malicious actors have spread hate speech to create distance between vulnerable populations and potential perpetrators, lowering the psychological barriers to committing atrocities. Indeed, hate speech is not a new phenomenon. The Holocaust in the 1940s, the 1994 genocide against the Tutsi in Rwanda, and the 1995 genocide in Srebrenica were all preceded by hate speech and incitement to violence. What has changed is the ease and speed with which hate speech can spread online. Last year, the UN Educational, Scientific and Cultural Organization (Unesco) estimated that more than 10,000 posts are shared on social media every second. And while tech companies use a mixture of human moderators and artificial intelligence to stamp it out, their efforts have sometimes fallen tragically short.
The hate speech and violence directed against Myanmar’s Rohingya people on Facebook, and the manufacturing of Islamophobia via WhatsApp groups in India, are two recent examples. Ethnically and religiously charged hate speech was also a factor during recent presidential and regional elections in Indonesia. One challenge is that automated moderation systems often lack the nuance to detect cultural context. For instance, the word for dog can be used as a pejorative in Indonesian, whereas it is innocuous in many other languages. Furthermore, tech companies invest less in training human moderators in developing countries than in the Global North, especially for less commonly spoken languages, making it easier for hate speech to go undetected or be misinterpreted.
Another challenge is that overzealous content moderation has erased evidence of human rights violations and war crimes in Syria and other countries. Meanwhile, authoritarian governments increasingly use vaguely worded cybercrime and terrorism laws to crack down on the political dissent and free speech that are fundamental to democracy. To address some of these challenges, Unesco’s SocialMedia4Peace project was launched in January 2021 in three pilot countries: Bosnia and Herzegovina, Indonesia, and Kenya. In each country, Unesco strengthens societies’ resilience and response to harmful content spread online — in particular hate speech and disinformation — while protecting freedom of expression and promoting peace through digital technologies. Research is essential to understand the existing legal frameworks for addressing harmful online content, and to define effective solutions at the local level.
Indeed, all of us — not just academics and regulators — must contribute to stopping the spread of hate. Effective responses include pausing and fact-checking social media posts before sharing them, reporting discriminatory or pejorative language, challenging hate speech, or offering support to its intended targets.
—THE JAKARTA POST/ASIA NEWS NETWORK
* * *
Valerie Julliand is the United Nations resident coordinator in Indonesia.
The Philippine Daily Inquirer is a member of the Asia News Network, an alliance of 22 media titles in the region.