AI and ethics in journalism
Kathmandu—Technology has long been a force shaping the landscape of journalism, from the telegraph in the 1840s to the recent surge of generative artificial intelligence (AI) tools such as ChatGPT, DALL-E, Bard, and others.
In recent years, media organizations have been experimenting with AI to automate news production, personalize content delivery, and enhance data-driven insights. In other words, AI tools signal a new era of journalism with efficiency, speed, and content diversity.
Vast and diverse. The scope of AI in journalism is vast and diverse, ranging from automating news production to exploring archived content. Using natural language generation algorithms, AI can generate news articles based on structured data, such as sports scores, financial reports, and weather forecasts. This can save time and resources for journalists and enable media organizations to produce more content with limited means.
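To make the idea concrete, here is a minimal Python sketch of template-based news generation from structured data. The match record, team names, and sentence template are invented for illustration and do not reflect any outlet's actual system:

```python
# Minimal sketch: natural language generation by filling a fixed sentence
# template from a structured sports record. All data here is illustrative.

def generate_match_report(record: dict) -> str:
    """Turn a structured match record into a one-sentence report."""
    # Order the two teams by score, highest first.
    winner, loser = sorted(record["teams"], key=lambda t: -t["score"])
    return (
        f"{winner['name']} beat {loser['name']} "
        f"{winner['score']}-{loser['score']} on {record['date']}."
    )

report = generate_match_report({
    "date": "Saturday",
    "teams": [
        {"name": "United", "score": 1},
        {"name": "Rovers", "score": 3},
    ],
})
print(report)  # Rovers beat United 3-1 on Saturday.
```

Real newsroom systems are far more sophisticated, but the principle is the same: structured inputs, templated or model-generated prose, and no human in the loop unless one is deliberately added.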
However, using AI in news production raises ethical concerns, such as the risk of plagiarism and editorial bias. The Los Angeles Times, for example, used an AI system called Quakebot in 2014 to publish a news article within three minutes of an earthquake in Southern California. While this enabled the paper to provide timely and accurate news, it also raised concerns about the lack of human supervision in news production.
Audience preferences. AI can also be used to curate news content based on audience preferences and needs. Using machine learning algorithms, it can personalize news delivery for individual users, such as recommending articles based on their reading history, location, and interests. While this enhances user engagement and loyalty, it also raises concerns about the risk of echo chambers where users are exposed only to content that confirms their existing beliefs and biases.
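A toy version of such personalization, sketched in Python under invented article titles and tags, shows both the mechanism and the echo-chamber risk the paragraph above describes:

```python
# Minimal sketch of content-based personalization: rank candidate articles
# by how often their tags appear in the user's reading history.
# Titles, tags, and history are invented for illustration.
from collections import Counter

def recommend(history_tags, candidates, k=2):
    """Return the k candidate titles best matching the user's history."""
    profile = Counter(history_tags)
    scored = sorted(
        candidates,
        key=lambda a: -sum(profile[t] for t in a["tags"]),
    )
    return [a["title"] for a in scored[:k]]

history = ["politics", "politics", "economy"]
articles = [
    {"title": "Budget vote looms", "tags": ["politics", "economy"]},
    {"title": "New stadium opens", "tags": ["sports"]},
    {"title": "Coalition talks stall", "tags": ["politics"]},
]
print(recommend(history, articles))
# ['Budget vote looms', 'Coalition talks stall']
```

Note that the sports story is never surfaced: a reader who starts with politics keeps getting politics, which is the echo-chamber dynamic in miniature.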
AI can also analyze user behavior and feedback to improve news coverage. Using sentiment analysis and semantic analysis algorithms, it can identify patterns and trends in user engagement, such as the most popular topics, most effective headlines, and most shared articles. This can enable media organizations to optimize their news coverage for user preferences and needs. The Washington Post used an AI system called Heliograf in 2018 to cover the midterm elections in the United States, generating news articles based on structured data such as election results and voter demographics. Again, there were concerns about the loss of human supervision in the news production process.
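The engagement-analysis side of this can be sketched in a few lines of Python. The headlines, topics, and share counts below are invented; the point is only to show how aggregating reader behavior surfaces "what resonates":

```python
# Minimal sketch of engagement analysis: total shares per topic, sorted
# from most to least shared. All figures are illustrative, not real data.
from collections import defaultdict

def shares_by_topic(articles):
    """Aggregate share counts by topic, most shared first."""
    totals = defaultdict(int)
    for a in articles:
        totals[a["topic"]] += a["shares"]
    return sorted(totals.items(), key=lambda kv: -kv[1])

log = [
    {"headline": "Quake strikes at dawn", "topic": "disaster", "shares": 900},
    {"headline": "Markets rally", "topic": "economy", "shares": 300},
    {"headline": "Aftershocks continue", "topic": "disaster", "shares": 400},
]
print(shares_by_topic(log))  # [('disaster', 1300), ('economy', 300)]
```

The same aggregation logic, applied at scale and fed back into assignment decisions, is what lets metrics quietly steer coverage toward whatever is most shared.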
Ethical concerns. AI in journalism also raises broader ethical and editorial concerns, such as the risk of misinformation, editorial biases, and the loss of the human touch. AI systems can be programmed to generate and distribute news content that is sensational, misleading, or outright false, which can undermine the credibility and trustworthiness of news content and erode public confidence in journalism. To address these risks, media organizations must develop clear policies and guidelines for using AI and ensure that these systems are transparent, accountable, and ethical.
AI in news production and curation can also perpetuate editorial biases, such as political, racial, and gender biases that can reinforce existing stereotypes and prejudices, and marginalize underrepresented voices and perspectives.
In 2018, Amazon was criticized for its AI-powered recruiting tool as it was found to be biased against women. The tool was trained on resumes submitted to Amazon over a 10-year period, which were mostly from men. As a result, the tool learned to penalize resumes that contained words or phrases associated with women, such as “women’s” or “female.” While Amazon has since discontinued the tool, the incident highlights the need for media organizations to ensure that their AI systems are free from biases and discrimination.
AI in news production and curation can also result in losing the human touch, such as empathy, creativity, and critical thinking. AI systems can be programmed to generate and distribute news content that is formulaic, repetitive, and lacking in nuance. This can reduce the quality and diversity of news content and limit the role of journalists as storytellers and watchdogs.
To conclude, AI has the potential to transform journalism by automating news production, personalizing content delivery, and enhancing data-driven insights. However, its use in journalism must address ethical and editorial concerns, such as avoiding misinformation, controlling editorial biases, and increasing human supervision of content produced or enhanced by AI tools. With clear guidelines to ensure that AI systems are transparent, accountable, and ethical, media organizations can enhance their news coverage and engage with their audience in new and innovative ways. The Kathmandu Post/Asia News Network
—————-
Bhanu Bhakta Acharya is affiliated with the University of Ottawa, Canada.
—————-
The Philippine Daily Inquirer is a member of the Asia News Network, an alliance of 22 media titles in the region.