Ethical AI policymaking can rebuild public trust in journalism



This is part of an ongoing GenAI op-ed series, Arts Perspectives on AI (Artificial Intelligence), that features student and faculty voices from the UBC Faculty of Arts community.

This image was generated with the assistance of Artificial Intelligence.

The debates raised by the advent of Artificial Intelligence (AI) range from AI signalling the death of journalism to journalism entering a new age with AI, and everything in between. The stubbornness of AI-fuelled disinformation, fake news, and biases in reporting has started eroding the public’s trust in journalism. Fear of losing jobs to AI has only compounded this problem. Today, journalists are increasingly finding it difficult to respond to the dramatic rise of AI.

UBC’s Centre for the Study of Democratic Institutions (CSDI) recently published a report titled “The Peril and Promise of AI for Journalism” that draws insights from a November 2023 workshop on AI. The report, written by Nishtha Gupta, Jenina Ibañez, and CSDI’s Director (Interim) Dr. Chris Tenove, highlights how nuanced policymaking and clear guidelines within the profession can help journalism navigate the complex world of AI. 

Here, Dr. Tenove takes the conversation further by sharing his thoughts on the risks and promises of using AI in journalism and highlighting the complex terrain of AI policymaking.


What are the major risks and opportunities that AI poses to journalists and journalistic ethics?

There are three general categories of risks that AI poses to journalism. First, generative AI in particular can be used to supercharge disinformation campaigns, making them faster, more targeted, and possibly more persuasive using “deepfake” video and audio. Second, AI tools can introduce errors and biases in journalists’ own work. For example, the tech site CNET published AI-written stories that included dozens of errors. Third, journalists could lose their jobs if they are replaced by AI systems, some of which were trained on content created by journalists. 

All of these developments could undermine public trust in journalism. An insidious example is the Doppelganger disinformation campaign, in which Russia-aligned actors used AI to create fake versions of journalism sites to spread propaganda about the war in Ukraine. Such actions misinform audiences in the short term, and in the long term could lead to more distrust of content that looks like journalism.


How are journalists responsibly utilizing AI in their work?

Journalists and their newsrooms are experimenting with AI and finding many compelling uses. Like workers in other industries, journalists hope to use AI to automate time-consuming parts of their jobs, from transcribing interviews to creating multiple versions of content for different platforms. AI tools are also being used for investigations, particularly for data analysis.

One creative use that really caught my eye was a short documentary on the Russia-Ukraine war by Semafor. They interviewed Ukrainians about their experiences, and used those interviews as prompts for the AI image generator Stable Diffusion to create animated images. These animations ran alongside the real interview audio. The approach helped make those experiences more accessible and powerful for audiences, while also clearly not pretending to be eyewitness video.

Beyond using AI appropriately, journalists need to collectively organize to prevent AI systems from displacing them and exploiting their previous work, not unlike the aims of the recent Hollywood writers strike.

What are journalists doing to avoid misusing AI technologies, or having their industry decimated by their introduction?

There’s a really active conversation about the ethical use of AI in journalism, as there is in many fields. One of the most important responses by journalists has been to develop guidelines for themselves and make those guidelines public. Matt Frehner, one of our workshop participants, is the Head of Visual Journalism at The Globe and Mail, and he explained why his newspaper created and published its newsroom guidelines. CBC has also done so.

At the international level, Reporters Without Borders convened prominent journalists from 20 countries to create the Paris Charter on AI and Journalism. The commission, chaired by Nobel Peace Prize laureate Maria Ressa, proposed guidelines that emphasize human agency over machine-generated journalism, transparency and accountability in AI use, and critical engagement with AI governance issues.


Tell us about your approach to teaching AI practices to students.

I’m currently teaching a course on digital technologies and global affairs, which is for Master of Journalism and Master of Public Policy and Global Affairs students [PPGA 580/JRNL 520M (dual-listed)]. We’ve been examining topics like the use of AI in Canadian politics, and Canada’s pivotal role in recent advances in AI. 

When it comes to AI policymaking, we are discussing the many different elements that can be the focus of regulation. These include human rights assessments of bias in model outputs, content moderation to limit issues like hate speech and false medical advice, labour protections for gig workers who code training data, and environmental regulations for the huge electricity and water demands of data processing centres. It’s a very complex policy area. However, that means there are lots of positive steps that can be taken, too.

What do you believe are the biggest AI-related opportunities or challenges facing faculty and students?

I think the biggest challenge is the speed of AI-related developments. For instructors and students, not to mention journalists and policymakers, it feels like we need to learn about a new model or a new AI-related risk every week. This is partly a result of the tech hype cycle, however. In many respects, the fundamental issues are pretty consistent, such as the extraction of our data by huge, profit-driven corporations, and risks that new gizmos will distract us from our core commitments as learners and citizens.


Dr. Chris Tenove, Director (Interim), Centre for the Study of Democratic Institutions

Dr. Chris Tenove would like to acknowledge that he currently lives and works on traditional, ancestral, and unceded territories of the xʷməθkʷəy̓əm (Musqueam), Sḵwx̱wú7mesh (Squamish), and səlilwətaɬ (Tsleil-Waututh) Nations.


About the featured image

The featured image was generated with the assistance of Adobe Firefly, a generative AI model.

We provided the prompt: “Artificial intelligence robot and human journalists standing against a desolate futuristic landscape discussing ethics.”

