“Asking the right questions” – On AI’s untapped potential for journalism



This is part of an ongoing GenAI op-ed series, Arts Perspectives on AI (Artificial Intelligence), that features student and faculty voices from the UBC Faculty of Arts community.

This image was generated with the assistance of Artificial Intelligence.

Journalism and technological innovation are not strange bedfellows. The field of journalism has gone from strength to strength by harnessing the power of new technology, be it embracing computer-processed typesetting over Linotype, or moving from radio and telegraph to the internet, to name only a few such adoptions.

However, the rise of Artificial Intelligence (AI), and more crucially of generative AI (genAI), has complicated this relationship in unexpected ways. The alarming risks associated with AI’s adoption are unlike the usual pitfalls that accompany any new technology. Yet, according to Dr. Alfred Hermida, Professor at the UBC School of Journalism, Writing, and Media and an award-winning online news pioneer and digital media scholar with more than two decades of experience in digital journalism, all is not lost.

We recently spoke to Dr. Hermida to learn more about AI’s role in journalism, using ChatGPT for coursework, and why there aren’t many journalists who are social media influencers. Below, Dr. Hermida shares his thoughts on how AI holds huge potential for journalism, but only if we learn to ask the right questions.


What are the major risks and opportunities that AI poses to journalists and journalistic ethics?

This question already assumes that AI poses risks. There are risks involved in adopting any new technology. Instead, we must focus on the questions journalists need to ask themselves when they talk about AI in their work or use genAI in the media. Am I exploring the actual logics of the platform? Is my use of this specific genAI tool ethical? It is the use of AI, rather than AI itself, that poses risks.

As genAI is transformative, the biggest risk is failing to imagine what you could do with it that you are not doing now. Instead of approaching a new technology through our past experiences, we need to get ahead of the curve. If journalism does not want to be sandboxed in with other players using AI, then it must experiment with the media logics of the platform itself.

Automated reporting

Automating stories has been part of journalism since before genAI’s recent arrival. For example, my article with Mary Lynn Young on the Los Angeles Times’s Homicide Report showed how the newspaper was using automated journalism as early as 2007. Automated reporting has its own pros and cons, which AI can accentuate. A major risk associated with automated journalism is flooding the zone: so much content is produced that it overwhelms the consumer, and it becomes difficult to find what you are looking for. Audiences instead receive automated articles suggested by AI, based on search criteria. Now, with new search engines like Perplexity, genAI is producing aggregated news summaries that remove the need to ever visit a news website again. Unless journalism finds a way to arrest this early on, it could be catastrophic for the field.

Scaling and costs

Algorithmic recommendations on news websites have already been successfully implemented. The Globe and Mail uses Sophi, an AI system that runs 99% of its properties to drive subscriptions, including strategic news and paywall placements based on user behavior. AI can manage a whole series of websites at scale, and it can analyze data across multiple sources much better than humans can. At the same time, you risk losing jobs in the newsroom.

“If journalism does not want to be sandboxed in with other players using AI, then it must experiment with the media logics of the platform itself.”
Dr. Alfred Hermida, Professor, School of Journalism, Writing, and Media

What can we learn about AI from how journalism has adapted to digital technologies in the past? 

Largely, AI has been adopted to replicate what has worked before. Unfortunately, this tried-and-tested approach is holding journalism back. Putting a national stream on Facebook is not digital journalism. We need to go back to the drawing board and ask new questions, not just adapt. Our decisions to use AI should be strategic, not reactive.

We have seen what good adoption looks like for journalism. When BBC Africa realized it was struggling to make an impact with its audiences in West Africa, it started asking the right questions. Its surveys showed that while the West African audience consumed news on mobile phones, the promoted articles were all written for desktop and, as a result, were not succeeding. BBC Africa made a strategic decision to produce shorter stories with healthier picture-to-text ratios. The decision paid off in increased audience engagement, even though it went against the tried-and-tested ‘length equals depth’ journalistic strategy.

We should be asking similar questions when working with new technologies like AI. For example, why do we ignore all Canadians under 35 who get their news from social media? At a time when trust in media has eroded, why don’t we have more journalists as social media influencers? Can we ethically leverage AI to help us reach this audience? These are questions worth asking. 

Can you tell us about your approach to teaching students about AI practices?

I introduced a new assignment involving a critical assessment of AI in the Journalism Research in Practice course [JRNL 502Z]. Instead of writing a research paper on AI, students asked ChatGPT to write the research paper for them. Their role was to be the instructor: to critique and grade the output, fact-check it, and identify what was accurate and where the gaps were. This learning-by-teaching assignment taught students the value of different prompts and how phrasing changes outcomes, and showed how, instead of ignoring AI, we can integrate it intelligently to highlight its value as a tool. As such, writing prompts for AI is one of the things we need to consider teaching in a course on working with AI tools.

“Current AI tools are typically made with Western lenses. Educators need to be aware and critical of the narratives and discourses their use of AI is arguing for.”
Dr. Alfred Hermida, Professor, School of Journalism, Writing, and Media

What can we learn from your research on AI practices?

There’s potential for good AI practice. AI can be used as a resource in schools and other educational settings, where it can automate the production of age-appropriate news articles for kids. A well-developed AI chatbot could also limit ChatGPT to referring only to specific sources. Naturally, this also carries the potential for gross misuse. We must be careful.

What do you believe are the biggest AI-related opportunities and/or challenges facing faculty and students?

Faculties should be asking how to integrate AI meaningfully to support learning. Faculty and students should be open to taking AI on board and using it as a collaborative partner, which includes learning about the risks of using it. My ChatGPT coursework shows that it can be done. More broadly, AI can also help universities overcome the language barriers that some international students face. For example, ChatGPT can provide on-the-go simplified translations for students. This does not take away students’ critical thinking skills; it helps them learn better.

We also need to be aware of whether and where AI is appropriating Indigenous knowledge. This feeds into the larger issue of ‘copyright’ – which might not even be the correct word to describe this new phenomenon. Is the appropriation of aggregated content a new type of violation? Finally, current AI tools are typically made with Western lenses. Educators need to be aware and critical of the narratives and discourses their use of AI is arguing for.

 


Dr. Alfred Hermida
Professor, School of Journalism, Writing, and Media

Dr. Hermida would like to acknowledge that the land on which he lives, works and learns is the unceded traditional territories of the Skwxwú7mesh (Squamish), Səl̓ílwətaʔ/Selilwitulh (Tsleil-Waututh), and xwməθkwəy̓əm (Musqueam) Nations and he is grateful for the opportunity to do so. He would also like to acknowledge that you may be reading this from many places, near and far, and acknowledge the traditional owners and caretakers of those lands.


About the featured image 

The featured image was generated with the assistance of Artificial Intelligence on You.com.

Dr. Hermida provided the prompt: “A newsroom in Vancouver with mountains in the background and a Deathstar with Google branding on the horizon.”

