

As generative AI tools become part of students’ everyday academic work, many instructors are rethinking longstanding assumptions about how courses are taught and how learning is demonstrated and assessed.
When does AI support learning, and when does it undermine the very skills a course is meant to build? How do you design meaningful assessments when polished writing can be generated almost instantly?
Across UBC Arts, those questions are playing out in different ways, from in-class assessments to AI-aware assignments and renewed attention to writing as a site of learning.
In this interview, Associate Deans Laurie McNeill and Andrew Owen discuss how the Faculty is responding, and how We’re Only Human, a Teaching and Learning Enhancement Fund–supported project, is helping instructors navigate uncertainty without relying on one-size-fits-all solutions.
At a high level, how is the Faculty of Arts thinking about generative AI in teaching and learning?


Dr. Laurie McNeill, Associate Dean, Students, UBC Faculty of Arts
Laurie McNeill: Arts recognizes both the opportunities and risks AI presents, especially in disciplines where writing is central to learning. A key question guiding our work is: what foundational skills and forms of disciplinary knowledge do students need in order to become experts in their field, and to use AI effectively and with integrity?
Instructors are responding in different ways. Some are actively adapting their teaching and assessments to explore how AI might support learning. Others are understandably concerned about AI being used to replace learning rather than deepen it. Many are rethinking assessment design — for example, incorporating in-class, oral, or process-based components — to ensure students build the core “muscles” of thinking, reading, and communicating for themselves.


In some courses, students are explicitly taught how to use AI critically and ethically. For example, in Sociology 280: Data and Society, taught by Prof. Laura Nelson, students train a small language model on historical novels to see how data and design choices shape AI output, and experiment with larger models on tasks such as detecting propaganda in social media. By linking these exercises to real-world applications like hiring algorithms and facial recognition, students learn to assess AI’s biases and social consequences.
What is We’re Only Human, and why was this project created?
Laurie: We’re Only Human is a Teaching and Learning Enhancement Fund–supported project focused on generative AI and writing in the Faculty of Arts. It emerged in response to the rapid spread of tools like ChatGPT, which have disrupted long-standing assumptions about the relationship between student writing and student learning.
Rather than defaulting to enforcement or suspicion, the project takes an educative approach grounded in dialogue, shared responsibility, and clear expectations tied to learning goals. The project is designed to support instructors at different comfort levels with AI—from those actively experimenting with these tools in their classrooms to those who are cautious, selective, or prefer to limit AI use in writing-intensive courses.


Dr. Andrew Owen, Associate Dean, Academic, UBC Faculty of Arts
Andrew Owen: The project has three core components: building a network of faculty and staff responding collaboratively to generative AI; developing classroom-ready teaching materials, sample AI policies, and assessment strategies; and sharing regular “pulse checks” on technological change and student perspectives. Our goal is to reduce the burden on instructors by offering concrete, adaptable resources that give instructors a place to start, rather than asking them to reinvent everything from scratch. Anyone interested is welcome to look at, use, and adapt the resources available.
What skills remain essential in an AI era, and how does an Arts education support them?
Laurie: Even as AI reshapes how we write and work, the most essential skills remain deeply human. Reading, writing, and thinking are not just products — they are ways of developing ideas and judgement. Writing, in particular, is often how thinking itself happens. When students outsource that work too early, they lose opportunities to build confidence in their own ideas, which is why we’ve created classroom activities and discussion prompts that promote dialogue about why we write and that clarify expectations for what constitutes permissible AI use.
An Arts education also emphasizes synthesis: the ability to move across ideas, disciplines, and perspectives; to recognize patterns; and to navigate complexity rather than rush to easy answers. These capacities matter even more in an AI-saturated environment, where tools tend to flatten nuance and reproduce dominant viewpoints.
Arts also cultivates critical engagement and responsibility. That awareness is going to be essential as we, as a society, learn how to live with and hopefully not be under the thrall of AI. There are critical discussions happening now: Who will shape AI policy? Whose voices are being represented and whose are left out? Arts graduates are well positioned to engage with the social, political, and economic implications of AI.
Andrew: In Arts, we’re uniquely positioned to help students understand the destabilizing impact of new technologies, ethical considerations, the power relations and consequences in play, and can provide them with the tools to find their own answer to a new question: When AI systems can increasingly create ‘output’ that previously only human beings could, what capacities will always be uniquely human? How can we develop and foster these capacities?


In Writing, Research and Discourse Studies, students learn to use AI tools alongside library databases and Google Scholar to locate and evaluate scholarly sources. By comparing AI-assisted search with traditional research platforms, students build digital literacy and learn to integrate AI into academic research while understanding its strengths and limits.
What concerns are you hearing from students about AI?
Laurie: Students are not uniform in their views. Some use AI because it makes their work easier. But we’re also hearing from students who want more control over their learning — seeking fewer screens, returning to handwritten practices, and worrying about what it means to have their ideas “uploaded” into machines.
Many students feel anxious and confused about expectations around AI use, especially when policies vary across courses. Some worry that asking for clarification might put them under suspicion. That anxiety signals how quickly trust can erode when expectations aren’t clear.
Andrew: It’s a very fluid environment – everything’s changing so fast and that can create some stress for students. But while it can be challenging for students to encounter different expectations across courses, that variation is also good preparation for the professional world, where expectations around AI will vary across teams, organizations, and clients — and will continue to shift.
In one of your surveys, you found that 73% of Arts students thought that unrestricted AI use would make their learning worse. Did that surprise you?
Andrew: I am pleasantly surprised that students recognize learning involves effort and discomfort and that, if it’s frictionless, something’s being lost. There’s the metaphor of going to the gym: you can bring a forklift to move weights around, but you won’t get the benefits. The debate is challenging because you can use AI to foster learning, and you can use AI to replace learning. And frankly, students, and even I, can have trouble telling the difference in the moment.
Laurie: We also hear from students who believe unrestricted AI use harms learning but worry about being disadvantaged if others take shortcuts. That tension underscores why clear, shared expectations matter. If we want students to choose learning over convenience, those choices need to feel fair and supported.
How are Arts instructors revising assessments in response to AI, and what advice do you have as they navigate these changes?
Laurie: There isn’t a single “right” approach and that’s intentional. Instructors are choosing approaches that fit their discipline, class size, and learning goals. Through We’re Only Human, we describe three broad approaches: AI-secure assessments such as in-person written or oral responses; AI-open or AI-encouraged assessments, where students are explicitly taught how to use AI critically and ethically; and AI-resistant assessments, which are designed to make sure students are still doing the core thinking themselves.
Across all approaches, the goal is the same: assessments that are fair, transparent, and focused on learning—not just a polished final product.


The rise of generative AI has also renewed interest in face-to-face learning unmediated by screens. Above: Prof. Tara Mayer with History students at the Museum of Anthropology.
Andrew: In practice, many changes are modest but meaningful. Some instructors are pairing take-home writing with short in-class synthesis tasks, reflections, or brief oral components, allowing them to see how students are thinking, not just what they submit. Others are placing more emphasis on drafting and reflection over time to make learning visible. One simple but effective activity asks students to research the same topic using UBC’s library search feature, Google Scholar, and an AI tool, then compare the results. Students quickly see that each system has biases and limits, building the habit of not taking any output at face value.
We’re also seeing renewed attention to face-to-face learning. In one Political Science course, an instructor replaced a final exam with short in-person conversations with students and found it meaningful for both learning and connection. That won’t scale everywhere, but it highlights the value of learning moments that aren’t mediated by a screen.
In its second year, We’re Only Human will continue expanding its network, refining resources, and supporting instructors who are cautious or selective about AI use—especially in writing-intensive courses.