When robots promise love: What people really want from AI and smart tech
October 15, 2025
From robotic pets to wearable health devices, UBC psychologist Jill Dosso explores how people experience these tools and why user perspectives must guide their design.
Marketing for social robots, genAI, and wearable health devices often promises companionship, emotional support, and improved well-being, but do these claims match what people actually need?
Dr. Jill Dosso, a Lecturer in the Department of Psychology, explores this question by studying how users, particularly children with anxiety and older adults experiencing dementia, engage with these emerging technologies. Through co-creation workshops and interviews, she digs into the ethical, practical, and human questions these tools raise.
We spoke with Dr. Dosso about her research findings, what they mean for designing ethical technology, and how students and researchers can approach this fast-moving field.
Can you briefly describe your research and what you’re hoping to uncover?
As a Lecturer, my main focus is on teaching and training the next generation of researchers and scientifically literate professionals, but I’m engaged in research as well. My research examines people’s experiences with smart and social technologies—things like social robots, health wearables, and generative AI. These tools have really interesting potential to support our brain health through personalisation and simple, intuitive interfaces. But asking people to engage with a technology socially and emotionally raises a whole suite of ethical considerations—including deception, equitable access, and privacy.
Historically, a lot of the literature in this space has come from the technical perspective—literally figuring out how to build the hardware and software. But focusing only on this side of things can result in big mismatches between what companies develop and what users actually want. Many seemingly promising technologies flop when it’s time for implementation, and that’s a huge waste of time, talent, and resources. So, my research is focused on understanding user/patient priorities, experiences, and values when it comes to these emerging technologies.
There’s been growing discussion and marketing around the potential of AI tools, social robots, and wearables to support mental health, offer companionship, reduce anxiety, or provide emotional support. From your research, how do these claims line up with what people with lived experience actually want or need?
Of all these devices, I’ve spent the most time thinking about social robots. Manufacturers make a real range of claims: that the robots will customize themselves to your personality, that they will support children’s learning, and even that they have internal states. One website we found stated that the robot “loves you back”! It can sometimes be very hard, as a consumer, to tell what functionality the product will really have.
As a postdoc with Dr. Julie Robillard, I worked on a project where we looked at this directly. It is one of my favourite papers. We found a lot of variation in what different manufacturers wrote, with some being quite cautious and accurate and others being quite misleading. Overall, the online environment, as a child or parent would encounter it, was far from the scientific consensus on what these products can plausibly deliver right now.
One group of potential users that I think is particularly misunderstood is older adults. Younger people often assume the biggest issue with social robots for older adults will be deception. That is, they are worried that older people will be tricked into thinking that the robot is alive. But older adults in our research talked primarily about more practical concerns: cost, performance, tripping hazards. This finding highlights the importance of actually talking to the people you’re designing for. Our intuitions about what others will value are often wrong.
“Focusing only on the technical side of things can result in big mismatches between what companies develop and what users actually want. Many seemingly promising technologies flop when it’s time for implementation, and that’s a huge waste of time, talent, and resources.”
Dr. Jill Dosso
UBC Psychology
You use co-creation workshops and qualitative interviews in your research. Can you explain what these methods look like in practice, and why they’re important?
I’m so passionate about these methods! These are great ways to get deep, high-resolution information about someone’s perspective—something that can be hard to do using a pre-written survey, for example. My own training didn’t include them until I was a postdoctoral fellow, and I make a point of including them when I teach Research Methods (Psyc 217).
Co-creation involves including members of the community you are studying in the development of a tool or a program of research. If you want to know about the experiences of kids with anxiety, you probably shouldn’t create a whole study about them without checking first: does this way of asking this question make sense? Would you even sign up for this? What could make you feel comfortable sharing your real point of view? Are we missing something fundamental by posing the research question in this way?
Qualitative interviewing is a method of data collection that takes the form of a guided but flexible conversation between an interviewer and a research subject. Some people are surprised to hear that doing this well actually demands a huge amount of preparation on the part of the interviewer! You need to get very clear, in your own mind, what it is that you really want to explore when you talk with your subject. Paradoxically, this then allows you to be more flexible and spontaneous in the moment as things come up. Often, someone is being very vulnerable with you and sharing painful or emotional health experiences, and it is important to be fully present in that while still doing good research.
Dr. Jill Dosso with Moxie, an AI robot intended for kids aged 5-10 that uses play-based conversational learning to teach emotional and social skills. The company that made Moxie no longer supports it, showing the volatility of this industry.
The ethics of using AI in mental health care is a growing conversation. What ethical tensions stand out most in your research?
I spend a whole day on this in Psyc 301 (Brain Dysfunction and Recovery) because I think it is so important. It’s both good and bad for me personally that the AI field is moving so quickly. I have to do a big update to my slides every time! One thing that I really appreciate is that psychology students tend to be quick to grasp that there’s no such thing as a neutral, bias-free tool. They’ve thought a lot about this in previous courses when they learn about things like the history of IQ testing, for example, and there are a lot of modern parallels here. So, they come in with some strong tools to have this discussion.
There are a few ethical issues that I think are particularly important. One is accountability: if an algorithm assigns you a risk score (for example, the likelihood of a particular diagnosis or complication), who is responsible for that score—the doctor who communicates it to you, or the company that created the software? Another is the potential for bias and discrimination: who is in the training dataset, and for whom will this tool make (in)accurate predictions?
“Technological progress is not some inevitable, inexorable, inhuman force. We, as a community, should be able to decide which innovations are useful and aligned with our values, and which are not.”
Dr. Jill Dosso
UBC Psychology
What do you hope your students or fellow researchers take away when thinking critically about the intersection of health, tech, and human experience?
I think students have a huge appetite to learn about real-world research and current topics. Students are often more up-to-date on AI tools and other developments than I am. If anything, the challenge is to keep up! I do bring a robotic pet cat to class on the day that we talk about social robots, and students are usually underwhelmed.
Because this is such a fast-moving field, I am learning about the importance of teaching students methodological tools and critical thinking, rather than crystallized facts that may be out of date in a year. It is particularly exciting to teach psychology because we have important knowledge-creation tools. If someone tells you that ChatGPT will improve your memory, or reduce your loneliness, or reassure you about your health, those are empirical psychological claims and you can design studies to test them! That’s a cool place to be.
In the face of so many complexities like ethical questions, design gaps, and evolving technologies, what gives you a sense of purpose or clarity in your work?
One message that I often come back to (as argued by R. Eveleth) is that technological progress is not some inevitable, inexorable, inhuman force. We, as a community, should be able to decide which innovations are useful and aligned with our values, and which are not. And to do that, we need to do methodologically sturdy research, to hear from people with many types of lived experience, and to distinguish between hype and real evidence. I hope that my teaching and my research play a small part in building this type of capacity at UBC.