Programme

ORIGen will be held at the Palais des Congrès in Montréal, QC, Canada, on October 10, 2025.

9:00-9:15 - Opening remarks
9:15-9:50 - Invited talk I: Andreas Vlachos is a Professor of Natural Language Processing and Machine Learning at the Department of Computer Science and Technology at the University of Cambridge and a Dinesh Dhamija Fellow of Fitzwilliam College. His expertise includes dialogue modeling, automated fact-checking, imitation and active learning, semantic parsing, and natural language generation and summarization.
9:50-11:00 - Accepted paper lightning talks (4 minutes each, plus 1-minute transitions)
11:00-11:15 - Coffee break
11:15-12:00 - Keynote talk: Malihe Alikhani is an Assistant Professor at Northeastern University’s Khoury College of Engineering and Visiting Fellow at The Center on Regulation and Markets at Brookings. She works towards developing safe and fair AI systems that enhance communication, decision-making, and knowledge-sharing across disciplines and populations.
12:00-12:35 - Invited talk II: Bertram F. Malle is a Professor of Cognitive and Psychological Sciences at Brown University. He received the Society of Experimental Social Psychology (SESP) Outstanding Dissertation Award, an NSF CAREER award, the Decision Analysis Society 2018 Best Publication Award, several HRI best-paper awards, and the 2019 SESP Scientific Impact Award. Malle’s research focuses on moral psychology and human-machine interaction.
12:35-2:05 - Lunch
2:05-2:40 - Invited talk III: Q. Vera Liao - Facilitating Appropriate Reliance on AI: Lessons from HCI Research
Appropriate reliance on AI is key to harnessing the benefits of AI technology and achieving human-AI complementarity, while inappropriate reliance, particularly overreliance on AI, can lead to a range of harms, from high-stakes errors and de-skilling to infrastructural vulnerabilities. Since well before the current wave of LLM technology, the field of human-computer interaction (HCI) has studied how to facilitate appropriate reliance on AI: through empirical investigation of how people choose whether to rely on AI, by designing interventions to mitigate inappropriate reliance, and by developing approaches to measure and model people’s reliance behaviors. In this talk, I will provide an overview of these lines of HCI research and pose three open questions in the age of LLMs: How should we grapple with the normative question of what constitutes appropriate reliance? How can we measure and monitor reliance without intensive behavior surveillance? How can we deliver targeted interventions that prevent overreliance by accounting for system, individual, and contextual risk factors?
2:40-3:40 - Poster Session
3:40-4:00 - Coffee break
4:00-4:45 - Panel discussion: Future of Reliable and Accountable AI
Matthias Scheutz, Tufts University
Jesse Thomason, University of Southern California
Diyi Yang, Stanford University
Matthew Marge, DARPA
4:45-5:00 - Conclusion

List of accepted papers