About the Workshop
With the rapid integration of generative AI, exemplified by large language models (LLMs), into personal, educational, business, and even governmental workflows, such systems are increasingly being treated as “collaborators” with humans. In such scenarios, underreliance on or avoidance of AI assistance may forfeit the potential speed, efficiency, or scalability advantages of a human-LLM team; at the same time, there is a risk that subject matter non-experts may overrely on LLMs and trust their outputs uncritically, with consequences ranging from the inconvenient to the catastrophic. Establishing appropriate levels of reliance within an interactive framework is therefore a critical open challenge as language models and related AI technologies rapidly advance. This challenge raises several open questions:
- What factors influence overreliance on LLMs?
- How can the consequences of overreliance be predicted and guarded against?
- What verifiable methods can be used to apportion accountability for the outcomes of human-LLM interactions?
- What methods can be used to imbue such interactions with appropriate levels of “friction” to ensure that humans think through the decisions they make with LLMs in the loop?
The ORIGen workshop provides a new venue to address these questions and more through a multidisciplinary lens. We seek to bring together broad perspectives from AI, NLP, HCI, cognitive science, psychology, and education to highlight the importance of mediating human-LLM interactions to mitigate overreliance and promote accountability in collaborative human-AI decision-making.
Submission Information
We welcome papers on the topic of reliance and accountability in human-AI interactions, with a specific focus on under- or overreliance on LLMs, and establishing accountability for joint human-AI decisions. Submissions may fall into any of the following areas:
- Theory and research methods: computational, psychological, cognitive, and formal models of reliance and accountability;
- Technical: novel algorithmic approaches, system descriptions, metrics, and experimental paradigms to measure reliance and accountability;
- Design: interaction design to modulate reliance and establish accountability;
- User studies: empirical studies into reliance and accountability in human-LLM interaction.
This list is non-exhaustive, and we welcome any relevant work. Submissions must be made via the official ORIGen submission page on OpenReview.
ORIGen will accept both original research and non-archival cross-submissions. Original research papers may be long papers (up to 9 pages, not including references) or short papers (up to 5 pages, not including references) prepared according to the COLM template. These submissions will undergo double-blind review and so must be fully anonymized. Original research papers may be submitted as archival or non-archival contributions. Accepted archival original research papers will be published after the workshop on an open-access proceedings platform such as CEUR-WS, and will receive an additional page in the camera-ready version to incorporate reviewer feedback. Publication in the ORIGen proceedings will not preclude submission of the same or updated work to future venues, unless otherwise indicated by the other venue.

We also welcome cross-submissions of work from other venues (e.g., *ACL or the COLM main conference) whose authors wish to present at ORIGen. These submissions will be assessed for relevance and fit for ORIGen, and may be either single-blind or double-blind.
Please see the important dates. No submissions will be accepted after the posted deadline(s).
Dual-Submission Policy
Papers that have been or will be submitted to other venues must indicate this at submission time. Non-archival cross-submissions of previously published papers, or of papers already accepted for publication elsewhere, must indicate the other venue at submission time (ideally with a link to the proceedings or the OpenReview submission). Authors of papers accepted for presentation at ORIGen 2025 must confirm with the organizers, by the camera-ready deadline, whether the paper will be presented.
Presentation at the Workshop
Accepted archival papers must be presented at the workshop to appear in the proceedings. All accepted papers will be presented as posters. At least one author of every accepted paper must register for and attend the workshop in person. Please see the schedule for details.
Attendee Support
We will provide a limited number of grants, with support from the Artificial Intelligence Journal’s 31st Call for Funding Opportunities for Promoting AI Research (under the name First Workshop on Friction in Language Modeling), to help defray travel and attendance costs. Applications for these grants will open after the accepted papers are announced.