We will explore how generative models can facilitate communicative grounding by incorporating theory of mind alongside uncertainty estimation and human feedback. We begin by examining how models signal and quantify predictive uncertainty, highlighting computational parallels to epistemic stance. Next, we discuss belief modeling, presenting evidence that language models can infer degrees of interlocutor uncertainty, a crucial component in managing reference and intent. We address how a failure to accurately track beliefs may lead to sycophancy, or over-alignment with user views. We then explore the positive role of friction, introduced through structured discourse or interactional pauses, which slows interactions to promote clarity and facilitate grounding. Finally, we extend these concepts to multimodal and socially situated contexts, drawing on research in sign language modeling and human-in-the-loop training to illustrate how shared meaning can be constructed across diverse modalities and populations. This line of research demonstrates how generative models embody core mechanisms of pragmatic reasoning, offering linguists and cognitive scientists both methodological challenges and opportunities to question how computational systems reflect and shape our understanding of meaning and interaction.
ORIGen ANNOUNCEMENTS