In our previous theoretical and empirical work, we have examined people’s trust responses to and moral judgments of social robots. I will apply lessons learned from this work to the question of the moral trustworthiness of LLMs, asking three questions: (1) Does it even make sense to treat LLMs as “agents” that “have trustworthiness”? (2) If we do assess LLMs’ trustworthiness, which specific moral attributes of trustworthiness would we want them to exhibit? (3) What would it take to design LLMs that actually have those attributes of moral trustworthiness?