Language comprehension is a complex process that has long been studied. It draws on world knowledge as well as prior linguistic experience and context. Such context can be the (linguistic) discourse, but also a shared visual scene or the (non-verbal) behavior of an interlocutor. Investigating language comprehension in such situated and partly interactive settings, however, poses new problems: the collected data can vary greatly, and the reciprocity of interaction makes analysis circular.
The talk "Situated language comprehension: How interlocutors integrate speech, scene information and each other's gaze" by Dr. Maria Staudte will present various approaches to studying situated comprehension, some results, and how those results can come from or be applied to human-machine interaction.
Dr. Maria Staudte works at the Department of Computational Linguistics at Universität des Saarlandes, where she leads the Independent Research Group "Embodied Spoken Interaction".
Maria Staudte graduated in 2010 in Psycholinguistics at Saarland University with work on the role of speaker gaze in human-robot interaction. After her postdoc at the Cluster of Excellence in Saarbrücken, she spent a year at Stony Brook University, NY, examining the importance of intention for the utility of gaze cues.
In 2013, Maria became head of the Independent Research Group "Embodied Spoken Interaction" in Saarbrücken's Cluster of Excellence, where she currently investigates how interlocutors use each other's gaze in situated interactions and how this affects spoken content.