
Who, Me? How Virtual Agents Can Shape Conversational Footing in Virtual Reality

Intelligent Virtual Agents, August 2017
Download the publication: 17-IVA-FootingVR-CR.pdf [705 KB]
The nonverbal behaviors of conversational partners reflect their conversational footing, signaling who in the group are the speakers, addressees, bystanders, and overhearers. Many applications of virtual reality (VR) will involve multiparty conversations with virtual agents and avatars of others where appropriate signaling of footing will be critical. In this paper, we introduce computational models of gaze and spatial orientation that a virtual agent can use to signal specific footing configurations. An evaluation of these models through a user study found that participants conformed to conversational roles signaled by the agent and contributed to the conversation more as addressees than as bystanders. We observed these effects in immersive VR, but not on a 2D display, suggesting an increased sensitivity to virtual agents’ footing cues in VR-based interfaces.

BibTeX reference

@InProceedings{PGM17,
  author       = "Pejsa, Tomislav and Gleicher, Michael and Mutlu, Bilge",
  title        = "Who, Me? How Virtual Agents Can Shape Conversational Footing in Virtual Reality",
  booktitle    = "Intelligent Virtual Agents",
  month        = "aug",
  year         = "2017",
  note         = "to appear",
  ee           = "https://link.springer.com/chapter/10.1007/978-3-319-67401-8_45",
  url          = "http://graphics.cs.wisc.edu/Papers/2017/PGM17"
}
 
