Interaction Sensors

VR and AR require new paradigms for effective interaction with projection-based displays and spatially augmented reality objects. We are exploring several modalities for sensing user actions. These include:
Infrared lighting: We are exploring how infrared lighting and cameras can be used to support touch interactions with very large-area projection-based displays; see the touch-detection sketch after this list.
Depth Cameras: Several researchers in human-computer interaction and graphics have reported that kinesthetic memory and physical navigation are very important when interacting with large-area displays. When a user walks around, turns his or her head, and shifts visual field and focus, mechanoreceptors in the skin, as well as receptors in muscles and joints, are activated. We are exploring the use of Kinect and Leap Motion sensors for interacting with the large-area immersive display in the Augmentarium; a head-coupled viewing sketch follows this list.
Eye-tracking: We use eye-tracking to understand attention-driven user interfaces for VR and AR and to quantify overt attention. We are also interested in assessing how saliency-driven rendering can guide visual attention in VR displays; a saliency-map sketch follows this list.
EEG: Our research assesses the impact of visual rendering on brain activity. We are developing, validating, and using EEG interfaces, such as the 14-channel Emotiv headset, to understand the brain's response to immersive visual stimuli; an EEG band-power sketch follows this list.
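
To make the infrared approach concrete, here is a minimal sketch of camera-based touch detection: bright fingertip reflections in an IR frame are thresholded and reported as touch points. The camera index, threshold, and blob-area values are illustrative assumptions, and a deployed system would also calibrate a camera-to-display homography rather than use raw pixel coordinates.

```python
# Hypothetical sketch (OpenCV 4 API): report touch points on a projection surface
# as bright fingertip blobs in an infrared camera frame. Threshold and area values
# are illustrative, not calibrated parameters of an actual installation.
import cv2

def detect_touch_points(ir_gray, threshold=200, min_area=30):
    """Return (x, y) centroids of bright blobs in a grayscale IR frame."""
    _, mask = cv2.threshold(ir_gray, threshold, 255, cv2.THRESH_BINARY)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    points = []
    for contour in contours:
        if cv2.contourArea(contour) >= min_area:
            m = cv2.moments(contour)
            if m["m00"] > 0:
                points.append((m["m10"] / m["m00"], m["m01"] / m["m00"]))
    return points

if __name__ == "__main__":
    capture = cv2.VideoCapture(0)                 # assumed IR camera index
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        for x, y in detect_touch_points(gray):
            cv2.circle(frame, (int(x), int(y)), 8, (0, 255, 0), 2)
        cv2.imshow("touch points", frame)
        if cv2.waitKey(1) == 27:                  # Esc quits
            break
    capture.release()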
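The depth-camera item can be illustrated with a small sketch of head-coupled viewing: assuming a depth frame has already been read from a sensor such as a Kinect, the user's position is estimated from the depth image and mapped to a horizontal view offset. The depth window, frame size, and scale factor are illustrative assumptions, not measured parameters of the Augmentarium display.

```python
# Hypothetical sketch: derive a head-coupled viewing offset from a depth frame.
# Assumes `depth` is a 2-D array of millimetre depths already read from a sensor
# such as a Kinect; the depth window, frame size, and scale are illustrative.
import numpy as np

def user_centroid(depth, near_mm=800, far_mm=3500):
    """Centroid (row, col, mean depth) of pixels likely belonging to the user."""
    mask = (depth > near_mm) & (depth < far_mm)
    if not mask.any():
        return None
    rows, cols = np.nonzero(mask)
    return rows.mean(), cols.mean(), depth[mask].mean()

def view_offset(centroid, frame_width, scale=0.5):
    """Map the user's horizontal position to a normalized pan in [-scale, scale]."""
    if centroid is None:
        return 0.0
    _, col, _ = centroid
    return scale * (2.0 * col / frame_width - 1.0)

# Example with a synthetic frame: a "user" about 1.5 m away, left of centre.
depth = np.full((424, 512), 4000, dtype=np.uint16)
depth[100:300, 80:200] = 1500
print(view_offset(user_centroid(depth), depth.shape[1]))  # negative: pan left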
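For the saliency-driven rendering mentioned under eye-tracking, the sketch below computes a spectral-residual saliency map (Hou and Zhang, 2007) that a renderer could use to decide where to concentrate detail or place attention cues. The choice of this particular saliency model, along with the resolution and smoothing parameters, is an illustrative assumption rather than a description of our rendering pipeline.

```python
# Hypothetical sketch: spectral-residual saliency (Hou & Zhang, 2007) for a
# grayscale frame; higher values mark regions likely to draw visual attention.
import numpy as np
import cv2

def spectral_residual_saliency(gray, size=64):
    """Return a saliency map in [0, 1] with the same shape as `gray`."""
    small = cv2.resize(gray.astype(np.float32), (size, size))
    spectrum = np.fft.fft2(small)
    log_amp = np.log1p(np.abs(spectrum))
    phase = np.angle(spectrum)
    # The "spectral residual" is the log amplitude minus its local average.
    residual = log_amp - cv2.blur(log_amp, (3, 3))
    saliency = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    saliency = cv2.GaussianBlur(saliency.astype(np.float32), (9, 9), 2.5)
    saliency = cv2.resize(saliency, (gray.shape[1], gray.shape[0]))
    return (saliency - saliency.min()) / (saliency.max() - saliency.min() + 1e-8)

# Usage on a rendered frame saved as an image file (path is hypothetical):
# frame = cv2.imread("rendered_frame.png", cv2.IMREAD_GRAYSCALE)
# saliency_map = spectral_residual_saliency(frame)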
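As a small example of the kind of analysis an EEG pipeline performs, the sketch below computes per-channel alpha-band (8-13 Hz) power from a 14-channel recording using Welch's power spectral density estimate. The 128 Hz sampling rate matches consumer headsets such as the Emotiv EPOC, but the band, window length, and synthetic data are illustrative assumptions, not our actual processing chain.

```python
# Hypothetical sketch: per-channel alpha-band power from a 14-channel EEG array.
import numpy as np
from scipy.signal import welch

def band_power(eeg, fs=128, band=(8.0, 13.0)):
    """eeg: (channels, samples) array; returns alpha-band power per channel."""
    freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs, axis=-1)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    # Approximate the band power by summing the PSD over the band.
    return psd[:, in_band].sum(axis=-1) * (freqs[1] - freqs[0])

# Example: 10 s of synthetic data, a 10 Hz rhythm plus noise on 14 channels.
rng = np.random.default_rng(0)
t = np.arange(10 * 128) / 128.0
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal((14, t.size))
print(band_power(eeg).round(3))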