Perception of distance in virtual reality (VR) is compressed: objects are consistently perceived as closer than intended. Although this phenomenon is well documented, the factors driving the compression remain only partly understood. This is a problem in scenarios where veridical perception of distance and scale is essential. We report the results of an experiment investigating an approach to reducing distance compression in audiovisual VR based on a predictive model of distance perception. Our test environment combined photorealistic 3D images captured through stereo photography with corresponding spatial audio rendered binaurally over headphones. In a perceptual matching task, participants positioned an auditory stimulus relative to the corresponding visual stimulus. We found a high correlation between the distances predicted by our model and the distances participants perceived. By automatically manipulating the audio and visual displays based on the model, our approach repositions the auditory and visual components of a scene to reduce distance compression. The approach is adaptable to different environments, agnostic of scene content, and can be calibrated to individual observers.
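To illustrate the general idea of model-based compensation, here is a minimal sketch. It assumes a compressive power-law mapping from rendered to perceived distance, a common parameterization in the distance-perception literature; this is an illustrative assumption, not the paper's actual model, and the parameters `k` and `a` stand in for per-observer calibrated values.

```python
# Hypothetical power-law model of perceived distance (illustrative only,
# not the model from the paper). k and a would be fitted per observer.

def perceived_distance(d: float, k: float = 1.0, a: float = 0.7) -> float:
    """Predicted perceived distance (metres) for a rendered distance d.

    With a < 1 the mapping is compressive: far objects appear closer.
    """
    return k * d ** a

def compensated_distance(d_target: float, k: float = 1.0, a: float = 0.7) -> float:
    """Invert the model: the rendered distance at which the observer
    is predicted to perceive d_target."""
    return (d_target / k) ** (1.0 / a)

if __name__ == "__main__":
    target = 2.0  # intended perceived distance in metres
    render_at = compensated_distance(target)
    print(f"render at {render_at:.2f} m -> perceived "
          f"{perceived_distance(render_at):.2f} m")
```

Repositioning a stimulus at `compensated_distance(d)` rather than `d` would, under this assumed model, cancel the predicted compression; the same inversion applies to both the auditory and visual components of a scene.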
D. J. Finnegan, E. O'Neill and M. J. Proulx, "An approach to reducing distance compression in audiovisual virtual environments," 2017 IEEE 3rd VR Workshop on Sonic Interactions for Virtual Environments (SIVE), Los Angeles, CA, 2017, pp. 1-6. doi: 10.1109/SIVE.2017.7901607