
Virtual reality (VR) has been around since at least the 1950s, with Morton Heilig’s invention of the Sensorama. This mechanical device presented multisensory information to users, engaging the visual, auditory, olfactory, and touch senses, and giving users the impression of being in an alternate world. Throughout the subsequent decades, the popularity of VR systems has waxed and waned, finding a small niche in the early 1990s in the gaming world. However, technical limitations at the time quashed the popularity of the multisensory technology in favor of standard 2D gaming systems.

The feeling of presence (or telepresence) – the sensation that one is “in the virtual world” – can be accomplished by synchronizing the visual scene to the user’s head movements. For example, when a user turns their head to the left, the head-mounted display must be immediately updated to reflect the new scene the user would see if they turned their head in a real 3D scene. The latency – the time it takes for the VR system to sense the head movement and adjust the display accordingly – is what makes or breaks the virtual experience. If this latency is even a few milliseconds too long, the virtual experience is gone, along with the user’s feeling of presence.
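To make the timing constraint concrete, here is a minimal, illustrative sketch of the sense-then-render loop described above. It is not any headset SDK’s actual API; the function names and the roughly 20 ms “motion-to-photon” budget are assumptions for illustration only:

```python
import time

# Assumed budget: ~20 ms is a commonly cited motion-to-photon target
# for comfortable VR (an illustrative value, not from the article).
LATENCY_BUDGET_MS = 20

def read_head_pose():
    # Stand-in for reading the headset's tracker/IMU; returns yaw in degrees.
    return 0.0

def render_scene(yaw_degrees):
    # Stand-in for re-projecting and drawing the scene for the new viewpoint.
    pass

def frame():
    t0 = time.perf_counter()
    yaw = read_head_pose()   # 1. sense the head movement
    render_scene(yaw)        # 2. update the display to match it
    latency_ms = (time.perf_counter() - t0) * 1000
    if latency_ms > LATENCY_BUDGET_MS:
        # If sensing plus rendering takes too long, the scene lags behind
        # the head and the feeling of presence breaks down.
        print(f"Frame over budget: {latency_ms:.1f} ms")

frame()
```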

Skipping ahead another two decades, the technology has finally caught up – VR systems like Oculus Rift, HTC Vive, and PlayStation VR provide users with immersive experiences that immediately place them in an engaging virtual world. In these systems, latency is short enough that when the user moves their head, they genuinely feel they are looking at a real scene, rather than at a small pair of screens in front of their eyes.

The challenge of simulating virtual self-motion

However, there is one aspect of the VR experience that remains a challenge – how to accomplish self-motion in a virtual world. When we walk around the world, we understand our own self-motion (or vection) relative to our environment via multisensory cues, including visual changes in the scene (e.g., expansion when we walk forward), vestibular signals (e.g., acceleration or rotation), as well as proprioceptive and touch sensations (e.g., the feeling of air passing by our face).

This presents a big issue for VR: if a user is sitting down (in reality), any feeling of virtual vection produced by the visual changes in the head-mounted display will conflict with both the lack of vestibular signals and the lack of touch sensations that are normally associated with real self-motion. This sensory conflict – where visual cues suggest motion, but other cues suggest no motion – is thought to be largely responsible for the motion sickness many users experience when using VR.

Although it is not currently possible to artificially stimulate the vestibular system to produce more realistic feelings of vection, it is possible to provide touch sensations to accomplish a similar goal. In 2011, researchers Takeharu Seno and colleagues at Kyushu University in Japan introduced the idea of pointing a fan in the VR user’s direction to simulate the air flow one experiences while walking. The researchers found that this simple manipulation had a significant effect on reports of self-motion. When the fan was turned on, participants reported feelings of vection more quickly and more intensely than in a control condition where only the sound of the fan was present. However, in a subsequent study, Murata and Seno (2017) found that the fan-produced wind had to be at the appropriate temperature for these effects to work. When the wind was hot (37°C, or 98.6°F), the vection-facilitating effect disappeared, suggesting that users were sensitive to the temperature mismatch between the surrounding air and the fan-produced wind.

In a new study published in the journal Perception, Yahata and colleagues (2021) tested whether hot air, in the right context, could still facilitate vection. To do so, they had VR participants move through two types of virtual corridors – either a regular corridor with cement walls or a “fire corridor” whose walls are (virtually) on fire. Across conditions, the researchers manipulated the type of wind simulation (no wind, normal wind, or hot wind). Their results showed that hot wind was very effective at inducing vection, but only when users moved through the virtual fire corridor, not the regular corridor.

Yahata and colleagues concluded that participants are sensitive not only to the difference between the surrounding air temperature and the temperature of the wind, but also to the visual context: they interpret the wind differently depending on the visual cues. When they are surrounded by virtual fire, they interpret the hot wind as a vection cue that facilitates the experience of moving through the space; when they perceive a non-fiery virtual environment, they interpret the hot wind as just that – hot air.

Although VR technology still has some hurdles to overcome to induce realistic vection, this new research suggests that the solution may come from other sensory modalities. In this case, blowing a fan in the user’s direction can increase the sense of self-motion – as long as the temperature of that air is congruent with the surrounding virtual scene.

References

Seno, T., Ogawa, M., Ito, H., & Sunaga, S. (2011). Consistent air flow to the face facilitates vection. Perception, 40(10), 1237–1240.

Murata, K., & Seno, T. (2017). The facilitation and inhibition of vection by wind of hot and normal temperature. Transactions of the Virtual Reality Society of Japan, 22, 287–290. (In Japanese.)

Yahata, R., Takeya, W., Seno, T., & Tamada, Y. (2021). Hot wind to the body can facilitate vection only when participants walk through a fire corridor virtually. Perception. Advance online publication. https://doi.org/10.1177/0301006620987087

Source: The Effect of Real Wind on Virtual Self-Motion | Psychology Today