VR Series #4: How Does it Work?
It suddenly occurred to me that, whilst we are going through this series of articles talking about the multiple potential healthcare benefits of VR, we have never actually considered how it all works, or how we can be sure that it is having an effect on the patient…
In Broad Terms: the Devices
While devices generally take the same form, some are PC-powered whilst Google and Samsung offer more affordable, smartphone-based headsets.
Standalone VR is coming: in 2018 Oculus will launch the Oculus Go (Q4) and Lenovo's Daydream headset is also expected. VR headsets like Oculus Rift (launched 2016) and PlayStation VR are often referred to as HMDs, which simply means they are head-mounted displays. The goal is to create what appears to be a life-size, 3D virtual environment without the boundaries we usually associate with TV or computer screens.
VR headsets use either two feeds sent to one display or two LCD displays, one per eye. There are also lenses which are placed between your eyes and the pixels. In some instances, these can be adjusted to match the distance between your eyes which varies from person to person. The lenses focus and reshape the picture for each eye and create a stereoscopic 3D image by angling the two 2D images to mimic how each of our two eyes views the world ever-so-slightly differently. Try closing one eye then the other to see individual objects dance about from side to side and you get the idea behind this. Most high-end headsets then incorporate a 100 or 110 degree field of view, which is wide enough to increase the immersion felt by the wearer.
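To make that stereoscopic trick concrete, here is a minimal sketch in Python. The numbers are illustrative assumptions, not any headset's real values: a 64 mm interpupillary distance and a toy pinhole projection. It shows how the same 3D point lands at slightly different horizontal positions in each eye's image, and how that disparity shrinks as the point moves further away:

```python
def eye_view_offsets(ipd_m=0.064):
    """Lateral (x-axis) camera offsets for the left and right eye views.
    Each eye's virtual camera is shifted by half the interpupillary
    distance (IPD), producing the two slightly different 2D images the
    brain fuses into one 3D scene."""
    half = ipd_m / 2.0
    return -half, +half  # left eye, right eye

def project_point(point, eye_offset_x, focal=1.0):
    """Very simplified pinhole projection of a 3D point (x, y, z) onto
    one eye's image plane. The horizontal difference between the two
    eyes' projections (the disparity) is the depth cue."""
    x, y, z = point
    # Shift the point into the eye's camera space, then project.
    return (focal * (x - eye_offset_x) / z, focal * y / z)

left, right = eye_view_offsets()
near = (0.0, 0.0, 2.0)    # a point 2 m straight ahead
far = (0.0, 0.0, 20.0)    # the same direction, 20 m away
near_disparity = project_point(near, left)[0] - project_point(near, right)[0]
far_disparity = project_point(far, left)[0] - project_point(far, right)[0]
```

The nearer point produces ten times the disparity of the farther one, which is exactly why close objects "pop" in VR while distant scenery looks flatter.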
For the resulting picture to be convincing, a minimum frame rate of around 60 frames per second is needed to avoid stuttering or users feeling sick: Oculus is capable of 90fps, whereas Sony's PlayStation VR manages 120fps.
Now we need to add in head tracking, which essentially means that when you wear a VR headset, the picture in front of you shifts as you look up, down and side to side. The headsets incorporate a system called 6DoF (six degrees of freedom), which plots your head on the X, Y and Z axes to measure head movements: forwards and backwards, side to side, and shoulder to shoulder. There are also a few different internal components which can be used in a head-tracking system, such as a gyroscope, an accelerometer and a magnetometer. Sony's PSVR also uses nine LEDs dotted around the headset to provide 360 degree head tracking thanks to an external camera monitoring these signals.
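As a rough illustration of how those internal components can be combined, here is a sketch of a complementary filter, one common way to fuse a drifting gyroscope with a noisy but gravity-referenced accelerometer. It is a generic technique rather than any particular headset's implementation, and the 0.98 blend factor, 100 Hz update rate and drift figure are all arbitrary choices for the demo:

```python
def complementary_filter(pitch_deg, gyro_rate_dps, accel_pitch_deg, dt, alpha=0.98):
    """One fusion step: trust the gyro (smooth but drifting) for fast
    changes, and gently pull the estimate towards the accelerometer's
    absolute tilt reading to cancel long-term drift."""
    gyro_estimate = pitch_deg + gyro_rate_dps * dt   # integrate rotation rate
    return alpha * gyro_estimate + (1 - alpha) * accel_pitch_deg

# Simulate a stationary head with a gyro that falsely reports +0.5 deg/s.
pitch = 0.0
for _ in range(1000):                                # 10 s at 100 Hz
    pitch = complementary_filter(pitch, gyro_rate_dps=0.5,
                                 accel_pitch_deg=0.0, dt=0.01)
# Pure gyro integration would have drifted to ~5 degrees by now;
# the blended estimate stays pinned near the accelerometer's 0 degrees.
```

Without some correction of this kind the virtual world would slowly rotate away from the real one, which is one reason headsets carry several sensor types rather than a gyroscope alone.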
Finally, headphones can be used to increase the sense of immersion. Binaural or 3D audio can be used by app and game developers to tap into VR headsets' head-tracking technology to take advantage of this and give the wearer the sense that sound is coming from behind, to the side of them or in the distance.
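The core binaural cue can be sketched with a little arithmetic: the interaural time difference, i.e. how much earlier a sound reaches the nearer ear. This is a deliberate simplification (real binaural rendering uses full head-related transfer functions, and the 18 cm ear-to-ear spacing is an assumed round number):

```python
import math

SPEED_OF_SOUND = 343.0   # m/s, in air at room temperature
EAR_SPACING = 0.18       # m, assumed approximate distance between the ears

def interaural_time_difference(azimuth_deg):
    """Approximate ITD for a sound source at the given azimuth
    (0 = straight ahead, 90 = hard right). Delaying one channel by
    this amount, plus a matching level difference, is the basic trick
    that lets audio appear to come from a particular direction."""
    azimuth = math.radians(azimuth_deg)
    return EAR_SPACING * math.sin(azimuth) / SPEED_OF_SOUND

itd_front = interaural_time_difference(0)    # straight ahead: no delay
itd_side = interaural_time_difference(90)    # hard right: maximum delay, ~0.5 ms
```

Half a millisecond sounds tiny, but it is exactly the cue the brain uses to localise sound; combined with head tracking, the delay can be recomputed as you turn so the sound stays anchored in the virtual room.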
So what about hand / limb movement? We are starting to see some exciting input options from Oculus, Valve and Sony. Oculus Touch is a set of wireless controllers designed to make you feel like you're using your own hands in VR. You grab each controller and use buttons, thumbsticks and triggers during VR games. So, for instance, to shoot a gun you squeeze on the hand trigger. There is also a matrix of sensors on each controller to detect gestures such as pointing and waving.
Both Valve's Lighthouse positional tracking system and HTC's controllers for its Vive headset have a pretty similar set-up too. They both involve two base stations around the room which sweep the area with lasers. These can detect the precise position of your head and both hands based on the timing of when the lasers hit each photocell sensor on the headset and on each handheld controller.
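The timing idea can be sketched in a few lines. This is an illustrative model rather than Valve's actual implementation: we assume a base station that flashes a sync pulse and then sweeps a laser across the room at a constant rate (60 sweeps per second here), so the delay before the laser strikes a photocell tells you the angle from the station to that sensor:

```python
SWEEP_PERIOD_S = 1 / 60   # assumed: one full 360-degree sweep every 1/60 s

def sweep_angle(sync_time_s, hit_time_s):
    """Convert laser timing into an angle. The fraction of the sweep
    period elapsed between the sync flash and the photocell hit maps
    directly to how far the laser had rotated. Crossing a horizontal
    and a vertical sweep gives a direction to the sensor; several
    sensors at known positions on the headset then pin down its pose."""
    fraction = (hit_time_s - sync_time_s) / SWEEP_PERIOD_S
    return fraction * 360.0   # degrees swept since the sync pulse

# A photocell hit 1/240 s after the sync pulse is a quarter-sweep in:
angle = sweep_angle(sync_time_s=0.0, hit_time_s=1 / 240)
```

The elegance of the design is that the expensive part (precise timing) lives in cheap photodiodes and a clock, rather than in cameras doing heavy image processing.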
Other input methods can include anything from smart gloves to treadmills such as the Virtuix Omni, which allows the user to simulate walking around a VR environment with clever in-game redirections.
Eye tracking is possibly the final piece of the VR puzzle: an infrared sensor monitors your eyes inside the headset, so a headset such as the FOVE knows where your eyes are looking in virtual reality. The main advantage of this - apart from allowing in-game characters to more precisely react to where you're looking - is to make depth of field more realistic. In standard VR headsets, everything is in pin-sharp focus, which isn't how we're used to experiencing the world. If our eyes look at an object in the distance, for instance, the foreground blurs, and vice versa. By tracking our eyes, FOVE's graphics engine can simulate this in a 3D space in VR.
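A toy version of that gaze-driven depth of field might look like the following. It is not FOVE's algorithm: the linear model and its strength parameter are purely illustrative, though the underlying idea (focus error grows with the difference in dioptres, i.e. inverse distance) is how real optics behave:

```python
def blur_amount(object_depth_m, gaze_depth_m, strength=1.0):
    """Toy gaze-contingent blur: objects at the depth you are looking
    at stay sharp, and blur grows with the dioptre difference between
    the object's depth and the gaze depth reported by the eye tracker."""
    dioptre_error = abs(1.0 / object_depth_m - 1.0 / gaze_depth_m)
    return strength * dioptre_error

# Looking at something 10 m away: the foreground blurs, the distance stays sharp.
foreground = blur_amount(object_depth_m=0.5, gaze_depth_m=10.0)
distance = blur_amount(object_depth_m=10.0, gaze_depth_m=10.0)
```

Note how working in inverse distance means a half-metre focus error up close blurs far more than the same error at ten metres, matching everyday experience.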
Got to be said: it's pretty clever stuff… but how do we know that it is working? In the healthcare sphere how do we know that the patient is reacting to the treatment?
Since the introduction of functional magnetic resonance imaging (fMRI) over two decades ago, MRI research and demand have boomed.
With the newfound ability to map brain functionality using MR technology in real time, it became possible to connect people’s subjective thoughts and emotions with an objective understanding of brain activity.
One of the prominent uses of fMRI has, therefore, been to study the effects on brain activity of VR. fMRI technology now opens up new possibilities to understand how this illusion of presence in a virtual environment causes changes in the brain.
To better understand the neural mechanisms behind this trend, Joseph Andreano et al. conducted a study to explore one of the fundamental methods of altering immersion levels by comparing brain activity while using unimodal VR (only visual stimulation) and multimodal VR (with auditory and visual stimulation).
As expected, they found that there is a substantial response to immersion in virtual environments as compared to rest, particularly in regions associated with visual and auditory stimulation such as the primary visual and auditory cortices, fusiform cortex, and amygdala (Figure below).
More interestingly, however, results showed that in the multimodal environment, several brain regions not engaged in the unimodal VR were activated. The primary visual cortex, inferior temporal cortex, and part of the ventral visual stream, regions not associated with audition, were highlighted. Parietal somatosensory areas and bilateral clusters in the hippocampus were engaged as well.
These results seem to indicate that immersion in a virtual environment activates higher cognitive processes, particularly those related to memory. It also seems to suggest that, when combined, all the high tech described above has a more targeted and widespread effect than 'simple' visual stimulation.
Figure: Activations of visual cortex and hippocampus in response to audio stimulation compared to presentation without.
Hopefully that has given you an overview of (a) the headset technology and the multiple different systems which sit within (and behind) it and (b) the fMRI scanning that allows us to see in real time the effect that immersion is having upon the patient's brain patterns.
I have to admit that I find this stuff incredible: quite where it will all lead is anyone's guess and, whilst we can't deny that there is more work to be done and more research to be undertaken, the reality is that initial results are encouraging.
Last but by no means least, a plea: if you have the opportunity, please follow our new Twitter page @BIRGarage
This information is intended as a general discussion surrounding the topics covered and is for guidance purposes only. It does not constitute legal advice and should not be regarded as a substitute for taking legal advice. DWF is not responsible for any activity undertaken based on this information.