How One Rendering Technique Is Improving VR Performance

Rendering virtual reality is resource-intensive. It involves simultaneously rendering two high-resolution displays, one for each eye, which is why VR requires powerful hardware to process and deliver frames at a rate suitable for comfortable viewing.

That said, alongside hardware development, other exciting endeavours are being pursued to mitigate such graphical load – a considerable obstacle on the way towards VR adoption. One such technique is ‘Foveated Rendering’ (FR).

Foveated Rendering

The term ‘foveated rendering’ comes from the ‘fovea’, the part of the eye responsible for our ‘foveal vision’ – the centre of our field of view (FOV), where visual acuity is at its peak. Outside this region is our peripheral vision, which is well suited to fast detection of changes in our surroundings – particularly movement – but poor at discerning details such as colour and shape.


Foveated rendering is a method for improving VR performance, primarily by reducing GPU load, by lowering the quality of the output in our peripheral vision – ‘foveation’. Depending on how this is achieved, you can consider foveated rendering a form of Dynamic Resolution Rendering. Typically, a small region equating to around 1/10th of all pixels is rendered in full detail, and the remainder of the scene is rendered at a lower resolution (blurry, like our peripheral vision), reducing the load on computing resources (i.e. the GPU and CPU). In effect, this optimisation reduces bottlenecks and allows creators and developers to reallocate those resources towards maintaining a high level of visual fidelity where it matters most. Good stuff! What’s the point in rendering details we can’t see?
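To make the arithmetic concrete, here is a minimal sketch of the potential savings. The region sizes and resolution scales are illustrative assumptions, not figures from any real headset:

```python
# Minimal sketch: estimate the pixel-shading work that remains under
# foveated rendering. All numbers are illustrative assumptions.

def shaded_pixel_fraction(foveal_fraction, peripheral_scale):
    """Fraction of full-render shading work that remains.

    foveal_fraction  -- share of screen pixels rendered in full detail
    peripheral_scale -- linear resolution scale of the periphery (e.g. 0.5)
    """
    periphery = 1.0 - foveal_fraction
    # Halving linear resolution quarters the pixel count, hence scale**2.
    return foveal_fraction + periphery * peripheral_scale ** 2

# ~1/10th of pixels at full detail, periphery at half linear resolution:
work = shaded_pixel_fraction(0.1, 0.5)
print(f"shading work vs a full render: {work:.1%}")  # roughly a third
```

Even this crude model shows why the technique is attractive: the bulk of the screen can be shaded far more cheaply without touching the region we actually inspect closely.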

What’s the Catch?

Eye movements, or saccades, which bring parts of our peripheral vision into full detail, are very quick! Are these quick movements an issue? In short, yes. Foveated rendering needs to keep up with the human eye so that you don’t detect the low-resolution parts of the screen before the system has a chance to update what you are seeing. To keep up, it must maintain a high frame rate and, more importantly, low latency in responding to eye movements with a newly rendered and delivered image. Make no mistake, poor display performance is just about tolerable on 2D screens, but inside VR the result is… 🤮 VR is hugely latency-sensitive, and so are we! Generally speaking, anything over 20ms of latency in VR is going to cause nausea in viewers.
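As a back-of-the-envelope illustration, here is a sketch of a ‘motion-to-photon’ latency budget checked against that ~20ms figure. The stage names and timings are assumptions for illustration, not measurements from real hardware:

```python
# Sketch of a motion-to-photon latency budget for eye-tracked foveated
# rendering. Stage timings are illustrative assumptions only.

BUDGET_MS = 20.0  # rough nausea threshold discussed above

pipeline_ms = {
    "eye-tracker capture + processing": 5.0,
    "render the next frame":            8.0,
    "display scan-out":                 4.0,
}

total_ms = sum(pipeline_ms.values())
for stage, t in pipeline_ms.items():
    print(f"{stage:34s} {t:5.1f} ms")
verdict = "within budget" if total_ms <= BUDGET_MS else "too slow"
print(f"{'motion-to-photon total':34s} {total_ms:5.1f} ms ({verdict})")
```

The point of budgeting this way is that every stage eats into the same total: a faster eye tracker buys you slack for rendering, and vice versa.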

Is it currently, or will it ever be, possible to render everything as fast as the eye moves? Perhaps not – who knows? Is that a deal-breaker? Not really, because foveated rendering hardware/software solutions in various forms are already in use, as we will discuss in a follow-up post.

Luckily, things don’t have to be exactly right. There is a balancing act with foveated rendering – somewhat of a ‘Catch-22’: the smaller you make the fully detailed area, the greater the computational savings. However, the smaller this radius is, the lower the system latency must be and/or the gentler the peripheral foveation has to be.

[Figure: Large rendered area (left); small rendered area (right). Red = detailed render; yellow = peripheral foveation]

This is because, when the gaze moves from ‘point A’ to ‘point B’, a noticeable change occurs for the viewer: the image ‘pops’ from blurry to sharp as the resolution change comes into effect. Due to system latency (a slow response), the display is not keeping up with the viewer’s eye movement.

When you increase the radius of the detailed focal area, the frequency and duration with which this change can be perceived are reduced. As mentioned above, the degree of foveation is another variable that can be adjusted to mitigate the issue – i.e. the difference in level of detail between the full render and the foveated periphery is minimised.
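Putting rough numbers on that second knob: the sketch below (illustrative values only, not from any real engine) varies the peripheral resolution scale to show why gentler foveation hides the ‘pop’ at the cost of smaller savings:

```python
# Sketch: the gentler the peripheral foveation (scale closer to 1.0),
# the less jarring the blur-to-sharp transition, but the less shading
# work is saved. Values are illustrative assumptions.

def shading_work(foveal_fraction, peripheral_scale):
    # Remaining work relative to a full-resolution render; the squared
    # term reflects pixel count scaling with linear resolution squared.
    return foveal_fraction + (1.0 - foveal_fraction) * peripheral_scale ** 2

for scale in (0.25, 0.50, 0.75):
    work = shading_work(0.1, scale)
    print(f"periphery at {scale:.2f}x resolution -> {work:.1%} of full work")
```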

‘So zero latency is a panacea then!’ Yes, but there is also what I might call a ‘minimum effective performance’: a threshold below which a degree of latency is going to be acceptable, even if it’s a fine line.
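To put illustrative numbers on the radius trade-off discussed above (the per-eye buffer size and radii here are assumptions, not real headset specs):

```python
# Sketch: shrinking the foveal radius rapidly shrinks the share of
# pixels rendered in full detail -- bigger savings, but also a tighter
# latency requirement to hide the transition. Illustrative numbers.
import math

WIDTH, HEIGHT = 1440, 1600            # one per-eye buffer (assumed)
total_pixels = WIDTH * HEIGHT

for radius_px in (400, 250, 100):     # foveal radius in pixels
    foveal_pixels = math.pi * radius_px ** 2
    share = foveal_pixels / total_pixels
    print(f"radius {radius_px:3d} px -> {share:5.1%} of pixels in full detail")
```

Because the full-detail area grows with the square of the radius, even modest radius reductions yield large savings – which is exactly why the temptation to shrink it runs straight into the latency requirement.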

A Fun Exercise

Something else to consider, which somewhat alleviates the impact of the above issue and therefore reduces performance requirements, is a natural phenomenon of our vision called ‘saccadic omission’. This is where neither the eye movement itself nor the gap in visual perception is noticeable to the viewer during saccadic eye movements.

To understand this, stand in front of a mirror and switch your point of focus from one eye to the other; you will not notice any physical movement of your eyes as you do so. Now try the same with the front-facing camera on a mobile phone display. Due to latency, you will notice your eyes moving back and forth – a nice example of eye-movement speed vs hardware latency in effect.

Foveated Rendering Continued…

We’ve covered some fundamentals here, but there’s much more to discuss on FR. Be sure to read Part 2, where we’ll take a look at how the technique is currently being delivered and some exciting developments surrounding it.
