Virtual Reality for consumers is becoming increasingly common. There are now several mobile solutions that transform your smartphone into a virtual reality device by attaching a case with wide-angle lenses. There are also dedicated devices for your PC with additional positional tracking, so you can look around corners by moving your head. These devices deliver a highly immersive experience. However, individual pixels on the screen are still noticeable. According to researcher Michael Abrash, the screen resolution needs to go up to at least 8Kx8K. That means 64 megapixels need to be rendered, compared to the 4 megapixels we have in today’s best consumer head-mounted displays (HMDs).
Getting the performance to render 16 times more pixels will not be straightforward. To achieve this, we had better start looking into smarter ways of rendering. As of now, we treat an image rendered for Virtual Reality almost the same as an image rendered for a monitor. But inside an HMD, different things are happening, and we should not ignore them. We talked previously about the radial distortion and chromatic aberration caused by the lenses of the HMD, and how to handle those in a more advanced way without sacrificing sharpness. But when you look through an HMD, there are other distortions as well, which have not been considered during the rendering process so far. Let’s see what the image you see actually looks like:
Due to astigmatism in the lens, only the center of the image is perceived as sharp; with increasing distance from the center, the image becomes more and more blurred. Ideally we would like to minimize this effect, but that could only be done by adding more lenses, which is counter-productive to the goal of a cheap and lightweight HMD. Therefore, in practice, almost all new consumer wide-angle HMDs use just one lens.
Since the center area is seen best while the outer areas appear blurred, why should we still spend the same rendering effort and quality on every pixel?
We change this behavior by introducing the concept of sampling maps into our research ray tracing renderer (which uses Intel Embree internally).
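To illustrate the idea, here is a minimal sketch of how such a sampling map could be built: a per-pixel sample count that is highest in the sharp center of the lens and falls off toward the blurred periphery. The linear falloff, the function name, and the sample-count range are illustrative assumptions, not the actual profile used in our renderer.

```python
import math

def make_sampling_map(width, height, max_spp=8, min_spp=1):
    """Illustrative sampling map: assigns each pixel a sample count
    (samples per pixel) that decreases linearly with the distance
    from the image center, where the HMD lens is sharpest."""
    cx, cy = (width - 1) / 2.0, (height - 1) / 2.0
    max_r = math.hypot(cx, cy)  # distance from center to a corner
    smap = []
    for y in range(height):
        row = []
        for x in range(width):
            # Normalized radial distance: 0 at the center, 1 at the corners.
            r = math.hypot(x - cx, y - cy) / max_r
            # Linear falloff from max_spp (center) to min_spp (periphery);
            # a real renderer might use a profile matched to the lens blur.
            spp = round(max_spp - (max_spp - min_spp) * r)
            row.append(max(min_spp, spp))
        smap.append(row)
    return smap

smap = make_sampling_map(9, 9)
print(smap[4][4])  # center pixel: full quality (8 spp)
print(smap[0][0])  # corner pixel: minimum quality (1 spp)
```

The renderer can then consult this map per pixel and trace only as many rays as the map prescribes, concentrating the performance budget where the optics actually let the eye see detail.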