The site Road to VR has an article about Nvidia and the changes they are making to improve VR performance, and how that tech is moving into game engines. See: Unreal Engine and Unity to get NVIDIA’s New VR Rendering Tech.
Until recently video cards were given geometry (the mesh world), lights, and a camera position, and they rendered an image. As the camera moved, the geometry was reloaded, shaded, and rendered again with the mesh and lights positioned relative to the camera. (Which is technically saying it backward. Consider the camera fixed: to change the view we move the world. Think of your computer screen as the camera. It sits on your desk, never moving. Everything displayed on it moves. That is the reason for reloading geometry.)
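The camera-fixed idea above can be shown with a toy transform. This is a minimal NumPy sketch, not a real graphics API: "moving the camera" right is implemented as a view matrix that moves the whole world left.

```python
import numpy as np

def translation(tx, ty, tz):
    """Build a 4x4 homogeneous translation matrix."""
    m = np.eye(4)
    m[:3, 3] = [tx, ty, tz]
    return m

# A point in the world, 5 units ahead of the origin (homogeneous coords).
point_world = np.array([5.0, 0.0, 0.0, 1.0])

# "Move the camera" 2 units forward along +x...
camera_transform = translation(2.0, 0.0, 0.0)

# ...which the renderer applies as the inverse: move the world 2 units back.
view_matrix = np.linalg.inv(camera_transform)

point_in_view = view_matrix @ point_world
print(point_in_view[:3])  # [3. 0. 0.] -- the point is now 3 units ahead, not 5
```

The screen (camera) never moved; every vertex in the world got the opposite transform, which is why the geometry has to be pushed through the pipeline again for each new view.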
The design goal was to get that process fast enough for good real-time renders. That was achieved some time ago. Later improvements made things faster so more ‘effects’ could be added.
VR has essentially doubled the workload of video cards. A right-eye and a left-eye image are needed. That required loading the geometry twice, once for the left and once for the right. Nvidia has built new tech into its new cards that allows the geometry to load once for multiple renders. The idea is that the camera is in one place and the difference in position between the left and right images is so small the geometry can be reused without a reload. They seem to have figured out how to move the camera small distances without needing to reload the world.
This can make the render three times faster… instead of 50 frames per second (FPS) you could get 150 FPS. For VR this is awesome news.
The point of their article is that the Unreal and Unity game engines are changing to take advantage of this tech. The change is probably not trivial. As it is now, the game gives the geometry, lights, and camera to the card and gets back an image. It then shifts the camera for the other eye and repeats.
The new process will probably give the geometry, lights, camera, and a list of views to render, then get back an array of images. Adapting old processes to this new one is going to take some code crafting.
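The shift can be sketched in pseudocode. Everything here is invented for illustration; it is not NVIDIA's actual SMP interface, just the shape of the change: two uploads and two draws per frame versus one upload and a list of views.

```python
def upload_geometry(mesh):
    """Stand-in for pushing mesh data to the GPU; counts uploads."""
    upload_geometry.count += 1
upload_geometry.count = 0

def render(mesh, lights, view):
    """Stand-in for a draw call; returns a fake image label."""
    return f"image[{view}]"

mesh, lights = "world mesh", "lights"

# Old process: upload and render once per eye.
upload_geometry(mesh)
left = render(mesh, lights, "left eye")
upload_geometry(mesh)
right = render(mesh, lights, "right eye")
assert upload_geometry.count == 2   # geometry loaded twice

# New SMP-style process: upload once, hand the card a list of views,
# get back an array of images.
upload_geometry.count = 0
upload_geometry(mesh)
images = [render(mesh, lights, v) for v in ("left eye", "right eye")]
assert upload_geometry.count == 1   # geometry loaded once for both eyes
```

The savings come from cutting that second geometry pass, which is why the speedup shows up most in geometry-heavy VR scenes.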
Nvidia calls the new tech SMP – Simultaneous Multi-Projection. It is built into Nvidia’s 10 series cards, the series the GTX 1080 is in. That series is my hope for getting a cheap GTX 980. The 1080’s are selling for US$650 to $750. Retail new GTX 980’s sell for $400 to $450 (Newegg $319). Prices on eBay are $150 to $300. People are selling 980’s to buy 1080’s.
My hopes are growing. Nvidia released the GTX 1060 on July 7, 2016. That gives us three 10 series cards: the 1060, 1070, and 1080. The 1060’s MSRP is US$250. That has to push the GTX 960 price down. The 960 sells on Newegg for $170 (7/12). The 960’s price on eBay is about $100 to $160.
The 1060’s low price puts it within reach of many, if not most, SL users. That gets the VR support out there. It is going to make Google Cardboard and Samsung Gear VR much more usable.
Another interesting feature in the Nvidia cards is what they call Ansel™. It is the tech that allows a gamer to capture a 360 photo from within any game world. I have to wonder about the process. If it is a click-once-and-done capture, and I think that is what Ansel is, I can’t see it working in SL. The problem would be the interest list: the viewer renders only the stuff in view. Things behind you are not rendered and couldn’t be in the picture.
But, if you read the 360 photo link you find they use the panorama feature of some of the cameras to piece together a full 360 image. That could work. But, I won’t know until I can play with it.
For Second Life…
I wonder if the Lab will look at adopting the SMP render tech to move Second Life™ into a usable VR experience. Obviously only those with the newest hardware could use the tech. So, the Lab has time to let users upgrade while they decide how to support VR or not. But, the tech is going to make it possible for third parties to deliver VR.
The change for SL has to be in the render engine. That is a part of the system even the Lab doesn’t like messing with. Few of the third-party developers have the knowledge to modify the render engine. Everyone tends to tweak around the engine. Even the Avatar Complexity addition is less a change to the engine than a change that simply tells the engine not to render someone.
Then there is Project Sansar… I suspect they are very aware of Nvidia’s SMP tech and have been for some time. Nvidia supports hardware and software developers. I would bet the Lab is on Nvidia’s mailing list and knew about the coming 1080 and SMP in some form. If SMP is already built into Sansar and they are sweating getting a steady 90 FPS… yikes. But, I think SMP has to make it easier.