Drongle asked in the Content UG meeting about what I’ll call the breakeven point when using normal maps. When does a normal map cost more or less than the triangles it replaces?
This question comes from considering the cost added by downloading and rendering another texture, the normal map. As GPUs render triangles faster every day, there is a moving point at which it may be cheaper to add some number of triangles and skip the normal map. But where is that point?
The answer is: it differs from computer to computer. The only way to know is to test, and even then the test only tells you about the specific hardware you are using.
It would be good to have a rule of thumb that works for the majority of people. Nyx Linden says, “It depends on how overloaded the GPU is in terms of computing and at which stage of the pipeline, the overall memory usage, which depends on the scene and the available hardware, etc. Turning on normal maps for a surface is going to increase the load on the GPU computationally, regardless of how detailed the normal map is. Adding a more detailed normal map will increase memory needs and the stress of managing all the extra texture data. That being said, if you can get away with significantly simpler geometry for a little bit of texture data, that would speed things up greatly for any users that are geometry bound.
Keep in mind that one of the most important things for a scene is batching – the more items that can be drawn with the exact same textures/settings, the faster your graphics card can blast through them.
So adding a normal map on a small face of one prim probably won’t help much, but applying it to an entire object while greatly reducing the necessary poly count is probably a better idea.
In the end I’d say if you can get away with MUCH simpler geometry by adding a reasonably sized normal map, I’d go for it, but it’s one of those areas where artists have to balance one cost vs another, and the payoff will be different depending on who is viewing the result.”
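To put rough numbers on the trade-off Nyx describes, here is a back-of-envelope sketch. All the byte sizes are my own assumptions for illustration, not Lab figures; real costs depend on the viewer’s texture compression and vertex format.

```python
# Back-of-envelope VRAM comparison: a normal map vs. the triangles it replaces.
# All byte sizes are assumptions for illustration; real costs depend on the
# viewer's texture compression (e.g. block compression) and vertex layout.

BYTES_PER_TEXEL = 4        # uncompressed RGBA; compression shrinks this a lot
MIPMAP_OVERHEAD = 4 / 3    # a full mip chain adds roughly one third
BYTES_PER_VERTEX = 32      # position + normal + UV in a plain layout
INDEX_BYTES = 2            # 16-bit triangle indices

def normal_map_bytes(width, height):
    """Approximate VRAM for a normal map, mipmaps included."""
    return int(width * height * BYTES_PER_TEXEL * MIPMAP_OVERHEAD)

def mesh_bytes(tri_count):
    """Approximate VRAM for mesh geometry. A closed mesh has roughly half
    as many unique vertices as triangles; UV seams push that up a little."""
    vertices = tri_count * 0.5
    return int(vertices * BYTES_PER_VERTEX + tri_count * 3 * INDEX_BYTES)

print(f"512x512 normal map: {normal_map_bytes(512, 512) / 1024:>7.0f} KiB")
for tris in (1_000, 30_000, 100_000):
    print(f"{tris:>7,} triangles : {mesh_bytes(tris) / 1024:>7.0f} KiB")
```

By this crude count a 1k-triangle mesh plus a 512×512 normal map still needs less memory than a 100k-triangle mesh, and it spares the vertex pipeline besides – which is the ‘geometry bound’ case Nyx mentions.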
On the idea of using procedural textures, Nyx said, “Such a solution may help some graphics cards that are low on memory but have extra cycles to spare. It depends on exactly how the textures are generated/used, though; unless you implement a pretty low-level system, you’re not going to save much in terms of graphics memory on the video card itself. Also, there are ways of scaling back texture data to fit the available memory that could be improved upon first.”
I’ll take that to mean the Lab has thought about procedural textures but sees no benefit in using them for the SL system.
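For context, the core idea of a procedural texture is easy to show: texels come from a formula evaluated on demand instead of from a downloaded image, which is exactly the cycles-for-memory trade Nyx describes. A toy sketch, nothing like what the SL viewer actually does:

```python
# Toy procedural texture: texels come from a formula, not a stored image.
# Memory cost is near zero; the price is computing each texel on demand.

def checker_texel(u, v, scale=2):
    """Grayscale checker value for texture coordinates (u, v) in [0, 1)."""
    return 255 if (int(u * scale) + int(v * scale)) % 2 == 0 else 0

# Sample a 4x4 grid; a real implementation would evaluate this per pixel
# in a shader rather than in Python.
for row in range(4):
    print([checker_texel(col / 4, row / 4) for col in range(4)])
```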
With this information we can look at our own system using CPU-Z and GPU-Z; whichever processor is running at 100% is the bottleneck for that computer. We also need to look at our download pipeline to see how full it is. As it reaches 1 Mbps we are ‘sort of’ maxing out what the SL servers are going to deliver. Our frame rate, meanwhile, shows how well the system is handling the decompression and render processes. I could keep going, but you probably get the idea: we can only build for general conditions.
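The same sort of arithmetic applies to the download side. Here is a sketch with an assumed 1 Mbps pipe and a guessed 10:1 compression ratio; SL actually sends textures as JPEG2000 and compresses mesh data too, so the real figures will differ.

```python
# Rough download-time estimate at the ~1 Mbps ceiling discussed above.
# The 10:1 compression ratio is a guess, purely for illustration.

PIPE_BITS_PER_SEC = 1_000_000

def seconds_to_download(raw_bytes, compression_ratio=10):
    """Seconds to fetch an asset over the pipe at the assumed ratio."""
    wire_bytes = raw_bytes / compression_ratio
    return wire_bytes * 8 / PIPE_BITS_PER_SEC

print(f"512x512 normal map : {seconds_to_download(512 * 512 * 4):.2f} s")
print(f"100k-triangle mesh : {seconds_to_download(100_000 * 0.5 * 32):.2f} s")
```

Even with generous compression, the texture and the heavy mesh each take on the order of a second to arrive – another reason the breakeven point moves around from connection to connection.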
The general consensus in the 3D world is that normal maps can reduce the poly count of a scene by a large amount while increasing detail. We keep seeing people pop up in the forum with 30k (or even a ridiculous 100k) polygon items, asking why they are having problems. These are things many of us would build with fewer than 1k polys plus a normal map. There is little doubt that in such cases adding the normal map is the better choice.