Content Creation UG 2017 w45

Server-Side Baking

Performance testing has been completed. The Lab will need to add more baking servers.

From the Server-Beta UG, I learned that the new baking service outputting 1024×1024 textures has been rolled out across the ADITI grid. So, you can jump over there and see what you think.

Remember, this service bakes a set of three composite textures for the classic avatar’s head, upper body, and lower body. And, in general, there is no visible difference until you zoom in close enough that the pixel density of the texture on the object and the pixel density of your screen approach a 1:1 ratio. In other words, you are zoomed in close enough that each pixel in the texture uses about one screen pixel.

So, how close is that? Looking at the clothing templates gives you an idea. Consider the face. The template shows the head uses about 80% or more of the vertical height of the texture, and chin to forehead uses about a quarter, maybe a third, of it. So, on a 1024 bake the face covers a vertical run of roughly 256 pixels.

On a 1920×1080 screen, those 256 pixels are about a quarter of the screen’s height. My screen physically measures 12” high (30.5cm). So, to get near the 1:1 pixel ratio, I need my face to appear about 3” tall (7.6cm) on screen. When my face is 3” high, all of my body is near the 1:1 ratio of texture to screen. This is the size at which the skin and other clothing render at their best.

Currently, the bake system on AGNI, the main grid, is baking the composites at 512×512. There the 1:1 ratio happens when the face is about 1.5” (3.8cm) high on the screen.
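If you want to plug in numbers for your own screen, here is a minimal sketch of that arithmetic in Python; the 25% face fraction and my 12-inch, 1080-pixel-high monitor are the assumptions to swap out:

```python
# A minimal sketch of the 1:1 texel-to-pixel arithmetic above. The 25% face
# fraction and the 12-inch, 1080-pixel-high screen are my assumptions.

def face_height_for_1to1(bake_px, face_fraction=0.25,
                         screen_px_high=1080, screen_inches_high=12.0):
    """Physical height the face must appear on screen for roughly one
    texture texel per screen pixel."""
    face_texels = bake_px * face_fraction  # vertical texels across the face
    return face_texels / screen_px_high * screen_inches_high

for bake in (512, 1024):
    inches = face_height_for_1to1(bake)
    print(f"{bake} bake: face about {inches:.1f} in ({inches * 2.54:.1f} cm) tall")

# 512 bake: face about 1.4 in (3.6 cm) tall
# 1024 bake: face about 2.8 in (7.2 cm) tall
```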

However, if you photograph avatars as Strawberry Singh does, using 3000×3000px images, the whole game changes. You can no longer judge what you are getting from the viewer on your screen. You have to take the image into Photoshop, GIMP, Paint.net, or another image editor, zoom it to 1:1, and work out the ratio of the skin/clothing texture to the final image.
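To put a number on it, this is a small sketch of that check; the 256-texel face height assumes a 1024 bake, and the 900-pixel measurement is only a placeholder for whatever you measure in your editor:

```python
# Rough check of how much a skin texture is stretched in a high-res snapshot.
# face_texels assumes a 1024 bake where the face spans ~25% of the height;
# face_px_in_photo is whatever you measure in your image editor.

def texture_scale_in_photo(face_px_in_photo, face_texels=256):
    """How many photo pixels each texture texel gets stretched across."""
    return face_px_in_photo / face_texels

scale = texture_scale_in_photo(face_px_in_photo=900)  # example close-up head shot
if scale > 1:
    print(f"Each texel covers about {scale:.1f} px; the texture is being upscaled.")
else:
    print(f"About {1 / scale:.1f} texels per photo pixel; detail to spare.")
```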

Most of us just grab the skin texture from the cache, look at it at a 1:1 texture-to-screen ratio, and see how well it shows in the image. Then, if it is the skin we are trying to show off, we adjust our zoom until it looks good and avoid the math brain damage.

It should be obvious that seeing a 1:1 ratio on your screen while in SL is rare. How textures end up mapping to the screen is complex, but understanding it is the key to creating good-looking, optimized textures.

First Phase of Bakes

Vir Linden points out that this is just the first step in changing the bake service. The goal is to use the service for mesh objects, which raises the problem of how we will talk to the baker and tell it how to bake composite images for our meshes. Those communication problems will have to be worked out in phase two.

With a mesh able to have 8 faces, will we use 8 textures? Or will there be a way to tell the bake engine which part of a texture to use for which face? The basic idea is to optimize the use of textures, but you can probably see the complication that adds.

I still have questions about how flexible the system will be. Do I only ever get one 1024 texture for the 8 faces? Or will I be able to apply a texture to the faces I want, say faces 1, 3, and 7, with another texture on 2, 5, and 6 and yet another on 4 and 8?
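Nothing like this exists yet, but just to make the question concrete, here is a purely hypothetical sketch of the kind of face-to-bake mapping I am asking about; the bake names are made up and do not come from the Lab:

```python
from collections import defaultdict

# Purely hypothetical: one way a creator might want to assign the 8 faces of
# a mesh to shared bake textures. No such API exists; this only illustrates
# the flexibility question.

face_to_bake = {
    1: "bake_A", 3: "bake_A", 7: "bake_A",  # one composite shared by faces 1, 3, 7
    2: "bake_B", 5: "bake_B", 6: "bake_B",  # another on faces 2, 5, 6
    4: "bake_C", 8: "bake_C",               # a third on faces 4 and 8
}

# Group faces by the bake texture they would share.
by_bake = defaultdict(list)
for face, bake in face_to_bake.items():
    by_bake[bake].append(face)

for bake, faces in sorted(by_bake.items()):
    print(f"{bake}: faces {sorted(faces)}")
```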

And when will we see normal and specular maps added to the bake service? Or will we?

Skins are becoming awesome. At the fantasy something event, I saw a YS&YS skin with metal sections. Gorgeous. I haven’t tried the demo yet, but I plan to. I can see it is using specular mapping and maybe normal mapping.

If part of the idea for Bakes On is to reduce the mesh body layer count and the number of textures used by mesh bodies, why would mesh designers give up the use of normal and specular maps? Wouldn’t they just continue to use multiple meshes with diffuse, normal, and specular textures? I think they will.
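As a back-of-the-envelope illustration of why fewer layers and textures matter, here is a rough comparison of uncompressed texture memory; the layer and texture counts are my own guesses at a typical onion-skin mesh body, not measured figures:

```python
# Back-of-the-envelope VRAM comparison. The layer and texture counts below
# are my guesses for a typical onion-skin mesh body, not measurements.

BYTES_PER_TEXEL = 4  # uncompressed RGBA in VRAM, ignoring mipmaps

def texture_mb(px):
    return px * px * BYTES_PER_TEXEL / (1024 ** 2)

# Onion-skin body: say 4 layers x 3 body sections, each with its own
# 1024x1024 diffuse texture.
onion_mb = 4 * 3 * texture_mb(1024)

# Bakes-on-mesh style: one 1024x1024 composite per body section.
baked_mb = 3 * texture_mb(1024)

print(f"Onion-skin body: ~{onion_mb:.0f} MB of diffuse textures")
print(f"Baked body:      ~{baked_mb:.0f} MB of diffuse textures")
```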

Only time will tell how this plays out. Vir does seem to be aware of these challenges and is looking at how to handle them. But for now, Vir isn’t saying what they will or won’t do in regard to normal and specular maps, just that they are NOT part of the current plan.

The materials layers (normal and specular) are not part of the current bake engine. So it repeatedly sounds like they will leave materials out until they have the rest of the system working. I think this will severely restrict the usability of the bake service and delay its acceptance by designers.


2 thoughts on “Content Creation UG 2017 w45”

  1. I was wondering what Animesh was. Thank you for this post; it helped me get a better picture. But I am still confused as to why it requires a special viewer.

    I think you are an experienced OpenSim user too. If I compare Animesh features to the OpenSim NPC feature, would you say it is doing the same thing?

    As you know, OpenSim offers non-player characters. There are server-side bots you can use in your region. You can see them now, and they don’t require any viewer capabilities not already available.

    For example, NPC dancers are used by the Aine dance ball so you can dance in couples at events even if you’re single. You can also visit OutWorldz’ VIRUNGA – HOME OF THE MOUNTAIN GORILLAS, where NPCs are used to move jungle animals, including elephants, hippopotamuses, tigers, and even Diane, to create an immersive experience.

    So would you care to explain to me what the difference is with Animesh?

    • The reason a special viewer is needed is that current viewers only know how to handle a skeleton connected to an agent, a person. New viewers will be able to render meshes and skeletons NOT attached to an agent. The OpenSim NPCs sort of cheat by pretending the NPC is an agent with a viewer attached. So, if I understand correctly, the OS NPCs cost more to render than animesh because of the work-around.

      OpenSim NPCs are in some ways better than animesh for making and animating NPCs. But animesh will likely be more versatile. We will be able to make pets and other non-humans. Some designers are thinking of using animesh to animate the sails on a sailboat. Others are planning to use it to animate steam engines.

      Also, we will be able to attach animesh to our avatars.
