Second Life: Content Creation (Bento) 2017 w08

I get everything working, then something breaks… So, I have some video from the first part of the meeting. The viewer never connected to voice. Took me a bit to realize it. Multi-tasking tends to promote oversights.

Re-logging broke OBS… and while I thought it was recording, it wasn’t… 

There are some interesting avatars to look at in the video. But, that is about it.

Medhue’s bearded dwarf avatar drew some interesting comments. It is contract work…

Elizabeth Jarvinen (polysail) is working on a Bento skeleton for something… without audio I was missing a lot.

There was discussion about non-player characters (NPCs) and being able to animate objects, not just avatars. Part of the discussion dealt with avoiding having to script the motion of objects and instead being able to use animations.

The discussion then went into the benefits of being able to animate mesh and avoid the current generation of onion-skin mesh animations.

As it is now, to animate a mesh you make a copy of the mesh for each frame of the animation. Then you script the alpha of each copy to switch on and off, creating the appearance of a moving mesh. This is expensive in polygons and textures.
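In scripting terms, that flip-book approach looks something like the sketch below. This is only a minimal illustration of the idea, not anyone’s actual product script; it assumes each frame of the animation is a separate copy of the mesh linked in as a child prim (links 2 through N) under an invisible root, and the 0.1-second timer is an arbitrary frame rate I picked for the example.

// Minimal LSL sketch of frame-flip mesh "animation" (an illustration, not a product script).
// Assumption: links 2..N are identical meshes posed one frame apart; link 1 is an invisible root.
integer gFrame = 2;                                  // first frame lives at link 2

default
{
    state_entry()
    {
        llSetLinkAlpha(LINK_SET, 0.0, ALL_SIDES);    // hide every frame (and the root)
        llSetLinkAlpha(gFrame, 1.0, ALL_SIDES);      // show the first frame
        llSetTimerEvent(0.1);                        // flip roughly 10 times a second
    }

    timer()
    {
        llSetLinkAlpha(gFrame, 0.0, ALL_SIDES);      // hide the current frame
        gFrame = gFrame + 1;                         // advance to the next link
        if (gFrame > llGetNumberOfPrims())
        {
            gFrame = 2;                              // wrap back to the first frame
        }
        llSetLinkAlpha(gFrame, 1.0, ALL_SIDES);      // show it
    }
}

Every one of those linked copies carries its full polygon and texture load even while it is hidden, which is where the expense comes from.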

Some think being able to animate mesh and use Pathfinding would be an exciting change in SL. But, there are problems as people have found out with Pathfinding pets.

Conversation got around to the performance improvements being seen in Bento viewers when in crowds of avatars. I hadn’t heard that the Lindens optimized avatar rendering… or I forgot.

Those making animations are looking into the animation format and SL constraints in an attempt to make animations that work with any size avatar. There are parts of the animation system that have little documentation. Vir has run across stuff in the code and will see about getting some more documentation.

There is also discussion of having the Server Side Baking (SSB) used by the classic avatar expanded to bake textures onto mesh. I think this sounds interesting, but I can’t see the advantage… As best I can tell, the proponents of the idea figure they could eliminate the onion-skin mesh body parts. I think the idea is that underclothes, tats, skin, and clothes could be baked into a single texture.

Consider: I think a Slink body has 5 layers (alpha, skin, tattoo, clothes, and underwear)… If the body (5), feet (5), hands (5), and head (5) each have five layers and the nails have one, that is a total of 21 texturable layers. Each layer can have material properties: specular (shiny), normal (bump), and diffuse (color). That makes three texture slots per layer.

Figuring out what that stacks up to… we have a possible 21 × 3 = 63 textures the mesh could be using. A 1024px texture uses 1024 × 1024 × 4 bytes (3 color channels plus alpha), about 4.2 MB uncompressed. So, the mesh avatar could be carrying roughly 264 MB of texture data. If all this could be baked into 4 textures… that is a significant savings.
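Spelling that comparison out, using my layer counts above (which are guesses; every brand of body is different) and uncompressed 1024px textures at about 4.2 MB each:

21 layers × 3 material maps = 63 possible textures ≈ 264 MB
4 baked textures, diffuse only ≈ 17 MB
4 baked textures, each with normal and specular maps too (12 textures) ≈ 50 MB

Either way, the baked result is a small fraction of the worst case.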

If this sounds complicated… it is. More than you may think. The various brands of mesh body parts add to the complication. While I see significant potential for reducing the data load… I’m not sure it justifies the complication of requiring users to conform to some system bake process.

There is disagreement on what size textures the bake engine (SSB) should produce: 512 or 1024 pixels. Users want 1024…

If RL doesn’t interfere, I’ll try to capture the next meeting.
