After a long day at the San Diego fair, getting home at 6 AM, and then sleeping until noon, I find we have a new technology announcement from Linden Lab. You can find it here: Project Shining to Improve Avatar and Object Streaming Speeds.
They have tried to tone down the geek speak. So, it is readable. If you are still wondering what it means for you and the future, well… I can give you my take and speculation.
The Lab has divided the new work into 3 groups called the Shining Projects:
- Project Sunshine
- Object Caching
- HTTP Library
Project Sunshine comprises changes coming out of the research done into cloudy and blurry avatars. I started writing about the cloudy-blurry issue in March, as Charlar separated from the Lab and Nyx Linden took over the Content & Mesh group. (#SL Clouds, Grey, and Blurry Avatars)
Early in the year we started to see more problems with avatars failing to rez. The term Bake Fail became well known across the grid.
If you don’t know how avatars render: the avatar is a special case, a more complex process than rendering the rest of Second Life. The idea is to save CPU cycles on both users’ computers and the SL servers. Your viewer downloads all the textures that make up your avatar and its clothes: shape, skin, top, pants, hair, and shoe textures. The viewer ‘bakes’ those into a single composite texture, and you see your avatar render nice and sharp when that bake completes. Your viewer then uploads the composite texture for all to see, which saves others downloading all the individual textures and baking them themselves. Your avatar goes blurry again as your viewer downloads and decompresses the composite you just uploaded, and it sharpens when that process finishes. You are the only one who sees the double blur on your own avatar.
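The baking step is essentially alpha compositing: layering skin and clothing textures, bottom to top, into one image. A minimal sketch of the idea in Python, working on single pixels; the layer values and names here are my own illustration, not the viewer's actual code:

```python
# Sketch of the 'bake': alpha-composite avatar layers (skin, then
# clothing...) bottom-up into one texture, shown here per pixel.
# Pixels are (r, g, b, a) tuples with channels in 0.0-1.0.

def over(top, bottom):
    """Standard 'source over' alpha compositing for one pixel."""
    tr, tg, tb, ta = top
    br, bg, bb, ba = bottom
    a = ta + ba * (1 - ta)                     # resulting alpha
    if a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    blend = lambda t, b: (t * ta + b * ba * (1 - ta)) / a
    return (blend(tr, br), blend(tg, bg), blend(tb, bb), a)

def bake(layers):
    """Composite a stack of layers (bottom layer first) into one."""
    result = layers[0]
    for layer in layers[1:]:
        result = over(layer, result)
    return result

skin  = (0.8, 0.6, 0.5, 1.0)     # opaque skin pixel
shirt = (0.1, 0.2, 0.9, 0.5)     # half-transparent shirt pixel
baked = bake([skin, shirt])
```

A real bake does this for every pixel of every layer, which is why offloading it matters.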
If that process hangs or fails there is a problem. Where it fails affects what you or others see.
I suspected the recent roll of server software that uses the Pre-rendered Avatars was part of the effort to resolve avatar bake fail problems. See: #SecondLife News Week 26. Since this is the only package currently running in the Release Channels, it should roll to the main channel next week. It is unclear if that will be Tuesday or Wednesday because Monday is a national holiday in the US.
The original avatar-rendering process was designed to reduce the load on the SL servers and distribute the work across the client computers, our computers. Linden CPU cycles were saved at the expense of greater bandwidth consumption. That plan reduced the bottleneck that existed then.
Now that compositing process is moving to the Linden servers, which will spend Linden CPU cycles to handle it. Obviously something has changed, and a decision has been made that this new choice will better handle the current bottlenecks. Remember, more and more of SL’s communication happens via HTTP, which is much more fault tolerant.
Internet connection speeds are something of a misnomer. A megabits-per-second rating describes how much data we can get through the pipe in a set time. I suspect many people confuse that with the speed at which the data moves. Data moves across the Internet at just under the speed of light; the electromagnetic wave travels at roughly 96 to 97% of c, not enough difference to bother with. From New York City (NYC) to Los Angeles (LA), about 2,500 miles (4,000 km), it takes about 0.0134 seconds, or 13 milliseconds, for the signal to get there.
What this means for us in SL is a speed limit measured in frames per second (FPS). Ideally our viewers run at 30+ FPS for a smooth, lag-free experience. That gives the viewer about 33 milliseconds to do everything it needs to do, including talking with the server, to render an image on our screen. The Lindens plan most of the design around the idea that we will get 45 FPS. Either way, just getting the information from LA to NYC burns up roughly a third of that budget.
It actually takes longer for the wave to travel from NYC to LA because the wave must go through routers and switches and they take time to function. This ups the time to 25 to 80 milliseconds on a good day.
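The arithmetic above is easy to check. A quick back-of-envelope calculation, using the rough distance and signal-speed figures from this post:

```python
# Back-of-envelope check of the NYC-to-LA latency figures above.

C = 299_792              # speed of light in vacuum, km/s
distance_km = 4_000      # roughly NYC to LA
signal_speed = 0.96 * C  # EM wave in fibre/copper, ~96% of c

one_way_ms = distance_km / signal_speed * 1000
frame_budget_ms = 1000 / 30      # time available per frame at 30 FPS

print(f"propagation delay: {one_way_ms:.1f} ms")         # ~13.9 ms
print(f"frame budget at 30 FPS: {frame_budget_ms:.1f} ms")  # ~33.3 ms
```

That is the best case, before any routers or switches; the real round trip eats a much bigger slice of the frame budget.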
It is very likely the Lindens are starting to run up against the hard limits of physics. We can’t relieve the speed bottlenecks by changing Einstein’s physics, at least not yet. Our only choice is to shorten distances, and that seems to be what the Lab is doing. By handling the compositing process on the SL servers, a huge delay can be taken out of the process.
It seems the Lindens are also going to try caching appearances. The Pre-Rendered Avatars we are seeing now are likely a test of the idea. All the clothes-and-skin combinations in the Linden Library avatars we find in our inventory are being cached.
I suspect that will guide future development by letting them accumulate data on cache hits and misses. Consider: would you cache clothes and skin together, or maybe just the clothes? Would it be faster to composite clothes worn by lots of avatars into the cache, then composite those clothes with the different skins as needed? Probably no one really knows yet.
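Collecting that data could be as simple as tallying hits and misses per baked appearance. A toy sketch, entirely my own illustration with made-up outfit IDs:

```python
from collections import Counter

# Toy hit/miss tally: which baked appearances are worth keeping cached?
cache = {"library-boy-01", "library-girl-03"}   # pre-baked appearances
hits, misses = Counter(), Counter()

def request_appearance(outfit_id):
    """Serve from the bake cache if present, otherwise count a miss."""
    if outfit_id in cache:
        hits[outfit_id] += 1
    else:
        misses[outfit_id] += 1

for outfit in ["library-boy-01", "custom-42",
               "library-boy-01", "custom-42"]:
    request_appearance(outfit)

# Frequently-missed outfits are candidates for pre-baking next.
print(hits.most_common(1), misses.most_common(1))
```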
Remember, a few weeks ago the Lindens cleaned out the database, taking it from 1.2 petabytes down to 192 terabytes. So, they have lots of data storage sitting around.
Speed may be a significant consideration. However, they describe a control issue. Because of our varied video cards and viewers, it seems they are seeing variation in the composite avatar textures being baked and sent in by users.
In late 2011 I was still writing about problems between viewers using open source JPEG 2000 software and the Lab using the proprietary KDU (Kakadu) software to generate its JPEG 2000 images. Only Firestorm and the Linden viewers use KDU, AFAIK.
By using a compositing server on the Linden side those problems can be removed. They can have complete control over the process and not have to deal with what nVidia, AMD, and Intel do. Also, viewer differences will make less of a difference.
With appearances cached, and with data retrieval happening locally on the Linden side whenever an appearance is uncached, the process should be much faster and more reliable.
When I first came into SL, caching was really bad. Today, new people coming into SL wonder if there really is a cache that works. So the Lab’s statement that there is room for improvement is a bit of an understatement. Caching has gotten better while I’ve been in SL, but it still seems lame.
They point out that what is actually cached is much more limited than I imagined. I know that when I’ve been out exploring and then return home, the render of my home is far slower than I expect, even though I’m there every time I log in. I’ve played with cache sizes and other settings to see if that could be improved, without good results.
The viewer-and-server caching process is going to change. The viewer and server will first quickly negotiate what is current and what is out of date. The viewer will then start drawing whatever current data is locally available in its cache.
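That handshake, the viewer asks which of its cached items are stale, draws what is still fresh, and re-fetches the rest, might look something like this. This is my own sketch; the asset IDs and version numbers are made up:

```python
# Sketch of the viewer/server cache handshake described above.
# Viewer sends (asset_id -> cached_version); the server replies with
# the set of stale IDs; the viewer renders fresh assets from cache
# immediately and queues the stale ones for download.

def server_check(cached_versions, current_versions):
    """Return the IDs whose cached copy is out of date or unknown."""
    return {aid for aid, ver in cached_versions.items()
            if current_versions.get(aid) != ver}

viewer_cache = {"tree": 3, "house": 1, "avatar": 7}   # what we hold
server_state = {"tree": 3, "house": 2, "avatar": 7}   # what's current

stale = server_check(viewer_cache, server_state)
fresh = set(viewer_cache) - stale

# Render `fresh` right away; the download queue gets `stale`.
print(f"draw from cache now: {sorted(fresh)}")
print(f"re-download: {sorted(stale)}")
```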
I take it the viewer is now going to keep much more data in the cache. So, if you have your cache size set low, plan to increase it. I’ve been seeing cache sizes max out around 600 MB. I am betting I’ll see that go up.
The Lab is saying that once something is downloaded, the viewer should never have to re-download it. Well, let’s put that in perspective. All the data in SL is 192 terabytes. If I start racing through SL, it shouldn’t take too long to fill up even a 2-terabyte drive. So, even if it isn’t mentioned in the announcement, there is going to be some cache clearing at work.
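If the cache can’t hold everything, some eviction policy has to run under the hood. A minimal sketch of a size-capped cache with least-recently-used eviction, my own illustration rather than the viewer’s actual policy:

```python
from collections import OrderedDict

# Size-capped cache with least-recently-used (LRU) eviction: when
# adding an asset would exceed the cap, drop the assets that have
# gone untouched the longest.

class LRUCache:
    def __init__(self, capacity_bytes):
        self.capacity = capacity_bytes
        self.used = 0
        self.items = OrderedDict()             # asset_id -> size

    def get(self, asset_id):
        if asset_id in self.items:
            self.items.move_to_end(asset_id)   # mark recently used
            return True
        return False

    def put(self, asset_id, size):
        while self.used + size > self.capacity and self.items:
            _, evicted_size = self.items.popitem(last=False)
            self.used -= evicted_size
        self.items[asset_id] = size
        self.used += size

cache = LRUCache(capacity_bytes=100)
cache.put("skybox", 40)
cache.put("statue", 30)
cache.get("skybox")          # touch skybox so it's recently used
cache.put("terrain", 50)     # forces eviction of 'statue'
```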
But, it does sound like the cache is going to be better.
If you played in Blue Mars, you probably noticed higher frame rates and greater detail. A significant part of the reason they could do that is that the region you were in was pre-downloaded: all the textures, meshes, scripts, and so on. Far less time was spent downloading and decoding the assets that made up the region.
By having a more effective cache we move closer to that model. We and the SL servers will spend less time downloading things.
We still have the problem of decoding the JPEG2000 textures. Currently the images are stored in a format that can’t be opened by most imaging programs. It is part of the Intellectual Property (IP) protection used in Second Life. No mention is made of any changes to that process. So… we are still unlikely to get renders as fast as they could be from cached textures.
HTTP stands for: hyper text transfer protocol. Cool huh? So, WTH does it mean to you?
You may not know that all Internet communication, and even cell phone communication, happens via small packets of information. It is like sending each page of a 300-page book in its own envelope: 300 envelopes to send the book. It’s a bummer if an envelope gets lost; you miss that page.
Error correction is about how to go back to the original sender and get a copy of the missing page.
Establishing a communication channel, breaking things into packets (envelopes), transmitting them, receiving them, acknowledging receipt, resending a replacement for unacknowledged pages, assembling them in the right order on the receiving end, and eventually closing the channel: these are tasks handled by HTTP and the TCP layer beneath it.
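The envelope analogy maps straight onto code. Here is a toy simulation of the acknowledge-and-resend loop. This shows the general reliable-delivery idea, which in practice TCP handles beneath HTTP, not real protocol code:

```python
import random

# Toy reliable delivery: split a message into numbered 'envelopes',
# send them over a lossy channel, and resend any that are not
# acknowledged until every page has arrived.

def send_reliably(message, page_size=4, loss_rate=0.3, seed=1):
    rng = random.Random(seed)                  # deterministic demo
    pages = {i: message[i:i + page_size]
             for i in range(0, len(message), page_size)}
    received = {}
    rounds = 0
    while len(received) < len(pages):
        rounds += 1
        for num, page in pages.items():
            if num in received:
                continue                       # already acknowledged
            if rng.random() > loss_rate:       # envelope survived
                received[num] = page
    # Reassemble the pages in order on the receiving end.
    return "".join(received[n] for n in sorted(received)), rounds

text, rounds = send_reliably("all internet traffic travels in packets")
```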
The Lab has long been ripped on for using a lame HTTP setup. So, it seems that is being remedied.
Just as more and more graphics render functions are moving from software into the graphics cards, so too are more network functions moving from software into the chips on network cards. A new library will move those tasks out of the Linden software and call on the hardware to handle them. This should mean better overall network performance.
Some Other Factors
Here and there we see or hear slips of information coming out of the Lab. Often while a Linden won’t say anything about an issue they will say someone is looking at or working on it. It is hard to piece those bits together.
There are rumors, or at least I’ll call them that, that Runitai is working on the render pipeline. I’ve heard it said that some of the work is awesome. I suppose some of the third party viewer development people know, but they aren’t talking.
If true, better rendering is in the pipeline.
This rumor has been around a while now. Here and there we get hints. Ask a TPV dev and they will likely say, “I can’t say anything.” That is different from their saying, “not that I know of…”
So, as we get better caching, server side compositing and better network connections we may see a real materials system arrive.
In Blue Mars and in Cloud Party, much of the great detail and look is due to their materials systems. I believe it is just a matter of time until we see that in SL. Being able to add normal maps to models is a standard part of professional development systems. For SL to get there, we have to have a materials system at some point.
I won’t hold my breath or put money on it, but I believe a materials system is coming.
It looks like we are going to see some major improvements to Second Life. These are not sudden changes to the SL development plan; I recognize some of them in projects that have been in progress for months.
I do think Cloud Party likely triggered the announcement. Cloud Party is new, but it does have promise, and the fact that it has two things professional developers want has probably moved the Lab to make an announcement to mitigate some migration away from SL.
Cloud Party has user-made animation skeletons and a materials system. These are things we have asked for for a long time. A number of Lindens want them too. I suspect the challenge has been how to open the features up to amateur content creators while maintaining acceptable performance. With the infrastructure changes, it may be possible to add them.
If they do add them, expect these new features to be part of the ‘new’ accounting system we use with mesh objects.
Whatever the case, I find the news exciting.