Oz Linden is still looking for someone to test the current viewer caching system and then either fix it or write a new one. I am one who would love to see this happen. I’ve never believed the caching system worked well, and now Oz says he sees the viewer downloading things he thinks are already in, or should be in, the cache.
If the viewer does not find the items it needs in the local cache, it of course has to ask the system and download them, which means the CDN is supposed to deliver them. But, the CDN has to check with the asset servers to see whether an asset has changed and, if it has, fetch it again. Yet, as I understand it, SL assets don’t change. A changed asset gets a new UUID, so to the viewer and the CDN it would be a new thing.
There is also the case of an item falling out of the CDN’s cache while there is still a copy in your local cache. If the viewer finds it there, that is a huge time savings: it avoids the CDN freshness check and a possible re-download from the asset servers when the CDN cache has timed out.
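The lookup order described above can be sketched in a few lines. This is only my illustration, not the Lab’s actual code: `fetch_from_cdn` is a hypothetical stand-in for the real transfer path, and the key assumption (the same one the argument rests on) is that SL assets are immutable, so a given UUID always names the same bytes and a local-cache hit never needs a freshness check.

```python
# Sketch of cache-first asset lookup, assuming immutable assets keyed by UUID.
# `fetch_from_cdn` is hypothetical; the real viewer/CDN path is more involved.

local_cache: dict[str, bytes] = {}


def fetch_from_cdn(uuid: str) -> bytes:
    """Hypothetical network fetch; the CDN in turn falls back to origin."""
    return b"asset-bytes-for-" + uuid.encode()


def get_asset(uuid: str) -> bytes:
    # 1. Local viewer cache: cheapest path, no network at all.
    if uuid in local_cache:
        return local_cache[uuid]
    # 2. CDN (which asks the asset servers on a miss).
    data = fetch_from_cdn(uuid)
    # Because a changed asset gets a NEW UUID, this entry can be kept
    # forever with no expiry or revalidation.
    local_cache[uuid] = data
    return data
```

The design point is step 2’s comment: with immutable, UUID-keyed assets, neither the viewer nor the CDN should ever need to ask “has this changed?” for a cached item.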
I also wonder whether CDNs can take advantage of JPEG 2000 files. The idea in JPEG 2000 is that a single file contains the image at various sizes and quality levels. Plus, parts of the image can be saved at different resolutions. I am not at all sure one can specify that kind of image control when uploading an image to SL.
A JPEG 2000 request is made for one of the image’s quality levels and resolutions, and only the necessary image data is sent. Other sizes and qualities can download in the background. The idea is to get something to the user as fast as possible from a compact file.
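One way this “send only what’s needed” idea can work over plain HTTP is with byte-range requests: if the codestream is ordered so earlier bytes carry the coarser quality layers, a client that knows the byte offset of each layer boundary can ask for just a prefix. A small sketch, where `layer_offsets` is a hypothetical table of those boundaries (real JPEG 2000 streaming uses protocols like JPIP, and I’m not claiming SL or its CDN does any of this):

```python
# Sketch of progressive download via HTTP Range requests, assuming a
# codestream whose quality layers are laid out front-to-back and a
# (hypothetical) table of layer boundary offsets in bytes.

def range_header_for_layer(layer_offsets: list[int], layer: int) -> str:
    """Build the HTTP Range header value covering layers 0..layer."""
    end = layer_offsets[layer] - 1  # Range end byte is inclusive
    return f"bytes=0-{end}"


def serve_range(codestream: bytes, header: str) -> bytes:
    """Simulate a server (or CDN) honoring that Range request."""
    _, rng = header.split("=")
    start, end = (int(x) for x in rng.split("-"))
    return codestream[start:end + 1]
```

So a viewer wanting a quick low-quality preview might send `range_header_for_layer(offsets, 0)`, decode what comes back, and fetch the remaining layers in the background. A CDN can cache and serve such ranges without understanding JPEG 2000 at all, which is the part relevant to the question above.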
The Lab may well be using a computationally demanding format (encoding/compressing and decoding/decompressing) in a system that cannot take advantage of the benefits of JPEG 2000. Or the benefits may be built into the image format and everyone supports it, so it just works. But, since special decoding and streaming handlers are needed, I don’t see how that is likely. That, though, is above my pay grade.
But, most of this is moot if the cache works well. Anyone out there good at writing caches?