Second Life™ data centers are located in Phoenix and Dallas, according to Maestro Linden. He is not sure, but thinks there may be another in DC. A few months ago the Lab was consolidating from three data centers to two. So, it is probably two, but one never knows.
When asked, Maestro confirmed there is no data center in San Francisco.
If you geo-locate the region IP addresses you can get from the viewer's Help > About…, you will probably find the addresses are in other locations. For me, most regions appear to be served from San Francisco. I suspect that if you are on the east coast of the USA you'll find them in a different city, but I'm not sure. Let me know if you check it.
I'm about 460 miles from San Francisco and about 300 miles from Phoenix. So, one would think I would connect to the closer servers in Phoenix. But network traffic routes through the major backbones and data centers. Traffic leaving Southern California generally travels up to Los Angeles (LA). For me to connect to a site in Cambridge, Mass. (126.96.36.199) my connection goes through LA, up to Montréal, Canada, and eventually to Cambridge. Not the most direct route for surface travel. The point is that map distance is not the same as network distance.
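If you want to try the geo-location check yourself, the first step is pulling the simulator IP out of the Help > About text. Here is a minimal Python sketch; the About text format shown in the comment is an assumption based on my viewer, so adjust the pattern if yours differs:

```python
import re

def extract_sim_host(about_text):
    """Pull the simulator IP and port out of Help > About text.

    Assumes a line roughly like:
        ... located at sim1234.agni.lindenlab.com (216.82.1.2:13001)
    (the exact format is an assumption; adjust for your viewer).
    Returns (ip, port) or None if no address is found.
    """
    m = re.search(r"\((\d{1,3}(?:\.\d{1,3}){3}):(\d+)\)", about_text)
    if not m:
        return None
    return m.group(1), int(m.group(2))

about = "You are at 123.4, 567.8 in Example located at sim1234.agni.lindenlab.com (216.82.1.2:13001)"
print(extract_sim_host(about))  # ('216.82.1.2', 13001)
# Feed the IP to a geolocation lookup (a local GeoIP database, a whois
# query, or traceroute) to see which city the region appears to be
# served from, and how your traffic actually routes there.
```

The IP above is a made-up placeholder, not a real region address.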
We also know that the Lab uses Amazon's cloud computing for storing and compiling code for the viewers and servers. Are they also using Amazon's cloud for asset storage? Maybe. Amazon's data centers are all over the world, with one center in Dallas. So, it would seem to make sense that they would.
That I connect to San Francisco and Maestro says there is no data center there suggests to me he may be making a distinction between a processing or switching center and a data center.
Whatever the case, the Lindens really don't want to lay out the architecture of the SL system. That would make it easier for griefers to hack the system.
Second Life Servers
You probably know the Main Channel was rolled back from the week 17 version to the week 16 version. The main grid lost the nice new AO functions and some Pathfinding fixes.
This was due to the week 17 code creating a problem in the backend services. For a server to know which servers are handling the regions adjacent to the region it is serving, it makes a call into a service and asks. Seems the week 17 code was hard-of-hearing and kept asking 'What did you say?'
Apparently this was running the servers that answer the ‘Who’s near me’ questions at near capacity. There was enough traffic congestion in the system that regions were taking a long time to connect to neighbors after a restart, like hours.
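To picture why repeated 'who's near me?' calls congest a central service, here is a toy Python model of a neighbor-lookup with a short-lived cache. This is purely illustrative; the class, names, and caching policy are my assumptions, not Linden Lab's actual implementation:

```python
import time

class NeighborLookup:
    """Toy model of a region server asking a central service which
    servers host its adjacent regions. Caching the answer for a while
    avoids hammering the service with repeated queries, which is
    roughly the kind of congestion described above.
    (Illustrative only; not Linden Lab's code.)"""

    def __init__(self, backend, ttl=60.0):
        self.backend = backend      # callable: region name -> neighbor map
        self.ttl = ttl              # seconds to trust a cached answer
        self._cache = {}            # region -> (timestamp, answer)
        self.backend_calls = 0      # how many times we hit the service

    def neighbors(self, region):
        now = time.monotonic()
        hit = self._cache.get(region)
        if hit and now - hit[0] < self.ttl:
            return hit[1]           # fresh cache hit: no backend traffic
        self.backend_calls += 1     # cache miss: one call to the service
        answer = self.backend(region)
        self._cache[region] = (now, answer)
        return answer

lookup = NeighborLookup(lambda region: {"north": "sim-a", "east": "sim-b"})
lookup.neighbors("Hippo Hollow")    # first ask: goes to the backend
lookup.neighbors("Hippo Hollow")    # second ask: served from cache
print(lookup.backend_calls)         # 1
```

A server that keeps re-asking instead of trusting a recent answer multiplies the load on the lookup service, which matches the symptom described: the service running at near capacity.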
So, for week 18 we are running on a week 16 version of the server software.
The RC channel Le Tigre is running a hybrid version of week 17 code. It has all the week 17 upgrades except the one that caused the performance issue when trying to find its neighbors.
The RC channel Blue Steel is running the Experience Tools package with the same changes as Le Tigre to avoid performance problems.
There is a report that scripts are running slow in Le Tigre.
The RC channel Magnum is running the HTTP function changes package with the same hybrid change as the other two channels. Maestro is hoping this package passes testing and makes it to the main channel in week 19.
Not a Bug
If you have submitted a feature request, you may have noticed it getting marked as resolved with the reason Not a Bug. This is a common response for feature requests. You may also notice the status change to 'not applicable,' which is apparently a byproduct of the Not a Bug setting.
The Lab does not currently provide a ‘feature request channel.’ Feature requests are filed as a Bug Report. A number of us recommend that the report title include the words ‘feature request.’
Since the BUG JIRA project is mostly for the Lab's handling of bug reports, they transfer feature requests out of the BUG project. They go into a feature request list, though I have no idea where it is kept.
So, don’t go freaky thinking your wonderful feature request has been rejected because it is not a bug. That response is to help those Lindens fixing bugs narrow down their information load.
You have probably noticed that the selection animation for the avatar, like when you edit something, is a bit ridiculous at times. The subject of whether or not that could be changed came up.
The ‘selection arm animation’ is not currently in the list of default animations that can be replaced with user animations.
Maestro Linden tells us from the Linden side that the selection animation is called ‘editing’ and it is a priority 2 animation. That means it can be overridden.
Whirly Fizzle tells us that in Firestorm and some other TPVs one can disable it with the Debug Setting PrivatePointAtTarget. The SL Viewers do NOT have this setting.
Not So New Default Animations
You may remember some time ago, about April-May 2012, we got a new walk animation to get rid of the duck waddle. I recall a contest to find a better default animation, but I can't find anything on the contest or its results. Maestro remembers something about it, as do a few others. But no one seems to know the results.
The Debug Setting UseNewWalkRun tells the viewer which animations to use. I assume this only changes what you see in your viewer. The default value for the setting is TRUE in the SL Viewers.
SVC-7947 – SL External Animation priority changed? – This was a bug report on the change in animation priority for the new default walk animation dated May 2012. It has been fixed and animation priorities work as always.
But, the new (2012) default walk animation looks the same as the previous (pre-2012) walk animation, at least to me. So, it appears there has not been a change. But, it has been so long since I looked at the default SL walk animation I am sure I don’t remember.
Does anyone know what the deal is?
Region Roll Backs
I didn't know about these. It seems that if a region goes into a series of crashes, that can force a roll back to a previous state. Regions are backed up. I am a little fuzzy on exactly when, but frequently. They are definitely backed up when they shut down. I'm also a bit fuzzy on how many backups are kept, but it is at least two.
When a region crashes, the simulator is started on a new server and the region state comes from the most recent backup. This includes objects, scripts, terrain, etc. If the region crashes again within a 10-minute window, it repeats the process. About the third time this process repeats, the service starts the region from the PREVIOUS backup, not the most recent one.
So, anything built or changed between the backups is lost.
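The backup-selection behavior described above can be sketched in a few lines of Python. The 10-minute window comes from the text; the crash threshold ("about the third time") and the parameter names are my assumptions, not Linden Lab's actual logic:

```python
def pick_backup(backups, recent_crashes, window=600, threshold=3):
    """Choose which backup to restore a crashed region from.

    backups:        list of backup ids, oldest first, newest last.
    recent_crashes: crash timestamps in seconds, oldest first.

    If the region has crashed `threshold` or more times within
    `window` seconds (600 s = the 10-minute window from the text),
    fall back to the PREVIOUS backup instead of the most recent one,
    on the theory that the newest backup captured a bad state.
    (Illustrative sketch; not the actual simulator code.)
    """
    latest = recent_crashes[-1]
    in_window = [t for t in recent_crashes if latest - t <= window]
    if len(in_window) >= threshold and len(backups) >= 2:
        return backups[-2]   # roll back past the most recent backup
    return backups[-1]       # normal case: restore the most recent backup

print(pick_backup(["monday", "tuesday"], [0]))           # tuesday
print(pick_backup(["monday", "tuesday"], [0, 100, 200])) # monday
```

Note what this implies: once the older backup is chosen, everything built or changed between the two backups is gone, which is exactly the loss described next.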
For a tricky griefer the ideal trick is to get a griefer toy in and then not trigger it until the region restarts. This gives it time to be included in the region backup. Then each reboot of the region brings the griefer toy back too.
There is a suggestion, and hopefully soon a feature request, to have all objects that would normally be cleared by standard region auto-return time-outs returned during the boot process.
There are some considerations to think about. Basically, if a region is currently using object return, the new feature should not be a problem.
In regions not using object return, it is common for people to forget to set their builds to the right group. Implementing this feature would likely return all those objects, making some people unhappy.
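The proposed boot-time return, and its side effect, can be sketched in Python. The object shape, function name, and the idea of only acting where auto-return is already enabled are all my assumptions for illustration, not the actual simulator logic:

```python
def boot_time_return(objects, land_group, auto_return_enabled):
    """Sketch of the proposed boot-time object return.

    objects:    list of dicts like {"name": ..., "group": ...}.
    land_group: the parcel's group; objects set to it are exempt.

    At region start, return every object not set to the parcel's group,
    the same test normal auto-return applies, so a griefer object saved
    into the region backup does not survive a restart. The caveat from
    the text shows up here too: on parcels where auto-return was never
    used, legitimate builds left in the wrong group would be swept up
    as well, so this sketch only acts where auto-return is enabled.
    (Illustrative only; not Linden Lab's code.)
    """
    if not auto_return_enabled:
        return []  # leave parcels without auto-return alone
    return [o for o in objects if o["group"] != land_group]

objs = [{"name": "tree", "group": "Land Group"},
        {"name": "griefer cube", "group": "Other"}]
print(boot_time_return(objs, "Land Group", True))   # returns the cube only
print(boot_time_return(objs, "Land Group", False))  # returns nothing
```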
Whatever, this does seem like a feature that will at some point get implemented.