#SL News Week 19

Kelly Linden provided some information on the Linden Scripting Language (LSL) function llGetAgentList(). The server maintenance package with this new function rolled to the main grid Tuesday. It rolled in spite of a problem. See JIRA item:

SCR-311: llGetAgentList() with scope AGENT_LIST_PARCEL or AGENT_LIST_PARCEL_OWNER returns an empty list when attached to an avatar.

Server Scripting User Group

Of course, those most interested in this feature intended to use it in avatar attachments, which makes the bug frustrating. Kelly believes they can fix the problem, and we will either see the fix pushed to the main grid or pop up in a release channel, depending on how well the fix goes.
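For those wanting to experiment in the meantime, here is a minimal LSL sketch of llGetAgentList(). Given the SCR-311 bug, the parcel-level scopes are unreliable from an attachment, so this example uses AGENT_LIST_REGION from a rezzed in-world object, which is the safer combination for now:

```lsl
// Minimal sketch: list the avatars in the region from a rezzed object.
// AGENT_LIST_REGION is used here because, per SCR-311, the parcel scopes
// (AGENT_LIST_PARCEL, AGENT_LIST_PARCEL_OWNER) return an empty list when
// the script runs in an avatar attachment.
default
{
    touch_start(integer total_number)
    {
        list agents = llGetAgentList(AGENT_LIST_REGION, []);
        integer count = llGetListLength(agents);
        llOwnerSay("Avatars in region: " + (string)count);

        integer i;
        for (i = 0; i < count; ++i)
        {
            key id = llList2Key(agents, i);
            llOwnerSay(llKey2Name(id));
        }
    }
}
```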


The question came up as to why llGetParcelPrimOwners() returns a list of only 100 entries. It is not uncommon for a region to have more than 100 prim owners.

None of the Lindens present were sure. I suspect it is a legacy thing that has never been changed. I expect to see a JIRA requesting a change.
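To see the cap in practice, here is a small LSL sketch that walks the strided list llGetParcelPrimOwners() returns. It assumes the object's owner has the necessary parcel permissions (the function only works for the parcel owner or an estate manager); on a parcel with more than 100 owners, the list is simply truncated at 100:

```lsl
// Sketch: report the prim owners on the parcel under this object.
// llGetParcelPrimOwners() returns a strided list of [owner key, prim count]
// pairs, capped at 100 owners -- anything beyond that is silently dropped.
default
{
    touch_start(integer total_number)
    {
        list owners = llGetParcelPrimOwners(llGetPos());
        integer n = llGetListLength(owners) / 2;  // two entries per owner
        llOwnerSay("Reported prim owners: " + (string)n);

        integer i;
        for (i = 0; i < n; ++i)
        {
            key owner = llList2Key(owners, 2 * i);
            integer prims = llList2Integer(owners, 2 * i + 1);
            llOwnerSay((string)owner + " owns " + (string)prims + " prims");
        }
    }
}
```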

Disappearing Adjacent Regions

Some have started to notice that adjacent regions do not reappear after region restarts. It does not seem to be a problem with the grid-wide rolling restarts, but it is with single-region restarts. The Lindens think regions might normally take 15 minutes or so to cycle in adjacent regions, but the cases being reported now persist for hours after a restart. The problem is happening with restarts after region crashes.

While one can teleport into a restarted region, one cannot walk or fly across from a neighbor into the invisible region, even though it shows as up on the World Map.

Andrew Linden says, “Well, there is a problem where the ‘region presence’ data is cached for a while (30 min? an hour? The duration of the cache is unknown. I’m not even sure where it is being cached… local squid or some other place), such that if a region comes up and asks for the presence info of its neighbors, and they are down, it can cache that state for that cache period. Even if they come up a few minutes later.

“However, that is mostly a problem for regions that have been down for days, or never up, and suddenly are added to the world, and their neighbors take a while to connect because they have cached data for their list of neighbors. I’ve been thinking about trying to fix that neighbor awareness problem. It bites me sometimes when I’m bringing up some test regions that are neighbors.”

Andrew says there is a region presence server. Regions ask it who their neighbors are. When regions come up, they inform the presence server. If a region has not been told about its neighbors, it will not accept connections coming from the adjacent regions. Andrew says the reason for that is, “It isn’t as simple as just accepting all connecting neighbors blindly, because that can cause problems if there are ‘dupes’, [which is] where there happen to be two simulator processes running the same region, which can happen on a very big grid sometimes. A simhost might momentarily fall off the net, for some network glitch. If the time is long enough then it will get replaced and if it [the original sim] suddenly comes back… then you’ve got two of them. Anyway, things get complicated when you’re managing a big grid.”

Three Days of Maintenance

You have seen the notice posted here, on Grid Status, and on various other blogs that grid maintenance will be occurring Tuesday, Wednesday, and Thursday evenings from 6 PM to 2 AM PDT (8 hours). The peak use in Second Life® is around 2 PM PDT. By 6 PM, use is down by about 15 to 30%. The fewest users are online at about 2 AM PDT.

I suppose the time chosen is partly for the Lab’s convenience and partly to spread the inconvenience across the US and Pacific countries rather than just the Asian and European countries, by centering the window on the minimum-use hours (2 AM PDT). But no one is saying, so that is a guess.

Andrew Linden tells us, “I think part of that is an operating system upgrade on some hosts, not network level maintenance, but I’m not sure. We’re definitely working on migrating to later versions of Debian, but there will be a few upgrades along the way before we arrive at Debian/Squeeze.”

Simon Linden also said, “I think that’s for some servers to get re-imaged with a more updated version of the OS. They get shut down, start up on another server, then the now-empty servers can be re-imaged and brought back online. That’s repeated for every server that needs it … usually in batches.”

5 thoughts on “#SL News Week 19”

  1. Pingback: New LSL stuff for the main channel - SLUniverse Forums

  2. Indeed, very interesting to know these details. And it also gives an idea of how complex it is to run a system as big as Second Life, and the level of expertise required. It is sad that the Lindens get a lot of flak and not enough kudos for all the work they do.

  • I attribute much of the flak the Lindens get to ignorance and individuals’ inability to cope with frustration. Unfortunately there is no easy fix for either.

  3. I suspect the Garbage problem is more to do with old accounts than with active accounts and little-used content. And 100 hours is hardly a long enough test; I would want to see twice that, just to be sure of including a weekend.

    My understanding is that SSDs are an important part of modern multi-level cache systems. That garbage data might be stored in the slowest level, the existing asset server is effectively the next level, and the SSDs would sit between spinning disks and RAM, caching the reads. It wouldn’t be a good idea to write-cache via an SSD: the hardware supports a limited number of data writes to a location, and each logical write could need several physical writes.

    But the SSD caching I have heard about has been for servers supplying large data sets, and SL has almost the opposite problem: a large number of small data sets.

  4. Pingback: Grid Maintenance: all silent on the Linden front | Living in the Modem World
