Kelly Linden provided some information on the Linden Scripting Language (LSL) function llGetAgentList(). The server maintenance package with this new function rolled to the main grid Tuesday. It rolled in spite of a problem. See JIRA item:
SCR-311 – llGetAgentList() with scope AGENT_LIST_PARCEL or AGENT_LIST_PARCEL_OWNER returns empty list when attached to avatar.
Of course, those most interested in this feature intended to use it in avatar attachments. Frustrating. Kelly believes they can fix the problem, and we’ll see the fix either pushed straight to the main grid or appearing first in a release channel, depending on how well the fix goes.
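For those who have not tried the new function yet, a minimal sketch of how llGetAgentList() is called follows. The AGENT_LIST_REGION scope is used here because, per SCR-311, the parcel scopes currently return an empty list when the script runs in an attachment; the touch trigger and chat output are just illustrative choices.

```lsl
// Sketch: report the avatars currently in the region.
// Note: per SCR-311, AGENT_LIST_PARCEL and AGENT_LIST_PARCEL_OWNER
// return an empty list when this script runs in an attachment,
// so AGENT_LIST_REGION is used here.
default
{
    touch_start(integer total_number)
    {
        list agents = llGetAgentList(AGENT_LIST_REGION, []);
        integer n = llGetListLength(agents);
        llOwnerSay("Agents in region: " + (string)n);
        integer i;
        for (i = 0; i < n; ++i)
        {
            llOwnerSay(llKey2Name(llList2Key(agents, i)));
        }
    }
}
```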
The question came up as to why llGetParcelPrimOwners() only returns a list of 100 entries. It is not uncommon for a region to have more than 100 prim owners.
None of the Lindens present were sure. I suspect it is a legacy thing that has never been changed. I expect to see a JIRA requesting a change.
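For reference, llGetParcelPrimOwners() returns a strided list of owner keys and prim counts for the parcel at a given position, and it is that list which is capped at 100 owners. A minimal sketch of reading it (the touch trigger and chat output are illustrative):

```lsl
// Sketch: list the prim owners on the parcel under this object.
// llGetParcelPrimOwners() returns a strided list of
// [key owner, integer prim_count, ...], capped at 100 owners.
// The script's owner needs owner-like permissions on the parcel.
default
{
    touch_start(integer total_number)
    {
        list owners = llGetParcelPrimOwners(llGetPos());
        integer n = llGetListLength(owners);
        integer i;
        for (i = 0; i < n; i += 2)
        {
            llOwnerSay(llKey2Name(llList2Key(owners, i)) + ": "
                + (string)llList2Integer(owners, i + 1) + " prims");
        }
    }
}
```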
Disappearing Adjacent Regions
Some have started to notice that adjacent regions do not reappear after region restarts. It does not seem to be a problem with the grid-wide rolling restarts; it is a problem for single-region restarts. The Lindens think a region might normally take 15 minutes or so to reconnect to its adjacent regions. But the cases being reported now persist for hours after a restart. The problem is happening with restarts following region crashes.
While one can TP into the restarted regions and see them on the map, one cannot walk or fly across into the invisible region even if it shows as up on the World Map.
Andrew Linden says, “Well, there is a problem where the “region presence” data is cached for a while (30 min? an hour? – the duration of the cache is unknown. I’m not even sure where it is being cached… local squid or some other place) such that if a region comes up and asks for the presence info of its neighbors, and they are down, it can cache that state for that cache period. Even if they come up a few minutes later.
However, that is mostly a problem for regions that have been down for days, or never up, and suddenly are added to the world, and their neighbors take a while to connect because they have cached data for their list of neighbors. I’ve been thinking about trying to fix that neighbor awareness problem. It bites me sometimes when I’m bringing up some test regions that are neighbors.”
Andrew says there is a region presence server. Regions ask it who their neighbors are. When regions come up, they inform the presence server. If a region has not been told about its neighbors, it will not accept connections coming from the adjacent regions. Andrew says the reason for that is, “It isn’t as simple as just accepting all connecting neighbors blindly, because that can cause problems if there are “dupes”, [which is] where there happen to be two simulator processes running the same region, which can happen on a very big grid sometimes. A simhost might momentarily fall off the net, for some network glitch. If the time is long enough then it will get replaced and if it [the original sim] suddenly comes back… then you’ve got two of them. Anyway, things get complicated when you’re managing a big grid. ”
Three Days of Maintenance
You have seen the notice posted here, on Grid Status, and on various other blogs that grid maintenance will be occurring Tuesday, Wednesday, and Thursday evenings from 6 PM to 2 AM PDT (8 hours). Peak use in Second Life® is around 2 PM PDT. By 6 PM, use is down by about 15 to 30%. The fewest users are online around 2 AM PDT.
I suppose the time chosen is partly for the Lab’s convenience and partly to spread the inconvenience across the US and Pacific countries rather than just the Asian and European countries, by centering the window around the minimum-use hours (2 AM PDT). But no one is saying, so that is a guess.
Andrew Linden tells us, “I think part of that is an operating system upgrade on some hosts, not network level maintenance, but I’m not sure. We’re definitely working on migrating to later versions of Debian, but there will be a few upgrades along the way before we arrive at Debian/Squeeze.”
Simon Linden also said, “I think that’s for some servers to get re-imaged with a more updated version of the OS. They get shutdown, start up on another server, then the now-empty servers can be re-imaged and brought back online. That’s repeated for every server that needs it … usually in batches.”