A Lab Chat meeting with Ebbe Altberg, Linden Lab CEO, was yesterday's (1/21) big event. We learned new things about Second Life™ and Project Sansar, and we are starting to see reactions from a few people to the new information.
I am paraphrasing what I heard in the audio, and I've shortened it. For some questions the answers repeated what was said earlier. So, I've included time marks so you can easily find the actual statements in the audio stream. I have also expanded on what Ebbe says based on what I believe is implied. Take that with however much salt you feel is appropriate.
For those that missed the meeting, the main source of information is the site LabChatSL.com. This morning only the audio is available; a video and transcript are promised later. The audio file resides on Drax Files. This is the unedited audio with an introduction, 1 hour 41 minutes long. The audio is slightly over-driven, making it hard to understand in some places.
Saffia Widdershins opens by telling people about LEA regions. The actual Q&A starts at Time Mark (TM) 0h:04m:10s.
Questions were taken from Second Life residents who posted them in the SL Forum, 80 questions in total. Only some of those questions could be asked because of time constraints.
0:05:10 – What tools will creators have for making content for the Project Bento avatar? Will it be possible to create animations for just facial expressions? Question lasts for 1m:20s. OMG.
Ebbe starts with tools. The Lab has been working with AvaStar and MayaStar. Those are ready now. I’ve played a bit with AvaStar 2. More complete information about Bento is on the wiki and that will be updated to stay current. Also, a Project Viewer is available and it is being updated.
There are no plans, by the Lab, to build new in-world animation tools. Also, later in the meeting Oz messages in that the ‘recording’ feature of the viewer is not to be changed. At least not now.
0:08:00 – Ebbe says that if they make new default avatars, they may add new appearance sliders for the new bones in those avatars. But that work has not been planned, as this stage of Project Bento is focused on getting it completed and working.
0:08:30 – Translation (moving the position) of bones was a problem. At first, translation was not permitted on the preview grid because of known and anticipated problems. There was a loud clamor from users for translation. It is now enabled in response to user requests, and the Lab is evaluating how people are using bone translation.
The Lab's intention is to make it work and keep it enabled. But there were, and are, problems to solve regarding translation. So, as far as Ebbe knows, the issue is undecided. If they decide it is possible, additional development time will be spent solving the problems bone translation presents. I suppose that will push back the completion date.
0:09:30 – Recording animations? Ebbe expects it to work as it always has. – I don't recall the viewer's 'recording' ability capturing the movement of animations, just the avatar. But I seldom use it. Machinima people seem to depend on it.
Ebbe does not know if separate facial expressions will be possible. – Jo wants a single raised eyebrow. I see that as no more complicated for Bento than making an animation in SL now that raises one hand. We animate as few or as many of the existing bones as we want now. I see nothing that would change that with Bento.
Out of Sequence: 0:21:10 – Per Oz Linden – No recording ability for Bento avatars. None planned. Plus he confirmed my thought above on no problem doing single bone animation for a raised eyebrow.
I see a problem regarding how many animations can run at one time. We know animations can override other animations; it is why we have animation priorities. Looking at the Limits page in the wiki: animation length is 60 seconds, subject to file size; file size is 120KB; priorities are 4 or 6, depending on animation format; and the number of joints in an animation is 32.
Not included in the Limits page is a limit I've heard discussed: how many bones can be moved per rigged mesh. Content creators are said to break some mesh items into separate mesh parts to work around the limit. I haven't done enough work with animations to understand how all the limits work together, or whether there are limits on how many animations can run at the same time.
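To make the limits above concrete, here is a minimal sketch in Python of checking a hypothetical animation against them. The limit values are the ones quoted from the wiki's Limits page above; the `Animation` class and its field names are invented for illustration and are not part of any SL tool or API.

```python
# Hypothetical sanity-check of an animation against the limits quoted
# from the SL wiki's Limits page. The Animation class and field names
# are invented for illustration only.
from dataclasses import dataclass

MAX_LENGTH_SEC = 60       # animation length, subject to file size
MAX_FILE_BYTES = 120_000  # 120KB file-size cap
MAX_JOINTS = 32           # joints animated per animation
MAX_PRIORITY = 6          # 4 or 6, depending on animation format

@dataclass
class Animation:
    length_sec: float
    file_bytes: int
    joints: int
    priority: int

def limit_violations(anim: Animation) -> list[str]:
    """Return human-readable limit violations (empty list if none)."""
    problems = []
    if anim.length_sec > MAX_LENGTH_SEC:
        problems.append(f"length {anim.length_sec}s exceeds {MAX_LENGTH_SEC}s")
    if anim.file_bytes > MAX_FILE_BYTES:
        problems.append(f"size {anim.file_bytes} exceeds {MAX_FILE_BYTES} bytes")
    if anim.joints > MAX_JOINTS:
        problems.append(f"{anim.joints} joints exceeds {MAX_JOINTS}")
    if anim.priority > MAX_PRIORITY:
        problems.append(f"priority {anim.priority} exceeds {MAX_PRIORITY}")
    return problems

# Example: a short single-eyebrow animation is well within every limit.
eyebrow = Animation(length_sec=2.0, file_bytes=4_000, joints=1, priority=4)
print(limit_violations(eyebrow))  # []
```

The point of the sketch is that each limit is independent, which is why the per-rigged-mesh bone limit I mention above would not show up here: it is a property of the mesh being animated, not of the animation file itself.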
Hm. Not as much info here as I would like for an ambitious project apparently only half a year from beta, and just one year from release. Seems like they’re still sorting a lot out.
I have no idea how they're expecting to properly categorize what will eventually be a very large variety of items so that they integrate fluidly with various experience inventory rulesets. The only way I can see it working really well is with a crowd-sourced tagging system of some sort. But if they do that, it could be really great.
The Lab is not providing a ton of new information. What they are making clear is that many of the answers people want can only come from the designers building new places using Sansar.
I agree on the sorting. Much of what they are building will use new ways of doing old things. I phrase it as they know what was done in SL and understand what is needed, but they are experimenting with new ways.
Categorizing… everyone does it differently. I expect the Lab to provide the means to categorize, but I think the content creators will likely decide the categories. While that won't be crowd-sourced in the usual sense, it will be the group of content creators making the decisions.