It has been over a year since I last tweaked my video card settings for use with Second Life. A recent exchange in the comments with Lance Corrimal, of Dolphin Viewer, got me wondering about how I have my video card set up. Using the default High setting without other tweaks, I get anywhere from 15 FPS to 80 FPS with my 8800 GTS, depending on which viewer I use and which region I’m in.
Plus I’ve upgraded my video driver over a dozen times in the past year. So, maybe it is time to look at my computer’s settings and see if I need to change anything. Plus it’s rainy in California… So, this article is about the arcane art of graphics settings tweaking. We’ll see if I can explain things in plain English for the less geeky residents of Second Life and provide something helpful to those who are neither novice nor super geek.
I can’t tell you the best graphics settings for SL; everyone’s computers, preferences, and perceptions are too different for that to work. Hopefully I can help you understand the settings so you can find the balance of performance and image quality you will enjoy.
Part of the joy in Second Life, at least for me, is good-looking 3D rendering. The problem is that the best rendering takes time. We can have a beautiful but slowly built screen image, a fast but lame image, or something in between. It seems everything that improves performance degrades image quality. The compromise points are decided by our individual preferences and perceptions. For instance, I would rather die a little more often in combat games and look good (/me bats eyes).
Video performance affects movement: slow rendering makes movement jerky and introduces input/mouse problems. Anything below 25 Frames Per Second (FPS) starts to show a noticeable jerk. In the 5 FPS range it seems like the computer is hanging, and we experience large jerks in the image and avatar movement. Some people don’t notice it; it drives others nuts. So, graphics settings end up being a compromise between performance, render quality, computer cost, and personal preference. Since we already have a computer, cost is not an issue until we decide to upgrade.
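The FPS numbers above map directly to frame time, the gap between screen updates, which is what you actually perceive as jerkiness. A quick sketch in plain Python, using the numbers from the paragraph above:

```python
def frame_time_ms(fps):
    """Milliseconds between screen updates at a given frame rate."""
    return 1000.0 / fps

# 25 FPS is roughly the edge of smooth motion; 5 FPS feels like a hang.
for fps in (60, 25, 5):
    print(f"{fps:2d} FPS -> {frame_time_ms(fps):5.1f} ms per frame")
```

At 5 FPS there is a 200 ms gap between updates, which is long enough that your mouse input and the on-screen result visibly disconnect.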
For many, the challenge is knowing which settings affect quality and performance and what each setting does. For many it is a trial-and-error process. To cut down on the trial and error, I’ll answer some of the questions I’ve had myself.
There are a load of settings that deal with aspects of image quality, with names like Anti-Aliasing, Anisotropic Filtering, and more. Those names do not mean anything to many people. Since many changes are only noticeable in side-by-side comparisons, it is difficult to see the effect of small changes, especially if you don’t know what you are looking for. For an excellent explanation of the terms, and to SEE what they do, check out the GameSpot article How to Optimize Your Frame Rates. For longer, written explanations see TweakGuides’ plain-English The Gamer’s Graphics & Display Settings Guide. The nice part of the GameSpot article is the side-by-side before-and-after pictures showing how each setting change affects the image. That should save you a load of time figuring out which settings you may want to tweak and how far to tweak them. Plus I don’t want to make all those images.
Anti-Aliasing (AA)
To see the effect of AA on your screen image, see the GameSpot article linked above. AA is a feature that has a highly noticeable impact on both image quality and performance: a better image means worse performance. Depending on your video card it can drop your FPS by 50% or 60%, but it makes the image look much nicer, which is great for taking photos. With newer video cards the slowdown is much, much less.
You can change the setting and then use Ctrl-Shift-1 to open the viewer stats panel and see the FPS and other rendering stats. If you want the gory details use Ctrl-Shift-9. In some viewers these keys only work if you have the advanced and developer menus open (Ctrl-Alt-D and then Q).
To turn on AA in Second Life Viewer 2 (SLV2), look in Preferences -> Graphics -> Hardware (button). In SLV2 versions prior to SLV2.4 (still in beta as I write) AA is broken. I haven’t played with it in SLV2 for some time, so the fix might be in 2.3, but I don’t think so. By default AA is turned off.
If you have a low end video card, enabling this can bring your computer to its knees and destroy your frame rate. The stronger (newer) your video card the higher you can push this setting. There is little to be gained by pushing this past 4x in SL, but that depends on your personal perception of the resulting image.
Depending on your video card you may have various other AA settings, like Supersampling, Multisampling, Quincunx, Transparency AA, Gamma Correct Anti-Aliasing, Temporal AA, Adaptive Anti-Aliasing, Coverage Sampling Anti-Aliasing (CSAA), Custom Filter Anti-Aliasing (CFAA), and Morphological Anti-Aliasing (MLAA). Since these settings are not in the SL viewer’s Preferences, you have to set them in your video card’s control panel.
Those terms are listed roughly in order from worse performance to better performance. Supersampling is a bit of a brute-force method; the newer types of AA use smarter sampling and provide AA more efficiently. One has to do some trial-and-error testing to see how AA performs on their hardware. More video RAM improves AA performance, and newer video cards have better, faster AA processing algorithms.
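Why supersampling is brute force is easy to quantify: it effectively renders the scene at a higher internal resolution and then downsamples. A rough back-of-envelope sketch (the exact sub-pixel layout varies by card, so treat this as the idealized cost, not a benchmark):

```python
def ssaa_render_cost(width, height, samples):
    """Pixels actually computed for samples-x supersampling.

    Assumes `samples` sub-pixels are rendered per screen pixel
    (e.g. 4x = double the resolution in each dimension), then averaged."""
    return width * height * samples

base = ssaa_render_cost(1920, 1080, 1)
print(ssaa_render_cost(1920, 1080, 4) // base)  # 4 -- four times the pixel work
```

Multisampling and the newer schemes (CSAA, MLAA, etc.) get most of the visual benefit without paying anything close to that full multiple.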
UPDATE (9/2011): There are new ways to handle AA now. Depending on your video card you may have 30 settings that affect, or are affected by, your decisions on AA. A new tutorial on AA and the new processes can be found at Tom’s Hardware: Anti-Aliasing Analysis, Part 1: Settings And Surprises.
Anisotropic Filtering (AF)
In the SLV2 viewers the setting carries a warning that performance will be slower with AF turned on. That is true, but it is nothing like the performance hit of AA. GameSpot points out that even older cards do pretty well with this setting; the performance hit is small.
The SLV2 viewers have only an on/off setting for AF. This means if you want better AF you will have to use your video card’s settings to override the viewer’s. The Lindens have hard-coded whatever value they are using. The difference in the performance hit between 2x and 16x is small on newer cards. However, I can’t see much difference between 4x and 8x in the results.
With nVidia cards one can override the setting in the nVidia control panel. It can be done on a game-by-game basis too.
Screen Resolution
I’m assuming you are using a digital monitor; if you’re using a CRT, things are a bit different. A digital screen has a fixed resolution, something like 1920x1080 pixels. Your best quality comes from using the screen’s native resolution. That information may be in the display’s manual and will likely be shown in the screen settings/properties of your operating system, whether Windows, Mac, or Linux. The native resolution is usually listed as the ‘recommended’ setting.
Any setting other than the native or recommended one requires the display to scale the image to fit the screen, and that degrades the image. There is no performance hit, as the 60 FPS refresh rate most digital monitors are locked to never changes.
The Second Life screen can run full screen or windowed. The more pixels the viewer has to render, the lower your frame rate. Again, the trade-offs pull in opposite directions: more pixels means a better-looking image. However, video cards are so fast that total pixel count is a minor issue.
Draw Distance
This is a viewer setting in Graphics. I suspect most SL residents understand how this works. The farther the draw distance, the more stuff you can see, which means more textures have to download and the render engine has to render more objects.
Draw distance is measured from the camera. Distance settings that extend into neighboring regions add an extra load as the regions have to send information into the region your avatar is in. Linden Lab is working on new features and changes to the texture and object download process and render pipeline to improve performance.
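The load grows faster than the setting suggests, because draw distance is a radius: doubling it roughly quadruples the area the viewer must fetch and render. A rough model, ignoring occlusion and region boundaries:

```python
import math

def visible_area_m2(draw_distance_m):
    """Ground area inside the draw-distance circle around the camera.

    Rough model only: ignores occlusion, altitude, and region borders."""
    return math.pi * draw_distance_m ** 2

for d in (64, 128, 256):
    print(f"{d:3d} m -> {visible_area_m2(d):>9.0f} m^2")
# Each doubling of draw distance roughly quadruples what must be drawn.
```

This is why dropping draw distance is one of the quickest ways to recover FPS in a crowded region.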
Shadows
Sometimes called dynamic shadows, these add considerable load to the render process. Anything less than an nVidia 8800 or similar card is hopelessly overloaded. KirstenLee is rebuilding the render pipeline and plans to ship a new render engine in her S21 series viewers; the S20(44) has parts of that new pipeline. With SLV2 I get 2 to 5 FPS with shadows on. In KirstenLee’s S20(42) I was getting 8 to 15 FPS. The new 200-series nVidia cards are said to do a good job of handling shadows while providing high FPS rates.
There are additional settings for shadows and how they are rendered, plus settings for how lights are rendered. Most of these features are in development; LL is building a better interface to OpenGL and sorting out problems with them. So, I’m going to skip them. Also, you’ll only find the settings exposed in TPVs (KirstenLee’s).
Textures
There are few settings you can control in regard to textures. Texture image quality depends on the amount of video memory you have; more video RAM is better. New cards should have at least 512 MB of video RAM, and many will have more.
The viewer only has a setting for the texture cache size. Your video card will likely have additional settings. One is Texture Filtering, which limits the number of anisotropic samples to use. However, this feature is not used in SL viewers, AFAIK; it is a DirectX thing.
Update: In early 2011 Linden Lab changed how textures download. Previously the UDP protocol was used; now HTTP is available. Currently, third-party viewers have settings that let one choose either UDP or HTTP. HTTP is faster and more reliable.
There are a number of other texture filters that improve quality or performance. On nVidia cards one selects between 5 quality/performance settings and nVidia sets the filters.
Viewer Texture Cache (Update)
All SL viewers have texture caches. These have nothing to do with your video card settings, but the cache does affect your card’s performance: if the card has to wait for a texture to download and decode, it can’t be rendering that texture into the image. There are two basic disk caches. One holds all the textures used in the viewer skin; you never need to be concerned with it.
The other, larger texture cache holds all the textures downloaded from SL. In mid-2011 that cache changed. It is now a 1,000-megabyte cache (max size) that is indexed for faster access and better retention. So, return visits to frequently visited places should rez faster.
Push the SL disk cache up to the max. The only reason not to do so is limited disk space.
VSync & Triple Buffering
VSync is not covered in the GameSpot article, though you will find comments about it elsewhere. I’ll give you the short course.
VSync is short for vertical synchronization, a term from CRT display technology. In analog television it was a way to keep the incoming signal and the TV’s display synchronized. When sync is lost, the picture rolls: the top of the picture moves up or down the screen and appears to slide out the bottom or top.
In game technology, VSync means something a little different, but similar. In both cases it is about when to start drawing the image on the screen. The video card can render some number of frames per second, and that number varies. The monitor can display some number of frames per second, and that number is fixed, never varying; LED, LCD, and plasma screens usually display 60 FPS. But the image never rolls, because the signal is digital, not analog. The top-left pixel in the video card’s frame is always the top-left pixel of the display, so that part is always in sync, unlike a CRT.
The video card is painting a picture into video RAM (memory). The display is reading that picture from the same video RAM. If they do not read and write in sync, the display can show parts of different frames as the card races past the display or lags behind it. You see the result as TEARING. You would likely see tearing as your avatar spins in place: your view gets part of one frame at the top and part of another at the bottom, and they don’t line up. (See the Wikipedia image for an example and more information.)
VSync prevents the video card from sending a frame to the display until the frame is completely rendered. The card waits for the display to signal that it is ready to draw a new frame. Obviously, with the video card waiting for a VSync signal, there are likely to be some delays.
To get around that delay, buffering is used. A buffer is just more video memory; each buffer is a separate block of memory that holds a full screen image. Avoiding the delay is where Triple Buffering comes in (more info). Triple Buffering allows the video card to render up to three separate images (or more) and provide only complete images to the display, so the card does not have to wait on the display to start the next frame. While the display is reading image 1, the card can be rendering image 2. If the card finishes image 2 before the display is done painting image 1 to the screen, it moves on to image 3. This means that with a 60 FPS display the card could render up to 180 FPS without VSync slowing it down. With more onboard video memory, more buffers can be created; using ten buffers would let a video card render 600 FPS and never slow down waiting on the display.
If the video card is slower than the display, the display will overrun the card. Without VSync it will show images that are incomplete, part new frame and part old frame. VSync keeps that from happening: image 1 is shown until image 2 is complete. So, if the card is running at 30 FPS, the display may draw image 1 twice before moving on to a completed image 2. Your card is delivering 30 FPS while the screen is drawing 60. As the rendered FPS decreases, completed images are redrawn more times; at 1 FPS the display would draw the same image 60 times.
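The frame-repeating behavior described above can be sketched as a tiny, idealized simulation: the display ticks at a fixed 60 Hz, and with VSync on each refresh simply shows the newest frame the card has completed so far. (Plain Python; real timing is messier than this steady-rate model.)

```python
def frames_shown(card_fps, display_hz=60, refreshes=8):
    """Which rendered frame each display refresh shows with VSync on.

    Idealized model: the card completes frames at a steady card_fps and
    each refresh shows the newest frame finished so far. Integer math
    avoids float rounding surprises."""
    return [tick * card_fps // display_hz for tick in range(refreshes)]

# A 30 FPS card on a 60 Hz display: every frame is shown twice.
print(frames_shown(30))  # [0, 0, 1, 1, 2, 2, 3, 3]
# A 1 FPS card: the same frame is held for all 60 refreshes in a second.
print(frames_shown(1, refreshes=60).count(0))  # 60
```

The repeated entries are exactly the "display draws image 1 twice" behavior; nothing tears because only completed frames are ever shown.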
The point to take away is that VSync changes appearance (better, no tearing) and can degrade performance, while buffering lets the card run free, somewhat breaking the tie between card and display and letting the display avoid tearing regardless of how fast or slow the card can render. VSync without buffering can force a fast card to wait on the display: in general, even a card generating 59 FPS will take a significant hit when VSync is on without Triple Buffering, often cutting the card’s FPS to 30, half the screen’s refresh rate. So, VSync and Triple Buffering need to be used together.
On a 1920×1080 pixel screen, about a 6 MB buffer is needed to hold the screen image; figure roughly 20 MB of video RAM for three buffers, which is a small fraction of the video RAM on most new cards. So, this is a negligible memory cost for improved appearance. But nothing this cheap is that simple: Triple Buffering can cause problems by creating input and mouse lag.
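The 6 MB figure is just pixels times bytes per pixel. A quick check, assuming 24-bit color (3 bytes per pixel; at 32-bit color the numbers are about a third larger):

```python
def buffer_mb(width, height, bytes_per_pixel=3):
    """Video memory for one frame buffer, in megabytes (24-bit color assumed)."""
    return width * height * bytes_per_pixel / (1024 * 1024)

one = buffer_mb(1920, 1080)
print(round(one, 1))      # ~5.9 MB for one buffer
print(round(3 * one, 1))  # ~17.8 MB for three buffers
```

Against the 512 MB or more on a current card, three buffers are indeed negligible.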
So, while the settings are nifty, unless tearing is a problem for you, turn off VSync and Triple Buffering. In general, unless you are seeing tearing (which may also appear as other odd graphics artifacts) VSync is not worth it.
By default VSync is off in SLV2.5 Development. It is controlled by a Debug Setting: DisableVerticalSync.
Multiple CPUs & Multi-Threading
Newer CPUs have multiple processing cores built into the main CPU chip. Intel labels them dual core and quad core. Prior to dual-core chips there were motherboards that allowed one to plug in two CPU chips; now the two, four, six, or eight cores are built into one chip.
Prior to multi-core computers, a computer could do only one thing at a time. The single CPU ran from task to task, doing a little bit of each, called task swapping. To the user it looked like the computer was doing multiple things at once, much as the still frames of a motion picture appear to be a moving image when played back quickly. This meant a task that was waiting on the hard drive could be put to sleep while the CPU did something else, say recalculate a spreadsheet. Basically, the CPU was never supposed to wait on hardware.
The thing is, if two 5-minute tasks were run at the same time, while the computer would take five minutes to complete a single task, it might take 10.5 minutes to accomplish both run together. This worked well because tasks like word processing and email leave lots of idle time for the computer between keystrokes. Slow applications, where the computer had to wait on the user or the hard disk, had free time left over to give to other applications. So task swapping allowed the five-minute task to complete in maybe 5.5 minutes while you typed, without slowing you down. This is a big advantage. But rendering a video game is different: instead of the computer waiting on the user for the next key press and wandering off to do something else, the user is now waiting on the computer to draw the screen so we can do something… like shoot the bad guy before he gets us.
Multi-threading, using extra cores to do multiple things at once, lets the computer complete multiple CPU-intense tasks much faster. So the two five-minute tasks, instead of taking 10.5 minutes on a single core, complete in about 5.25 minutes on two cores. With a quad core one could complete four five-minute tasks in maybe 5.5 minutes. It’s way more complex than that, but that is the basic idea.
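The arithmetic above follows from a simple model: identical tasks spread across cores in "waves", plus a little scheduling overhead. A minimal sketch; the 5% overhead figure is my own illustrative assumption, chosen to match the 10.5- and 5.25-minute examples, and real overhead varies wildly:

```python
import math

def run_time_minutes(task_minutes, cores, overhead_pct=5):
    """Rough wall-clock estimate for identical CPU-bound tasks.

    Model: tasks divide evenly across cores in waves, plus a fixed
    percentage of scheduling/swap overhead (illustrative number)."""
    waves = math.ceil(len(task_minutes) / cores)
    return waves * max(task_minutes) * (1 + overhead_pct / 100)

two_tasks = [5, 5]
print(run_time_minutes(two_tasks, cores=1))  # 10.5 -- swapped on one core
print(run_time_minutes(two_tasks, cores=2))  # 5.25 -- one wave on two cores
```

The model also shows why extra cores only help when there are independent tasks to hand them, which is exactly the problem with a mostly single-threaded render loop.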
So, can we use multiple threads with Second Life? In general, yes and no. It depends on which viewer and what version of your video driver you are using. My dual core running Phoenix (725) and nVidia’s 260.99 driver shows one CPU running at 60 to 75% and the other at 20 to 40% and is making 45 FPS. When I turn on multiple threads (Advanced -> Rendering -> Run Multiple Threads) it jumps to about 63 FPS. Usage changes to about 90% and core 2 shows 60% peaks. So, Phoenix seems to be using multi-threaded processing. When I go into a new area my FPS drops to 15 to 25 FPS and CPU usage jumps to 60 to 95% on core 1 and 40% on core 2 with lots of long spikes to 95%.
I get similar results with Dolphin Viewer. It peaks around 73 FPS. In a new scene I drop to 25 to 35 FPS. It also has the Advanced -> Rendering -> Run Multiple Threads setting. I have the video card set to use my video card’s global settings for Dolphin and that allows the application to control multiple threads.
In KirstenLee’s S20(44), which I finally got working and that is another story, I can’t find a multi-thread setting. It runs at 30 to 45 FPS and at 9 to 12 in a new scene. Going into the video card’s control panel and enabling multi-thread for KirstenLee S20.exe doesn’t seem to make any difference. However, both cores’ usage graphs very closely parallel each other.
Second Life is sort of a multi-threaded game, but it is not perceived that way, especially when it comes to the video. (See VWR-1135 and vote; see VWR-864 for the meta-issue over the viewer.) The server and, I think, the LL viewer software are having more multi-threading added.
A couple of weeks ago the Lindens tried multi-rez on the server side and had to turn it off as they ran into too many problems. I think it is now on the back shelf while other issues are handled, but I’m unsure of multi-rez’s status.
Also, multi-threading works better with DirectX (DirectX 11) than it does with OpenGL, and SL is an OpenGL application. The performance graph shown is from 2010, but when in 2010 is unclear. Some multi-thread things that worked in the 169.x nVidia drivers stopped working in the 258.x versions. It’s unclear what the status is in the current 260.x drivers.
Also, there seems to be confusion between graphics drivers and games as to who handles multi-threading. Having both the game and the video driver set to use multi-threading can slow everything down and cause problems and lockups, but that varies from game to game and card to card.
To turn off multi-threading, go into the nVidia Control Panel, navigate to 3D Settings -> Manage 3D Settings, then in the global profile tab change “Threaded Optimization” to Off or Auto. It is debatable whether turning this off in the Global settings or in the individual applications is the better idea. For now Global should be fine, but make your own call. An additional hint: many users find that setting the card’s option to OFF and the game’s in-game option to ON works best.
In the SL Viewer there is a Debug Setting, which seems unrelated to anything I see in Preferences: RenderAppleUseMultGL.
Bandwidth
Of course your Internet connection has a lot to do with your FPS rate. I’ve been telling people to speed test to San Francisco, but it seems most of the SL servers are located in Dallas now. So we should speed test to Dallas, Texas, or maybe both.
What to set your bandwidth to is a matter of a great deal of debate. Depending on whom you choose to listen to, it can be 500 kbps, to reduce load on the servers and reduce various user problems, or up to 80% of your download speed as shown by speed testing, for better performance. I’m convinced this setting is affected by so many variables unique to each user, and changes so quickly, that there is no authoritative answer. It’s mostly trial and error to find what works best for you.
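If you want to try the 80% rule, the arithmetic looks like this. A sketch, with assumptions: the viewer’s bandwidth slider is in kilobits per second, and the 1500 kbps cap here is illustrative of that era’s slider maximum, so check your own viewer’s range:

```python
def viewer_bandwidth_kbps(measured_mbps, fraction=0.8, viewer_cap_kbps=1500):
    """The 80%-of-speed-test rule, capped at the viewer slider's maximum.

    The 1500 kbps cap is an assumption; check your viewer's actual slider."""
    return min(measured_mbps * 1000 * fraction, viewer_cap_kbps)

print(viewer_bandwidth_kbps(10))   # a fast 10 Mbps line just hits the cap
print(viewer_bandwidth_kbps(0.5))  # a slow 0.5 Mbps line -> 400 kbps
```

On a fast connection the slider cap kicks in long before 80% does, which is part of why the debate never settles.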
I hope this helps. Let me know if it was understandable or too geeky and technical.