Honour has a post up titled: How Deep is Your Field in Second Life? She was having a problem getting the viewer’s Depth of Field (DoF) feature to work. She links to Ricco Saenz’s explanation of how to get it working: How to actually capture depth of field on your SL photos. But, not why it does what it does.
DoF is one thing I have found I cannot control in my RL S4’s camera, or have yet to learn how. It is a reason I still use my Sony digital camera for RL imaging. DoF is something I seldom use in Second Life™ because I prefer the control I have in Photoshop. But, I can imagine that some will find DoF more usable in the viewer than adding it in post production.
Ricco’s article is important for those taking hi-rez images with the viewer. He is explaining that what you see on the screen and what you get in the screen capture/photo at high resolution are not the same. I run into the same problem in preparing images for print and the web. It is a resolution/pixel count thing. Once you understand it, you can compensate and get the result you want.
The Tech
There is a technical explanation for what is happening. My two images show my standard SL view (1688×968 window), the one I use for daily display in Second Life, and (below) a higher resolution 3000×1723px image like one I might take for a higher quality result. You must enlarge them by clicking for the comparison to make sense. Even then your computer is going to try to hide some of the problem, making it difficult to see what is happening.
The trick is to get the full size image displayed at ACTUAL size, 1-to-1 pixel. You can download the images (links: Image1, Image2 – the new click-zoom makes it hard to get to the full size image now) or view the full size image in your browser. Click an image link to get the full size image, then right-click and save it to your computer, or just view it in the browser. In Windows you can double-click the downloaded images to open them in the Windows Photo Viewer, which has a button for actual size. At actual size, the DoF effect in both images is the same. You can see the same thing in your browser, but downloading lets you examine the images in more detail.
Notice the edge of the pyramid and the near tree over my left shoulder, the avatar’s left. The pyramid and tree are different sizes, but the blur is the same in both images. It is when you make the pyramid and tree the same size that you see what appears to be the blur from DoF changing or disappearing altogether. It didn’t. It is what you are doing to the image that makes it appear that way.
In the top image, you can easily see DoF in action. This image was captured from the SL Viewer using the default settings for DoF. Third party viewers put the DoF controls in the UI, where you can easily adjust the amount of blur you see. In the SL Viewer you must go into the Debug Settings from the Advanced Menu (step-by-step is here: SL Wiki Depth of Field). The values you can change are:
- CameraAspectRatio – (Default 1.5) “Camera aspect ratio for DoF effect” — Set this to the aspect ratio of the camera you’re modelling. For example, a 35mm camera has an aspect ratio of 3:2 (1.5). Second Life will use this as a frame of reference for how field of view and focal length must be adjusted depending on window size.
- CameraFieldOfView – (Default 60.0) “Vertical camera field of view for DoF effect (in degrees)” — The default FoV for the camera you’re trying to simulate. Second Life will use this as a frame of reference for adjusting focal length as the in-world field of view changes.
- CameraFNumber – (Default 9.0) “Camera f-number value for DoF effect” This is a simulated f-stop as you’d see on a camera with an adjustable aperture. A typical 35mm lens might have a range of f/2 to f/22. The smaller the number, the wider the aperture. In general, a smaller f-number will result in a narrower depth of field. When trying to tune depth of field for a particular image, this is the number to modify.
- CameraFocalLength – (Default 50) “Camera focal length for DoF effect (in millimeters)” Different cameras have different focal lengths (the distance from the outer camera lens to the film). In general, a shorter focal length will result in a closer hyperfocal plane — that is, the subject distance at which the far focal plane approaches infinity. You should choose what kind of camera you’re modeling and set CameraFocalLength to the focal length of that camera and leave it. Adjusting field of view will lengthen or shorten the simulated focal length appropriately to simulate the use of a zoom lens. Learn more about hyperfocal distance.
- CameraFocusTransitionTime (Default 0.5 sec) – How many seconds it takes the camera to transition between focal distances.
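To see how CameraFNumber and CameraFocalLength interact, here is the standard photographic hyperfocal-distance formula the settings above are simulating. This is an illustration of the optics, not necessarily the exact math the viewer runs internally, and the 0.03 mm circle of confusion is the conventional value assumed for a 35mm full-frame camera.

```python
# Standard photographic hyperfocal distance: H = f^2 / (N * c) + f
# f = focal length (mm), N = f-number, c = circle of confusion (mm).
# Illustrative only -- the viewer's internal math may differ.
def hyperfocal_mm(focal_length_mm, f_number, coc_mm=0.03):
    """Distance (mm) beyond which everything is acceptably sharp
    when the lens is focused at that distance."""
    return focal_length_mm ** 2 / (f_number * coc_mm) + focal_length_mm

# With the viewer defaults, CameraFocalLength = 50 and CameraFNumber = 9.0:
h = hyperfocal_mm(50, 9.0)
print(f"{h / 1000:.2f} m")  # about 9.31 m

# Opening the simulated aperture (smaller f-number) pushes the
# hyperfocal distance out, which is why lowering CameraFNumber
# gives you a shallower, more visible depth of field.
print(f"{hyperfocal_mm(50, 2.0) / 1000:.2f} m")
```

This is why CameraFNumber is the knob worth turning for a particular shot: it changes the depth of field directly, while focal length is best set once for the camera you are pretending to hold.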
What is Happening?
If you use GIMP or Photoshop, you are aware that many image tools work on a range of pixels. Pixels are arranged in rows and columns, like a chessboard. When we use a blur factor of one, the target pixel is mixed with the eight surrounding pixels (green). The blur process is repeated for every pixel in the image. If we set the blur factor to two, the ring of 16 pixels (red-pink) surrounding those 9 is added into the mix, and so on as we increase the blur factor. The same thing is happening in the Viewer’s DoF process.
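The neighborhood-averaging idea above can be sketched in a few lines. This is a plain box blur written for clarity, not the exact filter GIMP, Photoshop, or the viewer uses (those are typically Gaussian and far more optimized), but the pixel-mixing principle is the same.

```python
# A minimal box blur: each pixel becomes the average of the pixels
# within `radius` rows/columns of it. Radius 1 mixes in the 8
# surrounding pixels; radius 2 adds the next ring of 16; and so on.
def box_blur(image, radius=1):
    h, w = len(image), len(image[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:  # clip at the edges
                        total += image[ny][nx]
                        count += 1
            out[y][x] = total / count  # mix the neighborhood together
    return out

# A single bright pixel gets smeared across its neighbors:
img = [[0, 0, 0], [0, 9, 0], [0, 0, 0]]
print(box_blur(img, radius=1)[1][1])  # 1.0 -- the 9 spread over 9 pixels
```

The key point for what follows: the radius is measured in *pixels*, so the same radius covers very different fractions of a low-rez and a hi-rez capture.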
At high or low rez the DoF feature is blurring the same number of pixels. If one blurs a 100×100 pixel area (100×100 = 10,000 pixels), that is about 0.6% of the normal image (1688×968 = 1,633,984). In the hi-rez image that same 10,000-pixel area drops to about 0.19% of the image (3000×1723 = 5,169,000), effectively cutting the blur’s relative size to less than a third.
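The arithmetic in that paragraph, written out:

```python
# The same 100x100 blurred area covers a much smaller fraction
# of the hi-rez capture than of the normal one.
blur_px = 100 * 100                 # 10,000 blurred pixels

normal = 1688 * 968                 # 1,633,984 pixels in the normal capture
hirez  = 3000 * 1723                # 5,169,000 pixels in the hi-rez capture

print(f"{blur_px / normal:.2%}")    # about 0.61% of the normal image
print(f"{blur_px / hirez:.2%}")     # about 0.19% of the hi-rez image
print(f"{hirez / normal:.2f}x")     # the hi-rez frame holds ~3.16x the pixels
```

So the blur hasn’t changed at all; the frame around it has grown by more than three times.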
When you look at an image on your computer it figures out how to display it on the screen. In the Windows Photo Viewer the Actual Size/Fit to Window button is telling the computer how you want the image pixels fitted to your screen. The Actual Size means to give every pixel in the image a screen pixel, 1 to 1. This often means the entire image will NOT fit on the screen. When the computer is told to fit it on the screen, the computer does some math and figures out how many pixels to average together and assign to a pixel on your screen.
As the image is shrunk to fit on the screen, it is literally SHARPENING the image. This means the blur added to make the image look as you wanted is being removed.
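A one-dimensional toy example shows the effect. When the computer shrinks an image, each screen pixel averages a block of image pixels, so a blurred edge that spanned several pixels in the capture spans far fewer afterwards and reads as sharp again. The numbers here are made up for illustration.

```python
# Downscaling by averaging blocks of pixels, as a display "fit to
# window" does. A soft edge several pixels wide in the input
# collapses to a hard edge in the output.
def downscale(row, factor):
    return [sum(row[i:i + factor]) / factor
            for i in range(0, len(row), factor)]

# A blurred edge: brightness ramps from 0 to 8 over four pixels.
row = [0, 0, 0, 2, 4, 6, 8, 8, 8, 8, 8, 8]
print(downscale(row, 3))  # [0.0, 4.0, 8.0, 8.0] -- the soft ramp is gone
```

The pixels were averaged, yet the transition now happens in a single pixel: the blur you carefully added has been squeezed out of the result.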
What Do We Do?
The challenge is in knowing what will be done with the final image. If the image is going to the web or otherwise intended for use on a computer, the image can be lower resolution, fewer pixels. If it is going to be printed in a magazine or other paper media, a much higher resolution image must be used.
Knowing the final use tells us whether to over blur the image we are seeing or go for a WYSIWYG.
Also, knowing how we are going to handle the image post-production tells us what we need to do. Taking a hi-rez 3,000+px wide image with the viewer and then turning it into a 600-800 pixel wide image for the web means we will be applying a large amount of sharpening by downsizing the image. We will definitely need to over blur the image. Taking 3,000px down to 600px is a reduction factor of 5 times.
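As a rough rule of thumb, to keep the same apparent blur after downsizing, the blur in the capture needs to grow by roughly the reduction factor. This is a simplification I’m using for illustration; as the next paragraph says, in practice you experiment rather than calculate.

```python
# Rough over-blur estimate: scale the blur radius by the same
# factor the image will be reduced by. A starting point, not a law.
capture_width = 3000            # hi-rez capture from the viewer
web_width = 600                 # final web image

reduction = capture_width / web_width
print(reduction)                # 5.0 -- the "5 times" from above

desired_blur_px = 3             # blur radius wanted in the web image
capture_blur_px = desired_blur_px * reduction
print(capture_blur_px)          # aim for ~15 px of blur in the capture
```

Treat the result as a first guess to refine by eye.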
While there is a way to mathematically figure out what is needed, my experience is professional graphic artists seldom bother with the math as it is difficult to include all the factors. They experiment. In time their experience guides them.
Realizing what is happening with the image process allows a person to plan how to get the result they want. Big image to small: over blur. Small image to small image: do WYSIWYG.
I’m sticking with my pixel pixies explanation :p
For all practical purposes yours works as well as mine. 🙂
Thank you for explaining the tech aspect of the DoF issue so clearly 🙂
Thanks for the kind words.
The depth of field is an *issue* that affects camera lenses. It is an imperfection of lens systems that causes a trade-off between luminosity and sharpness (the more open the diaphragm, the more luminosity but the less depth of field, i.e. the narrower the range of distances at which objects are sharp).
3D graphics don’t have this issue, because they don’t use lenses; as a result, any detail in a 3D generated scene is always sharp, regardless of how far away the object is in the virtual world or how bright the scene is.
The 3D graphics’ way of seeing a (virtual) world is in fact *way* closer to how your brain sees the (real) world through your eyes. Your eyes, albeit also suffering from the DoF issue, auto-focus in a split second on any object you are looking at, whatever its distance, and your brain assembles all those sharp, split-second views into a single sharp image of the world.
It is therefore an amazement to me that some people can find the depth of field “feature” to be a plus in 3D graphics… It’s as if you found it nicer to have the graphics in black and white instead of in full colours (black and white being another limitation of old TV cameras and even older films)! Not to mention it taxes the frame rate…
You out tech’d me… 🙂 You are however right on.
The reason people use, or at least I use, a blurred background or foreground is to focus the observer’s attention on a specific place in the image. I would assume that people who do their image composition in the viewer have the same reason. For those that turn it on and leave it on… I’m like you. I don’t understand.
Adding to what Nalates says, I’d like to point out that the idea here is to capture the depth-of-field effect in your SL photos. On RL cameras, it’s true, the depth of field primarily results from the necessity to open the lens diaphragm, allowing more light to enter. Nonetheless, this went far beyond “necessity”. I mean, in many situations you can choose to open the diaphragm or to increase exposure – and you choose to open the diaphragm because you can achieve a certain result in your pic, be it merely to increase “focus” on something (making everything else blurry), be it another artistic/aesthetic intention.
Comparison with the way we see things is also a bit problematic. Not only do we “see” DoF as a blur system, we also see it as a result of parallax (because we have two eyes). That happens neither with camera lenses (unless you intentionally superimpose two images, which is generally a job for post-production, so it has nothing to do with the lens) nor with 3D programs, at least not that I know of (maybe parallax has been simulated by someone, I don’t know, but I guess it would be really annoying to see that). But, again, what I pointed out on my blog and what I think Nalates pointed out here is not how to have a DoF sensation, but how to capture DoF blur in your SL photos – which, as I said, may respond to an aesthetic intention.
Sorry, my answer, above, was to Henri Beauchamp, but I posted it on the wrong answer level.