Sunday, May 21, 2017

Too Bright, Too Dark

The visible world around us is constantly changing. One second we may have the sun in our eyes, and the next we're in a dark closet trying to find the light switch. Such fluctuations in brightness would be a serious problem for our vision if it weren't for a handy built-in feature: our eyes automatically adapt to the lighting conditions of the surrounding environment.

Most people are well aware that their pupils change size to handle lighting conditions. A larger pupil admits more light and makes the scene brighter, while a smaller pupil admits less light and makes the scene dimmer. This mechanism is fast and effective, and is controlled by the brain stem, which acts autonomously - you don't even have to think about it.

As useful as this type of adaptation may be, it has some limitations. For example, typical indoor lighting is hundreds of times dimmer than direct sunlight, while pupil dilation only brightens the image by a factor of about sixteen (comparing a fully dilated pupil with a fully constricted one - the light admitted scales with the pupil's area, not its diameter). And then there are all those backlit situations - like when an unknown person is standing in front of a bright window, and you need to see whether it's a home intruder or a visiting friend. In short, pupil adjustment alone is insufficient. So what else do we have?
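To see why, it helps to run the numbers. Here's a quick back-of-the-envelope calculation (the pupil diameters and illuminance values are rough, assumed figures, not measurements):

```python
# Rough numbers, assumed for illustration: pupil diameter ranges from
# about 2 mm (constricted) to about 8 mm (dilated), and the light
# admitted scales with the pupil's area, i.e. with diameter squared.
constricted_d = 2.0  # mm
dilated_d = 8.0      # mm

gain = (dilated_d / constricted_d) ** 2
print(f"Dilation admits about {gain:.0f}x more light")  # -> 16x

# Compare that with the gap between a lit room and direct sun
# (rough, assumed illuminance values):
indoor_lux = 500
sunlight_lux = 100_000
print(f"Sunlight is ~{sunlight_lux // indoor_lux}x brighter")  # -> ~200x
```

A sixteenfold gain just can't cover a two-hundredfold gap, which is the whole problem.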

The answer: sensory adaptation. This term refers to the loss of sensitivity that follows exposure to a stimulus - an ability shared by all of our senses. If you've ever looked for your glasses only to find them on your face, you've experienced the effects of sensory adaptation. It's useful because it clears out old data so it doesn't obscure the new data (kind of like what Facebook does with old and new posts).

Our vision is also subject to sensory adaptation: try staring at the same point for 2 minutes, and everything will fade to the same shade of grey. This happens because the retina becomes desensitized to what it sees. Bright light causes the retina to become less sensitive, making the light appear dimmer. Dim light allows the retina to regain lost sensitivity, so the light seems brighter. In addition, different parts of the retina are affected independently, so while one object is darkened, an object right next to it could be brightened. Hence, the bright and dark areas of a scene are both made less extreme, so that the scene as a whole becomes easier to see.
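As a loose software analogy (my own toy sketch, not a model of the retina), you can mimic this patch-by-patch adaptation by normalizing each pixel against the average brightness of its neighborhood, so bright and dark regions are each pulled toward the middle independently:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_adapt(image, window=51):
    """Toy 'retinal adaptation' for a grayscale float image in [0, 1]:
    each pixel is divided by its neighborhood's mean brightness, so
    bright regions are dimmed and dim regions are brightened, locally
    and independently."""
    neighborhood = uniform_filter(image, size=window)
    adapted = image / (neighborhood + 0.05)  # small epsilon avoids division by zero
    return np.clip(adapted / adapted.max(), 0.0, 1.0)
```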

There are two types of adaptation to light in vision, then. The first affects the whole image at once, and is accomplished by pupil dilation and constriction. The second can adjust different parts of the image independently, and is accomplished by sensory adaptation.

***

What remains is to find a practical application. Obviously, photography is the place to go. A digital camera is very similar to an eye: both detect light, and both relay information to a central processor (whether a computer or a brain). One of the biggest differences between these two tools is the way they adapt to light.

Most cameras have an adjustable aperture - the hole that light enters through. It usually consists of a set of thin overlapping blades that move to make the opening larger or smaller, and it is adjusted (often automatically) based on the brightness of the scene. This corresponds directly to the pupil in the human eye.

A camera can also adjust the exposure time - the length of time the shutter stays open and light is allowed to enter the camera. If the camera is digital, it can additionally adjust the sensitivity of the sensor (the ISO setting), which amounts to multiplying the sensor's readings by a gain factor. These mechanisms correspond partially to the second type of eye adaptation: they control the brightness (or, more properly, the "exposure") of an image, but they cannot control different parts separately. In other words, unlike the eye, a camera always adjusts the brightness of the whole photo equally.
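In code terms, that digital sensitivity adjustment is just a single global multiplier - roughly something like this (a sketch with made-up values, not a real camera's firmware):

```python
import numpy as np

def apply_gain(raw, gain):
    """Global 'sensitivity' boost: every pixel is scaled by the same
    factor, then clipped to the sensor's range. Brightening the shadows
    this way unavoidably blows out the highlights too."""
    return np.clip(raw * gain, 0.0, 1.0)

scene = np.array([0.02, 0.10, 0.90])   # shadow, midtone, highlight
print(apply_gain(scene, 8.0))          # [0.16, 0.8, 1.0] - the highlight clips
```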

This is actually a pretty big problem, because many photos end up looking completely different from reality. Next time you get the chance, try taking a photo of a person standing in front of a window. Visually, you will see both the person and the background, but the camera will either make the person a silhouette, or will wash out the background.

The solution is a technique called high-dynamic-range (HDR) imaging. In HDR imaging, multiple photos are taken with different exposures. Then, the best parts of each image are combined to form a single image that is more evenly exposed. Some people criticize HDR images for looking artificial, but when done well, they mimic human vision much better than ordinary photographs do.
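If you'd rather not do the merge by hand, OpenCV ships an exposure-fusion routine (based on Mertens et al.) that does essentially this "keep the best-exposed parts" blend. A minimal sketch - the filenames here are hypothetical stand-ins for your own bracketed shots:

```python
import cv2
import numpy as np

# Hypothetical filenames for the bracketed shots, darkest to brightest.
paths = ["dark.jpg", "mid.jpg", "bright.jpg"]
images = [cv2.imread(p) for p in paths]

# Line up handheld shots before merging (median-threshold alignment).
cv2.createAlignMTB().process(images, images)

# Mertens fusion weights each pixel by contrast, saturation, and how
# well-exposed it is, then blends the whole stack accordingly.
fused = cv2.createMergeMertens().process(images)

# process() returns floats in roughly [0, 1]; convert back to 8-bit.
cv2.imwrite("fused.jpg", np.clip(fused * 255, 0, 255).astype("uint8"))
```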

As an illustration of this technique, I'll describe my process for making an HDR image of a chapel at night. I started by taking these four photos:

[The four source photos of the chapel, from brightest to darkest exposure]
As you can see, the chapel is mostly washed out in the first image, while in the later images the sky and the surrounding buildings look too dark. Using GIMP, I stacked the images and used layer masks to make each layer transparent based on its brightness. The result was an image that showed detail on the chapel without losing the surrounding buildings or the light on the clouds.
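That layer-mask trick boils down to a little array math. Here's a rough sketch of the idea for two aligned exposures (a simplification of what the GIMP masks do, not the exact recipe I followed):

```python
import numpy as np

def blend_by_brightness(dark, bright):
    """Blend two aligned float RGB images in [0, 1]: wherever the
    brighter exposure is nearly blown out, fade in the darker one.
    The mask plays the role of a GIMP layer mask painted from the
    image's luminosity."""
    luminosity = bright.mean(axis=2, keepdims=True)     # per-pixel brightness
    mask = np.clip((luminosity - 0.7) / 0.3, 0.0, 1.0)  # 1 where nearly blown out
    return mask * dark + (1.0 - mask) * bright
```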

[The finished HDR image of the chapel]
And that's it - a great image out of a number of okay photos. Kind of cool what you can do when you apply biology to technology, isn't it?


3 comments:

  1. Great technique. Awesome photo!
    I'll still take my eyes over a camera any day! :-)

  2. Hey dude! That's cool! I think I remember you from Khanacademy!
