Tech Myths: Boosting Reality

HDR: Intro to High Dynamic Range

If you already know what HDR is and what it means, then a lot of what follows may be old news to you. But as a PC and tech reviewer, somehow I totally missed the rise of HDR imaging, and in asking around, it seems that most people who aren’t active in photography are likewise oblivious. So for all of you in the latter group, keep reading.

Let’s go back to those multiple exposures. Back in my darkroom days in high school and college, I remember using multiple negatives taken at different exposures and exposing the photographic paper with different areas from different negatives, trying to get just the right blend into a final image. This involved a huge amount of trial and error, taking hours just to get one decent result (if I got one at all). Gustave Le Gray pioneered this technique back in the 1850s. A century and a half later, we now have computer analysis tools able to do something similar but even better through a process called high dynamic range (HDR) imaging. To oversimplify things, HDR software takes a collection of exposures of a single scene and merges them, combining the dynamic ranges found in each of them to produce a single image with a greatly extended range.
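To make that merging step concrete, here is a minimal sketch in Python of the basic idea: each bracketed shot is divided by its exposure time to estimate scene radiance, and the estimates are blended with weights that trust well-exposed (mid-gray) pixels over crushed shadows or blown highlights. The hat-shaped weighting function is one illustrative choice among many; real HDR software also recovers the camera's response curve and aligns the frames, which this sketch skips.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Merge differently exposed shots of one scene into an HDR radiance map.

    images: list of float arrays with pixel values in [0, 1]
    exposure_times: matching list of exposure times in seconds
    """
    num = np.zeros_like(images[0])
    den = np.zeros_like(images[0])
    for img, t in zip(images, exposure_times):
        # Hat weight: 1.0 at mid-gray, 0.0 at pure black or pure white,
        # so clipped shadows/highlights contribute little or nothing.
        w = 1.0 - np.abs(img - 0.5) * 2.0
        num += w * (img / t)  # each shot's estimate of scene radiance
        den += w
    return num / np.maximum(den, 1e-6)

# Synthetic scene whose radiance spans a 1000:1 range -- far more than
# any single [0, 1] exposure can capture without clipping.
radiance = np.array([0.02, 0.2, 2.0, 20.0])
times = [1.0, 0.1, 0.01]  # bracketed exposures: long, medium, short
shots = [np.clip(radiance * t, 0.0, 1.0) for t in times]

hdr = merge_exposures(shots, times)  # recovers the full radiance range
```

In this noise-free toy case the merged map reproduces the original radiance values exactly; the point is that no single clipped exposure could have.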

In fact, this HDR image contains a far wider range of values than nearly any monitor can display, and most native HDR shots look like garbage when first created. The image must then be tone mapped. Tone mapping takes the native values found in the image and “maps” them to a color set compatible with the output media. The algorithms and settings used when doing this tone mapping have a dramatic effect on output quality. You’ll find that many HDR images look artificial and surreal, like yesteryear’s Technicolor on steroids. Others look surprisingly natural. You’ll find plenty of good and overdone examples of HDR work in the Flickr HDR Pool.
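One of the simplest tone-mapping algorithms is Reinhard's global operator, which compresses an unbounded radiance range into displayable values. The sketch below (with an assumed mid-gray "key" of 0.18, a conventional photographic default) shows the core of the mapping; production tone mappers layer local contrast, color handling, and user-tunable settings on top of this.

```python
import numpy as np

def reinhard_tonemap(hdr, key=0.18):
    """Global Reinhard operator: map HDR radiance into displayable [0, 1)."""
    # Scale the image so its log-average luminance lands at `key` (mid-gray).
    log_avg = np.exp(np.mean(np.log(hdr + 1e-6)))
    scaled = hdr * (key / log_avg)
    # Compress: x / (1 + x) squeezes any positive value into [0, 1)
    # while preserving the ordering of tones.
    return scaled / (1.0 + scaled)

hdr = np.array([0.02, 0.2, 2.0, 20.0])  # radiance spanning 1000:1
ldr = reinhard_tonemap(hdr)             # every value now fits on a monitor
```

Because the compression curve is monotonic, brighter scene areas stay brighter after mapping; what changes between tone-mapping algorithms and settings is how aggressively the extremes are squeezed, which is exactly where the "surreal" versus "natural" looks come from.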