One of the primary challenges in digital landscape photography is coping with natural dynamic range that far exceeds what a digital camera sensor can capture. While sensor capabilities have improved significantly over the last twenty years, they remain far less capable than the human eye. For example, try looking at and photographing a full moon on a mostly clear night. While your eyes will see detail in both the sky and the moon (e.g., the dark lunar areas, which are ancient lava flows, and wispy clouds and stars in the sky), your camera will likely capture an all-white moon, an all-black sky, or both. While many photographic subjects can be captured adequately with a modern DSLR camera, sometimes the camera needs a little help. I'll discuss three such methods in this article.
The first and most familiar technique for dealing with natural dynamic range in excess of your camera sensor's is HDR (High Dynamic Range) processing. Numerous HDR software packages are available, and they are not used solely to address dynamic range issues: many photographers like the punch such software can sometimes add to a photo. While I have certainly done that too, I primarily use HDR software to compensate for excessive dynamic range, such as with alpenglow light at dawn.
As all photographers know, the best light of the day usually occurs when the sun is low on the horizon, either at dawn or at sunset. Because the light passes through significantly more atmosphere than in the middle of the day, the shorter wavelengths are filtered out, leaving the warm hues of red and orange. These are the situations where I primarily use HDR processing. In the example below, the first light of the day is illuminating the high peaks behind Dream Lake in Rocky Mountain National Park. While the sunlight is striking the tall mountains, the area at lake level is still dark. Producing an image that displays the alpenglow light on the mountains while still showing detail at lake level requires blending exposures. In this case, three images, each 1 EV apart, were sufficient. The three images were combined into the final result with HDR Efex Pro 2, which is my preference for HDR software, used as a plug-in with Adobe Photoshop.
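The core idea behind merging a bracketed series can be sketched in a few lines of code. This is not the algorithm HDR Efex Pro 2 actually uses (its tone mapping is proprietary); it is a minimal "well-exposedness" blend, where each pixel of the result is drawn mostly from whichever frame exposed that region closest to mid-gray. The function name and the `sigma` parameter are my own illustrative choices.

```python
import numpy as np

def fuse_exposures(images, sigma=0.2):
    """Blend a bracketed exposure series with well-exposedness weights.

    images: list of float RGB arrays in [0, 1], all the same (H, W, 3) shape,
            already aligned. Pixels near mid-gray (0.5) receive high weight,
            so shadows come from the bright frames and highlights from the
            dark frames.
    """
    stack = np.stack(images)                        # (N, H, W, 3)
    # Gaussian weight centered on mid-gray; sigma controls tolerance
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)   # normalize per pixel
    return (weights * stack).sum(axis=0)
```

A production fusion (such as the Mertens method) would also weight by contrast and saturation and blend across a multiscale pyramid to avoid seams, but the per-pixel weighting above is the essential mechanism.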
HDR processing is often simple and effective. But sometimes problems arise. Ghosting occurs when parts of the scene move during shooting, such as leaves blowing in the wind. If there is a hint of noise in the sky, HDR processing will probably make it worse. And in my experience, the more shots I need to cover the entire dynamic range of a landscape scene, the more likely something will go wrong when processing the image.
An alternative to traditional HDR software processing is the use of luminosity masks. This is a fairly advanced method of combining areas of (usually) two images that together cover the dynamic range of the landscape scene. A tutorial explaining the technique can be found here, along with a link to download a free luminosity mask action set. In simple terms, luminosity masking involves overlaying a darker version of an image on top of the lighter version. The action set then produces a mask so that the darker version can be painted onto the lighter image, eliminating overexposed or 'blown' highlights. It is definitely worth following the tutorial by Jimmy McIntyre and downloading the two example images he provides. While I can post-process images with luminosity masks about as quickly as with traditional HDR software, the technique takes quite a bit of practice and is certainly more complicated than simply choosing a preset from a familiar program. In my example below, I used a luminosity mask; without this technique, the woods to the left of the river would be too dark, and the more brightly illuminated leaves above the river would be overexposed.
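Conceptually, the mask an action set generates is just the image's own luminance, so the darker exposure is painted in exactly where the lighter one is brightest. Here is a minimal sketch of that idea; the function names and the `power` parameter are hypothetical, and a real action set builds a whole family of masks (brights, midtones, darks) rather than this single one.

```python
import numpy as np

def luminance(img):
    """Per-pixel luminance of a float RGB array, using Rec. 709 weights."""
    return img @ np.array([0.2126, 0.7152, 0.0722])

def blend_with_luminosity_mask(light, dark, power=2):
    """Paint the darker exposure into the blown highlights of the lighter one.

    light, dark: aligned float RGB arrays in [0, 1].
    power: raising the mask to a power restricts it to the brightest tones,
           roughly like picking a narrower 'brights' mask from an action set.
    """
    mask = luminance(light) ** power   # (H, W): 0 in shadows, 1 in highlights
    mask = mask[..., None]             # add channel axis for broadcasting
    return light * (1 - mask) + dark * mask
```

Because the mask is derived from the image itself, the transition between the two exposures is self-feathering, which is why luminosity masking tends to avoid the halos that tone-mapped HDR can produce.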
Luminosity masking is generally much cleaner than traditional HDR processing and avoids issues with noise or artifacts. If I had used HDR software on the image above, there would definitely have been problems with moving leaves, since the exposures were long to produce the silky water. As a general rule, if two images captured 1 EV apart can cover the entire dynamic range of a landscape scene, I use a luminosity mask to produce the final image. If three images are required (which is often the case with alpenglow shots), I use traditional HDR. But what to do when the natural dynamic range of a landscape scene is monstrous?
I was fortunate enough to live in the path of totality for the Great American solar eclipse of 2017. I spent months preparing and practicing how to get the most from two minutes of a total solar eclipse. Some of the shots were easy (e.g., the diamond ring phase at the beginning and end). But when I first tried to process images showing the solar corona, the results were horrible. Producing images that showed the corona extending multiple solar radii from the eclipsed sun required many exposures, and all that dark sky looked dreadful when combined with HDR processing. Luckily, I stumbled upon a technique using Photoshop smart objects that produced clean images. I'll describe the basic steps below with an example from Lower Antelope Canyon in northern Arizona.
Antelope Canyon is a narrow slot canyon carved through desert sandstone by flowing water. The scene is surreal, beautiful, and spans a huge dynamic range. When looking horizontally down the canyon (no sky in the image), three exposures may cover the range. When I was there, I shot quite a few sequences looking upward and including the sky; in those cases, as many as seven exposures, each 1 EV apart, were necessary to cover the range. As a result, most images in a series have a completely blown sky, while the ones that retain the blue color have canyon walls too dark to discern any detail. While I originally used HDR software to process my Antelope Canyon images, I have since found the Photoshop smart object technique to be superior. I'll briefly describe the steps below.
Open Photoshop, choose File > Scripts > 'Load Files into Stack', then check both 'Attempt to Automatically Align Source Images' and 'Create Smart Object after Loading Layers'. The series of images will be loaded as layers, aligned, and converted into a smart object. From there, under the Layer menu, choose 'Smart Objects' and then 'Stack Mode'. The options worth exploring include 'Median' and 'Mean'. The distinction is statistical, but sometimes one simply looks better than the other, so I often duplicate the smart object to try each method of combining the series and compare. In the example below, the sky was still a bit too light, so I used one copy of the smart object with the stack mode labelled 'Minimum'. That combination produced the perfect sky, since it essentially used the darkest image. With two separate layers, one showing the requisite detail in the canyon walls and another with the perfect sky, I combined them with a luminosity mask to produce the final image below.
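The stack modes themselves are simple per-pixel statistics over the aligned layers, which is easy to see in code. This sketch mirrors what Median, Mean, and Minimum compute conceptually (the function name is mine, not a Photoshop API), assuming the frames are already aligned as the load script does for you.

```python
import numpy as np

def stack_mode(images, mode="median"):
    """Collapse an aligned exposure stack with a per-pixel statistic,
    analogous to Photoshop's Stack Mode on a smart object.

    images: list of float RGB arrays in [0, 1], all the same shape.
    mode:   'median' (robust to outlier frames), 'mean' (smooth average),
            or 'minimum' (keeps the darkest value, e.g. a deep-blue sky).
    """
    stack = np.stack(images)   # (N, H, W, 3)
    ops = {"median": np.median, "mean": np.mean, "minimum": np.min}
    return ops[mode](stack, axis=0)
```

This makes the sky trick above concrete: 'Minimum' keeps the darkest value each pixel ever took across the series, so the one frame that held the blue sky wins there, while 'Median' or 'Mean' average away noise in the canyon walls.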
Naturally, different photographers will develop their own road map to their idea of a great image, and I doubt I'll ever encounter one who processes images exactly as I do. But exploring the variety of techniques for dealing with dynamic range in digital imagery can only help you develop your own map. To that end, I hope this article helps.