Friday, December 26, 2008

Look Sharp!

Every once in a while I come across images on the Web that look unusually crisp and sharp. And not in a bad way: images with obvious halos and other over-sharpening artifacts are almost as common as those that look too soft. After tinkering with a few approaches, I've settled on a method that works well for me.

For a long time, I've been a big fan of PixelGenius's PhotoKit Sharpener, a suite of Photoshop tools for image sharpening based on the seminal work of Bruce Fraser and other authorities. Although I still use it exclusively for all sharpening in my printing workflow, I've sometimes been less enamored of the results for Web sharpening.

Until recently, I used PhotoKit Sharpener for capture sharpening and then used Photoshop's Bicubic Sharper algorithm for resizing down to Web dimensions. I've been relatively pleased with the results but looked around for a little extra boost.

After reading up on controlled comparisons of resampling algorithms (of which there are many more than Photoshop offers), there seemed to be a rough consensus that the Lanczos, Sinc, and Catrom algorithms tend to do a better job than most of the others, including Photoshop's bicubic family.

I decided to try using the ImageMagick convert command to resize using the Lanczos filter. It's a command-line tool (which automatically turns many people off) with a huge and daunting set of often sparsely-documented options. But, after some Web research, here's what I came up with as my starting point for down-sizing images for Web use:


convert -filter Lanczos -resize "500x500>" -density 96x96 \
-quality 80 -sampling-factor 1x1 -unsharp 0.6x0.6+1+0.05 \
input_file_name output_file_name


This combination of command line options does the following:

-filter Lanczos
obviously selects the Lanczos resampling method from the many that ImageMagick supports.

-resize "500x500>"
indicates we want the down-sized image to be 500 pixels on the longest dimension. The ">" character indicates that we don't want to create a new image if the existing one is already that size or smaller.

-density 96x96
is just bookkeeping to indicate the image has a resolution of 96 ppi. It's not really necessary.

-quality 80
indicates the JPEG compression quality I want.

-sampling-factor 1x1
indicates the chroma sampling factor for the JPEG compression. A 1x1 factor disables chroma subsampling (most JPEG encoders default to 2x2), which helps keep color detail crisp along edges.

-unsharp 0.6x0.6+1+0.05
applies a gentle unsharp mask after the down-sizing. This is one of the most difficult areas to find useful information and guidelines on, but this is what I arrived at for 96 ppi display, and it works well for me in most cases.

When I prep an image for Web use, I do the bulk of the work in Photoshop, convert to the sRGB color space (boo! hiss!) and save it as a TIFF file. Then I apply the convert command just described to create my Web-sized image. I actually have this command, and a couple others to handle IPTC copyrighting, etc., in a batch file that I just pass the TIFF file name to for processing.
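
As an illustration, here's a minimal sketch of what such a wrapper might look like as a bash script (the file names and the exiftool step are hypothetical; my actual batch file differs in the details):

#!/bin/bash
# Web-prep sketch: takes a TIFF name, writes a down-sized JPEG next to it.
in="$1"
out="${in%.tif}.jpg"
convert -filter Lanczos -resize "500x500>" -density 96x96 \
  -quality 80 -sampling-factor 1x1 -unsharp 0.6x0.6+1+0.05 \
  "$in" "$out"
# The IPTC copyright step would go here, e.g. with exiftool:
# exiftool -overwrite_original "-IPTC:CopyrightNotice=(c) My Name" "$out"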

Not every image will work optimally with this particular process but it's looking like a great starting point for my taste. Give it a try if you have ImageMagick installed. (It's available on all major platforms.)

Sunday, November 2, 2008

Faking Slow Exposures for Waterfalls

Here's a way to get that soft milky look in the moving water of a waterfall when you can't make exposures long enough to do it directly. I recently had a pocket camera and pocket tripod in my coat pocket on a nice hike along a series of waterfalls in Ithaca, NY. I didn't have a polarizer or neutral density filter to enable slower exposures, and this example was shot in open shade on a bright sunny day. I used the slowest ISO I had available (80) and closed the aperture as much as I could without destroying the image with diffraction. I was still stuck with a 1/30 second exposure, giving the following result:



Not bad but I wanted the falling water to look softer. With the camera fixed on the little pocket tripod, I shot 5 identical exposures of the waterfall without moving the camera. I used the 2-second timer on each exposure to make sure my shutter-button presses didn't mess up the exposures.

I brought the 5 images into Photoshop on separate layers of one file. I wanted to blend the images together so that stationary objects stayed the same and the moving parts (the water) got averaged together. It's a simple matter of changing the layer opacities to allow each layer to contribute equally to the final composite. The background layer stays at 100% opacity, the second layer is at 50% opacity, the third at 33% opacity, and so on as seen here:



If you think about it a little bit, this allows each layer to contribute equally to the averaged stacked final image. Of course I could have used fewer or more images depending on the desired outcome, but you get the idea. The result with 5 stacked images is much closer to what I was hoping for. Ten images would have made it really milky smooth.
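
Incidentally, if you'd rather skip the layer-opacity arithmetic, ImageMagick can do the same averaging in one shot. A sketch, assuming five aligned frames named shot1.tif through shot5.tif:

convert shot1.tif shot2.tif shot3.tif shot4.tif shot5.tif \
-average averaged.tif

The -average operator computes the per-pixel mean of the input images, which is exactly what the opacity stack above accomplishes.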

Thursday, October 2, 2008

Night Sky Shooting

Current DSLRs are developing the characteristics that make night sky photography viable: lower noise at higher ISOs and better long-exposure performance. Upon getting a Nikon D700, one of the first things I wanted to do was try night sky photography like I used to do in the good old film days. You have two choices with long exposures of the night sky: either keep the camera stationary and get star trails, or move the camera to compensate for the earth's rotation to get long exposures of faint sky features without smearing them into trails.

It's the latter that I find most interesting usually. So, with a vacation based on a cottage overlooking a pond coming up, I hastily put together an equatorial mount to take night sky pictures if the weather cooperated.



An equatorial mount is simply a camera (or telescope) mount that allows you to rotate your gear precisely around an axis parallel to the earth's axis of rotation but in the opposite direction to compensate for the apparent rotation of the night sky as the earth turns.

You can build a primitive mount using some scrap lumber and some common hardware as I did. At its core, the mount is two boards joined at one end by a door hinge. The hinge is our rotation axis and will be aligned with earth's axis as described later. The bottom board is fixed in position so that it correctly aligns the hinge. At the end opposite the hinge on the bottom board we drill a hole through which a T-nut can be driven for a 1/4 X 20 bolt to be screwed in. (This means a bolt 1/4 inch in diameter with 20 threads per inch. The bolt should be at least 3 inches long.) In use we slowly (one revolution per minute) screw in the bolt so that it drives the upper board away around the hinge axis. This drive bolt is located 11.43 inches from the center of the hinge axis.

Here's why. (Math-phobes skip this paragraph, it will make your head hurt.) The earth completes one revolution every 23 hours, 56 minutes, and 4.1 seconds approximately. This is the sidereal or star day, not a solar day which is the familiar 24 hours, the extra four minutes of which are spent rotating a bit more to compensate for the earth's orbit around the sun. So, this means the earth rotates 360° every 23 hours, 56 minutes and 4.1 seconds or about 1436 minutes. That means the earth rotates about 0.25° every minute so we need to rotate the same amount in the opposite direction to compensate. Because we're using a bolt with 20 threads per inch, we can move our upper board 1/20 inch per minute. We need to find the distance between the hinge and the bolt that will cause the upper board to rotate 0.25° when we raise the far end 1/20 inch. Simple trigonometry shows that distance to be 0.05/tan(360/1436) or 11.43 inches. There's some heavy round off in these calculations but the error is well within our ability to drill holes with precision into lumber.
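
If you want to check that arithmetic, or rework it for a different thread pitch, a shell one-liner does it (awk lacks a tan function, so sin/cos stands in; atan2(0,-1) is just pi):

awk 'BEGIN { pitch = 1.0/20; rad = (360/1436) * atan2(0,-1)/180; print pitch / (sin(rad)/cos(rad)) }'

This prints about 11.43, the drive-bolt distance in inches.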

Next we need to mount our board assembly to keep our hinge axis in alignment with earth's. In my case, I knew I was going to be shooting pictures at 44° north latitude so the hinge needed to be angled up at 44°. I cut a couple of supports at this angle and screwed the bottom axis board to it. An adjustable setup would be better if you're going to use this at different locations. The angled supports were in turn screwed to a piece of plywood with a hole drilled in the center to thread a bolt through to secure the whole apparatus to a tripod. (You could simply set it on a level table top instead.)

The final tweak was to screw a piece of wood cut at 45° to the top rotating board to provide some elevation for mounting a ballhead and the camera, giving them clearance to point in any direction needed.

To align everything on site, it helps to have a compass and a good map that provides a couple of important pieces of information. You need your latitude so that the hinge can be angled up the proper amount, and you need the direction of true north that the hinge axis must be aligned with. Good maps, such as the USGS maps in the US, also include an indication of how far magnetic north deviates from true north. In my case, I was shooting at about 44° north latitude at a location where magnetic north was 14.5° west of true north. After adjusting the compass accordingly, I lined everything up beforehand, composed the approximate scene I wanted to capture, locked down the controls, and waited for dark. An alternative to all this is to eyeball along the hinge at night so it points in the direction of Polaris, the North Star, if you live in the northern hemisphere. In my case this wasn't an option because north was obscured by trees.

When shooting, keep a watch handy with a little light on it and use it to synchronize your rotation of the bolt with the seconds indicator of your watch—remember, we're screwing in the bolt at 1 complete revolution per minute. Trip your shutter and start slowly rotating the bolt and, if you've measured and aligned everything properly, you should get nice crisp star images.

Or not. As I found out, many wider-angle lenses don't do stars very well. There is a lot of coma and astigmatism that you don't notice in normal shooting. Experiment with lenses and f/stops to find the right combination to give you the best results.

In my case, I was getting good results at ISO 3200, 30-second exposures with a zoom set at 35 mm and f/4. I would shoot multiple exposures and then use stacking software to combine them to get nice clean (relatively) noise-free images.



Note that this solution is only accurate for a few minutes at a time, when the boards are closest together. As they rotate farther apart, the movement of the bolt pressing against the upper board has a progressively smaller effect on the rotation rate of the upper board. If you want a more robust solution, things get a lot more involved and you may want to simply buy a commercial equatorial mount (and maybe even a clock drive to turn it).

Update March, 2009: The image above, of the Milky Way over Osgood Pond in the Adirondacks, was the grand prize winner in the Adirondack Life magazine's annual photography contest. It was shot from the canoe house deck of the Gazebo, a gem of a rental cottage on the pond.

Sunday, September 14, 2008

Highs and Lows

I was at an arts and music festival helping out Lori with her pottery booth. I had a chance to wander around and shoot a few pictures. I brought the little Canon G9 and a monopod to get something other than the straight-on shots we instinctively go for.



One thing you can do is attach the camera sideways to one end of the monopod, rest that end on the ground, and have the camera shoot upwards to get a ground-level view of the world.



Another easy way to get a different viewpoint is to fully extend the monopod and raise it high overhead to get a top-down view.

In both cases I find it convenient to simply set the camera at the hyperfocal distance (about 6.5 feet or 2 meters for the G9 at 7.4 mm set at f/5.6). Set the mode to aperture-preferred, put on a 5-second timer, click the shutter and either set the camera down or raise it high and wait for your 5-count to let you know the shutter has released.

It's not going to get you precision framing but you can get decent quality images from a seldom-used perspective. Also, if you're using a pocket camera, it's quite easy to move the camera into position at the end of the monopod and you won't attract quite as much attention as you would trying to wrestle a big DSLR into position to do the same thing.

(The only disturbing comment I got was from a woman who came from behind and, as she passed, said "Oh, thank God; I thought you had a gun." Maybe I should put some nice friendly colored electrical tape on my gun-metal grey monopod to reduce such anxieties.)

Friday, September 5, 2008

Part 7: Final Thoughts


To wrap things up on this series of posts, here are some personal observations on images created with point-and-shoot consumer pocket cameras like the Canon G9. This photo is a ground-level view of the Lackawanna Train Station, now a historical museum, in Vestal, NY. It was shot with the G9 and processed using the techniques we've been going through. Shot at 7.4 mm and f/4, the processing removed the mustache distortion (the combination of barrel and pincushion distortion often seen in shorter focal length zooms), reduced the transverse chromatic aberration, and compensated for the light fall-off (vignetting) at the edges.



Looking at a detail of the original image, we can see the chromatic aberration along the high-contrast edges of an eave ornament. After processing we see the chromatic aberration cleaned up and image details are sharper.



So let's summarize what we're working with:

  1. Small-sensor pocket cameras have great depth of field. In many cases this is good but it substantially removes one of the creative controls you have with a DSLR and fast lens: no chance of getting nice gauzy out-of-focus backgrounds with these cameras.
  2. Pocket cameras all have some amount of transverse and longitudinal chromatic aberration. We can reduce longitudinal (axial) chromatic aberration by using a smaller aperture; we can reduce transverse chromatic aberration using red and blue channel corrections. There are other chromatic aberrations inherent in the small sensors and inexpensive lenses that are beyond our ability to correct but these tend to be secondary.
  3. These small cameras (and many lenses on the big boys) have issues with geometric distortion. Barrel distortion, pincushion distortion, and even the more complex compound distortions can be corrected effectively. Not all pictures need this attention but ocean horizons, building edges, and other straight-line picture elements leap out when geometric distortion goes uncorrected.
  4. Small sensors have noise issues (at least as of this writing). Sadly the manufacturers still feel obligated to push the sensor resolution to ever-higher levels for marketing advantage but the price is paid in increased noise. Many cameras, like the Canon G9, would have far better images if the sensor sites were a bit larger even if it means a lower megapixel count.
  5. To the experienced eye, a close look at images created by any of these cameras will make it clear that they come from a smaller, inexpensive point-and-shoot and not one of the bigger, pricier DSLRs or medium format digital cameras with good lenses.
  6. Almost any current point-and-shoot pocket camera is far more than adequate for images destined for the Web or for small family album prints. Only with extra care, as we've been exploring, will the images survive closer scrutiny and larger printing. However, you will not see an image taken with this class of camera on the cover of Architectural Digest.


So, although it's tempting to say we can, with the tools and techniques described here, enhance a point-and-shoot image up into DSLR level of quality, that's only true if the shooter using a DSLR doesn't apply equal skill and similar tools to nudge those images to the next level of quality. The point-and-shoot images start off at a substantial disadvantage, can be improved greatly, but won't ever quite reach the quality of a carefully crafted DSLR image.

For the G9 in particular, here's a summary of its characteristics:

  1. Images suffer from moderate geometric distortions, edge fall-off, and chromatic aberrations as expected. None are show-stoppers and a carefully shot image can be cleaned up substantially using the techniques we've explored here.
  2. The aperture "sweet spot" for the G9 is between f/4 and f/5.6. Any lower and axial chromatic aberration becomes an issue. Any higher and diffraction starts turning detail to mush.
  3. Many of us who own the G9 wish the zoom range went a little wider, say to a 24 mm equivalent rather than 35 mm. There is an aftermarket of adapters that extend the zoom range optically. I've mentioned the Raynox wide-angle adapter which, when added to the G9, yields a focal length down in the 24 mm equivalent range. The image, however, suffers greatly: uncorrectable vignetting at the corners at short focal lengths, bad geometric distortion, and substantial chromatic aberration. Some, but not all, of this is correctable. My advice to anyone wanting quality 24 mm equivalent images from their G9: don't consider the Raynox adapter. Maybe others are better, but I have my doubts.


Bottom line: don't bring a knife to a gunfight. A consumer pocket camera does not compete with any reasonably good DSLR with a good lens. The pocket camera does, however, work great when you're not carrying a DSLR. Pocket cameras have some other strengths compared to DSLRs, which we'll visit in future posts, not the least of which is that they attract less attention than the flamboyant DSLRs. (For some reason, some uninformed power-tripping Barney Fifes of law enforcement don't find pocket cameras the grave threat to national security that an SLR is, especially one mounted on a tripod.)

Monday, August 25, 2008

Part 6: Image Correction Alternatives

After enduring the math and tedium of developing our own corrections for images with vignetting, geometric distortion, and chromatic aberration, it's worth looking at some alternatives not involving such a high pain level.

First, if you're a user of a recent version of Photoshop, you have the Lens Correction filter available. This filter has adjustments for the three areas we've been exploring. It works fine for estimating simple adjustments in any of these areas, but it doesn't have any built-in knowledge of the adjustments required for particular lens/camera combinations. If you only need approximate corrections and like the easy-to-use interface, this is a good way to go. If, on the other hand, you have stricter requirements, or you have short focal length lenses with complex mustache distortion or other more complex problems, the Lens Correction filter won't handle it.

Another option to consider is DxO Optics Pro which is a sophisticated product handling a number of image problems including those we've been discussing plus many others. This software is tied directly to your camera/lens combination by way of modules that have very specific correction parameters built in that are the result of DxO's expert lab testing. Results can be very good, the product comes with standalone and Photoshop plugin versions, and can be used to batch process large quantities of images fairly easily. On the minus side, it's not inexpensive (although certainly not overpriced for what it does), it's a bit daunting to learn the quirky interface, and it only supports a subset of camera and lens combinations. For example, at the time of this writing, it doesn't support the Canon G9 we've been using as our example in this series.

Finally, there's PTLens, a much less ambitious effort in terms of functionality but one with much broader camera/lens coverage than DxO. It comes in a standalone version (handling only JPEG and TIFF images) and a Photoshop plugin. In terms of cost (about 10% of DxO), simplicity, and effectiveness at its strong point, geometric distortion correction, this is a gem. It also handles vignetting and chromatic aberration, but you must make those adjustments manually, unlike the corrections for geometric distortion, which cover a very broad range of cameras and lenses. PTLens has its genesis in Panorama Tools and handles geometric distortion under the covers in the same way we discussed using the third-order polynomial radial corrections—except you don't have to be aware of it. By the way, the manual corrections for chromatic aberration in PTLens are also analogous to what we developed using corrections for the red and blue channels. In fact, the PTLens sliders show the numeric radial deviation from 1.0 (no change) as you visually tweak the image to your satisfaction. You can interchangeably use the PTLens numbers and the d coefficients we used in hugin parameters. I routinely use PTLens to get in the ballpark and then transfer the corrections to hugin to refine them until I've come as close as I can to eliminating transverse chromatic aberration. The PTLens Photoshop plugin can handle large numbers of images simply by being included in a Photoshop Action run during ACR processing, for example. The one thing I don't like about PTLens is that it doesn't handle very large images well, having a tendency to bomb out with memory errors. But, other than that, you'll have a hard time finding a bigger "bang for the buck".

Both DxO Optics Pro and PTLens have free trial versions which should be exploited if you're at all interested in looking at their capabilities.

Given you can buy good solutions for correcting image problems we've been discussing, you may wonder why all the trouble doing it ourselves with all those measurements and mathematical manipulations. There are a few reasons. First, none of the products mentioned above handles all camera/lens combinations. For example, DxO doesn't handle the Canon G9 and PTLens doesn't handle the Canon G9 with the Raynox wide angle adapter we mentioned back in the post about geometric distortion. Second, by doing your own measurements you can be more confident that your custom-built corrections are as accurate as possible for your particular images. And third, there's nothing like grinding your nose into the pixel-level image flaws to get a real understanding of what your equipment capabilities and limitations are.

Next up, some final thoughts on compact consumer cameras generally and the Canon G9 in particular, given what I've learned going through these image correction exercises. After that, maybe we'll get back to some real photography discussion instead of all this techie pixel-bending stuff.

Monday, August 18, 2008

Part 5: Putting It All Together

In previous posts, we've looked at the Canon G9 point-and-shoot camera's vignetting, geometric distortion, and chromatic aberration. We've also looked at strategies for addressing these issues, primarily using the fulla command from the hugin panorama photo stitcher package.

The fulla command doesn't have a nice friendly GUI interface but it does allow us to put together arbitrary corrections to be applied to large numbers of images without a great deal of effort. Each of the fulla corrections we've explored in previous posts can be combined into a single command addressing vignetting, geometric distortion, and chromatic aberration.

Your workflow sequence is very important. The corrections we've been discussing should be applied to the images before any cropping, resizing, etc. have been done. I always shoot raw so I use ACR or some other tool for "developing" the image to set the correct initial tonality, color temperature, etc. and then export the image in 16-bit TIFF format (lossless unlike JPEG). I apply the image corrections and then bring the corrected version into Photoshop for subsequent processing.

Let's start with a hypothetical file shot on the Canon G9 at 7.4 mm and f/4, and processed from the raw CR2 file into a 16-bit TIFF file called example.tif. To apply my standard corrections for vignetting, geometric distortion, and chromatic aberration, I would use the following command:

fulla -c 1.0398:-0.1155:0.1954:-0.1605 \
-g 0.028:-0.0871:0.0521:1.007 \
-r 0:0:0:1.00024 -b 0:0:0:1.00041 example.tif

(The '\' characters indicate arbitrary line breaks for formatting here. This is actually all one command line.)

The "-c" option gives the polynomial coefficients for correcting vignetting, the "-g" option gives the polynomial coefficients for geometric distortion correction, and the "-r" and "-b" options provided the polynomial coefficients for transverse chromatic aberration correction. (But you already knew that.)

When fulla is done crunching the numbers, it outputs a file with a "_corr" filename suffix. So our example correction would create a new file called example_corr.tif. It will look substantially better than the original with the corrections applied.

Naturally you don't want to type in that command every time you process a file, so you can create an MS-DOS batch file (on Windows) or a bash script (on Unix systems) to generalize it. Let's say we want to correct arbitrary numbers of files with one command. We can create a batch file like this:


for %%f in (%1) do \
fulla -c 1.0398:-0.1155:0.1954:-0.1605 \
-g 0.028:-0.0871:0.0521:1.007 \
-r 0:0:0:1.00024 -b 0:0:0:1.00041 %%f

(Once again, this should be all on one line in the real batch file.) Assuming you named this something like G9_74_4.bat (because it's only useful for the G9's 7.4 mm f/4 images), you can process all the TIFF files in a directory with a single command:

g9_74_4 *.tif

Unix, Linux, and Mac users can create analogous script files using for loops to do precisely the same thing.
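
For example, a minimal bash sketch along those lines (save it as g9_74_4.sh or similar):

#!/bin/bash
# Apply the G9 7.4 mm f/4 corrections to every file given on the command line.
for f in "$@"; do
  fulla -c 1.0398:-0.1155:0.1954:-0.1605 \
        -g 0.028:-0.0871:0.0521:1.007 \
        -r 0:0:0:1.00024 -b 0:0:0:1.00041 "$f"
done

Then ./g9_74_4.sh *.tif does the same job as the batch file.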

I must mention something about the fulla command here: it's flaky and not mature in some regards. One frustration is that, when it outputs a corrected image, that corrected image has its metadata stored in an unconventional way that is not visible to Photoshop and many other applications. If you're running any kind of responsible workflow, this is not good, but there is a way around the problem.

ExifTool by Phil Harvey is a superbly-done high-function image metadata management tool. It has an extraordinary number of capabilities and it handles them well. If you download the tool, you can use it to save your EXIF (and other metadata) from your image, apply your fulla corrections, and then restore the original metadata in standard format. For example, you can save the metadata for all TIFF files in a subdirectory with this command:

exiftool -o %f.mie -ext tif .

Process your files using the fulla corrections and then restore the metadata to the corrected files.

exiftool -overwrite_original -tagsfromfile %-.5f.mie \
-ext tif .

You can even skip the initial save to a MIE file (Meta Information Encapsulation) and simply rewrite the original file's metadata over the fulla output file:

exiftool -overwrite_original -tagsfromfile %-.5f.%e \
example_corr.tif


Finally, if you're really ambitious and have Perl programming skills, you could use the Perl Image::ExifTool package to determine the focal length and f-stop for the image and then do a table lookup to determine which of the fulla option parameters to use. I've done a rudimentary job of this just for the subset of camera settings I almost always use for the Canon G9.
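
If Perl isn't your thing, even a shell script can do a crude version of the lookup by asking ExifTool for the shooting parameters. A hypothetical sketch (the coefficient table here covers only two focal lengths, and the exact FocalLength string can vary by camera, so check your own files first):

#!/bin/bash
# Pick fulla geometric-distortion coefficients based on the image's focal length.
file="$1"
fl=$(exiftool -s3 -FocalLength "$file")   # -s3 prints just the tag value
case "$fl" in
  "7.4 mm") g="0.028:-0.0871:0.0521:1.007" ;;
  "8.2 mm") g="0.0313:-0.089:0.0474:1.0103" ;;
  *) echo "No coefficients on file for focal length: $fl" >&2; exit 1 ;;
esac
fulla -g "$g" "$file"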

If all this seems like a lot of trouble to correct the occasional image, you're right. It's most worthwhile if you routinely have large numbers of images you wish to optimize; then the use of fulla in batch or script files makes more sense.

If you only do occasional corrections, want something a bit easier to deal with, or want a commercial solution, stay tuned. Next up we'll glance at a couple of alternatives for correcting images that don't require all the up-front work we've been slogging through here.

Sunday, August 17, 2008

Part 4: Chromatic Aberration

There are many reasons why an image may contain chromatic aberrations, the unexpected color casts or fringes in certain parts of the image. Lenses, particularly inexpensive ones such as found in point-and-shoot cameras, can contribute two main types of chromatic aberration: longitudinal (or axial) and transverse (or lateral).

Longitudinal chromatic aberration is caused by the lens system's failure to focus all wavelengths of light to the same focal plane. If we look, up close, at a piece of an image with sharp high-contrast edges, we can see evidence of longitudinal chromatic aberration in the unnatural blue and purple casts of leaves and branches in this example. This is part of a Canon G9 image shot at 7.4 mm at f/2.8.



The solution to longitudinal chromatic aberration is often quite simple: stop the aperture down, causing a greater overall depth of field and reducing the effect of the wavelength focus differences. When we close the aperture down to f/5.6, we notice a much better color rendition on this same subject.



From a series of tests on this subject, I noticed the longitudinal chromatic aberration is mostly gone at f/4 and almost totally gone at f/5.6. There is no noticeable improvement if you stop down further; instead, due to diffraction, the image turns to mush, particularly at f/8. So, the rule of thumb is to stick between f/4 and f/5.6 to limit longitudinal chromatic aberration in images.

Transverse chromatic aberration is caused by the lens system enlarging the image on the focal plane to differing amounts depending on the wavelength of light. The effect is more noticeable the farther from the center of the image you look.

To evaluate transverse chromatic aberration, I taped a large (6 feet by 10 feet) sheet of white paper to the side of the garage and placed black electrical-tape X's all over it. This provided a target with a few handy properties for this exercise: relatively neutral tones (not favoring a particular color strongly), high contrast sharp edges between the black tape and white paper, and daylight illumination covering the whole visible spectrum (unlike, for example, fluorescent lights which have spikes at various wavelengths).



I set the Canon G9 camera to aperture-preferred f/4, framed the target to get useful X's to the corners, and shot at several focal lengths from the shortest, 7.4 mm, to the longest, 44.4 mm.

If we take a close look at the upper left corner of the 7.4 mm shot, we can see evidence of significant chromatic aberration around the X. (This sample of the full image is blown up to twice size and the saturation boosted way up to make the color aberrations quite clear.)

To correct this, we're going to use the fulla command mentioned frequently in previous posts. It's part of the freely-available hugin panorama photo stitcher package. The approach taken by the fulla command is to leave the green channel of the image alone and adjust the blue and red channels to line up correctly with the green channel. This makes a lot of sense: green appears brightest to our eyes, there are twice as many green photo sites on a Bayer matrix sensor as there are either red or blue photo sites, and the green channel is usually the cleanest of the three. As we've seen in the discussions of vignetting and geometric distortion, fulla uses a polynomial to represent the adjustments to be made to each of the red and blue channels depending on radial distance from the center of the image. The polynomial adjustment, once again, is of the form

a * r^3 + b * r^2 + c * r + d


There are a few ways we can approach finding the values for a, b, c, and d to use for correction. First, we can break the image into separate R, G, and B channel images and use one of the panorama stitching programs to compare the red and green channel images, select control points, and optimize the corrections for the red channel image required to make it line up with the green channel. Do the same thing comparing the blue channel image with the green channel, and we can get the set of polynomial coefficients that will shape the red and blue channels to line up with the green.

To break an image into separate channels, you can use Photoshop, go to the Channels tab, and select Split Channels. This will create 3 grayscale images, one for each color channel, with "_R", "_G", or "_B" appended to the filename to indicate which color channel it represents.

An alternative to the tedium of fitting control points in a panorama stitching tool is to use the tca_correct tool supplied in the hugin package. This command-line program also needs the channels broken out into separate image files, each beginning with "red_", "green_", or "blue_" to denote the channel it represents. An additional requirement is that each of the channel images must be an RGB image, not grayscale, so that additional step would need to be done in Photoshop if you're going that route.

Another way to get the separate channel images broken out is through the use of ImageMagick, a wonderfully powerful set of command-line tools available on all major platforms. Using ImageMagick's convert command we can extract each of the RGB channels into separate RGB image files:

convert %1 -channel R -separate -depth 8 -type TrueColor red_%1
convert %1 -channel G -separate -depth 8 -type TrueColor green_%1
convert %1 -channel B -separate -depth 8 -type TrueColor blue_%1

(This example uses an MS-DOS batch file for Windows; the analogous thing, sketched below, can be done with the same parameters on the other platforms.)
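
For reference, a bash version of the same split (a sketch, taking the image file name as its first argument):

#!/bin/bash
# Split an RGB image into three per-channel RGB files for tca_correct.
convert "$1" -channel R -separate -depth 8 -type TrueColor "red_$1"
convert "$1" -channel G -separate -depth 8 -type TrueColor "green_$1"
convert "$1" -channel B -separate -depth 8 -type TrueColor "blue_$1"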

The tca_correct command to evaluate your image and determine the polynomial coefficients is

tca_correct -o abc filename

where filename is the root file name without the color prefixes. After crunching things for a bit, it will show the fulla options to be used to correct the transverse chromatic aberration.

This is ingenious, simple, and a great time saver. However, there are a couple of problems with this approach. First, the tca_correct command is not reliably distributed with the various hugin package updates. Second, it doesn't work. At least for me, it didn't work well—it seemed to substantially under-correct the test images I used.

Another way to go, the method I chose, is the old reliable eyeball-it approach. For the G9 test images I was looking at, the transverse chromatic aberration appeared to be approximately linear with the radius. Logically this makes sense given that, by its nature, transverse chromatic aberration is caused by the failure to cast different wavelengths onto the image plane with the same magnification. We need to shrink or expand the red and blue channels to line up with the green. I used the fulla chromatic aberration correction options on my test images, adjusting only the d polynomial coefficient, which is the linear term of the expression and thus causes a linear expansion or contraction of the channel.

Fulla wants two options provided for the red and blue channel adjustments ("-r" and "-b", respectively), each taking the polynomial coefficients in the usual syntax:

fulla -r a:b:c:d -b a:b:c:d filename


I set the a, b, and c coefficients to zero and start with a small adjustment. By examination of the test image above, it's clear both the red and blue channels need to be pushed out so we start with something like this:

fulla -r 0:0:0:1.0003 -b 0:0:0:1.0003 filename
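
Since each trial is just another run of fulla with a different d value, it's easy to generate a whole batch of candidates to compare side by side. A quick bash sketch (the candidate values and file names are just examples; recall that fulla writes its output with a "_corr" suffix):

for d in 1.0002 1.0003 1.0004 1.0005; do
  fulla -r 0:0:0:$d -b 0:0:0:$d test.tif
  mv test_corr.tif "test_corr_$d.tif"   # keep each trial around for comparison
done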


We examine the output, and adjust blue and red channels up or down as needed. For the 7.4 mm f/4 test image above, I ended up with the following adjustments:






fulla -r 0:0:0:1.00024 -b 0:0:0:1.00041 filename

As you can see in the processed image (magnified and saturation-enhanced the same as the original above), we've eliminated the most egregious transverse chromatic aberrations.

With luck, the rest of the image will be similarly corrected. If not, there may be secondary adjustments required where you have to start tweaking the c coefficients, for example. It's not something to look forward to.

Now that we've addressed longitudinal and transverse chromatic aberrations, our images should be free of chromatic aberrations, right? Sadly, no. There are other chromatic aberrations originating in other parts of the camera system, including the photo sites themselves and the microlenses on top of them. Once again, the point-and-shoot cameras tend to be worse, particularly due to the tiny photo sites and their very high density. These aberrations, unfortunately, cannot be fixed as easily as what we've been talking about. However, they are usually not nearly as noticeable as the aberrations we do have some control over.

Next, we'll put vignette correction, geometric distortion correction, and transverse chromatic aberration correction together in a more organized way now that the hard work has been done. We'll also look at a couple of commercial alternatives to our labor-intensive solution, one modestly priced and the other more expensive and robust.

Monday, July 28, 2008

Part 3: Geometric Lens Distortion Correction

All lenses, particularly cheaper mass-produced lenses, have some amount of geometric distortion. Some prime lenses, particularly expensive ones, can be highly corrected. Others, particularly those with large zoom ranges, are characterized by mixes of barrel and pincushion distortion. The result is images where straight lines appear curved, particularly noticeable in the outer portions of the image.

In 1998 Helmut Dersch created a set of software tools that could be used to, among other things, correct lens distortions so that images could be more effectively stitched together into panoramas. His library of tools spawned a cottage industry, with many GUI front-ends developed to use them. Not everyone is interested in creating panoramas, but many are interested in correcting noticeable distortions in things like ocean horizons, architectural features, and many other distortion-sensitive subjects.



The Canon G9, which we've been using as the guinea pig for this series, has a zoom range of 7.4 mm to 44.4 mm (equivalent to roughly 35 - 210 mm in 35 mm terms). At the short end of the range, it has moderate inner barrel and outer pincushion distortion (the combination commonly called mustache distortion). As the lens zooms toward the long end, images acquire slight pincushion distortion. The compound distortions in the lens don't correct well using simplistic generic tools, so we'll develop specific corrections to flatten the images at the various zoom settings.

We need to apply a corrective distortion to the image to compensate for the lens-induced distortions. The image distortions are radially symmetric, so we'll use a function that determines the correction based on radial distance from the center of the image. In short, we'll be using a 3rd-order polynomial that determines how far each part of the picture needs to be moved, based on its original radial distance from the center:

a * r^3 + b * r^2 + c * r + d

Our goal is to find the values for a, b, and c to correct for our lens's distortion at a specific focal length. The d value is only used to adjust the total image size to compensate for the other corrections. It has a value of

d = 1 - (a + b + c)
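
As a quick sanity check on any set of coefficients, d should always follow from the other three. For example, using the 7.4 mm coefficients derived later in this post:

awk 'BEGIN { a = 0.028; b = -0.0871; c = 0.0521; print 1 - (a + b + c) }'

This prints 1.007, matching the d value in the table below.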

There are a few approaches to finding the coefficients for the polynomial. One way is to shoot several images and stitch them together into a panorama using one of the commonly-available tools and then note the corrections the software made to fit the images together. In essence, the process of stitching the images accurately requires the software to correct for distortions to complete its job.

We'll use a more targeted approach using one image. In this technique, you shoot an image that contains several horizontal and vertical lines that you know are straight. A large office building is ideal. The objective is to have several lines available in the image at varying distances from the center of the image and stretching all the way to the edges of the image. You then use a panorama stitching tool such as the freely-available hugin panorama photo stitcher package. Using hugin to do this, you load the target image twice, cancel the automatic control point detection, and manually add control points along the lengths of each of the lines you know should be straight. All control points on a line must belong to the same line labeled tn where n is a line number starting at 3 for the first line. (Lower numbers are reserved for other types of lines.)

Once you go through the tedium of identifying a few hundred control points on 6 to 12 lines covering various parts of the image, you run the optimizer to determine what the coefficients should be to cause the distorted lines to become straight.

For the G9, I created test images through the zoom range and here are the corrections I came up with which give me images with the lens distortions largely removed.


Canon G9 Lens Distortion Correction Coefficients

Focal Length      a          b          c          d
7.4 mm            0.028     -0.0871     0.0521     1.007
8.2 mm            0.0313    -0.089      0.0474     1.0103
9.0 mm            0.0082    -0.0186    -0.0046     1.015
12.7 mm           0.0187    -0.0558     0.0528     0.9843
16.8 mm          -0.0172     0.0541    -0.0477     1.0108
22.0 mm          -0.0053     0.0196    -0.0201     1.0058
25.0 mm          -0.0038     0.0131    -0.0133     1.004
29.2 mm          -0.0065     0.0184    -0.0111     0.9992
36.8 mm           0.0102    -0.0355     0.0391     0.9862
44.4 mm          -0.0159     0.0626    -0.0674     1.0207

Special bonus: 7.4 mm with the Raynox 0.7X wide angle conversion lens attached:

5.2 mm            0.0513    -0.1509     0.0835     1.0161


To apply corrections to an image, you can use the hugin fulla command we've mentioned previously. This time we use the -g option to pass our correction coefficients:

fulla -g 0.028:-0.0871:0.0521:1.007 image_file_name

The output will be our distortion-corrected image. (In this case the image of the building was shot with the G9 set at its shortest focal length, 7.4 mm, so we use the coefficients corresponding to that focal length.)



This technique will go a long way toward correcting obvious geometric distortion, but it's not perfect. It doesn't correct to perfection, although in most cases the residual error is nearly impossible to see without careful measurements of the resulting image. It also doesn't handle radially asymmetric distortions, such as those that might occur in misaligned lenses.

Next up, we'll take a look at chromatic aberration and see how we can correct one type of it.

Sunday, July 6, 2008

Part 2: Vignette Correction

All camera/lens systems vignette the images they produce, some much more than others. Vignetting (or, more correctly, light falloff, as we're discussing here), the darkening of the image toward the edges, is caused by a number of factors. We are so used to seeing vignetting in images that we usually don't notice it; in fact, many feel that vignetting can add to the appeal of an image and will add more during post-processing. In some applications, though, vignetting is not desirable: scientific and technical images, cartographic images, repro work, and panorama stitching are among the applications where vignetting is to be avoided.

In this image, the upper left portion is a flat, evenly-lit surface showing the light fall-off radiating from the center caused by vignetting. It becomes noticeable when compared to the truly flat, uniform gray half of the image in the lower right.

We're going to take a look at a couple of options for eliminating or reducing vignetting in images during post-processing, particularly using the fulla command available in the free hugin panorama photo stitcher package. The fulla command is a useful utility we'll explore for other purposes in future posts, as well as using it to address vignetting here.

Before we get started, a couple of things need to be understood. First, vignette characteristics are unique to a particular camera, lens, focal length, and aperture combination. Change any one of those and the vignette characteristics change. The rate of brightness falloff from center to edges of an image is not uniform; systems with the same overall center-to-edge brightness falloff may get there in very different ways. The falloff can be gradual and linear, or the brightness may stay relatively uniform until near the edges, where it drops off quickly. Vignetting is not necessarily radially symmetrical, although in most cases it is close enough for our purposes in this discussion. Finally, this overview is not for the faint of heart; it outlines the non-obvious aspects of what we're doing, but if you wish to pursue this, you must be willing to dig in and work through the details yourself.

Using a flat field image as a corrective for other images is a technique that has been around a long time and is still commonly used by astrophotographers. Essentially, you shoot an image of a perfectly uniform, empty subject devoid of any detail and uniformly lit from edge to edge. A clear, cloudless, haze-free sky directly overhead would be an example. You then apply this image to target images using software that cancels out the brightness differences represented in the flat field image. The advantage of this approach is that you are directly using the system's own recorded image to cancel out the brightness anomalies in the images it produces. The fulla command can use a flat-field image to correct for vignetting by passing the flat-field image's name as a command option.

The trouble comes when you try to produce an accurate flat-field image for your camera, particularly if you aren't doing this for a telephoto lens. It's actually fiendishly difficult to get the featureless uniformly-lit subject you need to do this. The gray image above is an attempt to produce such a flat-field image (upper left portion) using the Canon G9 camera's 7.4 mm focal length and f/2.8 aperture. This is a relatively wide angle (about 50° on the long side), presenting quite a challenge for coming up with a suitable flat-field subject. I was not successful—I came close but never could get an image in which all corners were consistent, for example.

Another approach is to correct the vignetting mathematically. You provide a formula the software can use to determine what brightness correction should be applied at each radial distance from the center of the image. The fulla command provides two options for doing this: with one, you provide a formula for a value to be added to the brightness depending on radial distance; with the other, the brightness is divided by the value the formula produces. I found the latter worked better for me. The fulla command has very sparse and nearly unusable documentation, so I'll fill in the details as I describe the process I used to profile my camera system for vignette correction.

The net result of what we're going to do is to provide a set of 4 numbers to a fulla option that describe an equation fulla uses to determine what adjustment needs to be made at each radial distance in the image. The four numbers are coefficients for an even-powered 6th order polynomial of the following form:

a + b * r^2 + c * r^4 + d * r^6

where r is the radius from the center of the image (0) to the farthest corner (1.0). This formula is applied at each radial distance by dividing the image's RGB values by the calculated factor to determine what the new adjusted values should be. Assuming you haven't left already, our job is to find the values for a, b, c, and d that work best for the particular camera, lens, focal length, and aperture we're interested in correcting.

The first step is to measure the brightness fall-off from the center to the corners of an image produced by your system. Because there's so much trouble trying to get a single flat-field image, we'll shoot multiple images of a uniformly-lit small target, panning the camera from the center to a corner between shots. Our goal is to get about 15 or 20 shots of the target ranging in position from the center to the lower right corner of the image.

I used a gray card illuminated by an incandescent lamp that was turned on about 20 minutes before starting to make sure the light had stabilized. The camera was mounted on a tripod about 15 feet away and set to the focal length and aperture I wanted to profile (7.4 mm at f/2.8 for this example). I adjusted the tripod head so the camera was angled in such a way that I could pan over the subject from the center of the frame to the lower right corner without adjusting anything else on the camera. Then I made the series of about 20 shots, panning a little more each time until I had images of the target all the way into the farthest reaches of the corner. I made sure I was using manual exposure and the target was approximately gray (midrange) in value. By the way, I was shooting raw, not JPEG, as I didn't want the internal JPEG processing mucking up the precision of my test images.

The next step is to process the images precisely as you normally would. Do not apply any contrast adjustments to the images unless you use exactly the same adjustments on all your images all the time. I ended up generating about 20 TIFF files that I could load in sequence into Photoshop to sample the brightness of the target at each position.

Next, use the Photoshop (or other) eyedropper tool to measure the same relative location on the target in each image. I used the 5x5 average for more uniform readings. Record the location in the image that you sample (x and y values) and the brightness readout (the green channel, for example, or the gray value if you converted the images to grayscale during processing).

Plug these into a spreadsheet and start crunching the numbers into something more meaningful. First, you want to convert the x and y values into a radius value ranging from 0 at the center to 1.0 at the far corner. The Canon G9 I used makes 4000 x 3000 images, so the spreadsheet formula for finding the radius based on x and y readings goes something like this:

=sqrt(($Xn-2000)^2+($Yn-1500)^2)/2500

where $Xn and $Yn refer to whatever cells hold your x and y coordinate readings.

Next, create a column of your radius numbers squared—this will be used in determining the magic coefficient numbers we're after.

Finally, create a column of normalized brightness levels. At this point you need to decide how you want to correct your image's brightness levels for vignetting. You could brighten the darker outer region, darken the lighter central region, or do something in between. I took the in-between route to minimize the damage I'm doing to pixels that are having their brightness adjusted. So, my normalized brightness values are calculated by dividing the recorded brightness value by the average of the brightest and darkest values. This would give you a range of numbers going from a little over 1.0 to a little less than 1.0 depending on how bad the vignetting is. For my f/2.8 test images, the brightest central reading was 153 and the darkest corner reading was 129 giving me a normalized range of 1.085 to 0.915, approximately.

Next we calculate a polynomial regression line through the values we've computed. If you are using an Excel spreadsheet, this is easy. Create an XY (Scatter) chart of the normalized brightness values against the radius squared. Then on the chart do Add Trendline and, in the Type tab, select Polynomial and Order 3; in the Options tab select "Display equation on chart" and "Display R-squared value on chart". Once you click OK, a regression line will be drawn through your scatter plot and an equation will show up on your graph. This equation has the a, b, c, and d coefficient numbers we've been working toward. In my f/2.8 case, the regression equation produced was

y = -0.1355x^3 + 0.0443x^2 - 0.0943x + 1.0873

The terms are in the reverse order of the fulla format, so our a, b, c, and d coefficients for fulla are 1.0873, -0.0943, 0.0443, and -0.1355 respectively. The R^2 (coefficient of determination) value was 0.9963 in my case, which is a very good fit. (An R^2 of 1.0 represents a perfect fit while 0 represents no correlation whatsoever; the closer to 1.0 we get, the more confident we can be that the regression line is a good representation of our measured data.)

So, why did we plot against radius squared rather than radius? Because we need the even-powered sixth-order polynomial, which isn't available as an option in Excel, so we finessed it by fitting a 3rd-order polynomial regression line against r^2.
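
As a quick sanity check of the fitted curve, we can evaluate the correction factor at a few radii (this little awk sketch just plugs our coefficients into the a + b * r^2 + c * r^4 + d * r^6 form that fulla uses):

awk 'BEGIN {
  a = 1.0873; b = -0.0943; c = 0.0443; d = -0.1355
  for (r = 0; r <= 1; r += 0.5)
    printf "r = %.1f  factor = %.4f\n", r, a + b*r^2 + c*r^4 + d*r^6
}'

At the center the factor is 1.0873 and at the far corner it's 0.9018; dividing my measured center reading (153) and corner reading (129) by those factors brings both to roughly the 141 midpoint we normalized against, which is just what we're after.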

If you don't have Excel, you may have access to another spreadsheet program with a similar regression analysis capability. If not, there are other programs such as statistical analysis packages that can do it, or you can even use online tools to accomplish the same thing. See for example http://udel.edu/~mcdonald/statcurvreg.html and scroll down to the heading, "How to do the test" which has links to several online tools as well as a spreadsheet you can download for use when you don't have the integrated trend line feature.

Next we look at a quick test. This image is a composite of 2 images shot with a pan between to see how they match up. Notice the seam line where the two images line up. The vignetting is clearly visible. If we process the two images separately using the fulla tool, we can check how effective our calculations were. We use our calculated coefficients in fulla using the command like this:

fulla -c 1.0873:-0.0943:0.0443:-0.1355 image_file_name

When we composite the corrected images together in the same way as the originals, we see the seam is markedly less visible indicating our correction has improved the vignetting situation.

Is it perfect? Not at all. This method can cause color shifts because it's not actually correcting strictly luminance but rather just doing a crude adjustment of the RGB values in the image. It also uses assumptions of perfect radial behavior, an imperfect regression line, imperfect input test data, and so on. However, in most cases it is an improvement as we can see in the corrected image. Is it worth the trouble? Perhaps not for most images but once the calculations are done it can be applied nearly effortlessly to any image shot with the same camera, lens, focal length, and aperture combination such as what we're targeting here for our panoramas.

There are other, more advanced techniques evolving that can estimate vignette correction, particularly in multi-image panorama environments, and make adjustments accordingly. And of course there are the now-standard panorama blending tools (Smartblend, Enblend, etc.) that disguise vignetting and other tonal anomalies.

Next up, we'll look at lens distortion and how we can correct that using fulla as well. We're working toward a single fulla command line to correct as many flaws as we can and when we're finished we'll put together some simple scripts to simplify correcting batches of images.

Thursday, June 26, 2008

Introduction: Understanding and Fixing Compact Camera Images

This is the first of several posts where we'll look at the characteristics and problems with compact consumer digital camera images and work out ways to deal with them. I recently decided to dust off my panoramic tripod head and try it out with the Canon G9 compact consumer camera. Setting up the panorama head to get the entrance pupil of the lens centered on the horizontal and vertical axes was simple. More time consuming was measuring and automating the correction of the G9's images so that they could be used effectively for high-quality stitched images.

First, these are the assumptions going into this series. The average person won't go to the lengths discussed here to squeeze the most quality out of the camera's images. (But you aren't an average person.) I'm using the Canon G9 camera as an example, but the concepts and techniques apply to any digital camera in this class. Furthermore, for simplicity and because I'm doing this ultimately for use in wide-angle panoramas, I'll only discuss the shortest focal length (7.4 mm). The same issues and techniques apply to other focal lengths in the zoom range, but let's not get too buried in detail. Finally, while some of this discussion is of most obvious use when the goal is stitching images together, the techniques can be used to greatly improve single images as well.

Let's start by discussing sharpness and depth of field. If you're not familiar with the concepts, head on over to the superb tutorials at the Cambridge In Colour site and get acquainted with depth of field and hyperfocal distance. Because of the tiny sensors in compact consumer cameras like the G9, the depth of field is very deep compared to a DSLR, for example. I prefer to use the more conservative Leica proportional circle of confusion size for calculations. For the G9's sensor size, the standard assumed circle of confusion used to determine what is in focus is 0.006 mm, but I'll use the slightly smaller 0.005 mm size for added insurance. At that size, with the 7.4 mm focal length lens set at its widest aperture (f/2.8), the hyperfocal distance is 12.7 feet. That means if you focus at 13 feet, everything from a little over 6 feet away to infinity will be in focus. And that's the shallowest depth of field you can get with this camera! At f/4, the hyperfocal distance is 9 feet, meaning everything from 4.5 feet to infinity will be in focus. That should suit all but the most extreme requirements for depth of field, but the G9 will go all the way to f/8.
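
Those hyperfocal numbers come straight from the standard formula H = f^2/(N*c) + f. Here's a quick awk sketch to verify them (all distances in mm, using the 0.005 mm circle of confusion):

awk 'BEGIN {
  f = 7.4; c = 0.005
  n = split("2.8 4 5.6", stops, " ")
  for (i = 1; i <= n; i++) {
    N = stops[i]
    H = f*f / (N*c) + f                 # hyperfocal distance in mm
    printf "f/%s: H = %.1f feet\n", N, H/304.8
  }
}'

This prints about 12.9, 9.0, and 6.4 feet, within rounding of the figures quoted above (the exact value depends on how you round the circle of confusion).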

The problem with going beyond f/4 is that the G9 (and similar cameras) is diffraction-limited at f/4. That means if you close the aperture down further, you'll get greater depth of field but overall image sharpness will decline due to diffraction. There may be extreme cases where f/4 just isn't sufficient and you're willing to pay the price in image degradation for the extended depth of field. I, however, am sticking with f/4, focused at 9 feet, for the bulk of my work with this camera. In fact, I've set the custom settings (C1 and C2 on the G9) to aperture-preferred f/4, manually focused at 9 feet. I turn on the camera, dial to C1, and I'm set to get the sharpest images with the most depth of field without any more fuss. (Obviously, if your subject is closer than 4.5 feet you need to override this.) I set up the C2 custom setting the same way but added a 2-second timer delay so that I can trip the tripod-mounted camera's shutter and get my hands off before it fires. That greatly reduces the chance of vibration blurring.

The following example image was shot at the shortest focal length (7.4 mm), widest aperture (f/2.8), and focused at the hyperfocal distance of 13 feet. The Cyclamen flowers 6 feet away are sharp, as are the leaves on the trees behind the house. Only the closest stretch of fence rail on the right shows focus blurring.



By the way, if you frequently use the camera up close and still need to manage depth of field carefully, go to www.dofmaster.com and get the DOFMaster Depth of Field Calculator program to make your own depth of field calculator. Playing with it a little is a real eye-opener when you see what happens as you change aperture or focus distance.

The next series of posts will explore analyzing and fixing the images that come out of the camera. We'll fix vignetting, lens distortion, and transverse chromatic aberration; your images will look better than ever. Meanwhile, get familiar with the hugin panorama photo stitcher software. It's free and, even if you never do panoramas, it includes a command called fulla that we'll use to correct our image flaws. It's a command-line utility, not a GUI program, but that works to our advantage for automating image correction.
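
As a preview of where we're headed, the payoff looks something like this: a small shell loop that runs every image through a single fulla command. Consider it a sketch only; the actual options and coefficients are what we'll work out over the next few posts:

# placeholder fulla options; the real correction values come later in the series
for f in *.tif; do
    fulla -c 1.0:-0.2:0.05:-0.01 -o "corrected_$f" "$f"
done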

Saturday, June 7, 2008

Selective Focus Stacking


There are three controls commonly used to select what is in focus in an image: choosing what the lens is focused on (obviously); varying the aperture to control the depth of field; and tilting the plane of focus (via tilt/shift movements on a view camera or a PC lens on an SLR). Even combined, these sometimes won't create what you want. Suppose you want a fairly deep zone in focus and everything else way out of focus. Stopping the aperture way down can give you a deep zone of focus, but it either may not be enough or may render other elements sharper than you want. Open the aperture up and you get the nice unfocused surroundings, but with a very shallow zone in focus. We want the best of both extremes.

This is where focus stacking comes in. Focus stacking entails taking several identically exposed images from the same vantage point, each focused at a different depth within the subject you want sharp. Specialized software then combines the images, selecting the sharpest portions of each and creating a single image with often remarkable depth of field. Focus stacking is commonly used in macro- and microphotography, where depth of field is minuscule even with the aperture closed way down.

You can use focus stacking in a slightly different way. Open your aperture way up (giving a very shallow depth of field and nice unfocused surroundings) and shoot several images varying the focus through the depth of the region you want in focus. Combine with focus stacking software and you can get an image with the main subject in sharp focus front to back, and the surrounding depth nicely out of focus.

The example image of a rose blossom on a piano keyboard was shot as five exposures, with focus ranging from the near tips of the petals to the far edges of the sepals behind the blossom. The 50 mm lens was wide open at f/1.4, giving a very soft image except for the very shallow zone where the lens was focused in each frame. Combine the stack in software and you get the rose blossom and sepals razor sharp front to back, with everything else blending nicely into gauzy tones. You can't get this effect any other way.

This particular image was created with the freely available CombineZM focus stacking software, but several other programs can do the same thing.
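
If you'd rather work from the command line, enfuse (a companion to the Enblend tool mentioned earlier) can also build focus stacks by weighting local contrast and ignoring exposure and saturation. A minimal sketch, with hypothetical file names (option names are from recent enfuse versions; older releases spell them differently, so check your manual):

# keep the sharpest (highest local contrast) pixels from each frame
enfuse --exposure-weight=0 --saturation-weight=0 --contrast-weight=1 \
--hard-mask -o stacked.tif rose_1.tif rose_2.tif rose_3.tif rose_4.tif rose_5.tif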

Saturday, March 15, 2008

Diffuser Snoot



Here's a do-it-yourself contraption you can build that combines the characteristics of a snoot and a softbox. It consists of two nested tubes. For my prototype, I made the inner tube from a length of scrap mat board about 4 inches wide, bent into a cylinder and taped in place. Over both open ends I taped white tissue paper to serve as double diffusion. This diffusion drum was then rolled into a large tube of black paper about 24 inches wide. The whole thing was about 10 inches in diameter. A flash is fired into one end and diffuse light comes out the other.

The device has a few interesting characteristics. Up close to the subject, it provides nice, even, diffused light. The inner diffusion drum can be slid within the outer cylinder to change how much hood extends beyond the diffuser, which controls how quickly the light falls off around the edges. The net result is a soft light with rapid falloff at the edges: sort of a spot diffuser.

Tuesday, March 4, 2008

Take Those Shower Caps

Almost every hotel or motel room you stay in will have a freebie shower cap among the amenities in the bathroom. Take it and toss it in your camera bag.

These things are great to have in a pinch when the rain starts coming down and you can't stop shooting. (Or, better yet, you want to shoot because you know rainy days offer great light.)

They're just the right size to put over your flash and Pocket Wizard combo, or to slip onto your camera between shots. Carry a few plastic clothespins to clip onto the shower cap if it's windy; use them as little anchors or to take up slack in the plastic.

Don't book a room at Hilton just to get the shower caps but, if you happen to be there anyway, grab them; they work great and the price is right.

Saturday, March 1, 2008

Shadow Noise Elimination Technique

Here's a tip for dealing with noise in shadow areas of images. This technique assumes you're satisfied with midtone and highlight quality but have unacceptable noise levels (luminance, chrominance, or both) in shadows. This description presumes raw shooting and the use of Adobe Camera Raw (ACR) for image development, but can be adapted for other environments.

In a nutshell, you layer two versions of the image. The bottom layer is the normal exposure with noisy shadows. Above it is an overexposed, tonally adjusted version. Blending then lets only the adjusted, noise-free shadow tones of the top layer show through, while the midtones and highlights come from the correctly exposed bottom layer.

The example images shown are crops at 2X scaling with levels cranked up so you can clearly see what's going on. First, the noisy, normally exposed image:



You will shoot two exposures of your scene: one correctly exposed, the other two stops overexposed. (We'll look later at a variation of this technique that uses only one image.)

In the overexposed image, what were the shadow tones of the correct exposure now sit in your camera's midtone sweet spot, where they should be relatively noise-free. In ACR, develop your correctly exposed image normally, then apply the same ACR settings to the overexposed version with one exception: move the Exposure slider to -2.0 to bring those overexposed midtones back down to the shadow region.

The shadow tones will now look flat because they don't get the nice S-curve adjustment at the toe that's applied in raw processing. Apply a contrast adjustment yourself, either in ACR or later with a Curves adjustment. The highlights will be blown out, but you don't care: you won't be using the highlight parts of this image.

Bring the adjusted overexposed version in as a separate layer above the correctly exposed image layer. The two need to register precisely; jigger them if needed. (Pardon the sophisticated technical terminology.) Now you'll do some magic to keep only the shadow regions of the upper layer.

Leave the blending mode at "Normal". Go to "Blending Options" for this layer and adjust the "Blend If" setting at the bottom as follows:

  1. Leave the drop-down set to "Gray".
  2. Drag the right slider under "This Layer" toward the left to around 128.
  3. Alt-click that slider to split it, and drag the left half farther left to about 64.

What you've done is make the brighter regions of the layer transparent and the shadows opaque, with a gradual transition between the two so it looks natural. Poof! Instantly improved shadow tones, less the noise. Tweak the "Blend If" sliders to optimize for the particular image you're working on. The improvement in the quality of the blended image's shadow tones is quite remarkable.
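
Incidentally, if you ever want to script this blend outside Photoshop, the "Blend If" ramp can be approximated with ImageMagick: build an opacity mask from the top layer's gray values (fully opaque at 64, fading out by 128, i.e., roughly 25% to 50%) and use it to composite the two versions. A rough sketch with hypothetical file names:

# mask: opaque (white) in the shadows, ramping to transparent (black) by the midtones
convert shadow_layer.tif -colorspace Gray -negate -level 50%,75% mask.png
# where the mask is white, take the adjusted shadow layer; elsewhere, the normal exposure
convert base_layer.tif shadow_layer.tif mask.png -composite blended.tif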


For images where you don't have, and can't shoot, the second (overexposed) version, you can attack the problem from a different angle. Duplicate the original image and apply aggressive noise reduction to the copy using one of the standard noise-reduction tools. Because you'll only be using its shadow tones, you can be quite aggressive; the artifacts introduced won't be as noticeable in the shadows. Put the noise-reduced version on a layer above the original and apply the blending technique described above. You get the benefit of strong noise reduction in the shadow tones without the degradation that noise reduction might introduce in the midtones and highlights.

Monday, February 25, 2008

Snootier Snoot

A snoot, as you may know, is a tube or cone used to restrict the light from a flash or other light source to a narrower beam. I wanted a snoot that, instead of throwing a fuzzy blob of light, created more of a spotlight effect. I designed and built one for this purpose.

Usually when a snoot is used, the gentle light-to-dark transition it casts is considered desirable. When you want a crisper edge to the transition, you need to address two issues:

  1. A snoot that is short relative to the large surface of the light source (i.e., the flash head) naturally throws a fuzzy-edged pool of light on the subject.
  2. Unless great care is taken, the inside of a snoot tends to reflect light off its walls, and that stray light reduces the contrast of the edge transition.


Using optics to focus the light beam would be the obvious way to address these problems. If you don't want to use optics, you need another approach. Mine was to design a long snoot with carefully sized baffles inside to block all but the beam of light I wanted to throw. Total cost was a couple of dollars' worth of materials.

A detailed explanation of the design and construction of this snoot is available on my Web site.

So, how good a job does it do? You be the judge. The spotlight on this cheeseball lounge singer is from the Snootier Snoot.

Vignetting Revisited

After enduring a little confusion and frustration in correcting some images from my Canon G9 camera, I investigated a little more and found ...