Single Pixel Details

A one-pixel Moon would hardly be satisfying.

In last month's blog I demonstrated that larger pixel scales make just about every aspect of imaging more forgiving. From that, it sure sounds like the larger the pixel scale the better. In truth, there's a battle between the benefits of a large pixel scale and a small one: the smaller your pixel scale, the finer the detail you can resolve in your images, so there's an opposing pull toward as small a pixel scale as possible.

Let’s take an extreme example for illustration's sake. What if we had a humongous pixel scale, say, half a degree per pixel, and took a photo of the Moon? You wouldn't be especially satisfied with the results!

All of the light coming from the details you might want to capture — craters, rilles, maria — would be collected in a single pixel (assuming the Moon was centered on that pixel), resulting in nothing but a big white square on the background sky. We also tend to like zooming into our photos to see more details, but enlarging a single pixel doesn’t really give us a more detailed square, does it? When there aren't enough pixels to represent the image we want to reproduce, we call this undersampling. A one-pixel Moon is about as extreme an example of undersampling as you could ask for.
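Just to put numbers on that, here's a quick back-of-the-envelope sketch in Python (the helper name and values are mine, purely for illustration): the Moon's angular diameter is roughly half a degree, or about 1,800 arcseconds.

```python
# Back-of-the-envelope: how many pixels span the Moon at a given
# pixel scale? The Moon's angular diameter is roughly half a degree,
# or about 1800 arcseconds.
MOON_DIAMETER_ARCSEC = 0.5 * 3600.0  # ~1800 arcsec

def pixels_across_moon(pixel_scale_arcsec_per_px):
    """Approximate number of pixels spanning the lunar disk."""
    return MOON_DIAMETER_ARCSEC / pixel_scale_arcsec_per_px

print(pixels_across_moon(1800.0))  # half a degree per pixel -> 1.0 pixel
print(pixels_across_moon(1.0))     # 1 arcsec per pixel -> 1800 pixels
```

At half a degree per pixel, the whole disk lands on one pixel; at 1 arcsecond per pixel, it spans about 1,800 pixels.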

A better-sampled Moon shows us some surface features.
[credit, Richard S. Wright Jr.]

Now, let’s take a look at a more highly sampled Moon image, one with a smaller pixel scale. We can see the familiar features we want. We can even zoom in a bit to see more details. The sampling is better because the number of pixels used to represent the image is sufficient to display all the details we want.


Or is it? Zoom in a little more and the details start to get pixelated, and we start seeing blocky features where we want crisp image details. Obviously, we still need more pixels!

It would seem that if our equipment and skill are up to the task, we would want as small a pixel scale as we can possibly get. Alas, there are two very significant obstacles in our path: first, the physics of light and optical design; second . . . you guessed it, atmospheric seeing.

Our first limit to pixel scale is related to the smallest detail your optics can capture — a big, complicated topic I can’t squeeze into this blog. I’m simply going to say, diffraction-limited optics are now more or less the norm for telescopes, and if your optical design is diffraction limited, you can be assured the real limit to image sharpness is the atmosphere, not your equipment . . . unless you're in orbit!

We need more pixels if we want more detail. But how far can we go?
[credit, Richard S. Wright Jr.]

Remember, seeing is a measure of how much the stars are moving around due to the atmosphere's turbulence. Stars are point sources and make the ultimate test for how fine a detail we can capture (they literally are the finest detail available). If the seeing is 3 arcseconds, then not only stars but any fine detail, say in a nebula or a lunar crater, is going to be spread out and blurred by that much as well. Thus, the seeing sets the absolute limit on how fine a detail you can possibly capture.


Now the question is, how many pixels do you want to use to try and capture that detail optimally? In astrophotography this optimal sampling is typically referred to as critical sampling.

Usually when this topic comes up, poor Harry Nyquist gets dragged into the conversation, and someone attempts to explain signal processing (and the Nyquist-Shannon sampling theorem) in one or two paragraphs. I took a whole course on signal analysis in college (well, at least one . . .), and I really don’t think I can do it justice in a single blog post. Instead, I’m going to distill the spirit all the way down to what really matters to us when trying to photograph star fields. So here's Richard’s Super-Simplified Sampling Theory for Astrophotography, which I think anyone can understand quite easily:

What is the minimum number of pixels required to make a star look round?

Yep, that’s it. How many pixels do you need to adequately sample a round star that has been distorted and enlarged by atmospheric seeing? The image below shows the first three possible solutions, with a circle superimposed that represents the theoretical star.

Which of these sampling values looks most round?
[credit, Richard S. Wright Jr.]

We already know from the Moon example that one pixel represents undersampling. Many people understand Nyquist as saying that a signal must be sampled at least twice, that is, with at least two pixels. However, sinusoidal signals (such as round things, like stars) need to be sampled at least three times. So you really need at least three pixels across (and down – this is a two-dimensional signal, remember) before a star image is going to start to look roundish. In other words, three pixels spread across the seeing limit would be the minimum, or critical sampling.
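If you'd like to see this effect for yourself, here's a small numpy sketch (my own illustration, not a rigorous treatment): it point-samples an idealized Gaussian star at one, two, and three pixels across its FWHM and prints the pixel grids. Only at three pixels across does the pattern begin to look round.

```python
import numpy as np

# Illustrative only: sample an idealized Gaussian star at a few
# sampling rates and see when it starts to look round.
def sample_star(pixels_across_fwhm, grid=5):
    sigma = pixels_across_fwhm / 2.3548   # FWHM = 2*sqrt(2*ln 2) * sigma
    c = (grid - 1) / 2.0                  # center the star on the grid
    y, x = np.mgrid[0:grid, 0:grid]
    r2 = (x - c) ** 2 + (y - c) ** 2
    return np.exp(-r2 / (2.0 * sigma ** 2))

for n in (1, 2, 3):
    print(f"\n{n} pixel(s) across the FWHM:")
    print(np.round(sample_star(n), 2))
```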


If your seeing is 2 arcseconds (a pretty still night for most of us), then a critically sampled image would have a pixel scale of 2 / 3, or about 0.67 arcseconds per pixel. In fact, sampling theory says this is the minimum. If your cell phone signal were minimally sampled in this way, you would be pretty unhappy with the sound quality. For astrophotography, though, 3 pixels across your seeing limit isn't really the minimum but the maximum sampling you should attempt. Why? Because until now, all of this has been academic . . . and the real world is a much harsher place than the classroom.
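If you want to work this out for your own setup, the sketch below captures the arithmetic (the function names and the 3.76-micron pixel size are my own illustrative choices; the constant comes from the 206,265 arcseconds in a radian, scaled for microns and millimeters):

```python
# Rough helpers relating seeing, pixel size, and focal length for
# "three pixels across the seeing" sampling.
def critical_pixel_scale(seeing_arcsec):
    return seeing_arcsec / 3.0                       # arcsec per pixel

def focal_length_mm(pixel_um, scale_arcsec_per_px):
    # pixel scale ("/px) = 206.265 * pixel size (um) / focal length (mm)
    return 206.265 * pixel_um / scale_arcsec_per_px

seeing = 2.0                                         # arcsec, as above
scale = critical_pixel_scale(seeing)                 # ~0.67 arcsec/px
print(f"target pixel scale: {scale:.2f} arcsec/px")
# With an assumed 3.76-micron pixel (a common modern sensor size):
print(f"focal length needed: {focal_length_mm(3.76, scale):.0f} mm")
```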

A high-resolution blurry blob is just a larger blurry blob!
[credit, Richard S. Wright Jr.]

First, unlike your cell phone signal, there is much more noise in an astronomical image, and oversampling makes that noise more prominent at the per-pixel level. Also, and more importantly, details blurred by atmospheric turbulence are soft (read: blurry). Increasing the resolution of a soft image simply gives you a bigger soft image. Unlike the initial Moon example, once you reach the seeing limit there are no more details for you to acquire with additional sampling. Moreover, most of your stars and other details are going to straddle multiple pixels rather than being exactly centered; with dithering and stacking of multiple images, even a slightly undersampled image can be quite satisfactory.
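To make the noise point concrete, here's a toy comparison (the flux and read-noise numbers are invented for illustration): halving the pixel scale spreads the same light over four times as many pixels, so each pixel collects a quarter of the signal while its read noise stays the same.

```python
import numpy as np

# Toy numbers only: the same patch of sky imaged at two pixel scales.
photons_per_arcsec2 = 400.0   # assumed target flux, electrons per arcsec^2
read_noise = 5.0              # assumed read noise, electrons RMS per pixel

for scale in (1.0, 0.5):      # arcsec per pixel
    signal = photons_per_arcsec2 * scale ** 2    # electrons in one pixel
    noise = np.sqrt(signal + read_noise ** 2)    # shot noise + read noise
    print(f'{scale:.1f}"/px: signal = {signal:5.0f} e-, '
          f"per-pixel SNR = {signal / noise:4.1f}")
```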


So, how do you know what your seeing is? A quick-and-dirty way is to come to a good focus on a star and measure the star's FWHM (full width at half maximum) diameter. Most camera-control programs have a readout that will tell you this, and many autofocusing programs report the best FWHM achieved once you are in focus. Remember, too, that seeing conditions vary with the weather and can change throughout an evening. Seeing can even depend on which direction you are pointing: a warm house, for example, will create air currents that significantly degrade seeing conditions. There are times I’ve set up only to find that, due to poor seeing, I’m so oversampled that I simply can't get any good data with my system. Those are the nights to practice something else, or call it a night and get some sleep!
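If your software doesn't offer a FWHM readout, here's one way you might estimate it yourself: a minimal sketch, assuming a roughly Gaussian, background-subtracted star cutout (the helper name is mine).

```python
import numpy as np

# Minimal sketch: estimate a star's FWHM from a small background-
# subtracted cutout using intensity-weighted second moments, then
# convert to arcseconds. Assumes a roughly Gaussian star profile.
def fwhm_arcsec(cutout, pixel_scale):
    y, x = np.indices(cutout.shape)
    w = np.clip(cutout, 0.0, None)             # weights; no negative flux
    xc = (w * x).sum() / w.sum()               # intensity-weighted centroid
    yc = (w * y).sum() / w.sum()
    var = (w * ((x - xc) ** 2 + (y - yc) ** 2)).sum() / (2.0 * w.sum())
    sigma_px = np.sqrt(var)                    # Gaussian sigma, in pixels
    return 2.3548 * sigma_px * pixel_scale     # FWHM = 2*sqrt(2*ln 2)*sigma

# Quick self-check with a synthetic 2-arcsecond star at 0.67"/px:
scale = 0.67
sig = (2.0 / 2.3548) / scale                   # sigma in pixels
yy, xx = np.mgrid[0:25, 0:25]
star = np.exp(-((xx - 12.0) ** 2 + (yy - 12.0) ** 2) / (2.0 * sig ** 2))
print(f"measured FWHM: {fwhm_arcsec(star, scale):.2f} arcsec")  # ~2.0
```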

There are a lot of things I’ve glossed over here, and there are some clever ways to “beat the seeing” in some circumstances (lucky imaging, for one). You can be sure these ideas will come up again in future blogs, including the topic of proper sampling for quality results!

Source: Richard S. Wright Jr.