A digital image is composed of discrete ‘picture elements’, or ‘pixels’. When a real image is captured by a camera or detector, each pixel records the number of photo-electrons created by the photons incident on that pixel’s surface area. This conversion of a continuous (analog) signal into digital data is called ‘sampling’.
When we change the pixel grid of an image, or “warp” it, we have to calculate (“resample”) the flux value of each pixel on the new grid from the old grid. Because the new values are calculated rather than observed, any form of warping degrades the image and mixes the original pixel values with each other. So if an analysis can be done on the unwarped image, it is best to leave the image untouched and pursue the analysis. However, as discussed in Warp, this is not possible in some scenarios and resampling is necessary.
When the FWHM of the camera’s PSF is much larger than the pixel scale (see Sampling theorem), we are sampling the signal at a much higher resolution than the camera can offer. This is usually the case in many applications of image processing (non-astronomical imaging). In such cases, we can consider each pixel to be a point rather than an area: the PSF does not vary much over a single pixel.
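As a small numerical illustration of this condition (the values below are hypothetical, not taken from any particular instrument), a few lines of Python are enough:

    # Hypothetical values, only to illustrate the over-sampling
    # condition; not taken from any particular instrument.
    fwhm_arcsec = 1.2              # PSF FWHM (e.g., seeing-limited).
    pixelscale  = 0.27             # Pixel scale in arcsec/pixel.
    fwhm_pix    = fwhm_arcsec / pixelscale
    # Nyquist: roughly two pixels per FWHM for critical sampling;
    # much more than that means the PSF is over-sampled.
    print(f"{fwhm_pix:.1f} pixels per FWHM:",
          "over-sampled" if fwhm_pix > 2 else "under-sampled")

Here the PSF spans about 4.4 pixels, so treating each pixel as a point is a reasonable approximation.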
Approximating a pixel’s area by a point can significantly speed up the resampling and simplify the code, because resampling becomes a problem of interpolation: points of the input grid need to be interpolated at certain other points (over the output grid). To increase the accuracy, you can also sample more than one point from within each pixel, giving more points for a more accurate interpolation in the output grid.
However, interpolation has several problems. The first is that the result depends on the form of function assumed for the interpolation. For example, you can choose a bi-linear or bi-cubic interpolation method (the ‘bi’s are for the two-dimensional nature of the data). For the latter, there are various ways to set the constants[186]. Such parametric interpolation functions can fail seriously on the edges of an image, or when there is a sharp change in value (for example, the bleeding of saturated bright stars in astronomical CCDs). They also need normalization so that the flux of the objects before and after the warping is comparable.
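As a minimal sketch of the first of these methods (plain Python, a toy illustration only, not Warp’s implementation), bi-linear interpolation estimates the value at an arbitrary point from the four surrounding grid points:

    # Toy bi-linear interpolation sketch (not Warp's code). 'img' is
    # a list of rows (img[y][x]); (x, y) is the point to sample.
    def bilinear(img, x, y):
        x0, y0 = int(x), int(y)      # Lower-left grid point.
        dx, dy = x - x0, y - y0      # Fractional offsets in [0, 1).
        return ((1 - dx) * (1 - dy) * img[y0][x0]
                + dx * (1 - dy) * img[y0][x0 + 1]
                + (1 - dx) * dy  * img[y0 + 1][x0]
                + dx * dy * img[y0 + 1][x0 + 1])

    img = [[0.0, 1.0],
           [2.0, 3.0]]
    print(bilinear(img, 0.5, 0.5))   # --> 1.5

Note how this toy function already fails on the last row or column of the image (the x0+1 or y0+1 indexes go out of range): an example of the edge problems mentioned above.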
The parametric nature of these methods adds a level of subjectivity to the data (more assumptions are made through the functions than the data can support). For most applications this is fine (as discussed above: when the PSF is over-sampled). But in scientific applications, where we push our instruments to the limit and the aim is the detection of the faintest possible galaxies or the fainter parts of bright galaxies, we cannot afford this loss. For these reasons, Warp will not use parametric interpolation techniques.
Instead, Warp will do interpolation based on “pixel mixing”[187], also known as “area resampling”. This is similar to what the Hubble Space Telescope pipeline calls “Drizzling”[188]. This technique requires no functional form, so it is non-parametric. It is also the closest we can get (making the fewest assumptions) to what actually happens on the detector pixels.
In pixel mixing, the basic idea is to reverse-transform each output pixel and find which pixels of the input image it covers, and what fraction of each input pixel’s area is covered by that output pixel. We then multiply each input pixel’s value by the fraction of its area that overlaps with the output pixel (between 0 and 1). The output pixel’s value is the sum of these products over all the input pixels it covers.
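For demonstration, here is a minimal pixel-mixing sketch in Python for the special case of an axis-aligned re-scaling (no rotation or distortion, so the overlap areas reduce to products of one-dimensional interval overlaps); it is only an illustration of the idea above, not Warp’s implementation:

    # Toy pixel mixing for an axis-aligned re-scaling only (not
    # Warp's code). 'scale' is the output pixel's width in units
    # of input pixels; input pixels are unit squares.
    def overlap_1d(a0, a1, b0, b1):
        """Length of the overlap of intervals [a0,a1] and [b0,b1]."""
        return max(0.0, min(a1, b1) - max(a0, b0))

    def pixel_mix(img, scale):
        h, w = len(img), len(img[0])
        oh, ow = int(h / scale), int(w / scale)
        out = [[0.0] * ow for _ in range(oh)]
        for oy in range(oh):
            for ox in range(ow):
                # This output pixel's footprint on the input grid.
                x0, x1 = ox * scale, (ox + 1) * scale
                y0, y1 = oy * scale, (oy + 1) * scale
                for iy in range(int(y0), min(int(y1) + 1, h)):
                    for ix in range(int(x0), min(int(x1) + 1, w)):
                        # Fraction of the input pixel's unit area
                        # covered by the output pixel (0 to 1).
                        frac = (overlap_1d(ix, ix + 1, x0, x1)
                                * overlap_1d(iy, iy + 1, y0, y1))
                        out[oy][ox] += img[iy][ix] * frac
        return out

    img = [[1.0, 2.0],
           [3.0, 4.0]]
    print(pixel_mix(img, 2))         # --> [[10.0]]: total flux kept.

Since every input pixel’s area is fully distributed among the output pixels it touches, the sum of all pixel values (the total flux) is preserved.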
Through this process, pixels are treated as areas, not points (which is how detectors create the image), and the brightness (see Brightness, Flux, Magnitude and Surface brightness) of an object is fully preserved. Since it involves mixing the input’s pixel values, this pixel-mixing method is a form of Spatial domain convolution. Therefore, after comparing the input and output, you will notice that the output is slightly smoothed, thus boosting the more diffuse signal, but creating correlated noise. In astronomical imaging, the correlated noise will be decreased later when you stack many exposures[189].
If there are very high spatial-frequency signals in the image (for example, fringes) which vary on a scale smaller than your output image’s pixel size (this is rarely the case in astronomical imaging), pixel mixing can cause aliasing. Therefore, if such fringes are present, they have to be calculated and removed separately (which would naturally be done in any astronomical reduction pipeline). Because of the PSF, no astronomical target has a sharp change in its signal, so this issue is less important for astronomical applications; see Point spread function.
To find the overlap area of the output pixel over the input pixels, we need to define polygons and clip them (find their overlap). Usually, it is sufficient to define a pixel as a four-vertex polygon. However, when a non-linear distortion (for example, SIP or TPV) is present and the distortion is significant over an output pixel’s size (usually far from the reference point), the shadow of the output pixel on the input grid can be curved. To account for such cases (which can only happen when correcting for non-linear distortions), Warp has the --edgesampling option to sample the output pixel over more vertices. For more, see the description of this option in Align pixels with WCS considering distortions.
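To illustrate the clipping step in its simplest form, here is a toy Python sketch using the classical Sutherland–Hodgman algorithm to clip one four-vertex pixel polygon against another (both given counter-clockwise), with the shoelace formula for the overlap area; this is only a demonstration of the concept, not Warp’s internal implementation:

    # Toy Sutherland-Hodgman clipping of polygon 'subject' against
    # a convex polygon 'clip' (both counter-clockwise); not Warp's
    # internal code.
    def clip_polygon(subject, clip):
        def inside(p, a, b):   # Is p on the left of directed edge a->b?
            return ((b[0]-a[0])*(p[1]-a[1])
                    - (b[1]-a[1])*(p[0]-a[0])) >= 0
        def intersect(p, q, a, b):   # Segment pq with line ab.
            den = (p[0]-q[0])*(a[1]-b[1]) - (p[1]-q[1])*(a[0]-b[0])
            t = ((p[0]-a[0])*(a[1]-b[1]) - (p[1]-a[1])*(a[0]-b[0])) / den
            return (p[0] + t*(q[0]-p[0]), p[1] + t*(q[1]-p[1]))
        output = subject
        for i in range(len(clip)):
            a, b = clip[i], clip[(i+1) % len(clip)]
            inp, output = output, []
            for j in range(len(inp)):
                p, q = inp[j], inp[(j+1) % len(inp)]
                if inside(q, a, b):
                    if not inside(p, a, b):
                        output.append(intersect(p, q, a, b))
                    output.append(q)
                elif inside(p, a, b):
                    output.append(intersect(p, q, a, b))
        return output

    def area(poly):                  # Shoelace formula.
        n = len(poly)
        return abs(sum(poly[i][0]*poly[(i+1) % n][1]
                       - poly[(i+1) % n][0]*poly[i][1]
                       for i in range(n))) / 2

    # Overlap of an input pixel with an output pixel shifted by
    # half a pixel in each axis: a quarter of the pixel's area.
    inpix  = [(0, 0), (1, 0), (1, 1), (0, 1)]
    outpix = [(0.5, 0.5), (1.5, 0.5), (1.5, 1.5), (0.5, 1.5)]
    print(area(clip_polygon(outpix, inpix)))   # --> 0.25

With --edgesampling, the output pixel’s polygon would simply carry more vertices along each edge, so the same clipping machinery can follow a curved shadow more closely.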
[186] See http://entropymine.com/imageworsener/bicubic/ for a nice introduction.
[187] For a graphic demonstration, see http://entropymine.com/imageworsener/pixelmixing/.
[188] http://en.wikipedia.org/wiki/Drizzle_(image_processing)
[189] If you are working on a single-exposure image and see pronounced Moiré patterns after warping, see Moiré pattern in stacking and its correction for a possible way to reduce them.