Blur is an inevitable, unwanted phenomenon present in all digital images. It smooths high-frequency details, which makes image analysis difficult. Heavy blur may degrade the image so seriously that neither automatic analysis nor visual interpretation of the content is possible. Without proper tools for processing and analyzing blurred images, many unique images would become useless. Two major approaches to handling blurred images exist. They are complementary rather than competing; each is appropriate for different tasks and employs different mathematical methods and algorithms.
Image restoration is one of the oldest areas of image processing. It appeared as early as the 1960s and 1970s in the work of the pioneers A. Rosenfeld, H. Andrews, B. Hunt, and others. In the last ten years, this area has received new impulses and has undergone rapid development. We have witnessed the appearance of multichannel techniques, blind techniques, and superresolution enhancement, formulated by means of variational calculus in very high-dimensional spaces. A common feature of all these methods is that they suppress or even remove the blur from the input image and produce an image of high visual quality. However, image restoration problems are often ill-posed and ill-conditioned, and the methods are time consuming.
By contrast, the blur-invariant approach, proposed originally in 1995, works directly with the blurred data without any preprocessing. The blurred image is described by features that are invariant with respect to convolution with some group of kernels, and image analysis is then performed in the feature space. This approach is suitable for object recognition, template matching, and other tasks where we want to recognize or localize objects rather than restore the complete image. The mathematics behind it is based on projection operators and moment invariants.
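As a minimal illustration of a blur invariant (my sketch, not the specific construction discussed later): if the blurring kernel is centrosymmetric and normalized to unit sum, its odd-order central moments vanish, so the third-order central moments of an image are unchanged by the convolution. The toy image and kernel below are invented for the demonstration.

```python
# Sketch: third-order central moments are invariant to convolution with a
# normalized centrosymmetric PSF. Images are plain lists of rows (no NumPy).

def convolve_full(f, h):
    """Plain 2D 'full' convolution of two images given as lists of rows."""
    M, N = len(f), len(f[0])
    P, Q = len(h), len(h[0])
    g = [[0.0] * (N + Q - 1) for _ in range(M + P - 1)]
    for i in range(M):
        for j in range(N):
            for k in range(P):
                for l in range(Q):
                    g[i + k][j + l] += f[i][j] * h[k][l]
    return g

def central_moment(img, p, q):
    """Central moment mu_pq with x = column index, y = row index."""
    m00 = sum(sum(row) for row in img)
    xc = sum(j * v for row in img for j, v in enumerate(row)) / m00
    yc = sum(i * v for i, row in enumerate(img) for v in row) / m00
    return sum((j - xc) ** p * (i - yc) ** q * v
               for i, row in enumerate(img) for j, v in enumerate(row))

# An asymmetric test image and a centrosymmetric PSF with unit sum.
f = [[1, 2, 0],
     [0, 5, 1],
     [2, 1, 3]]
h = [[0.05, 0.1, 0.05],
     [0.1,  0.4, 0.1],
     [0.05, 0.1, 0.05]]
g = convolve_full(f, h)

# mu30 survives the blur; the blurred image can be matched on this feature
# without any restoration step.
print(f"mu30 original: {central_moment(f, 3, 0):.6f}")
print(f"mu30 blurred:  {central_moment(g, 3, 0):.6f}")
```

The two printed values agree (up to floating-point rounding), even though the blurred image itself looks quite different from the original; this is exactly the property that lets recognition proceed in the feature space.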