Models and Methods for Estimation and Filtering of Signal-Dependent Noise in Imaging
Azzari, Lucio (2016)
Tampere University of Technology
Teknis-taloudellinen tiedekunta - Faculty of Business and Technology Management
This publication is copyrighted. You may download, display and print it for Your own personal use. Commercial use is prohibited.
The permanent address of the publication is
https://urn.fi/URN:ISBN:978-952-15-3887-2
Abstract
The work presented in this thesis focuses on Image Processing, the branch of Signal Processing concerned with images, sequences of images, and videos. It has various applications: imaging for traditional cameras, medical imaging, e.g., X-ray and magnetic resonance imaging (MRI), infrared imaging (thermography), e.g., for security purposes, astronomical imaging for space exploration, three-dimensional (video+depth) signal processing, and many more.
This thesis covers a small but relevant slice that cuts across this vast pool of applications: noise estimation and denoising. To appreciate the relevance of this thesis it is essential to understand why noise is such an important part of Image Processing. Every acquisition device and every measurement is subject to interference that causes random fluctuations in the acquired signals. If not taken into account with a suitable mathematical approach, these fluctuations might invalidate any use of the acquired signal. Consider, for example, an MRI scan used to detect a possible condition; if not suitably processed and filtered, the image could lead to a wrong diagnosis. Therefore, before any acquired image is sent to an end-user (machine or human), it undergoes several processing steps. Noise estimation and denoising are usually among these fundamental steps.
Some sources of noise can be removed by suitably modeling the acquisition process of the camera and developing hardware based on that model. Other sources of noise are instead unavoidable: high or low light conditions of the acquired scene, hardware imperfections, temperature of the device, etc. To remove noise from an image, the noise characteristics must first be estimated. The branch of image processing that fulfills this role is called noise estimation. Only then is it possible to remove the noise artifacts from the acquired image; this process is referred to as denoising.
For practical reasons, it is convenient to model noise as random variables. In this way, we assume that the noise fluctuations take values whose probabilities follow specific distributions characterized by only a few parameters, and these are the parameters that we estimate. We focus our attention on noise modeled by Gaussian distributions, Poisson distributions, or a combination of the two. These distributions are adopted for modeling noise affecting images from digital cameras, microscopes, telescopes, radiography systems, thermal cameras, depth-sensing cameras, etc. The parameters that define a Gaussian distribution are its mean and its variance, while a Poisson distribution depends only on its mean, since its variance is equal to the mean (signal-dependent variance). Consequently, the parameters of a Poisson-Gaussian distribution describe the relation between the intensity of the noise-free signal and the variance of the noise affecting it. Degradation models of this kind are referred to as signal-dependent noise.
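As a concrete illustration (added here as a hedged sketch, not part of the thesis text), the following Python snippet simulates a Poisson-Gaussian observation and checks numerically that the noise variance is an affine function of the noise-free intensity, var(z) ≈ a·y + b, where a scales the Poisson component and b is the variance of the Gaussian component; the values of a, b, and y below are arbitrary choices made only for this example.

import numpy as np

rng = np.random.default_rng(0)
a, b = 0.5, 4.0                  # hypothetical Poisson scaling and Gaussian variance
y = np.full(100_000, 20.0)       # constant noise-free intensity

# Poisson-Gaussian observation: scaled Poisson counts plus additive Gaussian noise
z = a * rng.poisson(y / a) + rng.normal(0.0, np.sqrt(b), size=y.shape)

print(z.mean())   # close to y = 20.0 (the model is unbiased)
print(z.var())    # close to a*y + b = 14.0 (signal-dependent variance)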
Estimation of signal-dependent noise is commonly performed by processing, individually, groups of pixels of equal intensity in order to sample the aforementioned relation between signal mean and noise variance. Such sampling is often subject to outliers; we propose a robust estimation model where the noise parameters are estimated by optimizing a likelihood function that models the local variance estimates from each group of pixels as mixtures of Gaussian and Cauchy distributions. The proposed model is general and applicable to a variety of signal-dependent noise models, including possible clipping of the data. We also show that, under certain hypotheses, the relation between signal mean and noise variance can be effectively sampled from groups of pixels of possibly different intensities.
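To make the sampling of the mean-variance relation concrete, the sketch below (an illustration assuming a piecewise-flat image; the function name and all parameters are hypothetical) collects (mean, variance) pairs from small blocks of a noisy image and fits the affine relation var = a·mean + b by ordinary least squares. The thesis instead maximizes a Gaussian-Cauchy mixture likelihood, which is robust to the outliers that such a plain fit suffers from.

import numpy as np

def estimate_noise_params(z, block=8):
    # Collect local (mean, variance) samples from non-overlapping blocks.
    h, w = z.shape
    means, variances = [], []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = z[i:i + block, j:j + block]
            means.append(patch.mean())
            variances.append(patch.var(ddof=1))
    # Least-squares fit of var = a*mean + b (a simplified stand-in for the
    # robust Gaussian-Cauchy mixture likelihood used in the thesis).
    A = np.stack([np.asarray(means), np.ones(len(means))], axis=1)
    coef, *_ = np.linalg.lstsq(A, np.asarray(variances), rcond=None)
    return coef  # (a, b)

# Synthetic test: piecewise-flat image with Poisson-Gaussian noise, a=0.5, b=4.
rng = np.random.default_rng(1)
y = np.repeat(np.linspace(5.0, 50.0, 8), 8 * 64).reshape(64, 64)
z = 0.5 * rng.poisson(y / 0.5) + rng.normal(0.0, 2.0, y.shape)
print(estimate_noise_params(z))   # roughly [0.5, 4.0]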
Then, we propose a spatially adaptive transform to improve the denoising performance of a specific class of filters, namely nonlocal transform-domain collaborative filters. In particular, the proposed transform exploits the spatial coordinates of nonlocal similar features in an image to better decorrelate the data, and consequently to improve the filtering. Unlike non-adaptive transforms, the proposed spatially adaptive transform is capable of representing spatially smooth coarse-scale variations in the similar features of the image. Further, based on the same paradigm, we propose a method that adaptively enhances the local image features depending on their orientation with respect to the relative coordinates of other similar features at other locations in the image.
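For orientation, the following minimal sketch shows the non-adaptive baseline that such collaborative filters build on: a group of mutually similar patches is jointly decorrelated by a fixed orthonormal transform, hard-thresholded, and transformed back. The spatially adaptive transform proposed in the thesis, which is built from the patches' spatial coordinates, is not reproduced here; the function, threshold, and toy data are assumptions for illustration only.

import numpy as np
from scipy.fft import dctn, idctn

def collaborative_filter_group(group, threshold):
    # group: stack of similar patches, shape (n_patches, block, block).
    spectrum = dctn(group, norm='ortho')           # fixed 3-D orthonormal DCT
    spectrum[np.abs(spectrum) < threshold] = 0.0   # collaborative hard-thresholding
    return idctn(spectrum, norm='ortho')

# Toy usage: sixteen noisy observations of the same 8x8 feature filtered jointly.
rng = np.random.default_rng(2)
clean = np.outer(np.hanning(8), np.hanning(8))
group = clean[None] + rng.normal(0.0, 0.1, (16, 8, 8))
denoised = collaborative_filter_group(group, threshold=0.3)
print(np.abs(group - clean).mean(), np.abs(denoised - clean).mean())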
An established approach for removing Poisson noise utilizes so-called variance-stabilizing transformations (VST) to make the noise variance independent of the mean of the signal, hence enabling denoising by a standard denoiser for additive Gaussian noise. Within this framework, we propose an iterative method where at each iteration the previous estimate is summed back to the noisy image in order to improve the stabilizing performance of the transformation, and consequently the denoising results. The proposed iterative procedure circumvents the typical drawbacks that VSTs experience at very low intensities, and thus allows the standard denoiser to be applied effectively even at extremely low counts.
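The sketch below illustrates this framework under stated assumptions: the Anscombe transformation stabilizes the Poisson noise, a generic off-the-shelf Gaussian denoiser filters the stabilized data, and at each iteration the noisy image and the previous estimate are combined through a convex combination with a fixed weight lam (one plausible reading of "summed back", adopted here only for illustration). The fixed weight, the simple algebraic inverse, and the Gaussian-blur denoiser in the usage example are simplifications introduced here; they are not the thesis's actual parameters, inverse VST, or filter.

import numpy as np
from scipy.ndimage import gaussian_filter

def anscombe(z):
    # Variance-stabilizing transformation for Poisson data (noise std ~ 1 after it).
    return 2.0 * np.sqrt(z + 3.0 / 8.0)

def inverse_anscombe(d):
    # Simple algebraic inverse; an exact unbiased inverse is preferable at low counts.
    return (d / 2.0) ** 2 - 3.0 / 8.0

def iterative_vst_denoise(z, gaussian_denoiser, n_iter=3, lam=0.7):
    estimate = z.astype(float)
    for _ in range(n_iter):
        mixed = lam * z + (1.0 - lam) * estimate   # feed the previous estimate back
        stabilized = anscombe(mixed)
        filtered = gaussian_denoiser(stabilized)   # any standard Gaussian-noise filter
        estimate = inverse_anscombe(filtered)
    return estimate

# Usage with a plain Gaussian blur standing in for the Gaussian denoiser.
rng = np.random.default_rng(3)
y = 2.0 * np.ones((64, 64))                        # very low-count noise-free image
z = rng.poisson(y).astype(float)
y_hat = iterative_vst_denoise(z, lambda d: gaussian_filter(d, sigma=1.5))
print(np.abs(z - y).mean(), np.abs(y_hat - y).mean())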
The developed methods achieve state-of-the-art results in their respective fields of application.