Research | Open Access
Efficient shift-variant image restoration using deformable filtering (Part II): PSF field estimation
EURASIP Journal on Advances in Signal Processing volume 2012, Article number: 193 (2012)
Abstract
We present a two-step technique for estimating the point spread function (PSF) field from a single star field image affected by shift-variant (SV) blur. The first step estimates the best-fitting PSF for each block of an overlapping block grid. We propose a local image model consisting of a pattern (the PSF) being replicated at arbitrary locations and with arbitrary weights. We follow an efficient alternate marginal optimization approach for estimating (1) the most likely pattern, and (2) the locations where it appears in the block, with subpixel accuracy. The second step uses linear dimensionality reduction and nonlinear spatial filtering to estimate the entire PSF field from the grid of local PSF estimates. We simulate SV blur on realistic synthetic star fields to assess the accuracy of the method for this kind of image, for different blurs, star densities, and Poisson counts. The results indicate a moderately low error and very robust behavior against noise and artifacts. We also apply our method to real astronomical images, and demonstrate that it provides relevant information about the underlying structure of the actual telescope and atmosphere PSF fields. We use a variant of the method proposed in Part I to compensate for the observed blur.
Introduction
Shift-variant (SV) image restoration requires knowledge of the point spread function (PSF) at each image location. If we have access to the imaging device, and the capture conditions are known, it may be possible to obtain the PSF field by pre-calibration. In many practical situations, though, we do not have access to this information. Moreover, the PSF field may change due to factors that are beyond our control (e.g., atmospheric turbulence, device temperature, vibrations, fog, relative movement of camera and objects). The PSF field must then be estimated solely from the observed image(s) we want to restore.
The most common approach for estimating the blurring kernel from a single image consists of formulating a joint optimization problem, often based on statistical models of the image and the degradation, which is used to estimate both the uncorrupted image and the kernel. Typically, a starting guess is refined by alternating between estimating the image (assuming the kernel is known) and the PSF kernel (assuming the image is known). Apart from other potential problems like convergence and stability, adding such an outer loop to the estimation makes it especially heavy in computational terms, as each of these marginal estimations is, by itself, computationally expensive. To the best of the authors’ knowledge, the intrinsic complexity of these approaches has prevented their application to the general case of SV blur kernels (although special cases have been treated, like considering foreground and background layers, e.g.,[1]).
Furthermore, as blind restoration problems are intrinsically highly ill posed, stable solutions have been obtained mostly using some prior information about the kernel, adapting the problem solution to particular situations. Many methods benefit computationally from using a restricted (e.g., parametric) PSF model: prior knowledge is applied in camera motion[2, 3], parametric models are used in defocus and lens aberration correction[4, 5], sparse characteristics are exploited for frosted glass[6], etc. In the specific case of PSF field estimation in astronomical images, several authors have studied orthogonal representations to characterize observed PSFs from stars[7, 8]. The underlying motivation is to provide a tool that is robust against noise, flexible, and possibly adaptive, without imposing a narrow structure on the PSF field. Such representations may also facilitate PSF field interpolation (see, e.g.,[9, 10]). In the data-adaptive case, linear dimensionality reduction (by singular value decomposition (SVD) or principal component analysis) deserves mention, as it connects directly to the idea of deformable filters[11] approached in the companion paper, Part I. These function bases are optimal in a least-squares sense, and they are ranked in terms of energy contribution to the PSF field description, so they can be selected to filter measurement noise from the estimation in a way that can easily be automated with techniques like Generalized Cross Validation[12]. Not surprisingly, such dimensionality reduction techniques have previously been used to estimate the PSF field[13]. However, they have typically been used for special cases, such as modeling particular devices (for instance, the Large Synoptic Survey Telescope[14] or the Advanced Camera for Surveys on the Hubble Space Telescope[15]), adapted to particular stars as the ideal PSF reference, or modeling gravitational lenses[16, 17].
Hence, these approaches lack generality in the sense that they cannot be used as a general, “knowledge-blind” tool for astronomical PSF field estimation. Only a few of the referenced methods (like[18]) intend to apply their PSF estimation results to image deblurring, and none of them draws the clear connection between image restoration and PSF field estimation.
To alleviate the previously mentioned computational bottleneck, many techniques analyze the image in search of local features, such as edges and corners, which provide information about the PSFs across the image[4]. These groups of pixels give clues which are used to compute point and line spread functions without user intervention. The idea of extracting local information about the PSF to build local PSF estimates is especially relevant for the SV blur case. Considering the lack of generality of the most common approaches outlined above, and taking the benefits of utilizing local estimates into account, we design here a novel approach to astronomical PSF field estimation. It is based on two simple observations relating to the fundamental characteristics of astronomical imagery, as opposed to typical photographic images.
First, astronomical images typically contain an abundance of stars, which can be modeled well as ideal point light sources. Their presence allows for a particularly simple way to locally estimate the PSF: weighted averaging after subpixel localization. Note that, even under the smoothness assumption, obtaining a more or less dense set of local PSF estimates is not enough to fully characterize the blur; a certain type of regularization and interpolation of that information to the rest of the image locations is necessary.
This is where our second observation comes in: we take advantage of the fact that the PSF field in astronomical images is usually simpler than in typical photographic images, as there is no foreground–background structure. To a first approximation, PSFs result from the composition of the telescope and atmosphere PSF fields (for Earth-based observations), and they usually vary smoothly across the image field. Typical photographic images, on the other hand, may present a very rich variability in PSF field structure, due to differences in focus and/or relative speed of objects in the scene. Only in particular cases (e.g., camera movement and/or defocus of still, long-distance objects) may one expect to find a smooth PSF field. Furthermore, only in an even narrower set of cases do typical photographic images present enough repeated patterns (e.g., bright distant lights) to be used directly for characterizing the PSF field, similarly to the star field case.
Hence, our approach is twofold, and sequential. The first step consists of estimating the most likely local PSF for each block of a set of overlapping image blocks covering the whole image, according to a simple local image model. The second step extends and refines these local PSF estimates through linear dimensionality reduction, nonlinear filtering of outliers, and spatial interpolation. Whereas the first part is completely model-based, the second part is presented here, to a large extent, in an ad hoc way. This does not diminish its strong connection with the ideas presented in the companion paper, Part I, about using the concept of deformable kernels to deal with smooth PSF fields.^{a}
There is a third fundamental characteristic of astronomical images that needs to be mentioned. Astronomical images have a huge dynamic range compared to typical photographic images, and, very often, the search for relevant information requires processing low-contrast details in a contrast-adaptive fashion. Traditional error measures, such as the quadratic error, therefore fail to describe the quality of a restoration method for star fields, and evaluating such quality on this kind of image is challenging. For this reason, we have not attempted a quantitative assessment of the deblurring results here.
The rest of the article is organized as follows. Section 2 presents a simple image model for SV-blurred star fields to be used as a local approximation for image blocks. Then, Section 3 describes an alternate optimization algorithm for obtaining, through maximum likelihood, an estimate of the local PSFs. Section 4 describes a generic approach for estimating a smooth PSF field from a regular grid of local PSF estimates. A broad set of experiments, and their corresponding results, are described and discussed in Section 5. First (Section 5.1), objective measurements of the PSF field estimation quality are made by means of simulated star fields using two synthetic PSF fields. Then (Section 5.2), real astronomical images are analyzed to estimate their PSF field. These estimates are used for the restoration of real images in Section 5.2a. Section 6 concludes the article.
A smooth pattern field model for images
In this section, we describe a simple and yet effective image model for star fields subject to smoothly varying SV blur, e.g., due to a combination of atmospheric blur and non-ideal telescope optics. The model is conceptually much more general, and potentially applicable to very different situations, like observing repeated patterns in typical photographic images, possibly subject to certain variations depending on their image location. Although future instantiations of this model may attack the problem of directly characterizing the smooth pattern field for the whole image, we follow a simpler approach here, consisting of characterizing local regions of the image first by using a single reference pattern. Despite this simplification, the local model does account for the error due to modeling the actual pattern within a block as constant. Figure 1 illustrates the idea of obtaining local PSF estimates on a regular spatial grid, using overlapping blocks.
Image model
We define a pattern field p_{ d }(x;d(x^{0})) as a special type of four-dimensional function which associates a certain spatial pattern p_{ d }(x) to each image location x^{0}. The spatial pattern p_{ d }(x) depends on a vector of hidden parameters d(x^{0}) (usually unknown a priori) which, in turn, depends on the image location x^{0}. If the pattern depends smoothly (i.e., continuously and with continuous first derivative) on the parameter vector d, and each component of d is a smooth function of x^{0}, then p_{ d }(x;d(x^{0})) can be viewed as a smooth pattern field. The reader familiar with SV blur will already have noted the applicability of such smoothly varying pattern fields to characterize the PSF at any given image location, in some favorable cases (as discussed in the introduction). The star field, then, can be modeled as the result of blurring a set of ideal point sources (Dirac delta functions) with a shift-variant PSF field:

y(x) = ∑_{ i } a_{ i } p_{ d }(x − x_{ i }; d(x_{ i })) + y_{B}(x). (1)
This image model y(x) is composed of a number of instances of the pattern field, each representing a local PSF, in this case, located at a position x_{ i } and scaled by a factor a_{ i } corresponding to the strength of the ideal point source. In order to make the model robust to changes of the background level, it is highly convenient to include a very smooth additional background term y_{B}(x).
In this application of the pattern field model, we have taken the approach of representing the patches by finite-dimensional vectors, corresponding to finite, discretely sampled patches in the image. A proper patch size, in pixels, can easily be assigned by inspection of the most blurred areas of the image. Also, for algorithmic convenience, we have imposed that the patches do not overlap (in other applications of the model this may correspond to an actual physical constraint). Although this is not a realistic assumption for stars, we have found that it does not significantly harm the estimation.
Note that, seen as a generative model, Equation 1 is fairly simple: a highly sparse input distorted by an SV blur, which is even linear for a given pattern field. However, from an analysis point of view, fitting an observation to this model involves a highly nonlinear procedure, mainly because of the sparse character of its input. Although powerful techniques for sparse estimation exist (e.g.,[19]), they usually assume that the input is sparse, but not necessarily a set of (isolated) deltas, much less that the “basis functions” are non-overlapping. Our approach, on the other hand, takes advantage of this particular structure of the problem.
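The generative side of this model is easy to sketch. The following is a minimal numpy simulation, not the authors' code: a Gaussian kernel stands in for the pattern field, the background is constant, and `render_star_field` and `psf_sigma_of` are hypothetical names introduced here for illustration.

```python
import numpy as np

def gaussian_psf(size, sigma):
    """Isotropic Gaussian kernel of unit volume (a stand-in for p_d at one location)."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return k / k.sum()

def render_star_field(shape, stars, psf_sigma_of, background=10.0):
    """Sketch of the generative model (1): y(x) = sum_i a_i p(x - x_i; d(x_i)) + y_B(x).

    `stars` is a list of (row, col, amplitude) triplets; `psf_sigma_of(row, col)`
    returns the local PSF width, so the blur varies across the field.
    """
    y = np.full(shape, background, dtype=float)
    size = 19                     # patch support, as in the paper's experiments
    h = size // 2
    for r, c, a in stars:
        k = gaussian_psf(size, psf_sigma_of(r, c))
        # each ideal point source deposits a scaled copy of its local PSF
        y[r - h:r + h + 1, c - h:c + h + 1] += a * k
    return y
```

Because the kernel is volume-normalized, each star adds exactly its amplitude to the total flux, which makes the sketch easy to sanity-check.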
Pattern observation model
The pattern field p generally is a nonlinear function for which there is no prior information except that it is smooth with respect to the image coordinates. If it is smooth enough, the Nyquist theorem limits the amount of information that is lost when we spatially sample the parameter vector d. This takes us to a block-based approach, in which we model the PSF at a certain block location as a component along the PSF at the block center, plus a certain perturbation increasing with the distance to the center: p_{ d }(x;d(x^{0})) = b p_{ d }(x;d(x_{ C })) + ε, where x_{ C } is the central location of the block B(x_{ C }), and x^{0} ∈ B(x_{ C }). Consequently, a given pattern observation, numbered i, at some position x_{ i } in a given block, is modeled as:

y(x_{ i }) = a_{ i }(b_{ i }p + ε_{ i }) + y_{B}(x_{ i }) + n_{ i }, (2)
Here, p is the vectorized pattern at x_{ C }, b_{ i } represents the fading of the component along the central pattern towards the boundaries of the block, ε_{ i } ~ N(0, σ_{ εi }^{2}I) represents the deviation of the pattern with respect to the p direction, and n_{ i } ~ N(0, σ_{n}^{2}I) is a noise term. Note that b_{ i } will be 1 (or less, in practice, if our knowledge of p is approximate) at the block center, and will decrease as we move away from it, while σ_{ εi } is modeled as an increasing but upper-bounded function of ∥x − x_{ C }∥, thus behaving in a complementary way to b_{ i }.
Despite the apparent intricacy of this model, it can be reformulated into a much simpler one. First, observe that we can decompose p into

p = μ_{ p }1 + p_{0}, (3)
where μ_{ p }1 = [μ_{ p },…,μ_{ p }]^{T} represents the sample pixel mean of the reference pattern, and p_{0}, consequently, its zero-mean version.^{b} Second, as the background y_{B}(x) is smooth, it can locally be approximated by a constant:

y_{B}(x) ≈ y_{B}(x_{ C }), for x ∈ B(x_{ C }). (4)
Since we are unconcerned about estimating the image background or the ground level of the zero-mean pattern, we may encapsulate these components of the model, thus simplifying the model as well as the computational procedure:

y(x_{ i }) = a_{ i }(b_{ i }p_{0} + ε_{ i }) + μ_{ i }1 + n_{ i }, with μ_{ i } = a_{ i }b_{ i }μ_{ p } + y_{B}(x_{ C }). (5)
The simplified patch model can be restated as follows:

y(x_{ i }) = a_{ i }b_{ i }p_{0} + μ_{ i }1 + w_{ i }, (6)
where w_{ i } ~ N(0, (a_{ i }^{2}σ_{ εi }^{2} + σ_{n}^{2})I) now is a joint “error” term representing the uncertainty of the observation.
Local PSF estimation by alternating marginal likelihood optimization
Since the observed star patterns are independent, the likelihood of observing a number M of patterns in the block is
where K is the dimensionality of the vectorized neighborhoods, N is the number of pixels in the block, and R is the set of pixels that are not included in any pattern. Optimizing the likelihood directly for all the unknowns is a difficult problem. However, it can effectively be attacked by alternating between optimizing the marginal likelihood as a function of p_{0} and the set of pattern locations, {x_{ i },i = 1…M}, respectively.
The algorithm we propose essentially comprises three stages:

1. taking an initial guess at the pattern;
2. finding approximate (full-pixel precision) locations of the pattern, as well as the number of locations, with respect to that guess; and
3. alternating between re-estimating the pattern and refining the locations with subpixel precision.
Likelihood maximization with respect to each of μ_{ i }, c_{ i } is a simple least-squares fitting, and can be done independently for each image location x_{ i }:

μ̂_{ i } = y^{T}(x_{ i })1/K, (8)

ĉ_{ i } = p_{0}^{T}(y(x_{ i }) − μ̂_{ i }1). (9)
For estimating a_{ i } we observe in (5) that:

E{∥y(x_{ i }) − μ_{ i }1∥^{2}} = a_{ i }^{2} E{∥b_{ i }p_{0} + ε_{ i }∥^{2}} + Kσ_{n}^{2}. (10)
Assuming E{∥b_{ i }p_{0} + ε_{ i }∥^{2}} = 1, we simply choose the estimate

â_{ i } = [∥y(x_{ i }) − μ̂_{ i }1∥^{2} − Kσ_{n}^{2}]_{+}^{1/2}, (11)
where the brackets indicate nonnegative clipping. For those selected patches with norms well above the background noise level (Kσ_{n}^{2}), the effect of a_{ i } on the variance is to normalize the local energy of the patch, effectively giving a stronger weight to candidates with a low relative error (high SNR). The low-energy patches, on the other hand, are effectively damped by the larger influence of σ_{n}^{2} in their associated variance.
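Under our reading of this estimate (centered patch energy minus the expected noise energy Kσ_n², clipped at zero before the square root), a minimal sketch could look as follows; `estimate_amplitude` is a hypothetical helper name, not from the paper.

```python
import numpy as np

def estimate_amplitude(patch, mu_hat, sigma_n2):
    """Sketch of the amplitude estimate (11): subtract the expected noise
    energy K*sigma_n^2 from the centered patch energy and clip at zero
    (the bracket [.]_+), under the unit-energy assumption
    E{||b*p0 + eps||^2} = 1."""
    v = patch.ravel() - mu_hat       # center the vectorized patch
    K = v.size
    return np.sqrt(max(v @ v - K * sigma_n2, 0.0))
```

Patches whose energy falls below the expected noise floor are mapped to zero amplitude, which is what damps the low-SNR candidates in the weighting.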
Also, for a given set of scaled and noisy pattern candidates, {y(x_{ i }) − μ̂_{ i }1}, the solution for the most likely p_{0} is the standard least-squares one:

p̂_{0} ∝ ∑_{ i } (â_{ i }b_{ i }/(â_{ i }^{2}σ_{ εi }^{2} + σ_{n}^{2})) (y(x_{ i }) − μ̂_{ i }1). (12)
Note that ĉ_{ i }p_{0} is invariant with respect to ∥p_{0}∥. Hence, we only need to find the direction of p̂_{0} in (12). Consequently, our method is invariant with respect to the normalization of the PSF. In order to avoid numerical error, it is still useful to constrain ∥p_{0}∥. As it appears most natural in this context, we choose ∥p_{0}∥ = 1. Recall that, in spite of this nonstandard choice when dealing with PSF patterns, it is always possible to go back to the standard PSF form (nonnegative, normalized volume) by estimating μ_{ p } and renormalizing.
In practice, the optimization for the locations {x_{ i }} takes a different form for the initial (full-pixel precision) and the subsequent (subpixel) estimates.
Estimation of the approximate locations of star patterns
When the {x_{ i }} are estimated for the first time, we can make use of the observation that the last two terms in (7) (the “non-pattern” terms) are constant if extended to all pixels of the block. Then, we can remove that constant from the optimization functional to obtain
where we have applied the approximation (4) again.^{c} Consider the expression inside the parentheses, evaluated as a function of a generic image location x and holding p_{0} fixed:
where we have applied
and where μ(x) = y^{T}(x)1/K is the sample pixel average of the patch. This function represents the log-likelihood ratio between two hypotheses: the patch centered on x corresponding to an instance of the pattern versus corresponding to the background. When the patch centered at x, y(x), contains significantly more energy than that of the background noise, we have a pattern candidate. In that case, all but one of the terms in Equation (14) in the vicinity of x are smooth functions of x, because they correspond to energy measurements which are insensitive to the exact location of the pattern, whenever it remains entirely within the patch. The exception is the covariance factor term, which is very selective to the exact pattern location, even under noise. Furthermore, in this vicinity A(x) yields high positive values (because 1/σ_{n}^{2} ≫ 1/(a^{2}(x)σ_{ε}^{2}(x) + σ_{n}^{2})), peaking in an isolated maximum where the covariance factor gets closest to 1, i.e., when the pattern candidate is aligned with the pattern guess p_{0}. Therefore, a greedy algorithm, under the assumption of no pattern overlapping in the image, is optimal for finding the most likely locations for the current pattern guess. To estimate the set of M locations {x_{ i }}, we first pick the global maximum of A(x) in the block as x_{1} and set M to 1. Next, we rule out the neighborhood around that maximum as a potential location for subsequent picks (to enforce the non-overlapping constraint in the analysis) and find the next-best maximum x_{2}, increasing M by 1; and so on, until no further locations can be selected, or the next maximum attains a close-to-zero value, indicating that nothing is gained by considering a star pattern at this location.
In practice, this algorithm fails to choose the correct locations only when the initial assumption of non-overlapping patterns does not hold, i.e., in our case, when two or more stars of similar brightness are too close to each other (if one star clearly dominates, then the ML choice corresponds to centering that dominant star). However, these wrong choices can easily be detected by thresholding the covariance cos∠(p_{0}, y(x) − μ(x)1) between the current guess of the pattern and the image, thus preventing these locations from contributing to the pattern estimate (in our experiments we used a threshold c_{min} = 0.2).
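The greedy selection with the non-overlap constraint and the covariance threshold can be sketched as follows. This is an illustrative simplification, not the authors' code: `A` is the precomputed log-likelihood-ratio map, `cos_map` the covariance-factor map, and a square window approximates the ruled-out neighborhood.

```python
import numpy as np

def pick_locations(A, cos_map, radius, T=0.0, c_min=0.2):
    """Greedy selection of pattern locations from the log-likelihood-ratio
    map A, masking a neighborhood around each pick to enforce the
    non-overlap constraint; candidates whose covariance factor with the
    current pattern guess is below c_min are rejected (likely overlapping
    stars)."""
    A = A.copy()
    locations = []
    while True:
        idx = np.unravel_index(np.argmax(A), A.shape)
        if not np.isfinite(A[idx]) or A[idx] < T:
            break                      # next maximum is close to zero: stop
        if cos_map[idx] > c_min:
            locations.append(idx)
        # rule out the neighborhood as a potential location for later picks
        r0, c0 = idx
        A[max(r0 - radius, 0):r0 + radius + 1,
          max(c0 - radius, 0):c0 + radius + 1] = -np.inf
    return locations
```

Each iteration picks the current global maximum, so the picks come out ordered by decreasing likelihood, exactly as in the greedy description above.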
Subpixel refinement
When estimating x_{ i } in the subsequent iterations, we do not need to adjust M, but we would like to improve the locations x_{ i } to subpixel precision. Observing that σ_{ εi } and ∥y(x_{ i }) − μ_{ i }1∥ are approximately constant for such small changes of x_{ i } (as the pattern is assumed to be approximately centered in the patch, with some background space around it), the optimization functional can be simplified to obtain the following estimate:

x̂_{ i } = arg max_{x} p_{0}^{T}(y(x) − μ(x)1), (15)

where the search is restricted to a subpixel neighborhood of the previous estimate of x_{ i }.
As this functional is very simple to compute, we can afford to perform a local search around the previous location estimates simply by computing a standard bicubic subpixel interpolation of the patch and evaluating the functional.^{d} We choose to do this in a hierarchical fashion, that is, performing a local search in [−1, 1] with a step size of Δx = 1/2 in each coordinate, updating, then performing a local search for the relative shift in [−1/2, 1/2] with Δx = 1/4, and so on, down to a precision of one eighth of a pixel.
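The hierarchical subpixel search can be sketched as below. Assumptions: cubic-spline interpolation is done with `scipy.ndimage.shift`, and plain correlation with the current pattern estimate stands in for the simplified functional; `refine_subpixel` is a hypothetical name.

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def refine_subpixel(patch, pattern, max_depth=3):
    """Hierarchical subpixel search sketch: at each level, try offsets of
    -step/0/+step around the current estimate using cubic-spline
    interpolation of the patch, keep the best-scoring one, then halve the
    step (1/2, 1/4, 1/8 pixel). The score is the correlation with the
    current pattern estimate, standing in for the simplified functional."""
    best = np.array([0.0, 0.0])
    step = 0.5
    for _ in range(max_depth):
        candidates = [best + np.array([dr, dc]) * step
                      for dr in (-1, 0, 1) for dc in (-1, 0, 1)]
        # un-shift the patch by each candidate offset and correlate
        scores = [np.sum(nd_shift(patch, -c, order=3, mode='nearest') * pattern)
                  for c in candidates]
        best = candidates[int(np.argmax(scores))]
        step *= 0.5
    return best  # estimated (row, col) offset of the pattern from the patch center
```

Three levels give the one-eighth-pixel precision quoted in the text, at the cost of nine interpolations per level.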
Variance estimates
The overall variance of the pixels of a patch including a pattern depends on the additive background noise term (${\sigma}_{n}^{2}$), and the multiplicative error term (${a}_{i}^{2}{\sigma}_{{\epsilon}_{i}}^{2}$) due to the error on the pattern itself. The latter depends solely on the distance to the block center in relative terms; in absolute terms, it also depends on the scale factor a_{ i } of the instance of the pattern. For estimating${\sigma}_{n}^{2}$, we use the Median Absolute Deviation method with a simple difference filter along each dimension[20].
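A common numpy implementation of this kind of MAD-based noise estimate is sketched below (for brevity, differencing along one dimension only, whereas the text filters along each dimension):

```python
import numpy as np

def estimate_noise_mad(img):
    """Median-Absolute-Deviation noise estimate on a difference-filtered
    image. Differencing adjacent pixels removes smooth structure but
    doubles the noise variance, hence the sqrt(2); 0.6745 maps the MAD of
    a Gaussian to its standard deviation."""
    d = np.diff(img, axis=1).ravel()
    return np.median(np.abs(d - np.median(d))) / (0.6745 * np.sqrt(2.0))
```

Because the median is insensitive to the sparse large differences caused by stars, the estimate reflects the background noise level rather than the signal.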
To model the variance of the error ε_{ i }, we assume that the pattern random vector field has an isotropic covariance structure similar to an AR(1) process. In particular,

E{∥ε_{ i }∥^{2}} = 1 − ρ^{2∥x_{ i } − x_{ C }∥/(L/f_{ r })}, (16)
and E{∥b_{ i }p_{0} + ε_{ i }∥^{2}} = 1, such that the energy is stationary across the pattern field. ρ is the parameter that determines the smoothness of the field, and L/f_{ r } is the distance between neighboring block centers (L being the side length of the block, and f_{ r } the overlapping-block redundancy factor). Thus, E{∥ε_{ i }∥^{2}} = 0 at the block center itself and E{∥ε_{ i }∥^{2}} = 1 − ρ^{2} at the neighboring block center. In our experiments, we chose ρ = 0.98 as a reasonable smoothness value for adjacent PSFs in the grid made of the overlapping blocks’ centers. In the estimation procedure (12), this structure of σ_{ εi }^{2} leads to the desirable effect of giving a stronger weight to the detected patterns that are near the block center.
In addition to the uncertainty due to noise and spatial variation of the pattern, there is a methodological uncertainty with respect to the current estimate of p_{0}. In particular, the first guess may be somewhat inaccurate; and since the weights in (12) are based on covariance measurements with the (guessed) pattern, subsequent applications of the estimator may weight useful instances of the star pattern lower than necessary, and vice versa. To make the method adapt to the quality of the current pattern estimate, we add the term σ_{g}^{2} to the r.h.s. of (16) when first performing the estimation of the full-pixel pattern locations, defined as
This value corresponds exactly to the quadratic error in our guess if (1) there is an instance of the exact pattern in the image, and (2) that instance is the closest one in the image, in a covariance factor sense, to our guess. The use of this extra term permits a less shapeselective weighting of candidate patterns when the quality of the estimated pattern is still relatively low.
Initial and subsequent pattern guesses
In our experiments, for the first block we used as a guess a Gaussian PSF with a spatial dispersion σ = 2, on a 19×19 patch support. For subsequent blocks, processed in horizontal raster-scan order, we use the average of the estimated PSFs of the previously processed blocks above and to the left (or only one of them, in the first row or the first column).
Summary of the local PSF estimation method
In Figure 2 (left) we have depicted a scheme summarizing our local PSF estimation method. We also provide an algorithmic description below:
Algorithm 1. Local PSF estimation in star fields

1: Obtain an initial guess p̂_{0}
2: for all blocks in the image do
3:  Estimate σ_{n}^{2} and σ_{g}^{2}
4:  for all x in the block do
5:   Compute σ_{ε}^{2}, a, μ, and c
6:   Compute the covariance factor between patch and pattern, cos∠(p_{0}, y(x) − μ(x)1)
7:   Compute A(x) using (14)
8:  end for
9:  X ← {}
10:  loop
11:   x̂_{ i } ← arg max_{x′} A(x′) s.t. x′ ∉ N(x), ∀x ∈ X
12:   if A(x̂_{ i }) < T or x̂_{ i } has no solution then
13:    M ← |X|
14:    break
15:   else
16:    if cos∠(p_{0}, y(x̂_{ i }) − μ_{ i }1) > c_{min} then
17:     X ← X ∪ {x̂_{ i }}
18:    end if
19:   end if
20:  end loop
21:  repeat
22:   Re-estimate p̂_{0} using (12)
23:   for all x̂_{ i } ∈ X do
24:    Improve x̂_{ i } by local search using (15)
25:   end for
26:  until convergence
27:  Update the guess p̂_{0} for the next block
28: end for
Estimating smooth PSF fields from local estimates using SVD
When we observe a set of patterns under noise, relevant features can be extracted by means of finding the orthogonal basis that optimally (in a leastsquares sense) compresses that particular set of patterns as a weighted sum of orthogonal components. The SVD has been in widespread use for many years to solve this problem.^{e} In a high dimensional space, such as the one used to represent rich unconstrained patterns, like our PSFs, the noise effect on the obtained dominant terms is small, because, if it is white, it distributes its energy equally among all the orthogonal components. The terms above the noise level, on the other hand, may carry a large proportion of the total pattern energy. Therefore, those terms provide an approximation to the original (uncorrupted) set of patterns, when the components below the noise level are filtered out before reverting the linear transform.
This kind of linear spectral filtering can be effective for reliably estimating the dominant eigen-PSFs of the PSF field, but it may not be enough to characterize the field by itself when there are strong spurious fluctuations in the estimated patterns. These fluctuations may be caused by imperfections of the algorithm and/or the model (in our case, we make some inexact assumptions, such as that the PSFs do not overlap), or by the presence of “outliers” (image contents that fit neither the dominant pattern nor the background statistics). Then, using prior information about the typical features of the pattern field is essential to increase robustness. In our problem, the local PSF estimates are far from being equally reliable (e.g., because of inhomogeneities of the local density of stars, different brightness, background effects, etc.). Using the prior knowledge that the PSF field is smooth helps us to discount the outliers’ effect on the estimation, and, thus, to improve the results.
Estimating a smooth PSF field from a grid of local estimates
As the method of the previous section does not ensure alignment of the local estimates with respect to each other, they need to be aligned prior to being subjected to dimensionality reduction, as illustrated in Figure 2 (right).
Local PSF subpixel alignment
Note that the subpixel alignment performed here is conceptually slightly different from the intra-block alignment of patterns: we do not want to optimize a similarity measure with respect to a certain reference pattern, but rather to maximize the similarity of each pattern with respect to the others. A sensible solution is to find the subpixel shifts that minimize the Euclidean error to the projection of each pattern onto the first eigenvector of the set, which is recomputed at each iteration (using a hierarchical iterative approach based on cubic splines, as before). It is straightforward to prove that, by following this procedure, we reach a stationary solution for the spatial shifts which maximizes (locally, at least) the first eigenvalue of the covariance matrix of the set of patterns, as it effectively minimizes the sum of squares of the components along the orthogonal directions. Similarly to the intra-block alignment case, we have observed a very significant decrease in the energy of the components orthogonal to the first eigenvector (around 4–5 dB) from this inter-block alignment. We do observe a spatial drift of the estimated local pattern across image blocks, but this undesirable effect does not negatively affect the model fitting.
Dimensionality reduction and nonlinear filtering of the spatial weights
After the alignment, we perform a linear dimensionality reduction of the set using the SVD. Although it is not difficult to choose a suitable number of singular vectors/values automatically, we have chosen the number simply by visual inspection of the eigenvalue profile. We obtain a set of eigenvectors (eigen-PSFs), plus the corresponding optimal spatial weights to represent the grid of extracted local patterns as a linear combination of them. The “outliers” can then easily be detected and suppressed by iteratively finding the spatial weights that deviate largely from their neighbors’ values, and substituting them with the local mean of their neighbors. Once the spatial weights are regularized, we revert the orthogonal transform and obtain a spatial grid of regularized patterns. Their ground level must be shifted back to zero and their volume normalized, in order to enforce both positivity and unit volume, as required for PSFs. We use a mode estimate to obtain the ground level of each pattern.
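The dimensionality reduction and outlier suppression can be sketched as follows. This is an illustrative simplification of the step described above: a single pass with a 3×3 median (rather than iterative detection and local-mean substitution), and `regularize_psf_grid` is a hypothetical name.

```python
import numpy as np

def regularize_psf_grid(psfs, n_components, outlier_factor=3.0):
    """Sketch of the regularization step: `psfs` has shape (gh, gw, k), a
    grid of vectorized local PSF estimates. Keep the top singular
    components (eigen-PSFs), replace spatial weights that deviate strongly
    from their 3x3 neighborhood median, then revert the transform."""
    gh, gw, k = psfs.shape
    X = psfs.reshape(gh * gw, k)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    # spatial weight maps for the retained eigen-PSFs
    W = (U[:, :n_components] * s[:n_components]).reshape(gh, gw, n_components)
    for j in range(n_components):
        w = W[:, :, j]
        med = np.array([[np.median(w[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2])
                         for c in range(gw)] for r in range(gh)])
        dev = np.abs(w - med)
        bad = dev > outlier_factor * (np.median(dev) + 1e-12)
        w[bad] = med[bad]          # substitute outliers by the local median
    return W.reshape(gh * gw, n_components) @ Vt[:n_components], W
```

Because each eigen-weight map is smooth except at outliers, a deviation from the local neighborhood statistic is a reliable outlier detector, and the reconstruction after substitution stays close to the smooth field.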
PSF field interpolation to intermediate spatial locations
We may want to obtain PSF estimates for each pixel of the image; for example, if we want to use the PSF field estimate to perform deblurring on the same or similarly captured images. Given a low-dimensional description of a vector at different image locations, such as the one obtained through the SVD, this is easy to achieve by interpolating the scalar coefficient corresponding to each basis vector, provided the vector field is smooth. However, as we enforce the PSF constraints (positivity and unit volume) after reverting the transform, the resulting set of local PSF estimates is no longer confined to a low-dimensional linear subspace. One solution is to interpolate before re-introducing the constraints, and then fix the ground level and volume for each pixel location in the image. A more efficient alternative consists of performing a second SVD on the previously estimated set of positive, volume-normalized PSFs, and retaining enough singular components for an excellent approximation (say, more than 60–80 dB). After this second dimensionality reduction, we can perform the vector interpolation very efficiently by simply interpolating the obtained weights as scalar functions, using cubic splines. This method ensures the right form for our PSF field estimate: a number of eigen-PSFs, plus their corresponding smooth spatial weights (one weight for each eigen-PSF–pixel pair). We may then apply the deformable filtering techniques explained in Part I to restore the observation using this estimate.
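The per-pixel interpolation of the eigen-weights might look like the sketch below, assuming `W` holds the weight maps on the grid of block centers at coordinates `grid_rows`/`grid_cols`, and using `scipy.interpolate.RectBivariateSpline` for the cubic splines.

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

def interpolate_weight_fields(W, grid_rows, grid_cols, out_shape):
    """Interpolate each eigen-PSF's scalar weight map from the block-center
    grid to every pixel with cubic splines; the PSF at a pixel is then the
    weighted sum of eigen-PSFs with these per-pixel weights."""
    gh, gw, n = W.shape
    rr = np.arange(out_shape[0])
    cc = np.arange(out_shape[1])
    fields = np.empty(out_shape + (n,))
    for j in range(n):
        # one bicubic spline per eigen-PSF weight map (kx = ky = 3)
        spl = RectBivariateSpline(grid_rows, grid_cols, W[:, :, j], kx=3, ky=3)
        fields[:, :, j] = spl(rr, cc)
    return fields
```

Interpolating n scalar maps is far cheaper than interpolating full K-dimensional PSF vectors per pixel, which is the point of performing the second SVD first.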
Relationship to the image estimation method proposed in Part I
The dimensionality reduction approach on the (aligned) PSFs is shared by Parts I and II of this double article. In both cases, the local PSF is modeled as a linearly deformable kernel with a reduced number of dimensions. Whereas the main motivation for using deformable kernels in Part I was computational efficiency (as linear deformable filtering can be implemented as a sum of masked convolutions), our main motivation in this second part is robustness in the characterization of a PSF field of which we only have partial and noisy knowledge. More importantly, both techniques are connected in practice by the way one would use them for blind SV image restoration: first estimating the PSF field (Part II), and then using that estimate to apply the SV image restoration described in Part I. We also believe that, by developing the PSF field estimation and the image estimation within a single framework, we add value to each of the described techniques.
Experiments: results and discussion
PSF field estimation in simulated star fields
In order to obtain an objective reference for evaluating our PSF field estimation results, we have simulated both PSF fields and star field images. We degraded the latter with noise after applying the SV blur.
Simulating SV blurred star fields
We consider three different Poisson noise levels, corresponding to maximum values of 255 (8 bits), 4095 (12 bits), and 65535 (16 bits) in the simulated star fields. We also add zero-mean Gaussian noise of unit variance. We have implemented a star field simulation giving roughly the same (relative, as observed from the Earth) magnitude statistics as real observed stars, which approximately fit the power law $n\propto {b}^{-\alpha}$, where b is the apparent brightness corresponding to magnitude m,
n being the number of stars of a given apparent magnitude m. According to actual statistics of observable stars [21], the number of stars with apparent magnitude between m and m + 1 (at least for the studied interval 0 ≤ m ≤ 10) is around three times larger than the number of stars with apparent magnitude between m − 1 and m. Considering that an increase of one unit of magnitude corresponds to a decrease of the apparent brightness by a factor of 2.15, this yields α = log(3)/log(2.15) ≈ 1.43. For simulating apparent brightness samples, we use the standard technique of integrating and inverting the pdf (a power law in this case^{f}) with this coefficient: we sample from a uniform density on [ε, 1] and apply the resulting inverse function to the uniform samples. In this case, the resulting function is still a power law, with β = −1/(α − 1) ≈ −2.3. This yields an output brightness range of [1, ε^{−2.3}], which is normalized afterwards to the desired maximum value. We have chosen ε = 0.01.
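The inverse-CDF sampling described above can be written compactly. The following is a minimal sketch under the stated values (α ≈ 1.43, ε = 0.01); the function name is our own:

```python
import numpy as np

def sample_star_brightness(n_stars, alpha=1.43, eps=0.01, rng=None):
    """Draw apparent-brightness samples following the improper power-law pdf
    p(b) ~ b^(-alpha): draw u uniformly on [eps, 1] and apply the inverted,
    integrated pdf, b = u^(-1/(alpha - 1)).  The output brightness lies in
    [1, eps^(-1/(alpha - 1))], i.e. roughly [1, eps^(-2.3)] for alpha = 1.43."""
    rng = np.random.default_rng() if rng is None else rng
    beta = -1.0 / (alpha - 1.0)              # ~ -2.3 for alpha = 1.43
    u = rng.uniform(eps, 1.0, size=n_stars)  # uniform samples on [eps, 1)
    return u ** beta
```

The brightness values would then be rescaled so that their maximum matches the desired bit depth (255, 4095, or 65535).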
We have implemented two synthetic PSF fields, BLUR1 and BLUR2, which are quite similar to the ones used in Part I. BLUR1 is defined by:
with x and y now ranging from −9 to 9. Except for the constant spatial support and the offset correction, this PSF field can approximately be interpreted as a spatial magnification of the PSF from the center (where it is sharpest), with the zoom factor being $k({x}^{0},{y}^{0})={2}^{1/2}\sqrt{1+({\left({x}^{0}\right)}^{2}+{\left({y}^{0}\right)}^{2})/({L}^{2}/6)}$, where L is the side of the (square) pattern. The second PSF field, BLUR2, is a Gaussian function whose horizontal width (standard deviation along the horizontal axis) is fixed and equal to σ_{ x } = 1.6, whereas its vertical width changes exponentially along the vertical axis as ${\sigma}_{y}({x}^{0},{y}^{0})=1.6\times {2}^{{y}^{0}/(L/2)}$. Note that these two PSF fields are defined through uniparametric PSF functions, with the parameter being a smooth function of the image coordinates. Thus, they are smooth uniparametric PSF fields.
A degraded star field simulation is obtained by: (1) choosing an image size (2048 × 2048 in our case) and a number of stars for the image (2 × 10^{4}, 4 × 10^{4}, 1.6 × 10^{5}, or 6.4 × 10^{5}), locating each star at non-integer random coordinates, and associating to each of them a random brightness according to the power law explained above; (2) applying the synthetic PSF field (BLUR1 or BLUR2); (3) normalizing the resulting maximum to the desired value (255, 4095, or 65535, for 8, 12, or 16 bits, respectively); (4) simulating Poisson statistics on the previously noise-free image; and (5) adding white Gaussian noise (zero mean, unit variance), rounded to integer values (plus a constant, to avoid negative values).
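The five steps above can be sketched end to end as follows, using a BLUR2-style Gaussian field (fixed σ_x, exponentially varying σ_y) as the blur for concreteness. This is a hedged illustration, not the authors' code: the reduced image size, the 15 × 15 kernel support, the coordinate normalization, and the additive offset constant are all assumptions made for the example.

```python
import numpy as np

def simulate_star_field(img_size=256, n_stars=200, max_val=4095,
                        sigma_x=1.6, rng=None):
    """Sketch of the degradation pipeline: place stars with power-law
    brightness, apply a shift-variant Gaussian PSF, normalize, then add
    Poisson photon statistics and rounded unit-variance Gaussian noise."""
    rng = np.random.default_rng() if rng is None else rng
    img = np.zeros((img_size, img_size))
    half = 7                                   # assumed 15x15 kernel support
    for _ in range(n_stars):
        # (1) random sub-pixel position and power-law brightness
        y, x = rng.uniform(half, img_size - half - 1, size=2)
        b = rng.uniform(0.01, 1.0) ** (-1.0 / 0.43)
        # (2) shift-variant Gaussian PSF: sigma_y depends on normalized y
        y0 = 2.0 * y / img_size - 1.0          # y0 in [-1, 1] (assumption)
        sigma_y = 1.6 * 2.0 ** y0
        iy, ix = int(round(y)), int(round(x))
        gy, gx = np.mgrid[-half:half + 1, -half:half + 1]
        psf = np.exp(-0.5 * (((gy + iy - y) / sigma_y) ** 2 +
                             ((gx + ix - x) / sigma_x) ** 2))
        psf /= psf.sum()
        img[iy - half:iy + half + 1, ix - half:ix + half + 1] += b * psf
    # (3) normalize to the desired maximum (here a 12-bit range by default)
    img *= max_val / img.max()
    # (4) Poisson photon statistics, (5) rounded Gaussian noise plus offset
    img = rng.poisson(img).astype(float)
    img = np.round(img + rng.normal(0.0, 1.0, img.shape)) + 3.0
    return img
```

Note that evaluating the Gaussian on a grid shifted by the fractional star position implements the sub-pixel placement of step (1) without any explicit interpolation.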
Results for simulated blurred star fields
Figure 3 shows the eigenvalue profiles of the two synthetic PSF fields (BLUR1 and BLUR2), and compares them to the profiles obtained by applying our method to the grid of initial PSF estimates from the simulated images, for the three noise levels considered (for comparison purposes, the global energy has been normalized in all cases). Three observations are worth making: (1) the ranked eigenvalues decay rapidly; (2) the estimated profiles fit the theoretical curves for the first two or three eigenvalues, the rest staying effectively beneath the observation noise; and (3) the influence of the different Poisson count levels (8, 12, and 16 bits) is quite small. Observing these eigenvalue profiles, it seems legitimate to use just two dimensions for the reconstruction, in both blur cases (BLUR1 and BLUR2).
Figures 4 and 5 show the structures obtained by applying the SVD to the original PSF fields BLUR1 and BLUR2, respectively. It is worth noting that the weights associated with the extracted eigenPSFs of each PSF field all share the same isolevel structure (circular for BLUR1, and vertical for BLUR2), indicating a PSF field with a single degree of freedom, as expected. In addition, the weighting functions are smooth, as expected for smooth PSF fields. The basic shape of the first two eigenPSFs and their isolevel structures are quite well reproduced in the PSF field estimated from the simulated observations, as shown in Figure 6. Here, one can fully appreciate the importance of non-linearly removing the outliers, i.e., local weight values departing from the generally spatially smooth behavior.
Table 1 shows a quantitative comparison of the estimated PSF field to the original. It lists signal-to-noise ratio values, in decibels, of the PSF field estimation for BLUR1 and BLUR2 with three Poisson noise levels and four different star densities: low density LD (20,000 simulated stars), medium density MD (40,000 simulated stars), high density HD (160,000 stars), and ultra-high density UHD (640,000 stars). While the obtained figures are relatively modest, it is worth noting the high robustness of the method against noise and changes of the star density. We only notice a slight performance decrease when the density becomes too high, which causes a large amount of PSF overlap. Figures 7, 8, and 9 further illustrate this robustness, by comparing the obtained results to samples of the original PSF fields, for different Poisson levels and star densities. They also indicate that the blur is slightly overestimated. More importantly, however, a large part of the error comes from obtaining kernels with different relative locations. This is not strictly an error in the estimate, as there is an intrinsic ambiguity in the pattern location in terms of its internal coordinates x in the model, but it still affects the numerical results negatively.^{g} Disregarding this spurious shift effect, it seems that the most important source of error in these results comes from the blurring of the local pattern field caused by obtaining our local PSF estimates on relatively large blocks. More sophisticated choices for the shapes of the spatial regions over which the local estimates are made, instead of square blocks, should improve the classical bias-variance tradeoff (bigger regions produce higher bias and lower variance, and vice versa) present in any estimation problem on non-stationary random fields, as pointed out in the conclusions.
PSF field estimation in real astronomical images
We apply our algorithm to three raw images taken with the historic Cassegrain 0.9 m (36-inch) telescope at the National Science Foundation’s Kitt Peak National Observatory: the Crescent Nebula (Caldwell 27), the Swan Nebula (M17), and the Ring Nebula (M57). All these images are coded in FITS format. They are publicly available for academic and research purposes on the observatory’s educational website: http://www.noao.edu/education/arbse/arpd/ia.
The apparent magnitude scale, which we applied in the previous simulations, is adapted to the sensitivity of the human visual system. For these experiments, we have used V-filtered raw images, because the formal photoelectric V peak (around 540 nm) is closest to the peak of the human eye’s (dark-adapted) detection efficiency. The images have not been preprocessed, in order to show the robustness of the proposed algorithm.
The images are star fields of 2048 × 2048 pixels, with ranges approximately corresponding to 16 bits. Figure 10 shows the eigenvalue profiles for the initial grid of local PSF estimates, which advise us to use three significant (above noise level) components for the Caldwell 27 image, and only two for M17 and M57. In Figure 11 we show a 15 × 15 PSF panel summarizing the result of the PSF field estimation for Caldwell 27. We observe that the PSF varies smoothly across the image, becoming more concentrated (less blurred) as we approach the bottom right corner. By looking at the three significant eigenPSFs (Figure 12, top left) we see how they modulate the spatial extension of the PSF, which basically follows an off-symmetric, close-to-circular pattern. There is something remarkable about the behavior of the associated weights: to a first approximation, they share the same isolevel curves. This means that, in this real example, the modeled PSF field is not only smooth, but also (approximately) uniparametric. This surprising result is further illustrated in Figure 13, where the three computed weights are visualized as dots in a 3D space. The result clearly shows a one-dimensional structure (a curve), reflecting a single effective degree of freedom in the PSF field.
Interestingly enough, the other two studied images have rather different PSF fields, which translate into different eigenPSF shapes, especially for the second component (see Figure 12, top, center, and right). Nevertheless, the processed weights follow remarkably closely the same approximate pattern of isolevel curves for this significant second component. Note that the lack of a clear isolevel structure in the weights associated with the first eigenPSF is not significant in these examples, as the first component has a fairly constant weight over the whole image. The same pattern appears in other images (data not shown here) from the same telescope. This pattern is not just an artifact of the estimation method: we know that because we observed a radically different (and, by comparison to the known reference, correct) behavior in the simulations. Our provisional explanation is that this isolevel structure is related to the approximately fixed PSF field component caused by the telescope structure (which, in this case, uses an off-center sensor with a parabolic mirror), whereas significant differences in the overall PSF field would be caused by the atmospheric PSF at the moment and sky location of the capture. The observed PSF field can be modeled as the result of convolving the atmospheric PSF field (approximately spatially invariant, for the small angle subtended, but variable for each image capture) with the telescope PSF field component (strongly spatially variant, in this case, but approximately constant across the image set). However, we do not yet have a solid explanation for the approximately uniparametric behavior of the PSF fields.
Image blind restoration in real astronomical images
Finally, we apply the SV restoration method from Part I, using the estimated PSF fields to compensate for the blur. Because the dominant noise source in these images is not Gaussian but Poissonian, we use a variant of the L0-AbS method [22]. In all other aspects, the image estimation method and its implementation are the same as those explained in Part I. The resulting images gain considerable sharpness, which makes it possible to resolve very close stars that appear as a single cloud in the observations (as shown in Figure 14). The spatial “averaging” effect due to the blockwise estimation of the local PSFs may cause a slight overestimation of the blur, which also appears in the simulation results. That would explain some over-sharpening effects, such as the low-contrast halo shown in the bottom right subfigure.
Conclusions and future work
We have proposed an efficient and robust method for PSF field estimation on star fields. The method is based on careful detection and subpixel alignment of the stars according to a simple statistical local model and the ML principle, followed by a combination of dimensionality reduction and non-linear filtering of outliers on the weights of the eigenPSFs. The proposed method is able to robustly capture the structure of the spatially variant blur of smooth PSF fields, in spite of the simplifying assumption that the local PSFs do not overlap. We have demonstrated this with image and PSF field simulations, using different Poisson counts and star densities.
Beyond the introduction of the new local model, the article offers interesting results obtained by applying the method to real images (from a medium-quality, Earth-based telescope image dataset), for which the method has revealed significant information about the PSF field structure. We have also demonstrated the applicability of this technique in conjunction with the SV restoration described in our companion paper (Part I), jointly building a blind SV restoration method under Poisson statistics.
We believe that the repeated-pattern local image model presented here has great potential to be extended and improved in several directions, even for tasks other than PSF field estimation (e.g., texture characterization, or other intelligent combinations of image analysis, processing, and synthesis). Perhaps the most promising and natural of these possibilities is to formalize the pattern explicitly as a low-dimensional deformable kernel in a global image model, instead of making the local approximation of considering it constant on certain regions and then building a smooth field from the resulting local estimates, as we did here. It may also be both interesting and useful to explore, at least for the cases where a simple and robust structure of the PSF field is detected (like the approximately uniparametric behavior observed in the real astronomical images studied here), alternative estimation strategies that exploit the spatial structure of this non-local redundancy. This should reduce the bias of the estimates without increasing the associated variance. Finally, another natural evolution of this type of model could come from replacing the useful but limited concept of linear dimensionality reduction with the much more powerful (and also more challenging) concept of non-linear dimensionality reduction (see, e.g., [23]), which aims at modeling the underlying manifold associated with a varying pattern, low-dimensional only in a local sense. Evidence of low-dimensional curved manifolds, like the one shown in Figure 13, is very motivating for addressing this conceptual and technical challenge for practical purposes.
Endnotes
^{a} To jointly attack the deformable PSF field estimation requires, even in the favorable case of star fields, a substantially more sophisticated approach. We are currently working on that, given the applicability of such an approach to a wider range of images and estimation problems.
^{b} If the neighborhood size is chosen appropriately, it is a safe assumption that p(x) goes to zero at the neighborhood boundaries (μ_{ p } is the offset to this “ground level”). Consequently, it is easy to estimate μ_{ p } from p_{0} and revert this decomposition, even though μ_{ p } is lost in p_{0}.
^{c} Although the sample mean may depart significantly from the background level when there is a star in the neighborhood, we have tested that this effect, in this case, is small compared to that of the variance in the denominator.
^{d} The interpolation is always computed on the original full-pixel samples to avoid accumulating error.
^{e} As in the previous section, we are keeping the PSF patterns zero-mean and L_{2}-normalized. We, like other authors (see, e.g., [15]), have found that mean subtraction followed by L_{2}-normalization generally produces a higher spectral concentration, compared to leaving the PSFs positive and normalized in volume.
^{f} Although this is an improper pdf, as it is not integrable around zero, this does not prevent us from using it for this purpose.
^{g} We intend to remove this ambiguity in future versions of this method.
References
 1.
Almeida MSC, Almeida LB: Blind and semi-blind deblurring of natural images. IEEE Trans. Image Process 2010, 19: 36–52.
 2.
Gupta A, Joshi N, Lawrence Zitnick C, Cohen M, Curless B: Single image deblurring using motion density functions. Computer Vision–ECCV 2010 (2010), pp. 171–184
 3.
Tai YW, Du H, Brown MS, Lin S: Correction of spatially varying image and video motion blur using a hybrid camera. IEEE Trans. Pattern Anal. Mach. Intell 2010, 32(6): 1012–1028.
 4.
Joshi N, Szeliski R, Kriegman DJ: PSF estimation using sharp edge prediction. IEEE Conference on Computer Vision and Pattern Recognition (CVPR 2008) (IEEE, 2008), pp. 1–8
 5.
Kee E, Paris S, Chen S, Wang J: Modeling and removing spatially-varying optical blur. IEEE International Conference on Computational Photography (ICCP 2011) (IEEE, 2011), pp. 1–8
 6.
Shan Q, Curless B, Kohno T: Seeing through obscure glass. Computer Vision–ECCV 2010, vol. 6316 of Lecture Notes in Computer Science (Springer, Berlin, 2010), pp. 364–378
 7.
Bernstein GM, Jarvis M: Shapes and shears, stars and smears: optimal measurements for weak lensing. Astronom. J 2002, 123(2): 583. 10.1086/338085
 8.
Refregier A: Shapelets—I. A method for image analysis. Month. Notices R. Astronom. Soc 2003, 338(1): 35–47. 10.1046/j.1365-8711.2003.05901.x
 9.
Denis L, Thiebaut E, Soulez F: Fast model of space-variant blurring and its application to deconvolution in astronomy. 18th IEEE International Conference on Image Processing (ICIP) (September 2011), pp. 2817–2820
 10.
Jee MJ, Ford HC, Illingworth GD, White RL, Broadhurst TJ, Coe DA, Meurer GR, Van Der Wel A, Benítez N, Blakeslee JP, et al.: Discovery of a ring-like dark matter structure in the core of the galaxy cluster Cl 0024+17. Astrophys. J 2007, 661: 728. 10.1086/517498
 11.
Perona P: Deformable kernels for early vision. IEEE Trans. Pattern Anal. Mach. Intell 1995, 17(5): 488–499. 10.1109/34.391394
 12.
Golub GH, Heath M, Wahba G: Generalized cross-validation as a method for choosing a good ridge parameter. Technometrics 1979, 21: 215–223. 10.1080/00401706.1979.10489751
 13.
Lupton R, Gunn JE, Ivezic Z, Knapp GR, Kent S: The SDSS imaging pipelines. Proceedings of Astronomical Data Analysis Software and Systems X, vol. 238, 12–15 November 2000 (Astronomical Society of the Pacific, Boston, MA, USA, 2001), pp. 269–279
 14.
Jee MJ, Blakeslee JP, Sirianni M, Martel AR, White RL, Ford HC: Principal component analysis of the time- and position-dependent point-spread function of the advanced camera for surveys. Publications Astronom. Soc. Pacific 2007, 119(862): 1403–1419. 10.1086/524849
 15.
Jee MJ, Tyson JA: Towards precision LSST weak-lensing measurement I: impacts of atmospheric turbulence and optical aberration. arXiv:1011.1913v3 [astro-ph.IM], 2011
 16.
Jarvis M, Jain B: Principal component analysis of PSF variation in weak lensing surveys. arXiv:astro-ph/0412234v2, 2004
 17.
Schrabback T, Hartlap J, Joachimi B, Kilbinger M, Simon P, Benabed K, Bradac M, Eifler T, Erben T, Fassnacht CD, William High F, Hilbert S, Hildebrandt H, Hoekstra H, Kuijken K, Marshall P, Mellier Y, Morganson E, Schneider P, Semboloni E, Van Waerbeke L, Velander M: Evidence of the accelerated expansion of the universe from weak lensing tomography with COSMOS. Astronom. Astrophys 2010, 516: 126.
 18.
Lauer TR: Deconvolution with a spatially-variant PSF. arXiv:astro-ph/0208247v1, 2002
 19.
Elad M, Figueiredo MAT, Ma Y: On the role of sparse and redundant representations in image processing. Proc. IEEE (Special Issue on Applications of Sparse Representation and Compressive Sensing) 2010, 98(6): 972–982.
 20.
Donoho DL, Johnstone IM: Ideal spatial adaptation by wavelet shrinkage. Biometrika 1994, 81: 425–455. 10.1093/biomet/81.3.425
 21.
Strous L: Mr. Sunspot’s Answer Book. National Solar Observatory/Sacramento Peak, Sunspot, NM, USA, 1996 (data available through the Wikipedia article “Apparent magnitude” and in the web archive of the original source)
 22.
Gil-Rodrigo E, Portilla J, Miraut D, Suarez-Mesa R: Efficient joint Poisson-Gauss restoration using multi-frame L2-relaxed-L0 analysis-based sparsity. 18th IEEE International Conference on Image Processing (ICIP) (September 2011), pp. 1385–1388
 23.
Lee JA, Verleysen M: Nonlinear Dimensionality Reduction (Series: Information Science and Statistics). Springer, New York, 2007
Acknowledgements
The real astronomical images used to illustrate the algorithm (Caldwell 27, M17, and M57) are copyrighted by the National Optical Astronomy Observatory/Association of Universities for Research in Astronomy/National Science Foundation. They have been used strictly for educational and research purposes, as the NOAO/AURA Image Library Conditions of Use state. We also thank the anonymous reviewers for their meaningful comments and constructive suggestions. David Miraut has been funded by grant CICYT TIN2010-21289-C02-01. Javier Portilla has been funded by grants CICYT TEC2009-13696 and CSIC PIE 201050E021. All grants are from the Spanish Ministries of Education and of Science & Innovation. Johannes Ballé’s stay in Madrid was partially funded by the CSIC.
Author information
Affiliations
Corresponding author
Additional information
Competing interests
The authors declare that they have no competing interests.
Authors’ original submitted files for images
Below are the links to the authors’ original submitted files for images.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License ( https://creativecommons.org/licenses/by/2.0 ), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
About this article
Cite this article
Miraut, D., Ballé, J. & Portilla, J. Efficient shift-variant image restoration using deformable filtering (Part II): PSF field estimation. EURASIP J. Adv. Signal Process. 2012, 193 (2012). https://doi.org/10.1186/1687-6180-2012-193
Received:
Accepted:
Published:
Keywords
 PSF estimation
 PSF field
 PSF field estimation
 Shift-variant blur
 Deformable kernel
 Dimensionality reduction
 Maximum likelihood
 Sparsity
 Star fields