Abstract:
This paper introduces Manhattan sampling in two and higher dimensions, and proves sampling theorems for it. In 2-D, Manhattan sampling, which takes samples densely along a Manhattan grid of lines, can be viewed as sampling on the union of two rectangular lattices, one dense horizontally and the other dense vertically, with the coarse spacing of each being a multiple of the fine spacing of the other. The sampling theorem shows that images bandlimited to the union of the Nyquist regions of the two rectangular lattices can be recovered from their Manhattan samples, and an efficient procedure for doing so is given. Such recovery is possible even though the spectral replicas induced by Manhattan sampling overlap. In three and higher dimensions, there are many possible configurations for Manhattan sampling, each consisting of the union of special rectangular lattices called bi-step lattices. This paper identifies them, proves a sampling theorem showing that images bandlimited to the union of the Nyquist regions of the bi-step lattices are recoverable from Manhattan samples, presents an efficient onion-peeling procedure for doing so, and shows that the union of Nyquist regions is as large as any band region whose images can all be stably reconstructed from samples taken at the rate of the Manhattan sampling. It also develops a special representation for the bi-step lattices with a number of useful properties. While most of this paper deals with continuous-space images, Manhattan sampling of discrete-space images is also considered, for images with infinite as well as finite support.
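To make the 2-D structure concrete, the following minimal sketch (Python/NumPy; the spacings r and c are illustrative, not taken from the paper) builds a Manhattan sampling mask as the union of a row-dense and a column-dense rectangular lattice:

    import numpy as np

    # Minimal sketch, not the authors' code: a 2-D Manhattan sampling mask
    # as the union of two rectangular lattices on a discrete grid. One
    # lattice is dense horizontally (all columns of every r-th row), the
    # other dense vertically (all rows of every c-th column); r and c are
    # illustrative coarse spacings, multiples of the fine spacing (1 pixel).
    def manhattan_mask(height, width, r=4, c=4):
        mask = np.zeros((height, width), dtype=bool)
        mask[::r, :] = True   # samples taken densely along every r-th row
        mask[:, ::c] = True   # samples taken densely along every c-th column
        return mask

    mask = manhattan_mask(16, 16)
    print(mask.sum(), "of", mask.size, "grid points sampled")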
Abstract:
Variable density random sampling patterns have recently become increasingly popular for accelerated imaging strategies, as they lead to incoherent aliasing artifacts. However, the design of these sampling patterns is still an open problem. Current strategies use model assumptions, such as polynomials of different orders, to generate a probability density function that is then used to generate the sampling pattern. This approach relies on the optimization of design parameters, which is very time consuming and therefore impractical for daily clinical use.
Materials and methods: This work presents a new approach that generates sampling patterns from the power spectra of existing reference data sets, and hence requires neither parameter tuning nor an a priori mathematical model of the density of sampling points.
Results: The approach is validated with downsampling experiments as well as with accelerated in vivo measurements. The proposed approach is compared with established sampling patterns, and its generalization potential is tested using a range of reference images. The downsampling experiments are evaluated quantitatively using RMS differences from the original, fully sampled data set.
Conclusion: Our results demonstrate that the image quality of the presented method is comparable to that of an established model-based strategy when the model parameter is optimized, and superior to that obtained with non-optimized model parameters. However, no random sampling pattern showed superior performance compared to conventional Cartesian subsampling for the considered reconstruction strategy.
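The core idea lends itself to a short sketch. The fragment below (a hedged illustration, not the paper's pipeline; reference and n_samples are assumed inputs) normalizes the power spectrum of a reference image into a probability density and draws k-space sample locations from it:

    import numpy as np

    def pattern_from_power_spectrum(reference, n_samples, seed=0):
        # Power spectrum of the reference data set, DC shifted to the center.
        spectrum = np.abs(np.fft.fftshift(np.fft.fft2(reference))) ** 2
        pdf = spectrum / spectrum.sum()   # spectrum reused as sampling density
        rng = np.random.default_rng(seed)
        # Draw distinct k-space locations according to the spectral density.
        idx = rng.choice(pdf.size, size=n_samples, replace=False, p=pdf.ravel())
        mask = np.zeros(pdf.shape, dtype=bool)
        mask.ravel()[idx] = True
        return mask

    reference = np.random.rand(64, 64)    # stand-in for a reference image
    mask = pattern_from_power_spectrum(reference, n_samples=1024)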
Abstract:
The paper concerns the problem of fast image processing in low-computational-power systems with limited memory, which are typical of robot vision and embedded systems. Since decisions must be based on incomplete information when the amount of visual data is too large for efficient processing, reducing that amount becomes a crucial element of the processing system. A good classical example is histogram-based image binarization, which requires knowledge of the distribution of intensities over the whole grey-scale image. Applying the Monte Carlo method to reduce the amount of data yields much smaller images with similar statistical properties, which can then be used for thresholding and binarization, e.g. with the Otsu algorithm. A relevant problem in this approach is the proper choice of the number of samples for the Monte Carlo method, which influences the result of binarization. In this paper a method based on the analysis of changes in image entropy or energy is proposed for this purpose. The results, verified on various images, are promising even for a relatively small number of samples used to estimate the histogram.
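As a rough illustration of the pipeline (the image and sample count below are placeholders; this is not the authors' implementation), the histogram can be estimated from randomly drawn pixels and fed to Otsu's criterion:

    import numpy as np

    def otsu_from_monte_carlo(image, n_samples, seed=0):
        rng = np.random.default_rng(seed)
        flat = image.ravel()
        # Monte Carlo step: estimate the histogram from a random pixel subset.
        sample = flat[rng.integers(0, flat.size, n_samples)]
        p = np.bincount(sample, minlength=256) / n_samples
        omega = np.cumsum(p)                  # class probabilities
        mu = np.cumsum(p * np.arange(256))    # cumulative means
        with np.errstate(divide="ignore", invalid="ignore"):
            # Between-class variance for every candidate threshold.
            sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1 - omega))
        return int(np.nanargmax(sigma_b))

    image = (np.random.rand(480, 640) * 255).astype(np.uint8)
    threshold = otsu_from_monte_carlo(image, n_samples=2000)
    binary = image > threshold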
Abstract:
Digital processing techniques are based on representing a continuous-time signal by a discrete set of samples. This paper treats the problem of reconstructing a periodic bandlimited signal from a finite number of its nonuniform samples. In practical applications, only a finite number of values are given. Extending the samples periodically across the boundaries, and assuming that the underlying continuous-time signal is bandlimited, provides a simple way to deal with reconstruction from finitely many samples. Two algorithms are developed for reconstructing a periodic bandlimited signal from an even and an odd number of nonuniform samples. In the first, the reconstruction functions constitute a basis, while in the second they form a frame, so that there are more samples than needed for perfect reconstruction. The advantages and disadvantages of each method are analyzed. Specifically, it is shown that the first algorithm provides consistent reconstruction of the signal, while the second is more stable in noisy environments. Next, we use the theory of finite-dimensional frames to characterize the stability of our algorithms. We then consider two special distributions of sampling points, uniform and recurrent nonuniform, and show that for these cases the reconstruction formulas as well as the stability analysis are simplified significantly. The advantage of our methods over conventional approaches is demonstrated by numerical experiments.
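In the simplest finite-dimensional view (a sketch under assumed notation, not the paper's derivation), a T-periodic signal bandlimited to harmonics |k| <= K has 2K+1 Fourier coefficients, so N >= 2K+1 nonuniform samples determine it; N > 2K+1 corresponds to the oversampled (frame) case, solved below by least squares:

    import numpy as np

    # Hedged sketch: recover the Fourier coefficients of a periodic
    # bandlimited signal from nonuniform samples via least squares.
    K, T, N = 3, 1.0, 12                     # N > 2K+1: the frame case
    rng = np.random.default_rng(0)
    t = np.sort(rng.uniform(0.0, T, N))      # nonuniform sample times
    k = np.arange(-K, K + 1)
    A = np.exp(2j * np.pi * np.outer(t, k) / T)   # N x (2K+1) sampling matrix
    c = rng.standard_normal(2 * K + 1) + 1j * rng.standard_normal(2 * K + 1)
    x = A @ c                                # the given nonuniform samples
    c_hat, *_ = np.linalg.lstsq(A, x, rcond=None)
    print(np.allclose(c_hat, c))             # True: perfect reconstruction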
Abstract:
Diffraction imaging is the science of imaging samples under diffraction conditions. Diffraction imaging techniques are well established in visible light and electron microscopy, and have also been widely employed in X-ray science in the form of X-ray topography. Over the past two decades, interest in X-ray diffraction imaging has taken flight and resulted in a wide variety of methods. This article discusses a new full-field imaging method, which uses polymer compound refractive lenses as a microscope objective to capture a diffracted X-ray beam coming from a large illuminated area on a sample. This produces an image of the diffracting parts of the sample on a camera. It is shown that this technique has added value in the field, owing to its high imaging speed, while being competitive in resolution and level of detail of obtained information. Using a model sample, it is shown that lattice tilts and strain in single crystals can be resolved simultaneously down to 10^(-3)° and Δa/a = 10^(-5), respectively, with submicrometre resolution over an area of 100 × 100 mm and a total image acquisition time of less than 60 s.
Abstract:
Printing from an NTSC source and conversion of NTSC source material to high-definition television (HDTV) format are some of the applications that motivate superresolution (SR) image and video reconstruction from low-resolution (LR) and possibly blurred sources. Existing methods for SR image reconstruction are limited by the assumptions that the input LR images are sampled progressively and that the aperture time of the camera is zero, thus ignoring the motion blur occurring during the aperture time. Because of the observed adverse effects of these assumptions for many common video sources, this paper proposes (i) a complete model of video acquisition with an arbitrary input sampling lattice and a nonzero aperture time, and (ii) an algorithm based on this model, using the theory of projections onto convex sets, to reconstruct SR still images or video from an LR time sequence of images. Experimental results with real video are provided, which clearly demonstrate that a significant increase in image resolution can be achieved by taking the motion blurring into account, especially when there is large interframe motion.
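A drastically simplified POCS iteration (box-average downsampling stands in for the paper's full acquisition model with motion blur, and delta is an assumed noise bound) conveys the reconstruction style:

    import numpy as np

    def box_downsample(hr, f):
        h, w = hr.shape
        return hr.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

    def pocs_sr(lr, f, n_iter=50, delta=0.01):
        hr = np.kron(lr, np.ones((f, f)))    # initial high-resolution guess
        for _ in range(n_iter):
            resid = box_downsample(hr, f) - lr
            # Project onto each data-consistency set: remove only the part
            # of the residual exceeding the bound delta, spread uniformly
            # over the high-resolution pixels that formed each observation.
            corr = np.sign(resid) * np.maximum(np.abs(resid) - delta, 0.0)
            hr -= np.kron(corr, np.ones((f, f)))
        return hr

    lr = np.random.rand(16, 16)              # stand-in low-resolution frame
    hr = pocs_sr(lr, f=2)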
Abstract:
Sampling a radiographic film containing grid line patterns during digitization may produce aliasing artifacts (moiré patterns). The authors propose a mathematical model, based on the laser spot size of the digitizer, the spacing of the grid lines, and the angle between the grid lines and the sampling direction, to predict the amplitudes and frequencies of the aliasing artifacts. The predicted results are compared with experimental results. Effective ways of avoiding or reducing aliasing artifacts without sacrificing too much image quality are proposed.
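The folding relation at the heart of such a model is the textbook aliasing formula (the values below are illustrative, not the authors' measurements): a grid frequency f_grid sampled at rate f_s reappears at |f_grid - m*f_s| for the nearest integer m.

    # Aliased (moire) frequency of a grid pattern after sampling; a small
    # worked example with illustrative numbers, not data from the paper.
    def aliased_frequency(f_grid, f_s):
        m = round(f_grid / f_s)          # nearest spectral replica
        return abs(f_grid - m * f_s)

    # A 4.0 line-pairs/mm grid digitized at 5.7 samples/mm folds down to:
    print(aliased_frequency(4.0, 5.7))   # 1.7 line-pairs/mm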
Abstract:
Ordered dither is considered a simple and effective method of image binarization. For rectangular lattices, it has been shown that this method of spatially periodic quantization causes aliasing in the frequency domain, resulting in a degradation of halftone image quality. This result is extended by deriving an expression for the spectrum of the halftoned image, generalized to any sampling lattice and any screen periodicity. This expression shows the explicit relationship between the screen function and halftone image quality. Using it, the nonlinearities associated with four optimized screens are analyzed to determine which screens are best suited to halftoning rectangularly and hexagonally sampled images.
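For reference, ordered dither itself reduces to a per-pixel comparison against a periodically tiled screen. The sketch below (using a standard 4x4 Bayer screen on a rectangular lattice, not one of the paper's optimized screens) shows the spatially periodic quantization being analyzed:

    import numpy as np

    # Classical 4x4 Bayer screen, thresholds normalized to [0, 1).
    BAYER4 = np.array([[ 0,  8,  2, 10],
                       [12,  4, 14,  6],
                       [ 3, 11,  1,  9],
                       [15,  7, 13,  5]]) / 16.0

    def ordered_dither(image, screen=BAYER4):
        h, w = image.shape
        sh, sw = screen.shape
        # Tile the screen over the image and binarize by comparison.
        thresholds = np.tile(screen, (h // sh + 1, w // sw + 1))[:h, :w]
        return image > thresholds

    image = np.random.rand(64, 64)        # grey levels in [0, 1]
    halftone = ordered_dither(image)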
Abstract:
Methods are reviewed for choosing regularized restorations in image processing. In particular, a method developed by Galatsanos and Katsaggelos (see ibid., vol. 1, p. 322-336, 1992) is given a Bayesian interpretation and is compared with other Bayesian and non-Bayesian alternatives. A small illustrative example is given, and the discussion of noise variance estimation by Galatsanos et al. is complemented.
Abstract:
The authors have developed a new imaging geometry, the asymmetric fan-beam (AsF), to expand the imaging field of view (FOV) for transmission imaging on current SPECT systems. The AsF geometry samples slightly more than one-half of the FOV in each projection and yields half-truncated projection data. Although each projection profile is not complete, the combined acquired data set meets the minimum sampling requirement because the other half of the FOV is sampled in the opposite projections after a full 360° rotation of the detector system. To take advantage of the simple convolution-backprojection algorithm for reconstruction, the key is in the handling of the projection profile for convolution. The authors have investigated such a technique to process the truncated projection profile for reconstruction without the side effects caused by the truncation. This technique entails filling in the truncated portion of each profile with interpolated data derived from other projections. After convolution, only the corresponding half of the original projection profile is backprojected in reconstruction. This is done to minimize propagation of interpolation errors in the reconstruction. Reconstructed images from phantoms and human subjects demonstrate that this processing technique yields good quality transmission images.
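The profile-completion step can be sketched as follows (under a parallel-beam simplification where the opposite view 180° away supplies the missing half via p(theta, s) = p(theta + 180°, -s); the paper's AsF geometry and interpolation are more involved):

    import numpy as np

    def complete_profiles(sino):
        # sino: (n_views over 360 deg, n_bins); the right half of every
        # profile is assumed truncated (zero) and is filled in from the
        # measured half of the conjugate view acquired 180 degrees later.
        n_views, n_bins = sino.shape
        half = n_bins // 2
        full = sino.copy()
        for v in range(n_views):
            opposite = (v + n_views // 2) % n_views
            full[v, half:] = sino[opposite, ::-1][half:]
        return full   # complete profiles, ready for convolution (filtering)

    sino = np.random.rand(360, 128)
    sino[:, 64:] = 0.0                    # emulate half-truncation
    filled = complete_profiles(sino)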