Abstract:
Image-based 3D reconstruction, or 3D photogrammetry, of small-scale objects such as insects and biological specimens is challenging because of the high-magnification lens with its inherently limited depth of field and the object's fine structures. Traditional 3D reconstruction techniques therefore cannot be applied without additional image preprocessing. One such preprocessing technique is multifocus stacking/fusion, which combines a set of partially focused images captured at different distances from the same viewing angle into a single in-focus image. We found that image formation is not properly considered by the traditional multifocus image capture and stacking techniques: the resulting in-focus images contain artifacts that violate perspective projection, and a 3D reconstruction using such images often fails to produce accurate 3D models of the captured objects. This paper shows how this problem can be solved effectively by a new multifocus multiview 3D reconstruction procedure that includes a new Fixed-Lens multifocus image capture scheme and a calibrated image registration technique using an analytic homography transformation. Experimental results on real and synthetic images demonstrate the effectiveness of the proposed solutions: both the fixed-lens image capture and the multifocus stacking with calibrated image alignment significantly reduce the errors in the camera poses and produce more complete 3D reconstructed models than the conventional moving-lens image capture and multifocus stacking.
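As a rough, hypothetical sketch of the calibrated registration idea (the paper's actual analytic homography is not reproduced here): for a fixed-lens focal stack, the dominant change between slices can often be modeled as a pure magnification about the principal point, which yields a simple 3x3 homography. The scale factor `s` and principal point `(cx, cy)` below are assumed to come from calibration.

```python
import numpy as np

def scale_homography(s, cx, cy):
    """Homography for a pure magnification by factor s about the principal
    point (cx, cy) -- a simple model of the scale change between slices
    of a focal stack captured with a fixed lens."""
    return np.array([[s, 0.0, cx * (1 - s)],
                     [0.0, s, cy * (1 - s)],
                     [0.0, 0.0, 1.0]])

def apply_homography(H, pts):
    """Apply a 3x3 homography to an (n, 2) array of pixel coordinates."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    out = pts_h @ H.T
    return out[:, :2] / out[:, 2:3]                   # back to pixel coords
```

Under this model, the principal point is a fixed point of the registration and every other pixel moves radially, which is what a calibrated alignment must undo before stacking.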
Abstract:
Yarn hairiness is an important indicator of yarn quality that affects weaving production and fabric appearance. In addition to many dedicated instruments, various image analysis systems have been adopted to measure yarn hairiness for their potential high accuracy and low cost. However, there is a common problem in acquiring yarn images: hairy fibers protruding beyond the depth of field of the imaging system cannot be fully focused, and the fuzzy fibers in the image inevitably introduce errors into the hairiness data. This paper presents a project that attempts to solve the off-focus problem of hairy fibers by applying a new imaging scheme: multifocus image fusion. This scheme uses compensatory information in sequential images taken at the same position but at different depths to construct a new image whose pixels have the highest sharpness among the sequential images. The fused image possesses clearer fiber edges, permitting more complete fiber segmentation and tracing. In the experiments, we used six yarns of different fiber contents and spinning methods to compare the hairiness measurements from the fused images with those from unfused images and from the Uster tester.
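The fusion scheme described, keeping at each pixel the value from the slice where it is sharpest, can be sketched in a minimal form (an illustrative reimplementation using an absolute-Laplacian focus measure, not the authors' code; images are plain nested lists here):

```python
def laplacian_sharpness(img, y, x):
    """Absolute discrete Laplacian at (y, x), used as a per-pixel focus measure."""
    return abs(4 * img[y][x]
               - img[y - 1][x] - img[y + 1][x]
               - img[y][x - 1] - img[y][x + 1])

def fuse_by_sharpness(stack):
    """For each interior pixel, keep the value from the image in which
    that pixel is sharpest; borders fall back to the first image."""
    h, w = len(stack[0]), len(stack[0][0])
    fused = [row[:] for row in stack[0]]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            best = max(stack, key=lambda img: laplacian_sharpness(img, y, x))
            fused[y][x] = best[y][x]
    return fused
```

A practical system would smooth the per-pixel decision map to avoid isolated mis-selections, but the core "highest sharpness wins" rule is as above.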
Abstract:
In multiscale transform (MST)-based multifocus image fusion, the fusion rules for the different subbands are a significant factor affecting the fusion performance. However, relying only on a new fusion rule brings no significant performance gain for an MST-based method. To address this problem, this paper proposes two novel multifocus image fusion techniques based on multi-scale and multi-direction neighbor distance (MMND), in which the improvements in fusion performance are achieved by two newly developed updating schemes. These two schemes are constructed according to the fact that the difference between a low-quality fused result and the source image in the focused region is sharper than that generated by a high-quality fused result. Based on this fact, the pixels of the source images are classified into three types in the updating mechanism: pixels of focused significant regions, pixels of smooth regions, and pixels of the transition area between the focused and defocused regions. According to these categories, the fused result produced by the MMND method is updated in both the spatial and the MMND domains. Extensive experimental results validate that the two proposed fusion schemes achieve better results than some state-of-the-art algorithms.
Abstract:
A wavelet and guided filter based fusion method for multifocus images is proposed. The existing guided filter based scheme has limited performance on images containing noise. Wavelet-based denoising and guided filter based weight maps are proposed to overcome this limitation. Simulation results, analysed both visually and quantitatively, demonstrate the effectiveness of the suggested scheme. (C) 2015 Elsevier GmbH. All rights reserved.
Abstract:
Image fusion has been receiving increasing attention in the research community with the aim of investigating general formal solutions to a wide spectrum of applications such as multifocus, multiexposure, multispectral (IR-visible) and multimodal medical (CT and MRI) image and video fusion. While there exist many fusion techniques for each of these applications, it is difficult to formulate a common fusion technique that works equally well for all of them. This is mainly because of the different characteristics of the images involved in the various applications and the correspondingly different requirements on the fused image. In this work, we propose a common generalized fusion framework for all these classes, based on the statistical properties of the local neighborhood of a pixel. As the eigenvalue of the unbiased estimate of the covariance matrix of an image block depends on the strength of edges in that block, we propose to employ it to compute a quantity we call the significance of a pixel. This generalized pixel significance can in turn be used as a measure of the useful information content in that block, and hence in the fusion process. Several data sets were fused to compare the results with various recently published methods. The analysis shows that, for all the image types under consideration, the proposed methods improve the quality of the fused image, both visually and quantitatively, by preserving all the relevant information.
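A minimal sketch of the eigenvalue-based significance idea, under two stated assumptions: the block's rows are treated as observations of its columns when forming the unbiased covariance estimate, and fusion is done with a simple block-wise "choose the more significant source" rule rather than the paper's pixel-level scheme:

```python
import numpy as np

def block_significance(block):
    """Largest eigenvalue of the unbiased covariance estimate of a block
    (np.cov divides by n-1); larger eigenvalues indicate stronger edges."""
    cov = np.cov(block, rowvar=False)  # columns are variables, rows observations
    return float(np.linalg.eigvalsh(cov).max())

def fuse_blocks(img_a, img_b, size=4):
    """Choose, block by block, the source whose block is more significant."""
    fused = img_a.copy()
    h, w = img_a.shape
    for y in range(0, h, size):
        for x in range(0, w, size):
            a = img_a[y:y + size, x:x + size]
            b = img_b[y:y + size, x:x + size]
            if block_significance(b) > block_significance(a):
                fused[y:y + size, x:x + size] = b
    return fused
```

A flat (defocused) block has a near-zero covariance matrix and hence near-zero significance, so the block containing real structure wins.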
Abstract:
This paper presents a dynamic-segmented morphological wavelet fusion method (DSMWF) and a dynamic-segmented cut-and-paste fusion method (DSCP). Non-focus regions tend to spread around within multifocus images. The proposed methods first divide each multifocus image into segments and select the sharpest segment at each location across all images as the "focus segment", based on DCT spectrum concentration in the high-frequency sub-band. Each focus segment is further divided into smaller blocks of uniform visual complexity d based on Laplacian edge density. Finally, DSMWF applies a single-level variable-size morphological wavelet fusion to each 2^d × 2^d block, while DSCP directly cuts and pastes the sharpest 2^d × 2^d block, to obtain a fused image. The experimental results demonstrate that (a) the PSNR of the fused image using DSMWF is 2-3 dB better than that of MMWF on average, (b) the occurrence of reconstructed pixels with position errors and underflow values is greatly reduced with DSMWF, (c) the performance of DSCP is much superior to that of both MMWF and DSMWF, and (d) block sharpness assessment based on DCT spectrum concentration in the high-frequency sub-band performs better than DWT and Laplacian edge measures for this application.
Abstract:
Image fusion has been receiving increasing attention in the research community with the aim of investigating general formal solutions to a wide spectrum of applications. The objective of this work is to formulate a method that can efficiently fuse multifocus as well as multispectral images for context enhancement and thus can be used by different applications. We propose a novel pixel fusion rule based on multiresolution decomposition of the source images using the wavelet, wavelet-packet, and contourlet transforms. To compute a fused pixel value, we take a weighted average of the source pixels, where the weight given to a pixel is adaptively decided based on the significance of the pixel, which in turn is determined by the corresponding children pixels in the finer resolution bands. The fusion performance has been extensively tested on different types of images, viz. multifocus images, medical images (CT and MRI), as well as IR-visible surveillance images. Several pairs of images were fused to compare the results quantitatively as well as qualitatively with various recently published methods. The analysis shows that, for all the image types under consideration, the proposed method increases the quality of the fused image significantly, both visually and quantitatively, by preserving all the relevant information. The major achievement is an average 50% reduction in artifacts.
Abstract:
Multifocus image fusion has emerged as a challenging research area due to the availability of various image-capturing devices. The optical lenses that are widely utilized in image-capturing devices have limited 'depth-of-focus' and, therefore, only the objects that lie within a particular depth remain 'in-focus', whereas all the other objects go 'out-of-focus'. In order to obtain an image where all the objects are well focused, a multifocus image fusion method based on the waveatom transform is proposed. The core idea is to decompose all input images using the waveatom transform and perform fusion of the resultant waveatom coefficients. The waveatom coefficients with higher visibility, corresponding to sharper image intensities, are used to perform the image fusion. Finally, the fused image is obtained by performing the inverse waveatom transform. The performance of the proposed method is demonstrated by performing fusion on different sets of multifocus images and comparing the results with those of existing image fusion methods.
Abstract:
A spatial domain multifocus image fusion method is proposed using a structure-preserving filter. In particular, the latest recursive filter (RF) is introduced as the structure-preserving filter in the proposed spatial domain method. Moreover, a focused-region detection method based on an average low-pass filter is presented to determine the initial weight maps. A fused image is then generated from the final weight maps, which are obtained by using the RF to refine the initial weight maps and thus preserve the structures of the source images well. Experimental results show that the proposed method is superior to state-of-the-art multifocus fusion methods in terms of both subjective and objective evaluation. (C) 2019 SPIE and IS&T
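A one-dimensional sketch of an edge-aware recursive filter in the spirit of the RF used for weight-map refinement (modeled loosely on a domain-transform-style recursive filter; the parameters `sigma_s` and `sigma_r` are illustrative, not the paper's settings): smoothing of a weight map is suppressed wherever the guide image has strong gradients, so the refined weights stay crisp at object boundaries.

```python
import numpy as np

def recursive_filter_1d(signal, guide, sigma_s=10.0, sigma_r=0.1):
    """Edge-aware recursive smoothing of a 1-D weight map. The feedback
    weight shrinks where the guide has large gradients, so edges in the
    guide block the propagation of smoothing."""
    a = np.exp(-np.sqrt(2) / sigma_s)                     # base feedback coefficient
    d = 1 + (sigma_s / sigma_r) * np.abs(np.diff(guide))  # gradient-inflated distance
    w = a ** d                                            # per-gap feedback weight
    out = signal.astype(float).copy()
    for i in range(1, len(out)):                          # left-to-right pass
        out[i] += w[i - 1] * (out[i - 1] - out[i])
    for i in range(len(out) - 2, -1, -1):                 # right-to-left pass
        out[i] += w[i] * (out[i + 1] - out[i])
    return out
```

A 2-D implementation would alternate horizontal and vertical passes over the weight map, guided by the source image.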
Abstract:
The main issue with multifocus images lies in obtaining reliable information for identifying objects when the individual images resolve detail poorly. Hence, image fusion methods have attracted attention as a way to obtain a single well-resolved image from a pair of multifocus images. The present work develops an image fusion methodology based on multiresolution analysis for feature extraction, and discusses the Laplacian pyramid algorithm for recovering better morphological details. Five sets of multifocus images in different formats were processed with sixteen different image fusion algorithms, including the proposed method, and various statistical metrics were evaluated for each. A careful comparison of the visual and objective metrics reveals that the proposed method performs best, in terms of both visual quality and the statistical metrics.
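A minimal Laplacian-pyramid fusion sketch (illustrative only: the 2x2 box-filter downsampling, nearest-neighbour upsampling, and max-absolute-coefficient selection rule are simplifying assumptions, not the paper's exact algorithm; image sides must be divisible by 2^levels):

```python
import numpy as np

def down(img):
    """2x downsample by averaging each 2x2 block."""
    return (img[0::2, 0::2] + img[1::2, 0::2]
            + img[0::2, 1::2] + img[1::2, 1::2]) / 4

def up(img):
    """Nearest-neighbour 2x upsample."""
    return np.repeat(np.repeat(img, 2, axis=0), 2, axis=1)

def laplacian_pyramid(img, levels):
    """Detail (band-pass) levels followed by the low-pass residual."""
    pyr, cur = [], img.astype(float)
    for _ in range(levels):
        small = down(cur)
        pyr.append(cur - up(small))   # detail lost by downsampling
        cur = small
    pyr.append(cur)                   # low-pass residual
    return pyr

def fuse_laplacian(img_a, img_b, levels=2):
    """Fuse by keeping the stronger detail coefficient at each level,
    averaging the residual, then collapsing the pyramid."""
    pa, pb = laplacian_pyramid(img_a, levels), laplacian_pyramid(img_b, levels)
    fused = [np.where(np.abs(la) >= np.abs(lb), la, lb)
             for la, lb in zip(pa[:-1], pb[:-1])]
    fused.append((pa[-1] + pb[-1]) / 2)
    rec = fused[-1]
    for lap in reversed(fused[:-1]):
        rec = up(rec) + lap
    return rec
```

Because detail coefficients encode local contrast, the max-absolute rule transfers the in-focus structure from whichever source image carries it at each scale.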