Abstract:
Purpose of the Software: Image reconstruction at the scanner's console is to some extent a black box, and no offline, out-of-the-box image reconstruction pipeline is publicly available. While tools such as Gadgetron [1] and BART [2] exist, they offer a set of tools for reconstructing data rather than an easy-to-use reconstruction pipeline. The presented pipeline (Fig. 1) currently handles only Siemens raw data following a Cartesian trajectory. However, it is easily extensible, e.g. to deal with radially undersampled data using BART or to read the ISMRM raw data format [3]. Using a readily available open-source reconstruction pipeline promotes reproducible research; therefore, the pipeline will be made freely available soon.
Abstract:
Purpose: People have long wondered whether a filtered backprojection (FBP) algorithm is able to incorporate measurement noise into image reconstruction. The purpose of this tutorial is to develop an FBP algorithm that minimizes an objective function with an embedded noise model. Methods: An objective function is first set up to model measurement noise and to enforce constraints so that the resultant image has pre-specified properties. An iterative algorithm is used to minimize the objective function, and the result of the iterative algorithm is then converted into the Fourier domain, which in turn leads to an FBP algorithm. The model-based FBP algorithm is almost the same as the conventional FBP algorithm except for the filtering step. Results: The model-based FBP algorithm has been applied to low-dose x-ray CT, nuclear medicine, and real-time MRI applications. Compared with the conventional FBP algorithm, the model-based FBP algorithm is more effective in reducing noise. Even though an iterative algorithm can achieve the same noise-reducing performance, the model-based FBP algorithm is much more computationally efficient. Conclusions: The model-based FBP algorithm is an efficient and effective image reconstruction tool. In many applications it can replace state-of-the-art iterative algorithms, which usually carry a heavy computational cost. The model-based FBP algorithm is linear, which gives it advantages over nonlinear iterative algorithms in parametric image reconstruction and noise analysis.
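The filtering step that the abstract singles out can be illustrated with a toy 1-D projection filter. This is a hedged sketch, not the paper's actual filter: the function name `filtered_projection` and the damped ramp |w| / (1 + beta·w²) are illustrative assumptions standing in for the noise-model-derived filter, which is application-specific. Setting beta = 0 recovers the conventional ramp filter; beta > 0 attenuates high frequencies, mimicking how a noise model changes only the filtering step.

```python
import numpy as np

def filtered_projection(proj, beta=0.0):
    """Filter one parallel-beam projection in the Fourier domain.

    beta = 0: conventional ramp filter |w|.
    beta > 0: illustrative model-based filter |w| / (1 + beta * w**2)
    that damps high frequencies (an assumed stand-in, not the paper's
    exact noise-model-derived filter).
    """
    n = len(proj)
    w = np.fft.fftfreq(n)                  # normalized frequencies in cycles/sample
    H = np.abs(w) / (1.0 + beta * w**2)    # modified ramp filter
    return np.real(np.fft.ifft(np.fft.fft(proj) * H))
```

Backprojecting such filtered projections over all view angles would complete an FBP reconstruction; only the filter H changes between the conventional and model-based variants.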
Abstract:
For nearly four decades, adaptive beamforming (ABF) algorithms have been applied in RADAR and SONAR signal processing. These algorithms reduce the contribution of undesired off-axis signals while maintaining a desired response along a specific look direction. Typically, higher resolution and contrast are attainable using adaptive beamforming at the price of an increased computational load. In this paper, we describe a novel ABF designed for medical ultrasound, named the Time-domain Optimized Near-field Estimator (TONE). We performed a series of simulations using synthetic ultrasound data to test the performance of this algorithm and compared it to the conventional, data-independent delay-and-sum beamforming (CBF) method. We also performed experiments using a Philips SONOS 5500 phased array imaging system. CBF was applied using the default parameters of the Philips scanner, whereas TONE was applied to per-channel, unfocused data using an unfocused transmit beam. TONE images were reconstructed at a sampling of 67 µm laterally and 19 µm axially. The results obtained for a series of five 20-µm wires in a water tank show a significant improvement in spatial resolution when compared to CBF. We also analyzed the performance of TONE as a function of speed-of-sound errors and array sparsity, finding it robust to both.
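The conventional CBF baseline the abstract compares against can be sketched in a few lines. This is a minimal delay-and-sum sketch under assumed simplifications (integer-sample focusing delays, no apodization), not TONE itself, which the abstract does not specify in enough detail to reproduce:

```python
import numpy as np

def delay_and_sum(channel_data, delays_samples):
    """Conventional delay-and-sum beamforming (CBF), minimal sketch.

    channel_data   : (n_channels, n_samples) per-channel RF data
    delays_samples : (n_channels,) integer focusing delays, in samples
    Returns the beamformed trace: each channel is advanced by its
    delay so echoes from the look direction align, then summed.
    """
    n_ch, n_s = channel_data.shape
    out = np.zeros(n_s)
    for c in range(n_ch):
        d = int(delays_samples[c])
        out[:n_s - d] += channel_data[c, d:]   # shift left by d, then sum
    return out
```

An adaptive beamformer such as TONE replaces the fixed, data-independent summation weights implicit here with weights (or a model fit) derived from the received data itself.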
Abstract:
Filtered backprojection (FBP) algorithms reduce image noise by smoothing the image. Iterative algorithms reduce image noise by noise weighting and regularization. It is believed that iterative algorithms are able to reduce noise without sacrificing image resolution, and thus iterative algorithms, especially maximum-likelihood expectation maximization (MLEM), are used in nuclear medicine to replace FBP algorithms. Methods: This short paper uses counterexamples to show that this belief is not true. We compare image noise variance for FBP and MLEM reconstructions having the same spatial resolution. Results: The truth is that although MLEM suppresses image noise, it does so by sacrificing image resolution as well; the performance of windowed FBP may be better than that of MLEM in our case study. Conclusion: The myth of the superiority of iterative algorithms is caused by comparing them with conventional FBP instead of with windowed FBP. However, we do not intend to generalize the comparison results to imply which algorithm is more favorable.
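For readers unfamiliar with the MLEM algorithm compared here, the standard multiplicative MLEM update can be sketched for a toy system matrix. This is the textbook update x ← x · (Aᵀ(y / Ax)) / (Aᵀ1), not the specific reconstructions of the paper; the matrix and data below are made-up illustrations:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood EM for emission tomography (toy sketch).

    A : (m, n) system matrix mapping image x to projection means A @ x
    y : (m,) measured counts
    """
    x = np.ones(A.shape[1])           # strictly positive starting image
    sens = A.sum(axis=0)              # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Tiny made-up example: 2-pixel image, 3 line integrals, noiseless data
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x_true = np.array([2.0, 3.0])
y = A @ x_true
x_hat = mlem(A, y, n_iter=200)        # converges toward x_true
```

Note the update is nonlinear in y (it divides by the current forward projection), which is why the abstract's point about linear FBP versus nonlinear iterative methods matters for noise analysis.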
Abstract:
Fluorescence molecular tomography (FMT) plays an important role in small animal imaging. However, due to the diffusive nature of photon propagation in biological tissues, FMT suffers from low spatial resolution, which limits its capability to resolve the distribution of fluorescent biomarkers. In this paper, we investigate the effect of functional and structural information on the accuracy of FMT reconstruction using a hybrid FMT and X-ray computed tomography imaging system. The results from numerical simulations and phantom experiments suggest that fluorescence targets embedded in a heterogeneous medium can be localized when structural information is used to constrain the reconstruction process. In addition, both functional and structural information are essential for recovering the fluorophore concentration.
Abstract:
Convergence of iterative algorithms can be improved by updating groups of voxels sequentially rather than updating the whole image simultaneously. The optimal choice is groups of uncoupled voxels, i.e. voxels spread over the reconstruction volume. While this is most efficient in terms of convergence, updating groups of spread-out voxels is less efficient in terms of memory access and computational burden. In this work, an image-block update scheme is presented that updates relatively large groups of voxels simultaneously while keeping a considerable gain in convergence. The sequential image-block update can also be combined with ordered subsets. This image-block or patchwork scheme is applied to both transmission and emission maximum-likelihood algorithms.
Abstract:
Ordered subsets expectation maximization (OS-EM) is widely used to accelerate image reconstruction in single photon emission computed tomography (SPECT). The speedup of OS-EM over maximum likelihood expectation maximization (ML-EM) is close to the number of subsets used. Although a high number of subsets can shorten reconstruction times significantly, it can also cause severe image artifacts, such as improper erasure of reconstructed activity, if projections contain few counts. We recently showed that such artifacts can be prevented by using a count-regulated OS-EM (CR-OS-EM) algorithm, which automatically adapts the number of subsets for each voxel based on the estimated number of counts that the voxel contributed to the projections. While CR-OS-EM reached high speedup over ML-EM in high-activity regions of images, speed in low-activity regions could still be very slow. In this work we propose similarity-regulated OS-EM (SR-OS-EM) as a much faster alternative to CR-OS-EM. SR-OS-EM also automatically and locally adapts the number of subsets, but it uses a different criterion for subset regulation: the number of subsets used for updating an individual voxel depends on how similarly the reconstruction algorithm would update the estimated activity in that voxel with different subsets. Reconstructions of an image quality phantom and of in vivo scans show that SR-OS-EM retains all of the favorable properties of CR-OS-EM, while reconstruction speed can be up to an order of magnitude higher in low-activity regions. Moreover, our results suggest that SR-OS-EM can be operated with identical reconstruction parameters (including the number of iterations) for a wide range of count levels, which can be an additional advantage from a user perspective, since users would only have to post-filter an image to present it at an appropriate noise level.
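The baseline OS-EM scheme being regulated here can be sketched as follows. This is plain OS-EM with a fixed, global subset count (one ML-EM-style update per projection subset), not the voxel-adaptive CR-OS-EM or SR-OS-EM variants of the abstract; the interleaved subset partition is an assumed simplification:

```python
import numpy as np

def os_em(A, y, n_subsets=2, n_iter=20):
    """Ordered-subsets EM, minimal sketch with a fixed subset count.

    A : (m, n) system matrix, y : (m,) measured counts.
    Each sub-iteration applies an MLEM-style multiplicative update
    using only one interleaved subset of the projections, so one full
    pass over the data performs n_subsets image updates.
    """
    m, n = A.shape
    subsets = [np.arange(s, m, n_subsets) for s in range(n_subsets)]
    x = np.ones(n)
    for _ in range(n_iter):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            proj = As @ x
            ratio = ys / np.maximum(proj, 1e-12)
            x *= (As.T @ ratio) / np.maximum(As.sum(axis=0), 1e-12)
    return x
```

CR-OS-EM and SR-OS-EM differ in that the effective `n_subsets` is chosen per voxel, from the voxel's count contribution or from the similarity of its candidate subset updates, respectively.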
Abstract:
It is established that a signal, either continuous or discrete, is determined up to a multiplicative constant by its windowed Fourier phase (WFP) at any frequency. This result indicates that the WFP contains richer information than zero crossings and peaks. It also provides insight into why WFP-based image matching may achieve highly accurate and stable results. To experimentally demonstrate the completeness of the WFP, an algorithm is developed to reconstruct signals from the WFP.
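The "determined up to a multiplicative constant" property can be checked numerically for a discrete signal. The sketch below is an assumed direct implementation of the windowed Fourier transform at a single frequency (the function name and sliding-dot-product formulation are illustrative, not the paper's reconstruction algorithm); scaling the signal by a positive constant leaves the phase unchanged at every window position:

```python
import numpy as np

def windowed_fourier_phase(x, win, freq):
    """Phase of the windowed Fourier transform of x at one frequency.

    x    : 1-D real signal
    win  : analysis window, length <= len(x)
    freq : normalized frequency in cycles/sample
    Returns the phase at each valid window position.
    """
    L = len(win)
    n = np.arange(L)
    kernel = win * np.exp(-2j * np.pi * freq * n)   # modulated window
    coeffs = np.array([np.dot(kernel, x[t:t + L])
                       for t in range(len(x) - L + 1)])
    return np.angle(coeffs)
```

Reconstructing x from these phase samples (the completeness result) requires the iterative algorithm the abstract refers to; the sketch only exhibits the invariance that makes recovery possible at best up to a constant.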
Abstract:
Many approaches to high-resolution image reconstruction have been proposed in the literature. One of the most common is to reconstruct a high-resolution image from a number of rotated and translated low-resolution images. In this process, exposure differences among the original images degrade the quality of the reconstructed image. To remove the influence of these exposure differences, a light energy matching method is proposed in this paper. The theoretical analysis is presented in detail. Experimental results show that the theoretical analysis is correct and the proposed method is effective.
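The abstract does not spell out the matching procedure, so the following is only a hedged guess at the simplest form such a method could take: a per-frame gain correction that equalizes mean intensity (a rough proxy for captured light energy) across the low-resolution frames before fusion. The function name `match_exposure` and the mean-based criterion are assumptions, not the paper's formulation:

```python
import numpy as np

def match_exposure(images, ref_index=0):
    """Equalize mean intensity across frames before SR fusion (sketch).

    Scales each low-resolution frame so its mean intensity matches the
    reference frame's, removing a global exposure (gain) difference.
    Assumed simplification: exposure differences are purely
    multiplicative and spatially uniform.
    """
    ref_mean = images[ref_index].mean()
    return [img * (ref_mean / img.mean()) for img in images]
```

A real method would likely also handle offset (black-level) differences and restrict the statistics to overlapping regions after registration.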
Abstract:
Image reconstruction from partial observations has attracted increasing attention. Conventional image reconstruction methods with hand-crafted priors often fail to recover fine image details due to the poor representation capability of the hand-crafted priors. Deep learning methods attack this problem by directly learning mapping functions between the observations and the target images, and can achieve much better results. However, most powerful deep networks lack transparency and are nontrivial to design heuristically. This paper proposes a novel image reconstruction method based on the maximum a posteriori (MAP) estimation framework using a learned Gaussian scale mixture (GSM) prior. Unlike existing unfolding methods that only estimate the image means (i.e., the denoising prior) but neglect the variances, we propose characterizing images by GSM models with means and variances learned through a deep network. Furthermore, to learn the long-range dependencies of images, we develop an enhanced variant based on the Swin Transformer for learning GSM models. All parameters of the MAP estimator and the deep network are jointly optimized through end-to-end training. Extensive simulation and real-data experimental results on spectral compressive imaging and image super-resolution demonstrate that the proposed method outperforms existing state-of-the-art methods.