Index of content:
Volume 40, Issue 3, March 2013
Segmentation of breast lesions on dynamic contrast enhanced (DCE) magnetic resonance imaging (MRI) is the first step in lesion diagnosis in a computer-aided diagnosis framework. Because manual segmentation of such lesions is both time consuming and highly susceptible to human error and issues of reproducibility, an automated lesion segmentation method is highly desirable. Traditional automated image segmentation methods such as boundary-based active contour (AC) models require a strong gradient at the lesion boundary. Even when region-based terms are introduced to an AC model, grayscale image intensities often do not allow for clear definition of foreground and background region statistics. Thus, there is a need to find alternative image representations that might provide (1) strong gradients at the margin of the object of interest (OOI); and (2) larger separation between intensity distributions and region statistics for the foreground and background, which are necessary to halt evolution of the AC model upon reaching the border of the OOI.Methods:
In this paper, the authors introduce a spectral embedding (SE) based AC (SEAC) for lesion segmentation on breast DCE-MRI. SE, a nonlinear dimensionality reduction scheme, is applied to the DCE time series in a voxelwise fashion to reduce several time point images to a single parametric image where every voxel is characterized by the three dominant eigenvectors. This parametric eigenvector image (PrEIm) representation allows for better capture of image region statistics and stronger gradients for use with a hybrid AC model, which is driven by both boundary and region information. They compare SEAC to ACs that employ fuzzy c-means (FCM) and principal component analysis (PCA) as alternative image representations. Segmentation performance was evaluated by boundary and region metrics as well as comparing lesion classification using morphological features from SEAC, PCA+AC, and FCM+AC.Results:
On a cohort of 50 breast DCE-MRI studies, PrEIm yielded overall better region and boundary-based statistics compared to the original DCE-MR image, FCM, and PCA based image representations. Additionally, SEAC outperformed a hybrid AC applied to both PCA and FCM image representations. Mean Dice similarity coefficient (DSC) for SEAC was significantly better (DSC = 0.74 ± 0.21) than FCM+AC (DSC = 0.50 ± 0.32) and similar to PCA+AC (DSC = 0.73 ± 0.22). Boundary-based metrics of mean absolute difference and Hausdorff distance followed the same trends. Of the automated segmentation methods, breast lesion classification based on morphologic features derived from SEAC segmentation using a support vector machine classifier also performed better (AUC = 0.67 ± 0.05; p < 0.05) than FCM+AC (AUC = 0.50 ± 0.07) and PCA+AC (AUC = 0.49 ± 0.07). Conclusions:
In this work, we presented SEAC, an accurate, general purpose AC segmentation tool that could be applied to any imaging domain that employs time series data. SE allows for projection of time series data into a PrEIm representation so that every voxel is characterized by the dominant eigenvectors, capturing the global and local time-intensity curve similarities in the data. This PrEIm allows for the calculation of strong tensor gradients and better region statistics than the original image intensities or alternative image representations such as PCA and FCM. The PrEIm also allows for building a more accurate hybrid AC scheme.
40(2013); http://dx.doi.org/10.1118/1.4773027
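As a concrete illustration of the embedding step described above, the following minimal Python sketch builds a Gaussian affinity between voxel time-intensity curves and keeps the first nontrivial Laplacian eigenvectors as a three-component per-voxel embedding. The affinity kernel, bandwidth heuristic, and toy data are assumptions for illustration; this is not the authors' SEAC implementation.

# Minimal sketch of a voxelwise spectral embedding of DCE time-intensity curves.
# Assumptions (not from the paper): Gaussian affinity, median-distance bandwidth,
# unnormalized graph Laplacian, synthetic data in place of a real DCE series.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_timepoints = 500, 8
X = rng.random((n_voxels, n_timepoints))               # toy time-intensity curves

d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)    # pairwise squared distances
W = np.exp(-d2 / np.median(d2))                        # Gaussian affinity
L = np.diag(W.sum(axis=1)) - W                         # unnormalized graph Laplacian

evals, evecs = np.linalg.eigh(L)                       # eigenvalues in ascending order
# The "dominant eigenvectors" of the abstract correspond here to the first
# nontrivial Laplacian eigenvectors; keep three of them per voxel (PrEIm-like).
embedding = evecs[:, 1:4]
print(embedding.shape)                                 # (500, 3)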
- MEDICAL PHYSICS LETTER
Development of XFCT imaging strategy for monitoring the spatial distribution of platinum-based chemodrugs: Instrumentation and phantom validation. 40(2013); http://dx.doi.org/10.1118/1.4789917 Purpose:
Developing an imaging method to directly monitor the spatial distribution of platinum-based (Pt) drugs at the tumor region is of critical importance for early assessment of treatment efficacy and personalized treatment. In this study, the authors investigated the feasibility of imaging platinum (Pt)-based drug distribution using x-ray fluorescence (XRF, a.k.a. characteristic x ray) CT (XFCT).Methods:
A 5-mm-diameter pencil beam produced by a polychromatic x-ray source equipped with a tungsten anode was used to stimulate emission of XRF photons from Pt drug embedded within a water phantom. The phantom was translated and rotated relative to the stationary pencil beam in a first-generation CT geometry. The x-ray energy spectrum was collected for 18 s at each position using a cadmium telluride detector. The spectra were then used for the K-shell XRF peak isolation and sinogram generation for Pt. The distribution and concentration of Pt were reconstructed with an iterative maximum likelihood expectation maximization algorithm. The capability of XFCT to multiplexed imaging of Pt, gadolinium (Gd), and iodine (I) within a water phantom was also investigated.Results:
The measured XRF spectrum showed a sharp peak characteristic of Pt with a narrow full-width at half-maximum (FWHM_Kα1 = 1.138 keV, FWHM_Kα2 = 1.052 keV). The distribution of Pt drug in the water phantom was clearly identifiable on the reconstructed XRF images. Our results showed a linear relationship between the XRF intensity of Pt and its concentrations (R² = 0.995), suggesting that XFCT is capable of quantitative imaging. A transmission CT image was also obtained to show the potential of the approach for providing attenuation correction and morphological information. Finally, the distribution of Pt, Gd, and I in the water phantom was clearly identifiable in the reconstructed images from XFCT multiplexed imaging. Conclusions:
XFCT is a promising modality for monitoring the spatial distribution of Pt drugs. The technique may be useful in tailoring tumor treatment regimen in the future.
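For readers unfamiliar with the reconstruction step mentioned above, the following is a minimal sketch of an MLEM update for an emission-type (XRF) sinogram with a toy system matrix. The attenuation correction and detector response used in the actual study are omitted, and all matrices are synthetic placeholders.

# Minimal MLEM sketch for an emission-style (XRF) sinogram with a toy system
# matrix. Attenuation correction and detector response are omitted; A, x_true,
# and y are synthetic placeholders, not data from the study.
import numpy as np

rng = np.random.default_rng(1)
n_rays, n_voxels = 200, 100
A = rng.random((n_rays, n_voxels))               # toy system matrix (rays x voxels)
x_true = rng.random(n_voxels)                    # toy Pt concentration map
y = rng.poisson(A @ x_true).astype(float)        # noisy XRF sinogram counts

x = np.ones(n_voxels)                            # positive initial estimate
sens = A.T @ np.ones(n_rays)                     # per-voxel sensitivity (column sums)
for _ in range(50):
    ratio = y / np.maximum(A @ x, 1e-12)         # measured / expected counts
    x *= (A.T @ ratio) / sens                    # multiplicative MLEM update

print(float(np.corrcoef(x, x_true)[0, 1]))       # agreement with the toy ground truth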
- X-RAY COMPUTED TOMOGRAPHY: 2012 ADVANCES IN IMAGE FORMATION (Online only)
40(2013); http://dx.doi.org/10.1118/1.4789588 Purpose:
To discuss options in designing detector shapes in third generation CT and to quantify potential cost savings for compact third generation CT systems, and to extend the work from two-dimensional fan-beam CT to three-dimensional cone-beam CT for circular, sequential, and spiral scan trajectories.Methods:
Third generation CT scanners typically comprise detectors which are flat or whose shape is the segment of a cylinder or a sphere that is focused onto the focal spot of the x-ray source. There appear to be two design criteria that favor this choice of detector shape. One is the possibility of performing fan-beam and cone-beam filtered backprojection in the native geometry (without rebinning) and the other criterion could be to enable the early use of focused antiscatter grids. It is less known, however, that other detector shapes may also have these properties. While these designs have been evaluated for 2D CT from a theoretical standpoint more than one decade ago the authors revisit and generalize these considerations, extend them to 3D circular, sequential, and spiral cone-beam CT and propose an optimal design in terms of detector costs while keeping image quality constant. Their considerations and conclusions are based on considering the sampling density of the x-rays, including the effects of finite focal spot and finite detector element size. Proposing image reconstruction algorithms or numerically evaluating the results by reconstructing simulated projection data is not within the scope of this work.Results:
If the detector arc is curved to be nearly concentric with the circle describing the edge of the field of measurement significantly less detector area and detector pixels are required compared to today's third generation CT systems where the detector arc is centered about the focal spot. Combined with a detector that just covers the spiral Tam window cost savings of 60% or more are possible in compact CT systems. In terms of practicability the new designs appear to be nearly as easy to realize as today's third generation systems.Conclusions:
Compact CT systems, which require the focal spot to be mounted close to the edge of the field of measurement, may significantly benefit from using detector shapes other than the typical equiangular detector that is focused onto the focal spot.
40(2013); http://dx.doi.org/10.1118/1.4789628 Purpose:
This paper introduces a new strategy for simulating low-dose computed tomography (CT) scans using real scans of a higher dose as an input. The tool is verified against simulations and real scans and compared to other approaches found in the literature.Methods:
The conditional variance identity is used to properly account for the variance of the input high-dose data, and a formula is derived for generating a new Poisson noise realization which has the same mean and variance as the true low-dose data. The authors also derive a formula for the inclusion of real samples of detector noise, properly scaled according to the level of the simulated x-ray signals.Results:
The proposed method is shown to match real scans in a number of experiments. Noise standard deviation measurements in simulated low-dose reconstructions of a 35 cm water phantom match real scans in a range from 500 to 10 mA with less than 5% error. Mean and variance of individual detector channels are shown to match closely across the detector array. Finally, the visual appearance of noise and streak artifacts is shown to match that of real scans even under conditions of photon starvation (with tube currents as low as 10 and 80 mA). Additionally, the proposed method is shown to be more accurate than previous approaches (1) in achieving the correct mean and variance in reconstructed images from pure-Poisson noise simulations (with no detector noise) under photon-starvation conditions, and (2) in simulating the correct noise level and detector noise artifacts in real low-dose scans. Conclusions:
The proposed method can accurately simulate low-dose CT data starting from high-dose data, including effects from photon starvation and detector noise. This is potentially a very useful tool in helping to determine minimum dose requirements for a wide range of clinical protocols and advanced reconstruction algorithms.
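The authors derive their own noise-injection formula from the conditional variance identity; as a rough orientation, the sketch below shows the commonly used variant of the same idea: scale the high-dose counts by the dose fraction and add zero-mean noise so that the simulated data match the mean and variance of a Poisson measurement at the lower dose. Detector noise is omitted and the dose fraction is arbitrary.

# Hedged sketch: simulate lower-dose CT counts from measured high-dose counts.
# Common approach (not necessarily the authors' exact formula): scale the counts
# by the dose fraction a and add zero-mean noise so mean and variance match a
# Poisson measurement at the lower dose. Detector noise is omitted.
import numpy as np

rng = np.random.default_rng(2)
lam_high = 1.0e5 * np.exp(-4.0 * rng.random(1000))    # toy high-dose mean counts
y_high = rng.poisson(lam_high).astype(float)          # measured high-dose counts
a = 0.1                                               # dose fraction, e.g., 500 mA -> 50 mA

# Var[a*y_high] = a^2 * lam; the target variance is a * lam, so add independent
# noise with variance a*(1-a)*lam, using y_high as a plug-in estimate of lam.
extra_sd = np.sqrt(np.maximum(a * (1.0 - a) * y_high, 0.0))
y_low_sim = a * y_high + rng.normal(0.0, extra_sd)

print(y_low_sim.mean() / (a * lam_high.mean()))       # ~1: the low-dose mean is preserved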
40(2013); http://dx.doi.org/10.1118/1.4789589 Purpose:
Proton CT (pCT) has the potential to accurately measure the electron density map of tissues at low doses, but the spatial resolution is prohibitive if the curved paths of protons in matter are not accounted for. The authors propose to account for an estimate of the most likely path of protons in a filtered backprojection (FBP) reconstruction algorithm. Methods:
The energy loss of protons is first binned in several proton radiographs at different distances to the proton source to exploit the depth-dependency of the estimate of the most likely path. This process is named the distance-driven binning. A voxel-specific backprojection is then used to select the adequate radiograph in the distance-driven binning in order to propagate in the pCT image the best achievable spatial resolution in proton radiographs. The improvement in spatial resolution is demonstrated using Monte Carlo simulations of resolution phantoms.Results:
The spatial resolution in the distance-driven binning depended on the distance of the objects from the source and was optimal in the binned radiograph corresponding to that distance. The spatial resolution in the reconstructed pCT images decreased with the depth in the scanned object but it was always better than previous FBP algorithms assuming straight line paths. In a water cylinder with 20 cm diameter, the observed range of spatial resolutions was 0.7 − 1.6 mm compared to 1.0 − 2.4 mm at best with a straight line path assumption. The improvement was strongly enhanced in shorter 200° scans.Conclusions:
Improved spatial resolution was obtained in pCT images with filtered backprojection reconstruction using most likely path estimates of protons. The improvement in spatial resolution combined with the practicality of FBP algorithms compared to iterative reconstruction algorithms makes this new algorithm a candidate of choice for clinical pCT.
40(2013); http://dx.doi.org/10.1118/1.4789590 Purpose:
A human observer study was performed for a signal detection task for the case of fan-beam x-ray computed tomography. Hotelling observer (HO) performance was calculated for the same detection task without the use of efficient channels. By considering the full image covariance produced by the filtered backprojection (FBP) algorithm and avoiding the use of channels in the computation of HO performance, the authors establish an absolute upper bound on signal detectability. Therefore, this study serves as a baseline for relating human and ideal observer performance in the case of fan-beam CT.Methods:
Eight human observers participated in a two-alternative forced choice experiment where the signal of interest was a small simulated ellipsoid in the presence of independent, identically distributed Gaussian detector noise. Theoretical performance of the HO, which is equivalent to the ideal observer in this case (see Sec. 13.2.12 in Barrett and Myers [Foundations of Image Science (Wiley, Hoboken, NJ, 2004)]), was also computed and compared to the performance of the human observers. In addition to a reference FBP implementation, two FBP implementations with inherent loss of HO signal detectability (e.g., by apodizing the ramp filter) were also investigated. Each of these latter two implementations takes the form of a discrete-to-discrete linear operator (i.e., a matrix), which has a nontrivial null-space resulting in the loss of detectability. Results:
Estimated observer detectability index values for the human observers and SNR values for the HO were obtained. While Hanning filtering in the FBP implementation with a cutoff frequency of 1/4 of the Nyquist frequency reduces HO SNR (due to the reconstruction matrix's nontrivial null-space), this filtering was shown to consistently improve human observer performance. By contrast, increasing the image pixel size was seen to have a comparable effect on both the HO and the human observers, degrading performance. Conclusions:
These results, which characterize HO and human observer performance for a signal detection task in fan-beam FBP noise, form a basis for applying model observer metrics to fan-beam CT when knowledge of the full image-domain noise statistics is important. Further, by calculating HO performance without relying on channels, these results are particularly relevant when an information theoretic approach is considered, e.g., in optimization of the image reconstruction algorithm with respect to preservation of signal detectability. Finally, the HO (which is here equivalent to the ideal observer) provides an absolute upper bound on detection performance, and our results therefore provide insight into the performance of human observers relative to the optimum for two different cases wherein ideal observer performance is compromised through degradation of the data quality. In one case (regularization), human performance is improved to practically ideal performance, and in the other (larger pixel size), ideal and human observer performance are approximately degraded equivalently.
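As a pointer to the quantity discussed above, the sketch below evaluates the channel-free Hotelling observer SNR, SNR^2 = ds^T K^{-1} ds, on a toy signal and covariance, and converts it to two-alternative forced choice percent correct. The covariance here is a simple synthetic model, not the full FBP image covariance used in the paper.

# Sketch: channel-free Hotelling observer SNR, SNR^2 = ds^T K^{-1} ds, where ds
# is the mean signal difference and K the image covariance. The covariance below
# is a toy model, not the full FBP image covariance from the paper.
import numpy as np
from scipy.stats import norm

n = 64                                             # pixels in a small 1D ROI
ds = np.zeros(n)
ds[n // 2 - 2 : n // 2 + 2] = 1.0                  # toy signal difference (small blob)

idx = np.arange(n)
K = 0.5 ** np.abs(idx[:, None] - idx[None, :]) + 1e-3 * np.eye(n)   # toy covariance

snr2 = ds @ np.linalg.solve(K, ds)                 # Hotelling SNR^2
print("HO SNR =", float(np.sqrt(snr2)))
print("2AFC percent correct =", float(norm.cdf(np.sqrt(snr2) / np.sqrt(2.0))))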
40(2013); http://dx.doi.org/10.1118/1.4789591 Purpose:
Digital breast tomosynthesis is a relatively new diagnostic x-ray modality that allows high resolution breast imaging while suppressing interference from overlapping anatomical structures. However, proper visualization of microcalcifications remains a challenge. For the subset of systems considered by the authors, the main cause of deterioration is movement of the x-ray source during exposures. They propose a modified grouped coordinate ascent algorithm that includes a specific acquisition model to compensate for this deterioration.Methods:
A resolution model based on the movement of the x-ray source during image acquisition is created and combined with a grouped coordinate ascent algorithm. Choosing planes parallel to the detector surface as the groups enables efficient implementation of the position-dependent resolution model. In the current implementation, the resolution model is approximated by a Gaussian smoothing kernel. The effect of the resolution model on the iterative reconstruction is evaluated by measuring the contrast-to-noise ratio (CNR) of spherical microcalcifications in a homogeneous background. After this, the new reconstruction method is compared to the optimized filtered backprojection method for the considered system by performing two observer studies: the first study simulates clusters of spherical microcalcifications in a power law background for a free search task; the second study simulates smooth or irregular microcalcifications in the same type of backgrounds for a classification task. Results:
Including the resolution model in the iterative reconstruction methods increases the CNR of microcalcifications. The first observer study shows a significant improvement in detection of microcalcifications (p = 0.029), while the second study shows that performance on a classification task remains the same (p = 0.935) compared to the filtered backprojection method.Conclusions:
The new method shows higher CNR and improved visualization of microcalcifications in an observer experiment on synthetic data. Further study of the negative results of the classification task showed performance variations throughout the volume linked to the changing noise structure introduced by the combination of the resolution model and the smoothing prior.
Weighted simultaneous algebraic reconstruction technique for tomosynthesis imaging of objects with high-attenuation features. 40(2013); http://dx.doi.org/10.1118/1.4789592 Purpose:
This paper introduces a nonlinear weighting scheme into the backprojection operation within the simultaneous algebraic reconstruction technique (SART). It is designed for tomosynthesis imaging of objects with high-attenuation features in order to reduce limited angle artifacts.Methods:
The algorithm estimates which projections potentially produce artifacts in a voxel. The contribution of those projections into the updating term is reduced. In order to identify those projections automatically, a four-dimensional backprojected space representation is used. Weighting coefficients are calculated based on a dissimilarity measure, evaluated in this space. For each combination of an angular view direction and a voxel position an individual weighting coefficient for the updating term is calculated.Results:
The feasibility of the proposed approach is shown based on reconstructions of the following real three-dimensional tomosynthesis datasets: a mammography quality phantom, an apple with metal needles, a dried finger bone in water, and a human hand. Datasets have been acquired with a Siemens Mammomat Inspiration tomosynthesis device and reconstructed using SART with and without suggested weighting. Out-of-focus artifacts are described using line profiles and measured using standard deviation (STD) in the plane and below the plane which contains artifact-causing features. Artifacts distribution in axial direction is measured using an artifact spread function (ASF). The volumes reconstructed with the weighting scheme demonstrate the reduction of out-of-focus artifacts, lower STD (meaning reduction of artifacts), and narrower ASF compared to nonweighted SART reconstruction. It is achieved successfully for different kinds of structures: point-like structures such as phantom features, long structures such as metal needles, and fine structures such as trabecular bone structures.Conclusions:
Results indicate the feasibility of the proposed algorithm to reduce typical tomosynthesis artifacts produced by high-attenuation features. The proposed algorithm assigns weighting coefficients automatically and no segmentation or tissue-classification steps are required. The algorithm can be included into various iterative reconstruction algorithms with an additive updating strategy. It can also be extended to computed tomography case with the complete set of angular data.
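To make the update rule concrete, the sketch below runs a SART-style additive iteration in which the backprojected correction for each view is scaled by per-view, per-voxel weighting coefficients. The weights are left as placeholders; the dissimilarity measure in the four-dimensional backprojected space that the authors use to compute them is not reproduced.

# Sketch: SART-style additive update in which the backprojected correction is
# scaled per view and per voxel by weighting coefficients w. Matrices are toy
# stand-ins, and w is a placeholder; the authors' dissimilarity-based weights
# computed in a 4D backprojected space are not reproduced here.
import numpy as np

rng = np.random.default_rng(4)
n_views, rays_per_view, n_voxels = 9, 30, 50
A = [rng.random((rays_per_view, n_voxels)) for _ in range(n_views)]   # per-view system matrices
x_true = rng.random(n_voxels)
b = [Av @ x_true for Av in A]                                          # noiseless projections

x = np.zeros(n_voxels)
lam = 0.5                                                              # relaxation factor
w = np.ones((n_views, n_voxels))                                       # weighting coefficients (placeholder)

for _ in range(10):
    for v, (Av, bv) in enumerate(zip(A, b)):
        row_sums = np.maximum(Av.sum(axis=1), 1e-12)                   # per-ray normalization
        col_sums = np.maximum(Av.sum(axis=0), 1e-12)                   # per-voxel normalization
        correction = (Av.T @ ((bv - Av @ x) / row_sums)) / col_sums
        x += lam * w[v] * correction                                   # weighted SART update

print(float(np.linalg.norm(x - x_true) / np.linalg.norm(x_true)))      # relative error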
Evaluation of interpolation methods for surface-based motion compensated tomographic reconstruction for cardiac angiographic C-arm data. 40(2013); http://dx.doi.org/10.1118/1.4789593 Purpose:
For interventional cardiac procedures, anatomical and functional information about the cardiac chambers is of major interest. With the technology of angiographic C-arm systems it is possible to reconstruct intraprocedural three-dimensional (3D) images from 2D rotational angiographic projection data (C-arm CT). However, 3D reconstruction of a dynamic object is a fundamental problem in C-arm CT reconstruction. The 2D projections are acquired over a scan time of several seconds, thus the projection data show different states of the heart. A standard FDK reconstruction algorithm would use all acquired data for a filtered backprojection and result in a motion-blurred image. In this approach, a motion compensated reconstruction algorithm requiring knowledge of the 3D heart motion is used. The motion is estimated from a previously presented 3D dynamic surface model. This dynamic surface model results in a sparse motion vector field (MVF) defined at control points. In order to perform a motion compensated reconstruction, a dense motion vector field is required. The dense MVF is generated by interpolation of the sparse MVF. Therefore, the influence of different motion interpolation methods on the reconstructed image quality is evaluated.Methods:
Four different interpolation methods, thin-plate splines (TPS), Shepard's method, a smoothed weighting function, and a simple averaging, were evaluated. The reconstruction quality was measured on phantom data, a porcine model, as well as on in vivo clinical data sets. As a quality index, the 2D overlap of the forward projected motion compensated reconstructed ventricle and the segmented 2D ventricle blood pool was quantitatively measured with the Dice similarity coefficient and the mean deviation between extracted ventricle contours. For the phantom data set, the normalized root mean square error (nRMSE) and the universal quality index (UQI) were also evaluated in 3D image space. Results:
The quantitative evaluation of all experiments showed that TPS interpolation provided the best results. The quantitative results in the phantom experiments showed comparable nRMSE of ≈0.047 ± 0.004 for the TPS and Shepard's method. Only slightly inferior results for the smoothed weighting function and the linear approach were achieved. The UQI resulted in a value of ≈ 99% for all four interpolation methods. On clinical human data sets, the best results were clearly obtained with the TPS interpolation. The mean contour deviation between the TPS reconstruction and the standard FDK reconstruction improved in the three human cases by 1.52, 1.34, and 1.55 mm. The Dice coefficient showed less sensitivity with respect to variations in the ventricle boundary.Conclusions:
In this work, the influence of different motion interpolation methods on left ventricle motion compensated tomographic reconstructions was investigated. The best quantitative reconstruction results of a phantom, a porcine, and human clinical data sets were achieved with the TPS approach. In general, the framework of motion estimation using a surface model and motion interpolation to a dense MVF provides the ability for tomographic reconstruction using a motion compensation technique.
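A small sketch of the interpolation step evaluated above: a sparse motion vector field given at surface control points is densified with thin-plate-spline interpolation. It relies on scipy.interpolate.RBFInterpolator (available in SciPy 1.7 and later); control points, displacements, and the grid are synthetic stand-ins.

# Sketch: densify a sparse motion vector field (MVF) given at surface control
# points with thin-plate-spline interpolation. Requires SciPy >= 1.7 for
# RBFInterpolator; control points, displacements, and the grid are synthetic.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(5)
ctrl_pts = rng.uniform(0.0, 100.0, size=(40, 3))    # sparse control points (mm)
ctrl_mvf = rng.normal(0.0, 2.0, size=(40, 3))       # displacement vectors at those points (mm)

tps = RBFInterpolator(ctrl_pts, ctrl_mvf, kernel="thin_plate_spline")

g = np.linspace(0.0, 100.0, 20)                     # dense voxel grid for motion compensation
grid = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1).reshape(-1, 3)
dense_mvf = tps(grid)                               # dense MVF, one 3D vector per grid point
print(dense_mvf.shape)                              # (8000, 3)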
40(2013); http://dx.doi.org/10.1118/1.4790692 Purpose:
To quantify the concentration of soft-tissue components of water, fat, and calcium through the decomposition of the x-ray spectral signatures in multi-energy CT images.Methods:
Decomposition of dual-energy and multi-energy x-ray data into basis materials can be performed in the projection domain, image domain, or during image reconstruction. In this work, the authors present a methodology for the decomposition of multi-energy x-ray data in the image domain for the application of soft-tissue characterization. To demonstrate proof-of-principle, the authors apply several previously proposed methods and a novel content-aware method to multi-energy images acquired with a prototype photon counting CT system. Data from phantom and ex vivo specimens are evaluated. Results:
The number and type of materials in a region can be limited based on a priori knowledge or classification strategies. The proposed difference classifier successfully classified the image into air only, water+fat, water+fat+iodine, and water+calcium regions. Then, the content-aware material decomposition based on weighted least-squares optimization generated quantitative maps of concentration. Bias in the estimation of the concentration of water and oil components in a phantom study was <0.10 ± 0.15 g/cc on average. Decomposition of ex vivo carotid endarterectomy specimens suggests the presence of water, lipid, and calcium deposits in the plaque walls. Conclusions:
Initial application of the proposed methodology suggests that it can decompose multi-energy CT images into quantitative maps of water, adipose, iodine, and calcium concentrations.
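As a minimal illustration of the image-domain decomposition described above, the sketch below solves a per-voxel weighted nonnegative least-squares problem mapping multi-energy attenuation values to basis-material concentrations. The basis attenuation numbers and weights are invented for illustration, and the content-aware classification step is omitted.

# Sketch: per-voxel material decomposition by weighted nonnegative least squares.
# The basis attenuation values and weights below are invented for illustration,
# and the region classification step from the abstract is omitted.
import numpy as np
from scipy.optimize import nnls

# Rows: energy bins; columns: basis materials (water, fat, calcium) -- toy numbers.
M = np.array([[0.25, 0.22, 1.50],
              [0.21, 0.19, 0.90],
              [0.19, 0.18, 0.55],
              [0.18, 0.17, 0.40]])
w = np.array([1.0, 1.2, 1.5, 1.3])                  # per-bin weights, e.g., inverse noise variance
Wh = np.sqrt(np.diag(w))                            # weighted LS via pre-whitening

c_true = np.array([0.7, 0.3, 0.05])                 # toy concentrations
mu = M @ c_true + 0.002 * np.random.default_rng(6).normal(size=4)   # measured attenuation per bin

c_hat, _ = nnls(Wh @ M, Wh @ mu)                    # nonnegative weighted least squares
print(c_hat)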
40(2013); http://dx.doi.org/10.1118/1.4790693 Purpose:
Acquiring data for CT at low radiation doses has become a pressing goal. Unfortunately, the reduced data quality adversely affects the quality of the reconstructions, impeding their readability. In previous work, the authors showed how a prior regular-dose scan of the same patient can efficiently be used to mitigate low-dose artifacts. However, since a prior is not always available, the authors now extend the authors’ method to use a database of images of other patients.Methods:
The authors’ framework first matches the low-dose (target) scan with the images in the database and then selects a set of images that contain anatomical content similar to the target. These “priors” are then registered to the target and form the set of regular-dose priors for restoration via an extended nonlocal means (NLM) filtering framework. To accommodate the larger spatial variability of the patient scans, the authors subdivide the image area into blocks and perform the filtering locally. The database itself is first preprocessed to map each image from its 2D image space to a corresponding high-D image feature space. From this encoding a visual vocabulary is learned that assists in the query of the database.Results:
The authors demonstrate the authors’ framework via a lung scan example, for both streak artifacts (resulting from smaller projection sets) as well as noise artifacts (resulting from lower mA settings). The authors find that in the authors’ particular example case three priors were sufficient to restore all features faithfully. The authors also observe that the authors’ method is quite robust in that it generates good results even when the noise conditions significantly worsen (here by 20%). Finally, the authors find that the restoration quality is significantly better than with conventional NLM filtering.Conclusions:
The authors' image restoration algorithm successfully restores images to high quality when the registration is well performed and also when the priors match the target well. When the priors do not contain sufficient information, the affected image regions can only be restored to the quality achieved with conventional regularization. Hence, a sufficiently rich database is key to successful artifact mitigation with this approach. Finally, the blockwise scheme demonstrates the potential of using small patches of images to form the database.
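A compact sketch of the core filtering idea described above: nonlocal means in which the candidate patches (and the values being averaged) come from a registered regular-dose prior image rather than from the noisy target alone. Patch size, search radius, and the smoothing parameter h are illustrative; the database query, registration, and blockwise processing are not reproduced.

# Sketch: nonlocal means (NLM) restoration of a noisy target using patches from
# a registered regular-dose prior image. Patch size, search radius, and h are
# illustrative; database search, registration, and blockwise processing omitted.
import numpy as np

rng = np.random.default_rng(7)
prior = np.kron(rng.random((4, 4)), np.ones((8, 8)))     # toy "regular-dose" prior, 32x32
target = prior + 0.1 * rng.normal(size=prior.shape)      # toy noisy "low-dose" target

def nlm_with_prior(noisy, prior_img, patch=3, search=5, h=0.1):
    pad = patch // 2
    npad = np.pad(noisy, pad, mode="reflect")
    ppad = np.pad(prior_img, pad, mode="reflect")
    out = np.zeros_like(noisy)
    rows, cols = noisy.shape
    for i in range(rows):
        for j in range(cols):
            ref = npad[i:i + patch, j:j + patch]          # patch around the target pixel
            num, den = 0.0, 0.0
            for di in range(-search, search + 1):
                for dj in range(-search, search + 1):
                    ii = int(np.clip(i + di, 0, rows - 1))
                    jj = int(np.clip(j + dj, 0, cols - 1))
                    cand = ppad[ii:ii + patch, jj:jj + patch]   # candidate patch from the prior
                    wgt = np.exp(-np.mean((ref - cand) ** 2) / h ** 2)
                    num += wgt * prior_img[ii, jj]
                    den += wgt
            out[i, j] = num / den
    return out

restored = nlm_with_prior(target, prior)
print(float(np.mean((restored - prior) ** 2)), float(np.mean((target - prior) ** 2)))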
40(2013); http://dx.doi.org/10.1118/1.4773045 Purpose:
CT reconstruction algorithms implemented on the GPU are highly sensitive to their implementation details and the hardware they run on. Fine-tuning an implementation for optimal performance can be a time consuming task and require many updates when the hardware changes. There are some techniques that do automatic fine-tuning of GPU code. These techniques, however, are relatively narrow in their fine-tuning and are often based on heuristics which can be inaccurate. The goal of this paper is to present a framework that will automate the process of code optimization with maximum flexibility and produce a final result that is efficient and readable to the user.Methods:
The authors propose a method that is able to tune high-level implementation details by using the ant colony optimization algorithm to find the optimal implementation in a relatively short amount of time. Our framework does this by taking as input a file that describes a graph, such that a path through this graph represents a potential implementation. They then use the ant colony optimization algorithm to find the optimal path through this graph based on the execution time and the quality of the image. Results:
Two experimental studies are carried out. Using the presented framework, they optimize the performance of a GPU accelerated FDK backprojection implementation and a GPU accelerated separable footprint backprojection implementation. The authors demonstrate that the resulting optimal implementation can be different depending on the hardware specifications. They then compare the results of the framework produced with the results produced by manual optimization.Conclusions:
The framework they present is a useful tool for increasing programmer productivity and reducing the overhead of leveraging hardware specific resources. By performing an intelligent search, our framework produces a more efficient image reconstruction implementation in a shorter amount of time.
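To illustrate the search strategy described above, the toy sketch below runs ant colony optimization over a small layered decision graph in which each path is one combination of tuning choices. The cost function is a synthetic stand-in for measured GPU execution time, and the graph-description file format of the actual framework is not modeled.

# Toy sketch: ant colony optimization (ACO) over a layered decision graph where
# each layer is a tuning choice and a path through the layers is one candidate
# implementation. The cost function is a synthetic stand-in for measured GPU
# execution time; the real framework's graph file and timing are not modeled.
import numpy as np

rng = np.random.default_rng(8)
layers = [4, 3, 5, 2]                                 # options per decision layer
target = [1, 2, 0, 1]                                 # synthetic "fastest" choice per layer

def cost(path):                                       # stand-in for kernel runtime
    return 1.0 + sum((p - t) ** 2 for p, t in zip(path, target))

pher = [np.ones(n) for n in layers]                   # pheromone per option
evap, n_ants = 0.1, 20
best_cost, best_path = np.inf, None

for _ in range(50):
    paths = []
    for _ant in range(n_ants):
        path = [int(rng.choice(n, p=tau / tau.sum())) for n, tau in zip(layers, pher)]
        c = cost(path)
        paths.append((c, path))
        if c < best_cost:
            best_cost, best_path = c, path
    for tau in pher:
        tau *= 1.0 - evap                             # pheromone evaporation
    for c, path in paths:
        for layer, choice in enumerate(path):
            pher[layer][choice] += 1.0 / c            # reinforce cheap (fast) paths

print(best_path, "vs synthetic optimum", target)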
40(2013); http://dx.doi.org/10.1118/1.4790694 Purpose:
The appearance of x-ray luminescence computed tomography (XLCT) opens new possibilities to perform molecular imaging by x ray. In the previous XLCT system, the sample was irradiated by a sequence of narrow x-ray beams and the x-ray luminescence was measured by a highly sensitive charge coupled device (CCD) camera. This resulted in a relatively long sampling time and relatively low utilization of the x-ray beam. In this paper, a novel cone beam x-ray luminescence computed tomography strategy is proposed, which can fully utilize the x-ray dose and shorten the scanning time. The imaging model and reconstruction method are described. The validity of the imaging strategy has been studied in this paper.Methods:
In the cone beam XLCT system, the cone beam x ray was adopted to illuminate the sample and a highly sensitive CCD camera was utilized to acquire luminescent photons emitted from the sample. Photons scattering in biological tissues makes it an ill-posed problem to reconstruct the 3D distribution of the x-ray luminescent sample in the cone beam XLCT. In order to overcome this issue, the authors used the diffusion approximation model to describe the photon propagation in tissues, and employed the sparse regularization method for reconstruction. An incomplete variables truncated conjugate gradient method and permissible region strategy were used for reconstruction. Meanwhile, traditional x-ray CT imaging could also be performed in this system. The x-ray attenuation effect has been considered in their imaging model, which is helpful in improving the reconstruction accuracy.Results:
First, simulation experiments with cylinder phantoms were carried out to illustrate the validity of the proposed compensated method. The experimental results showed that the location error of the compensated algorithm was smaller than that of the uncompensated method. The permissible region strategy was applied and reduced the reconstruction error to less than 2 mm. The robustness and stability were then evaluated from different view numbers, different regularization parameters, different measurement noise levels, and optical parameters mismatch. The reconstruction results showed that the settings had a small effect on the reconstruction. The nonhomogeneous phantom simulation was also carried out to simulate a more complex experimental situation and evaluated their proposed method. Second, the physical cylinder phantom experiments further showed similar results in their prototype XLCT system. With the discussion of the above experiments, it was shown that the proposed method is feasible to the general case and actual experiments.Conclusions:
Utilizing numerical simulation and physical experiments, the authors demonstrated the validity of the new cone beam XLCT method. Furthermore, compared with the previous narrow beam XLCT, the cone beam XLCT could more fully utilize the x-ray dose and the scanning time would be shortened greatly. The study of both simulation experiments and physical phantom experiments indicated that the proposed method was feasible to the general case and actual experiments.
40(2013); http://dx.doi.org/10.1118/1.4790695 Purpose:
The temporal resolution of a given image in cardiac computed tomography (CT) has so far mostly been determined from the amount of CT data employed for the reconstruction of that image. The purpose of this paper is to examine the applicability of such measures to the newly introduced modality of dual-source CT as well as to methods aiming to provide improved temporal resolution by means of an advanced image reconstruction algorithm.Methods:
To provide a solid base for the examinations described in this paper, an extensive review of temporal resolution in conventional single-source CT is given first. Two different measures for assessing temporal resolution with respect to the amount of data involved are introduced, namely, either taking the full width at half maximum of the respective data weighting function (FWHM-TR) or the total width of the weighting function (total TR) as a base of the assessment. Image reconstruction using both a direct fan-beam filtered backprojection with Parker weighting as well as using a parallel-beam rebinning step are considered. The theory of assessing temporal resolution by means of the data involved is then extended to dual-source CT. Finally, three different advanced iterative reconstruction methods that all use the same input data are compared with respect to the resulting motion artifact level. For brevity and simplicity, the examinations are limited to two-dimensional data acquisition and reconstruction. However, all results and conclusions presented in this paper are also directly applicable to both circular and helical cone-beam CT.Results:
While the concept of total TR can directly be applied to dual-source CT, the definition of the FWHM of a weighting function needs to be slightly extended to be applicable to this modality. The three different advanced iterative reconstruction methods examined in this paper result in significantly different images with respect to their motion artifact level, despite exactly the same amount of data being used in the reconstruction process.Conclusions:
The concept of assessing temporal resolution by means of the data employed for reconstruction can nicely be extended from single-source to dual-source CT. However, for advanced (possibly nonlinear iterative) reconstruction algorithms the examined approach fails to deliver accurate results. New methods and measures to assess the temporal resolution of CT images need to be developed to be able to accurately compare the performance of such algorithms.
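The two data-based measures discussed above can be made concrete with a few lines of code: given a data weighting function over the rotation, FWHM-TR is the full width at half maximum of that function and total TR is the width of its nonzero support. The weighting function below is an arbitrary smooth example, not a real Parker or dual-source weight.

# Sketch: evaluate FWHM-TR and total TR for a toy data weighting function w(t)
# over one rotation. The weighting below is an arbitrary smooth example, not a
# real Parker or dual-source weight; the rotation time is illustrative.
import numpy as np

t_rot = 0.33                                            # rotation time (s), illustrative
t = np.linspace(0.0, t_rot, 2001)
w = np.clip(np.cos(2.0 * np.pi * (t - t_rot / 2.0) / t_rot), 0.0, None) ** 2

above = t[w >= w.max() / 2.0]                           # samples above half maximum
nonzero = t[w > 0.0]                                    # samples with nonzero weight
fwhm_tr = above[-1] - above[0]                          # FWHM-based temporal resolution
total_tr = nonzero[-1] - nonzero[0]                     # total-width temporal resolution
print(f"FWHM-TR = {fwhm_tr * 1e3:.1f} ms, total TR = {total_tr * 1e3:.1f} ms")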
40(2013); http://dx.doi.org/10.1118/1.4790696 Purpose:
This paper derives a ray-by-ray weighted filtered backprojection (rFBP) algorithm, based on our recently developed view-by-view weighted, filtered backprojection (vFBP) algorithm.Methods:
The rFBP algorithm directly extends the vFBP algorithm by letting the noise weighting vary from channel to channel within each view. The projection data can be weighted in inverse proportion to their noise variances. Also, an edge-preserving bilateral filter is suggested to perform post filtering to further reduce the noise. The proposed algorithm has been implemented for the circular-orbit cone-beam geometry based on Feldkamp's algorithm.Results:
Image reconstructions with computer simulations and clinical cadaver data are presented to illustrate the effectiveness and feasibility of the proposed algorithm. The new FBP-type algorithm is able to significantly reduce or remove the noise texture, which the conventional FBP is unable to do. The computation time of the proposed rFBP algorithm is approximately the same as the conventional FBP algorithm.Conclusions:
A ray-based noise-weighting scheme is introduced to the FBP algorithm. This new FBP-type algorithm significantly reduces or removes the streaking artifacts in low-dose CT.
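Two ingredients mentioned above can be sketched briefly: per-ray weights taken inversely proportional to the projection noise variance (for transmission data, approximately proportional to exp(-p)), and an edge-preserving bilateral post-filter. This is an illustration of the weighting idea only, not the authors' rFBP derivation or their Feldkamp-based implementation.

# Sketch of two ingredients mentioned above (illustrative only, not the rFBP
# derivation): (1) per-ray weights inversely proportional to projection noise
# variance, approximated for transmission data by exp(-p); (2) a simple 1D
# edge-preserving bilateral post-filter applied to an image profile.
import numpy as np

rng = np.random.default_rng(9)

p = rng.uniform(0.0, 5.0, size=256)             # toy line integrals within one view
weights = np.exp(-p)                            # inverse-variance weights (up to a constant)
weights /= weights.max()

def bilateral_1d(x, radius=3, sigma_s=2.0, sigma_r=0.1):
    out = np.empty_like(x)
    for i in range(len(x)):
        lo, hi = max(0, i - radius), min(len(x), i + radius + 1)
        nb = x[lo:hi]
        spatial = np.exp(-((np.arange(lo, hi) - i) ** 2) / (2.0 * sigma_s ** 2))
        rng_wt = np.exp(-((nb - x[i]) ** 2) / (2.0 * sigma_r ** 2))
        wt = spatial * rng_wt
        out[i] = np.sum(wt * nb) / np.sum(wt)
    return out

profile = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.normal(size=100)
smoothed = bilateral_1d(profile)                # noise reduced, edge at index 50 preserved
print(float(smoothed[:40].std()), float(abs(smoothed[60] - 1.0)))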
40(2013); http://dx.doi.org/10.1118/1.4790697 Purpose:
In Micro-CT systems based on optically coupled detectors, defects in the scintillator or CCD camera can lead to severe artifacts in reconstructed CT images. Meanwhile, different detector units usually suffer from inhomogeneous response, which also leads to artifacts in the CT images. Detector shifting is a simple and efficient method to remove the artifacts due to inhomogeneous detector-unit responses; however, it does not work well for the severe artifacts due to defects in the scintillator or CCD. The purpose of this paper is to develop a data preprocessing method to reduce both kinds of artifacts. Methods:
A hybrid method that involves random detector shifting and data inpainting is proposed to correct the projection data and thereby suppress the artifacts in the reconstructed CT images. Defects in the scintillator or CCD camera cause data loss in some areas of the projection data. The Criminisi algorithm is employed to recover the lost data. Because of the random detector shifting, the location of the lost data in one view may be shifted away in adjacent views. This feature is used to design the search window, so that the best-match patch is searched for across adjacent views. In this way, the best-match patches genuinely exhibit high similarity. As a result, the severe artifacts due to defects in the scintillator or CCD camera are suppressed. Furthermore, a multiscale tessellation method is proposed to locate the defects and similar patches, which makes the Criminisi algorithm very fast. Results:
The authors tested the proposed method on both simulated and real projection data. Experiments show that the proposed method corrects the bad data in the projections quite well. Compared to other popular methods, such as linear interpolation, a combined wavelet and Fourier transform method, and TV-inpainting, experimental results suggest that the CT images reconstructed from the data sets preprocessed by our method are significantly better in quality. Conclusions:
They have proposed a hybrid method for projection data preprocessing that fits typical Micro-CT systems well. The hybrid method can efficiently suppress ring artifacts in the reconstructed CT images while preserving spatial resolution, even under close inspection.
40(2013); http://dx.doi.org/10.1118/1.4790698 Purpose:
Iterative image reconstruction (IIR) algorithms in computed tomography (CT) are based on algorithms for solving a particular optimization problem. Design of the IIR algorithm, therefore, is aided by knowledge of the solution to the optimization problem on which it is based. Often times, however, it is impractical to achieve accurate solution to the optimization of interest, which complicates design of IIR algorithms. This issue is particularly acute for CT with a limited angular-range scan, which leads to poorly conditioned system matrices and difficult to solve optimization problems. In this paper, we develop IIR algorithms which solve a certain type of optimization called convex feasibility. The convex feasibility approach can provide alternatives to unconstrained optimization approaches and at the same time allow for rapidly convergent algorithms for their solution—thereby facilitating the IIR algorithm design process.Methods:
An accelerated version of the Chambolle−Pock (CP) algorithm is adapted to various convex feasibility problems of potential interest to IIR in CT. One of the proposed problems is seen to be equivalent to least-squares minimization, and two other problems provide alternatives to penalized, least-squares minimization.Results:
The accelerated CP algorithms are demonstrated on a simulation of circular fan-beam CT with a limited scanning arc of 144°. The CP algorithms are seen in the empirical results to converge to the solution of their respective convex feasibility problems.Conclusions:
Formulation of convex feasibility problems can provide a useful alternative to unconstrained optimization when designing IIR algorithms for CT. The approach is amenable to recent methods for accelerating first-order algorithms which may be particularly useful for CT with limited angular-range scanning. The present paper demonstrates the methodology, and future work will illustrate its utility in actual CT application.
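The paper solves its feasibility problems with an accelerated Chambolle-Pock algorithm; the sketch below illustrates the convex feasibility formulation itself with plain alternating projections (POCS) onto the measurement hyperplanes and the nonnegativity set. The toy system and the consistency assumption are illustrative only.

# Sketch: convex feasibility by alternating projections (POCS). Find x with
# A x = b (projections onto the measurement hyperplanes, i.e., Kaczmarz steps)
# and x >= 0 (projection onto the nonnegative orthant). This illustrates the
# feasibility formulation only; it is not the accelerated Chambolle-Pock
# algorithm used in the paper, and the toy system is arbitrary.
import numpy as np

rng = np.random.default_rng(10)
n_rays, n_voxels = 60, 120                       # under-determined, limited-angle-like toy system
A = rng.random((n_rays, n_voxels))
x_true = rng.random(n_voxels)
b = A @ x_true                                   # consistent data, so the sets intersect

x = np.zeros(n_voxels)
for _ in range(200):
    for i in range(n_rays):
        a = A[i]
        x += (b[i] - a @ x) / (a @ a) * a         # project onto hyperplane {x : a.x = b_i}
    x = np.maximum(x, 0.0)                        # project onto {x : x >= 0}

print(float(np.linalg.norm(A @ x - b) / np.linalg.norm(b)))   # residual of the feasible point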
- RADIATION THERAPY PHYSICS
An accuracy assessment of different rigid body image registration methods and robotic couch positional corrections using a novel phantom. 40(2013); http://dx.doi.org/10.1118/1.4789490 Purpose:
Image guided radiotherapy (IGRT) using cone beam computed tomography (CBCT) images greatly reduces interfractional patient positional uncertainties. An understanding of uncertainties in the IGRT process itself is essential to ensure appropriate use of this technology. The purpose of this study was to develop a phantom capable of assessing the accuracy of IGRT hardware and software including a 6 degrees of freedom patient positioning system and to investigate the accuracy of the Elekta XVI system in combination with the HexaPOD robotic treatment couch top.Methods:
The constructed phantom enabled verification of the three automatic rigid body registrations (gray value, bone, seed) available in the Elekta XVI software and includes an adjustable mount that introduces known rotational offsets to the phantom from its reference position. Repeated positioning of the phantom was undertaken to assess phantom rotational accuracy. Using this phantom the accuracy of the XVI registration algorithms was assessed considering CBCT hardware factors and image resolution together with the residual error in the overall image guidance process when positional corrections were performed through the HexaPOD couch system.Results:
The phantom positioning was found to be within 0.04 (σ = 0.12)°, 0.02 (σ = 0.13)°, and −0.03 (σ = 0.06)° in X, Y, and Z directions, respectively, enabling assessment of IGRT with a 6 degrees of freedom patient positioning system. The gray value registration algorithm showed the least error in calculated offsets with maximum mean difference of −0.2(σ = 0.4) mm in translational and −0.1(σ = 0.1)° in rotational directions for all image resolutions. Bone and seed registration were found to be sensitive to CBCT image resolution. Seed registration was found to be most sensitive demonstrating a maximum mean error of −0.3(σ = 0.9) mm and −1.4(σ = 1.7)° in translational and rotational directions over low resolution images, and this is reduced to −0.1(σ = 0.2) mm and −0.1(σ = 0.79)° using high resolution images.Conclusions:
The phantom, capable of rotating independently about three orthogonal axes, was successfully used to assess the accuracy of an IGRT system considering 6 degrees of freedom. The overall residual error in the image guidance process of XVI in combination with the HexaPOD couch was demonstrated to be less than 0.3 mm and 0.3° in translational and rotational directions when using the gray value registration with high resolution CBCT images. However, the residual error, especially in rotational directions, may increase when the seed registration is used with low resolution images.
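For reference, residual-error summaries of the kind quoted above can be computed as the per-axis mean and standard deviation of the difference between applied and recovered offsets; the sketch below does this on synthetic numbers standing in for phantom measurements.

# Sketch: per-axis residual-error summary (mean and sigma) from applied vs
# recovered rotational offsets. Numbers are synthetic stand-ins for phantom
# measurements, not data from the study.
import numpy as np

rng = np.random.default_rng(11)
applied = rng.uniform(-3.0, 3.0, size=(20, 3))              # known offsets (deg) about X, Y, Z
recovered = applied + rng.normal(0.0, 0.15, size=(20, 3))   # registration + couch correction result

residual = recovered - applied
for axis, name in enumerate(["X", "Y", "Z"]):
    m, s = residual[:, axis].mean(), residual[:, axis].std(ddof=1)
    print(f"{name}: {m:+.2f} (sigma = {s:.2f}) deg")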