Contents:
Volume 38, Issue S1, July 2011
- X-RAY COMPUTED TOMOGRAPHY: ADVANCES IN IMAGE FORMATION
38(2011); http://dx.doi.org/10.1118/1.3591339
38(2011); http://dx.doi.org/10.1118/1.3574885
Purpose: Selecting the appropriate imaging technique in computed tomography (CT) inherently involves balancing the tradeoff between image quality and imaging dose. Modulating the x-ray fluence field laterally across the beam, independently for each projection, may meet user-prescribed, regional image quality objectives while reducing radiation to the patient. The proposed approach, called fluence field modulated CT (FFMCT), parallels the approach commonly used in intensity-modulated radiation therapy (IMRT), except that “image quality plans” replace the “dose plans” of IMRT. This work studies the potential noise and dose benefits of FFMCT via objective-driven optimization of fluence fields.
Methods: Experiments were carried out in simulation. Image quality plans were defined by specifying signal-to-noise ratio (SNR) criteria for regions of interest (ROIs) in simulated cylindrical and oblong water phantoms and in an anthropomorphic phantom with bone, air, and water-equivalent regions. X-ray fluence field patterns were generated using a simulated annealing optimization method that attempts to achieve the spatially dependent prescribed SNR criteria in the phantoms while limiting dose (to the volume or subvolumes). The resulting SNR and dose distributions were analyzed and compared to results obtained with a bowtie-filtered fluence field.
Results: Compared to a fixed bowtie-filtered fluence, FFMCT achieved superior agreement with the target image quality objectives and reduced integral dose by 39% to 52%. Prioritizing dose constraints for specific regions of interest preferentially reduced dose to those regions, with some tradeoff in SNR, particularly where the target low-dose regions overlapped regions where high SNR was prescribed. The method appeared fairly robust to increased complexity and heterogeneity of the object structure.
Conclusions: These results support that FFMCT has the potential to meet prescribed image quality objectives while decreasing radiation exposure to the patient. Tradeoffs between SNR and dose may not be eliminated, but they may be managed more efficiently using FFMCT.
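The simulated annealing optimization described above can be sketched in miniature. This is an illustrative toy, not the authors' implementation: the fluence model (SNR proportional to the square root of the per-ray fluence), the penalty weight, and the geometric cooling schedule are all assumptions made for the example.

```python
import math
import random

def simulated_annealing_fluence(snr_targets, n_iter=5000, seed=0):
    """Toy fluence optimization: find per-ray fluence values f_i that
    minimize total dose (sum of f) while meeting prescribed SNR targets,
    assuming the quantum-noise model SNR_i ~ sqrt(f_i)."""
    rng = random.Random(seed)
    f = [1.0] * len(snr_targets)           # initial flat fluence profile

    def cost(fl):
        dose = sum(fl)
        # quadratic penalty for rays whose modeled SNR falls below target
        penalty = sum(max(0.0, t - math.sqrt(fi)) ** 2
                      for fi, t in zip(fl, snr_targets))
        return dose + 100.0 * penalty

    c = cost(f)
    temp = 1.0
    for _ in range(n_iter):
        i = rng.randrange(len(f))
        trial = f[:]
        trial[i] = max(0.0, trial[i] + rng.uniform(-0.2, 0.2))
        ct = cost(trial)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if ct < c or rng.random() < math.exp((c - ct) / temp):
            f, c = trial, ct
        temp *= 0.999                      # geometric cooling schedule
    return f, c
```

At convergence each ray settles near the minimum fluence that satisfies its SNR prescription (f_i near the square of its target), which is the sense in which the image quality plan drives the dose down.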
38(2011); http://dx.doi.org/10.1118/1.3577743
Purpose: The authors investigate the cone-beam (CB) artifact behavior of the factorization approach recently suggested for image reconstruction in circular cone-beam computed tomography. The investigation is carried out in a typical C-arm geometry and involves simulated data and, for the first time, phantom and clinical CB data acquired with a commercially available angiographic system.
Methods: The CB artifact level is first measured using quantitative figures of merit computed from reconstructions of the mathematical FORBILD head phantom and of a modified disk phantom. The authors then show reconstructions from a physical thorax phantom and clinical head data sets for a visual assessment of image quality. The performance of the factorization method is primarily compared to that of short-scan FDK, but results obtained with full-scan FDK and the virtual PI-line BPF method are also shown for the simulation studies as a benchmark.
Results: Quantitatively, the FORBILD head phantom reconstructions of both FDK methods show a spatially averaged bias of up to 1.2% in the axial slices about 9 cm away from the plane of the scan, which is placed 4 cm below the central slice through the phantom. The artifact level for the short-scan FDK method and the virtual PI-line BPF method depends noticeably on the scan orientation. The factorization approach can significantly reduce both this dependency and the reconstruction bias. It also yields visually improved quality of the clinical images compared to short-scan FDK, particularly close to the spine and in the subcranial regions of the clinical data sets.
Conclusions: The factorization approach exhibits noticeably lower reconstruction bias than the FDK methods and is the least sensitive to scan orientation among all considered short-scan methods. The data inconsistencies contained in the real data sets, such as scatter, beam hardening, or data truncation, have little impact on the factorization results. Hence, in reconstructions from both real and simulated data, the factorization method yields better image quality than short-scan FDK, albeit at the cost of some slight, directed high-frequency artifacts that are mostly visible in axial slices.
38(2011); http://dx.doi.org/10.1118/1.3577757
Purpose: Low contrast sensitivity of CT scanners is regularly assessed by subjective scoring of low contrast detectability within phantom CT images. Since the low contrast objects in these phantoms are arranged in known, fixed patterns, subjective rating of low contrast visibility might be biased. The purpose of this study was to develop and validate software for automated, objective low contrast detectability based on a model observer.
Methods: Images of the low contrast module of the Catphan 600 phantom were used for the evaluation of the software. This module contains two subregions: the supraslice region with three groups of low contrast objects (each consisting of nine circular objects with diameters of 2–15 mm and contrasts of 0.3%, 0.5%, and 1.0%, respectively) and the subslice region with three groups of four circular objects each (diameter 3–9 mm; contrast 1.0%). The software offered automated determination of low contrast detectability using an NPWE (nonprewhitening matched filter with an eye filter) model observer for the supraslice region. The model observer correlated templates of the low contrast objects with the acquired images of the Catphan phantom, and a discrimination index d′ was calculated. This index was transformed into a proportion correct (PC) value. In the two-alternative forced choice (2-AFC) experiments used in this study, PC ≥ 75% was proposed as the threshold for deciding whether objects were visible. As a proof of concept, the influence of kVp (80–135 kV), mAs (25–200 mAs), and reconstruction filter (four filters, two soft and two sharp) on low contrast detectability was investigated. To validate the outcome of the software in a qualitative way, a human observer study was performed.
Results: The expected influence of kV, mAs, and reconstruction filter on image quality is consistent with the results of the proposed automated model. Higher values for d′ (or PC) are found with increasing mAs or kV and for the soft reconstruction filters. For the highest contrast group (1%), PC values were well above 75% for all object diameters >2 mm under all conditions. For the 0.5% contrast group, the same behavior was observed for object diameters >3 mm under all conditions. For the 0.3% contrast group, PC values were higher than 75% for object diameters >6 mm, except for the series acquired at the lowest dose (25 mAs), which gave lower PC values. The human observer study showed similar trends.
Conclusions: We have developed an automated method to objectively investigate image quality using the NPWE model in combination with images of the Catphan phantom low contrast module. As a first step, low contrast detectability as a function of both acquisition and reconstruction parameter settings was successfully investigated with the software. In future work, this method could play a role in the evaluation of image reconstruction algorithms, dose reduction strategies, or novel CT technologies, and other model observers may be implemented as well.
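The d′-to-PC transformation used above is a standard relation for 2-AFC experiments, PC = Φ(d′/√2) with Φ the standard normal CDF; the 75% visibility threshold corresponds to d′ ≈ 0.95. A minimal sketch:

```python
import math

def pc_from_dprime(d_prime):
    """Proportion correct in a 2-AFC task for detectability index d':
    PC = Phi(d'/sqrt(2)) = 0.5 * (1 + erf(d'/2)),
    where Phi is the standard normal CDF."""
    return 0.5 * (1.0 + math.erf(d_prime / 2.0))

def is_visible(d_prime, threshold=0.75):
    """Apply the PC >= 75% visibility criterion used in the study."""
    return pc_from_dprime(d_prime) >= threshold
```

A d′ of 0 gives PC = 0.5 (pure guessing between the two alternatives), and PC approaches 1 as d′ grows, which is why PC is a convenient bounded score for detectability.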
38(2011); http://dx.doi.org/10.1118/1.3577758
Purpose: To reduce beam hardening artifacts in CT when both the x-ray spectrum and the material properties are unknown.
Methods: The authors assume that the object can be segmented into a few materials with different attenuation coefficients, and they parameterize the spectrum using a small number of energy bins. The corresponding unknown spectrum parameters and material attenuation values are estimated by minimizing the difference between the measured sinogram data and a simulated polychromatic sinogram. Three iterative algorithms are derived from this approach: two reconstruction algorithms, IGR and IFR, and one sinogram precorrection method, ISP.
Results: The methods are applied to real x-ray data of a high-contrast and a low-contrast phantom. All three methods successfully reduce the cupping artifacts caused by beam polychromaticity, such that the reconstruction of each homogeneous region is homogeneous to good accuracy, even when the segmentation of the preliminary reconstruction image is poor. In addition, the results show that the three methods tolerate relatively large variations in uniformity within the segments.
Conclusions: We show that effective beam hardening correction can be obtained even without prior knowledge of the materials or the spectrum.
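The polychromatic forward model that such an estimation fits against can be sketched for a single ray. The bin-wise discretization below is illustrative; the paper's exact parameterization may differ.

```python
import math

def poly_line_integral(path_lengths, mu_bins, weights):
    """-log of the detected intensity for one ray under a spectrum
    parameterized by a few energy bins (discretized Beer-Lambert law).

    path_lengths[m] : intersection length of the ray with material m
    mu_bins[k][m]   : attenuation of material m in energy bin k
    weights[k]      : normalized photon weight of bin k (sums to 1)
    """
    intensity = 0.0
    for w, mu in zip(weights, mu_bins):
        line = sum(mu_m * length for mu_m, length in zip(mu, path_lengths))
        intensity += w * math.exp(-line)   # Beer-Lambert per energy bin
    return -math.log(intensity)
```

With a single bin this reduces to the linear monochromatic model; with several bins the measured value falls below the mean-attenuation prediction for thick paths, which is exactly the nonlinearity behind cupping artifacts.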
38(2011); http://dx.doi.org/10.1118/1.3577759
Purpose: To develop a 4D [three-dimensional (3D) + time] CT technique to capture high spatial and temporal resolution images of wrist joint motion, so that dynamic joint instabilities can be detected before the development of static joint instability and the onset of osteoarthritis (OA).
Methods: A cadaveric wrist was mounted onto a custom motion simulator and scanned with a dual source CT scanner during radial–ulnar deviation. A dynamic 4D CT technique was used to reconstruct images at 20 equidistant time points within one motion cycle. 3D images of the carpal bones were generated using volume rendering techniques (VRT) at each of the 20 time points, and 4D movies were then generated to depict the dynamic joint motion. The same cadaveric wrist was also scanned after cutting all portions of the scapholunate interosseous ligament to simulate scapholunate joint instability. Image quality was assessed on an ordinal scale (1–4, 4 being excellent) by three experienced orthopedic surgeons (specialized in hand surgery) who scored 2D axial images. Dynamic instability was evaluated by the same surgeons by comparing the two 4D movies of joint motion. Finally, dose reduction was investigated by scanning the cadaveric wrist at different dose levels to determine the lowest radiation dose that did not substantially alter diagnostic image quality.
Results: The mean image quality scores for dynamic and static CT images were 3.7 and 4.0, respectively. The carpal bones, distal radius and ulna, and joint spaces were clearly delineated in the 3D VRT images, without motion blurring or banding artifacts, at all time points during the motion cycle. Appropriate viewing angles could be selected interactively to view any articulating structure using different 3D processing techniques. The motion of each carpal bone and the relative motion among the carpal bones were easily observed in the 4D movies. Joint instability was correctly and easily detected in the scan performed after the ligament was cut, by observing the relative motion between the scaphoid and lunate bones. Diagnostic capability was not sacrificed with a volume CT dose index (CTDIvol) as low as 18 mGy for the whole scan, with an estimated skin dose of approximately 33 mGy, which is much lower than the threshold for transient skin erythema (2000 mGy).
Conclusions: The proposed dynamic 4D CT imaging technique generated images with high spatial and high temporal resolution without requiring periodic joint motion. Preliminary results from this cadaveric study demonstrate the feasibility of detecting joint instability using this technique.
38(2011); http://dx.doi.org/10.1118/1.3577764
Purpose: This work seeks to develop exact confidence interval estimators for figures of merit that describe the performance of linear observers, and to demonstrate how these estimators can be used in the context of x-ray computed tomography (CT). The figures of merit are the receiver operating characteristic (ROC) curve and associated summary measures, such as the area under the ROC curve. Linear computerized observers are valuable for optimizing parameters associated with image reconstruction algorithms and data acquisition geometries. They provide a means to assess image quality with metrics that account not only for shift-variant resolution and nonstationary noise but that are also task-based.
Methods: We suppose that a linear observer with a fixed template has been defined, and we focus on the problem of assessing the performance of this observer for the task of deciding whether an unknown lesion is present at a specific location. We introduce a point estimator for the observer signal-to-noise ratio (SNR) and identify its sampling distribution. We then show that exact confidence intervals can be constructed from this distribution. The sampling distribution of our SNR estimator is identified under the following hypotheses: (i) the observer ratings are normally distributed for each class of images, and (ii) the variance of the observer ratings is the same for each class of images. These assumptions are, for example, appropriate in CT for ratings produced by linear observers applied to low-contrast lesion detection tasks.
Results: Unlike existing approaches to the estimation of ROC confidence intervals, the new confidence intervals presented here have exactly known coverage probabilities when our data assumptions are satisfied. Furthermore, they are applicable to the most commonly used ROC summary measures, and they may be easily computed (a computer routine is supplied along with this article on the Medical Physics Website). The utility of our exact interval estimators is demonstrated through an image quality evaluation example using real x-ray CT images. Strong robustness is also shown to potential deviations from the assumption that the ratings for the two classes of images have equal variance. Another aspect of our interval estimators is that we can calculate their mean length exactly for fixed parameter values, which enables precise investigations of sampling effects. We demonstrate this aspect by exploring the potential reduction in statistical variability that can be gained by using additional images from one class, if such images are readily available. We find that when additional images from one class are used for an ROC study, the mean AUC confidence interval length for our estimator can decrease by as much as 35%.
Conclusions: We have shown that exact confidence intervals can be constructed for ROC curves and for ROC summary measures associated with fixed linear computerized observers applied to binary discrimination tasks at a known location. Although our intervals apply only under specific conditions, we believe that they form a valuable tool for the important problem of optimizing parameters associated with image reconstruction algorithms and data acquisition geometries, particularly in x-ray CT.
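A minimal sketch of an SNR point estimator of the kind described: under the stated normal, equal-variance assumptions, a pooled-variance estimate is the natural choice, and the AUC of a binormal equal-variance observer follows as AUC = Φ(SNR/√2). This is an illustration of the standard relations, not the authors' supplied routine.

```python
import math

def observer_snr(ratings_signal, ratings_noise):
    """Point estimate of a linear observer's SNR from rating samples of the
    two image classes: SNR_hat = (mean1 - mean0) / s_pooled, under the
    normal, equal-variance assumption. The corresponding binormal AUC is
    Phi(SNR/sqrt(2)) = 0.5 * (1 + erf(SNR/2))."""
    n1, n0 = len(ratings_signal), len(ratings_noise)
    m1 = sum(ratings_signal) / n1
    m0 = sum(ratings_noise) / n0
    ss = (sum((x - m1) ** 2 for x in ratings_signal)
          + sum((x - m0) ** 2 for x in ratings_noise))
    s_pooled = math.sqrt(ss / (n1 + n0 - 2))   # pooled standard deviation
    snr = (m1 - m0) / s_pooled
    auc = 0.5 * (1.0 + math.erf(snr / 2.0))
    return snr, auc
```

The paper's contribution is the sampling distribution of such an estimator, from which interval endpoints with exactly known coverage can be read off; the point estimate above is only the starting ingredient.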
38(2011); http://dx.doi.org/10.1118/1.3577765
Purpose: Gel’fand and Graev performed classical work on the inversion of integral transforms in different spaces [Gel’fand and Graev, Funct. Anal. Appl. 25(1), 1–5 (1991)]. This paper discusses their key results for further research and development.
Methods: The Gel’fand–Graev inversion formula reveals a fundamental relationship between projection data and the Hilbert transform of the image to be reconstructed. This differentiated backprojection (DBP)/backprojection filtration (BPF) approach was rediscovered in the CT field and applied in important applications such as reconstruction from truncated projections, interior tomography, and limited-angle tomography. Here the authors present the Gel’fand–Graev inversion formula in a 3D setting assuming the 1D x-ray transform.
Results: The pseudodifferential operator is a powerful theoretical tool. There is a fundamental mathematical link between the Gel’fand–Graev formula and the DBP (or BPF) approach in the case of the 1D x-ray transform in 3D real space.
Conclusions: This paper shows the power of mathematics for tomographic imaging and the value of a purely theoretical finding, which may at first glance appear quite irrelevant to daily healthcare.
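Schematically, in the simpler 2D parallel-beam case (stated here only to illustrate the kind of relationship involved; the paper itself treats the 1D x-ray transform in 3D), the differentiated backprojection of the data $p(\theta, s)$ over a half-turn yields the Hilbert transform of the image along a fixed direction:

```latex
b(\mathbf{x}) \;=\; \int_{\theta_0}^{\theta_0+\pi}
  \left.\frac{\partial p(\theta, s)}{\partial s}\right|_{s=\mathbf{x}\cdot\boldsymbol{\theta}^{\perp}} \mathrm{d}\theta
\;=\; 2\pi\,\bigl(\mathcal{H}_{\mathbf{e}} f\bigr)(\mathbf{x}),
```

where $\mathcal{H}_{\mathbf{e}}$ denotes the Hilbert transform of $f$ along a direction $\mathbf{e}$ determined by the angular range (the sign and the $2\pi$ normalization depend on the convention used). The image is then recovered by a finite inverse Hilbert transform along that line, which is the filtration step of BPF.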
38(2011); http://dx.doi.org/10.1118/1.3577766
Purpose: To develop a new design of an x-ray beam shaper for helical computed tomography (CT) that increases dose utilization.
Methods: For typical reconstruction algorithms in helical CT, different data are utilized with different weights during backprojection. In particular, data from the outer detector rows, i.e., data acquired at larger cone angles, are used with smaller weights than data from the central detector rows. Given this spatial variation of the backprojection weights, a beam shaper is designed that creates a spatial variation of the noise variance across the detector such that the backprojection weights used are the statistically optimal weights. The effect of the beam shaper on the reconstructed images is studied using simulated data and both analytical and iterative reconstruction algorithms.
Results: For a particular analytical reconstruction algorithm, we obtain an average noise reduction of 12% within the object. In combination with iterative reconstruction, the beam shaper creates an insensitivity to patient motion without introducing any heuristic data weighting.
Conclusions: The demonstrated noise reduction of 12% is equivalent to a possible dose saving of 25%. This dose saving can be achieved by a relatively minor hardware change in the CT system and does not require any changes to the reconstruction algorithm.
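The noise-to-dose conversion quoted in the conclusions follows from the quantum-limited scaling of CT noise, where the noise standard deviation varies as 1/√dose. Trading a fractional noise reduction r for dose at matched noise gives a saving of 1 − (1 − r)²; for r = 12% this back-of-envelope check yields about 23%, broadly consistent with the roughly 25% quoted (the small difference may reflect rounding or a different accounting):

```python
def dose_saving_from_noise_reduction(r):
    """Quantum-limited CT: noise std scales as 1/sqrt(dose), so a fractional
    noise reduction r at fixed dose can instead be traded for a fractional
    dose reduction of 1 - (1 - r)**2 at the original noise level."""
    return 1.0 - (1.0 - r) ** 2
```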
38(2011); http://dx.doi.org/10.1118/1.3578342
Purpose: Circular scanning with an off-center planar detector is an acquisition scheme that saves detector area while keeping a large field of view (FOV). Several filtered backprojection (FBP) algorithms have been proposed earlier. The purpose of this work is to present two newly developed backprojection filtration (BPF) variants and to evaluate their image quality compared to the existing state-of-the-art FBP methods.
Methods: The first new BPF algorithm applies redundancy weighting of overlapping opposite projections before differentiation within a single projection. The second uses Katsevich-type differentiation involving two neighboring projections, followed by redundancy weighting and backprojection. An averaging scheme is presented to mitigate the streak artifacts inherent to circular BPF algorithms along the Hilbert filter lines in the off-center transaxial slices of the reconstructions. Image quality is assessed visually on reconstructed slices of simulated and clinical data. Quantitative evaluation studies are performed with the FORBILD head phantom by calculating root-mean-squared deviations (RMSDs) from the voxelized phantom for different detector overlap settings, and by investigating the noise-resolution tradeoff with a wire phantom in the full-detector and off-center scenarios.
Results: The noise-resolution behavior of all off-center reconstruction methods corresponds to their full-detector performance, with the best resolution for the FDK-based methods in the given imaging geometry. With respect to RMSD and visual inspection, the proposed BPF with Katsevich-type differentiation outperforms all other methods for the smallest chosen detector overlap of about 15 mm. The best FBP method is the algorithm that is also based on Katsevich-type differentiation and subsequent redundancy weighting. For wider overlaps of about 40–50 mm, these two algorithms produce similar results, outperforming the other three methods. The clinical case, with a detector overlap of about 17 mm, confirms these results.
Conclusions: The BPF-type reconstructions with Katsevich differentiation are largely independent of the size of the detector overlap and give the best results with respect to RMSD and visual inspection for minimal detector overlap. The increased homogeneity will improve the correct assessment of lesions in the entire field of view.
38(2011); http://dx.doi.org/10.1118/1.3528218
Purpose: Analytic CT image reconstruction is a computationally demanding task. The even more demanding iterative reconstruction algorithms are currently finding their way into clinical routine because their image quality is superior to that of analytic image reconstruction. The authors thoroughly analyze a so far unconsidered but valuable feature of tomorrow’s reconstruction hardware (CPU and GPU) that allows the forward projection and backprojection steps, which are the computationally most demanding parts of any reconstruction algorithm, to be implemented much more efficiently.
Methods: Instead of the standard 32 bit floating-point values (float), a recently standardized 16 bit floating-point format (half) is adopted for data representation in the image domain and in the raw data domain. The reduction in the total data volume reduces the traffic on the memory bus, which is the bottleneck of today’s high-performance algorithms, by 50%. In CT simulations and CT measurements, float reconstructions (gold standard) and half reconstructions are compared visually via difference images and by quantitative image quality evaluation. This is done for analytical reconstruction (filtered backprojection) and iterative reconstruction (ordered subset SART).
Results: The magnitude of quantization noise, caused by the reduced data precision of both raw data and image data during image reconstruction, is negligible. This is clearly shown for filtered backprojection and for iterative ordered subset SART reconstruction. In filtered backprojection, the implementation of the backprojection should be optimized for low data precision if the image data are represented in the half format. In ordered subset SART image reconstruction, no adaptations are necessary and the convergence speed remains unchanged.
Conclusions: Half precision floating-point values allow CT image reconstruction to be sped up without compromising image quality.
38(2011); http://dx.doi.org/10.1118/1.3532396
Purpose: To investigate the properties of tomographic grating-based phase contrast imaging with respect to its noise power spectrum and the energy dependence of the achievable contrast-to-noise ratio.
Methods: Tomographic simulations of an 11 cm diameter object composed of materials of biological interest were conducted at energies ranging from 25 to 85 keV using a wave propagation approach. Using a Monte Carlo simulation of the x-ray attenuation within the object, it is verified that the simulated measurement deposits the same dose within the object at each energy.
Results: The noise in reconstructed phase contrast computed tomography images shows a maximum at low spatial frequencies. The contrast-to-noise ratio reaches a maximum around 45 keV for the simulated object. The general dependence of the contrast-to-noise ratio on energy appears to be independent of the material. Compared with reconstructed absorption contrast images, the reconstructed phase contrast images show sometimes better, sometimes worse, and sometimes similar contrast-to-noise ratio, depending on the material and the energy.
Conclusions: Phase contrast images provide information additional to conventional absorption contrast images and might thus be useful for medical applications. However, the observed noise power spectrum in reconstructed phase contrast images implies that the usual tradeoff between noise and resolution is less efficient for phase contrast imaging than for absorption contrast imaging. Therefore, high-resolution imaging is a strength of phase contrast imaging, but low-resolution imaging is not. This might hamper the clinical application of the method in cases where a low spatial resolution is sufficient for diagnosis.
38(2011); http://dx.doi.org/10.1118/1.3560887
Purpose: The authors developed an iterative image-reconstruction algorithm, based on constrained total-variation (TV) minimization, for application to low-intensity computed tomography projection data. The algorithm design focuses on recovering structure on length scales comparable to a detector bin width.
Methods: Recovering resolution on the scale of a detector bin requires that the pixel size be much smaller than the bin width. The resulting image array contains many more pixels than data samples, and this undersampling is overcome with a combination of Fourier upsampling of each projection and the use of constrained TV minimization, as suggested by compressive sensing. The presented pseudocode for solving constrained TV minimization is designed to yield an accurate solution to this optimization problem within 100 iterations.
Results: The proposed image-reconstruction algorithm is applied to a low-intensity scan of a rabbit with a thin wire to test the resolution. The proposed algorithm is compared to filtered backprojection (FBP).
Conclusions: The algorithm may have an advantage over FBP in that the resulting noise level is lower at equivalent contrast levels of the wire.
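The total-variation objective at the heart of such algorithms can be written down compactly. The anisotropic discrete form below is one common choice; the paper's exact discretization and constraint handling live in its pseudocode and are not reproduced here.

```python
def total_variation(img):
    """Anisotropic discrete total variation of a 2D image (list of rows):
    the sum of absolute differences between each pixel and its right and
    lower neighbors. Minimizing TV subject to data-fidelity constraints
    favors piecewise-constant images with sharp edges."""
    rows, cols = len(img), len(img[0])
    tv = 0.0
    for i in range(rows):
        for j in range(cols):
            if i + 1 < rows:                      # vertical difference
                tv += abs(img[i + 1][j] - img[i][j])
            if j + 1 < cols:                      # horizontal difference
                tv += abs(img[i][j + 1] - img[i][j])
    return tv
```

A flat image has TV = 0, and a sharp step contributes only its edge length times the jump height, whereas noise raises TV everywhere; this asymmetry is what lets TV minimization suppress noise while preserving the thin-wire edge.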