Table of contents:
Volume 43, Issue 3, March 2016
43 (2016); http://dx.doi.org/10.1118/1.4939262
- VISION 20/20
Vision 20/20: Magnetic resonance imaging-guided attenuation correction in PET/MRI: Challenges, solutions, and opportunities
43 (2016); http://dx.doi.org/10.1118/1.4941014
Attenuation correction is an essential component of the long chain of data correction techniques required to achieve the full potential of quantitative positron emission tomography (PET) imaging. The development of combined PET/magnetic resonance imaging (MRI) systems spurred widespread interest in developing novel strategies for deriving accurate attenuation maps with the aim of improving the quantitative accuracy of these emerging hybrid imaging systems. The attenuation map in PET/MRI should ideally be derived from anatomical MR images; however, MRI intensities reflect proton density and relaxation time properties of biological tissues rather than their electron density and photon attenuation properties. Therefore, in contrast to PET/computed tomography, there is no standardized global mapping between MRI signal intensities and linear attenuation coefficients at 511 keV. Moreover, in standard MRI sequences, bone and lung tissues do not produce measurable signals owing to their low proton density and short transverse relaxation times. MR images are also inevitably subject to artifacts that degrade their quality, thus compromising their applicability for the task of attenuation correction in PET/MRI. MRI-guided attenuation correction strategies can be classified into three broad categories: (i) segmentation-based approaches, (ii) atlas-registration and machine learning methods, and (iii) emission/transmission-based approaches. This paper summarizes past and current state-of-the-art developments and the latest advances in PET/MRI attenuation correction. The advantages and drawbacks of each approach for addressing the challenges of MR-based attenuation correction are comprehensively described. The opportunities brought by both the MRI and PET imaging modalities for deriving accurate attenuation maps and improving PET quantification are also elaborated.
Future prospects and potential clinical applications of these techniques and their integration in commercial systems will also be discussed.
- THERAPEUTIC INTERVENTIONS
- Research Articles
43 (2016); http://dx.doi.org/10.1118/1.4940350
Purpose:
Iterative methods for beam angle selection (BAS) for intensity-modulated radiation therapy (IMRT) planning sequentially construct a beneficial ensemble of beam directions. In a naïve implementation, the nth beam is selected by adding beam orientations one by one from a discrete set of candidates to an existing ensemble of (n − 1) beams. The best beam orientation is identified in a time-consuming process by solving the fluence map optimization (FMO) problem for every candidate beam and selecting the beam that yields the largest improvement in the objective function value. This paper evaluates two alternative methods to accelerate iterative BAS based on surrogates for the FMO objective function value.
Methods:
We suggest selecting candidate beams not based on the FMO objective function value after convergence but (1) based on the objective function value after five FMO iterations of a gradient-based algorithm and (2) based on a projected gradient of the FMO problem in the first iteration. The performance of the objective function surrogates is evaluated based on the resulting objective function values and dose statistics in a treatment planning study comprising three intracranial, three pancreas, and three prostate cases. Furthermore, iterative BAS is evaluated for an application in which a small number of noncoplanar beams complement a set of coplanar beam orientations. This scenario is of practical interest, as noncoplanar setups may require additional attention from the treatment personnel for every couch rotation.
Results:
Iterative BAS relying on objective function surrogates yields results similar to naïve BAS with regard to objective function values and dose statistics. At the same time, early stopping of the FMO and using the projected gradient during the first iteration enable reductions in computation time by approximately one to two orders of magnitude. With regard to the clinical delivery of noncoplanar IMRT treatments, we showed that optimized beam ensembles using only a few noncoplanar beam orientations often approach the plan quality of fully noncoplanar ensembles.
Conclusions:
We conclude that iterative BAS in combination with objective function surrogates can be a viable option to implement automated BAS at clinically acceptable computation times.
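The greedy selection loop with an early-stopping surrogate described above can be sketched as follows. This is a toy illustration, not the authors' planning code: the dose-influence matrices, candidate angles, prescription, and problem sizes are synthetic placeholders.

```python
import numpy as np

# Sketch of iterative beam angle selection (BAS) with a cheap FMO surrogate:
# rather than solving the fluence map optimization (FMO) to convergence for
# every candidate, run a few projected-gradient iterations and rank candidates
# by the resulting objective value.

rng = np.random.default_rng(0)
n_vox, n_bixels = 50, 8
target = np.ones(n_vox)                      # prescribed dose per voxel (toy)

def fmo_objective(D, x, d_ref):
    """Quadratic FMO objective: squared deviation of dose D @ x from d_ref."""
    return float(np.sum((D @ x - d_ref) ** 2))

def surrogate_fmo(D, d_ref, n_iter=5):
    """Objective value after a few projected-gradient FMO iterations."""
    step = 1.0 / (2.0 * np.linalg.norm(D, 2) ** 2)   # 1 / Lipschitz constant
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = 2.0 * D.T @ (D @ x - d_ref)
        x = np.maximum(x - step * grad, 0.0)         # fluence is nonnegative
    return fmo_objective(D, x, d_ref)

# candidate beams: one random dose-influence matrix per angle
candidates = {a: rng.random((n_vox, n_bixels)) * 0.1 for a in range(0, 360, 40)}

def select_beams(n_beams):
    """Greedily add the candidate whose surrogate objective is lowest."""
    ensemble = []
    for _ in range(n_beams):
        scores = {}
        for angle, D in candidates.items():
            if angle in ensemble:
                continue
            D_all = np.hstack([candidates[a] for a in ensemble] + [D])
            scores[angle] = surrogate_fmo(D_all, target)
        ensemble.append(min(scores, key=scores.get))
    return ensemble
```

In the naïve variant, `surrogate_fmo` would be replaced by a full FMO solve for every candidate, which is the expensive step the surrogates avoid.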
43 (2016); http://dx.doi.org/10.1118/1.4941007
Purpose:
The planning of an intensity modulated radiation therapy treatment requires the optimization of the fluence intensities. The fluence map optimization (FMO) is often formulated as a nonlinear continuous programming problem, requiring the planner to define a priori weights and/or lower bounds that are iteratively changed within a trial-and-error procedure until an acceptable plan is reached. In this work, the authors describe an alternative approach for FMO that releases the human planner from trial-and-error procedures, contributing to the automation of the planning process.
Methods:
The FMO is represented by a voxel-based convex penalty continuous nonlinear model. This model makes use of both weights and lower/upper bounds to guide the optimization process toward interesting solutions that are able to satisfy all the constraints defined for the treatment. All the model's parameters are iteratively changed by resorting to a fuzzy inference system. This system analyzes how far the current solution is from a desirable solution, changing both weights and lower/upper bounds in a completely automated way. The fuzzy inference system is based on fuzzy reasoning that enables the use of common-sense rules within an iterative optimization process. The method is built in two stages: in the first stage, an admissible solution is calculated, trying to guarantee that all the treatment planning constraints are satisfied; in this stage, the algorithm improves the irradiation of the planning target volumes as much as possible. In the second stage, the algorithm tries to improve organ sparing without jeopardizing tumor coverage.
Results:
The proposed methodology was applied to ten head-and-neck cancer cases already treated at the Portuguese Oncology Institute of Coimbra (IPOCFG) and flagged as complex cases. IMRT treatment was considered, with 7, 9, and 11 equidistant beam angles. It was possible to obtain admissible solutions for all the patients considered, with no human planner intervention. The results obtained were compared with the optimized solution using a similar optimization model but with human planner intervention. For the vast majority of cases, it was possible to improve organ sparing and at the same time assure better tumor coverage.
Conclusions:
Embedding a fuzzy inference system into FMO allows human planner reasoning to be used in the guidance of the optimization process toward interesting regions in a truly automated way. The proposed methodology is capable of calculating high quality plans within reasonable computational times and can be an important contribution toward fully automated radiation therapy treatment planning.
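The idea of encoding common-sense planning rules as fuzzy inference can be illustrated with a minimal sketch. The membership functions, the three rules, and the update factors below are invented for illustration; the authors' actual rule base is not described at this level of detail in the abstract.

```python
# Minimal sketch of fuzzy-rule-driven weight adaptation for FMO: map how badly
# a constraint is violated (normalized to [0, 1]) to a multiplicative update of
# the corresponding penalty weight, using three common-sense rules.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def weight_update_factor(violation):
    """Weighted-average (Sugeno-style) defuzzification of three rules:
    small violation -> keep weight; medium -> increase moderately;
    large -> increase strongly."""
    mu_small = tri(violation, -0.5, 0.0, 0.5)
    mu_medium = tri(violation, 0.0, 0.5, 1.0)
    mu_large = tri(violation, 0.5, 1.0, 1.5)
    factors = (1.0, 1.5, 2.0)          # rule consequents (illustrative)
    mus = (mu_small, mu_medium, mu_large)
    return sum(m * f for m, f in zip(mus, factors)) / sum(mus)
```

Within the two-stage procedure described above, such a factor would be applied to each structure's weight (and analogously to its bounds) at every outer iteration, replacing the human trial-and-error loop.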
43 (2016); http://dx.doi.org/10.1118/1.4940789
Purpose:
To develop methods for evaluation and mitigation of dosimetric impact due to respiratory and diaphragmatic motion during free breathing in treatment of distal esophageal cancers using intensity-modulated proton therapy (IMPT).
Methods:
This was a retrospective study of 11 patients with distal esophageal cancer. For each patient, four-dimensional computed tomography (4D CT) data were acquired, and a nominal dose was calculated on the average phase of the 4D CT. The changes in water equivalent thickness (ΔWET) required to cover the treatment volume from the peak of inspiration to the valley of expiration were calculated for a full range of beam angle rotation. Two IMPT plans were calculated: one at beam angles corresponding to small ΔWET and one at beam angles corresponding to large ΔWET. Four patients were selected for the calculation of 4D-robustness-optimized IMPT plans due to large motion-induced dose errors generated in conventional IMPT. To quantitatively evaluate motion-induced dose deviation, the authors calculated the lowest dose received by 95% (D95) of the internal clinical target volume for the nominal dose, the D95 calculated on the maximum inhale and exhale phases of the 4D CT, the 4D composite dose, and the 4D dynamic dose for a single fraction.
Results:
The dose deviation increased with the average ΔWET of the implemented beams, ΔWETave. When ΔWETave was less than 5 mm, the dose error was less than 1 cobalt gray equivalent based on DCT0 and DCT50. The dose deviation determined on the basis of DCT0 and DCT50 was proportionally larger than that determined on the basis of the 4D composite dose. The 4D-robustness-optimized IMPT plans notably reduced the overall dose deviation of multiple fractions and the dose deviation caused by the interplay effect in a single fraction.
Conclusions:
In IMPT for distal esophageal cancer, ΔWET analysis can be used to select the beam angles that are least affected by respiratory and diaphragmatic motion. To further reduce dose deviation, the 4D-robustness optimization can be implemented for IMPT planning. Calculation of DCT0 and DCT50 is a conservative method to estimate the motion-induced dose errors.
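The ΔWET quantity used for beam angle ranking can be sketched in a few lines: the water equivalent thickness along a ray is the path-length integral of relative proton stopping power, and ΔWET is its change between breathing phases. The voxel values, geometry, and step size below are toy assumptions, not patient data.

```python
import numpy as np

# Illustrative ΔWET computation for a single beam ray sampled at uniform steps.

def wet(rel_stopping_power, step_mm=1.0):
    """Water equivalent thickness (mm) of a ray through tissue:
    sum of relative stopping power times step length."""
    return float(np.sum(rel_stopping_power) * step_mm)

# toy ray through 100 mm of tissue, inhale vs. exhale phases
inhale = np.full(100, 1.0)      # water-like tissue along the whole path
exhale = inhale.copy()
exhale[40:60] = 0.25            # diaphragm motion brings lung into the path

delta_wet = abs(wet(inhale) - wet(exhale))
```

Repeating this for rays covering the target at every candidate gantry angle yields the ΔWET-versus-angle curve from which the least motion-sensitive beam directions (e.g., ΔWETave below 5 mm, per the results above) can be chosen.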
43 (2016); http://dx.doi.org/10.1118/1.4941363
Purpose:
To determine how training set size affects the accuracy of knowledge-based treatment planning (KBP) models.
Methods:
The authors selected four models from three classes of KBP approaches, corresponding to three distinct quantities that KBP models may predict: dose–volume histogram (DVH) points, DVH curves, and objective function weights. DVH point prediction is done using the best plan from a database of similar clinical plans; DVH curve prediction employs principal component analysis and multiple linear regression; and objective function weight prediction uses either logistic regression or K-nearest neighbors. The authors trained each KBP model using training sets of sizes n = 10, 20, 30, 50, 75, 100, 150, and 200. The authors set aside 100 randomly selected patients from their cohort of 315 prostate cancer patients from Princess Margaret Cancer Center to serve as a validation set for all experiments. For each value of n, the authors randomly selected 100 different training sets with replacement from the remaining 215 patients. Each of the 100 training sets was used to train a model for each value of n and for each KBP approach. To evaluate the models, the authors predicted the KBP endpoints for each of the 100 patients in the validation set. To estimate the minimum required sample size, the authors used statistical testing to determine whether the median error for each sample size from 10 to 150 is equal to the median error for the maximum sample size of 200.
Results:
The minimum required sample size was different for each model. The DVH point prediction method predicts two dose metrics for the bladder and two for the rectum. The authors found that more than 200 samples were required to achieve consistent model predictions for all four metrics. For DVH curve prediction, the authors found that at least 75 samples were needed to accurately predict the bladder DVH, while only 20 samples were needed to predict the rectum DVH. Finally, for objective function weight prediction, at least 10 samples were needed to train the logistic regression model, while at least 150 samples were required to train the K-nearest neighbor methodology.
Conclusions:
The minimum sample size needed to accurately train KBP models for prostate cancer depends on the specific model and endpoint to be predicted. The authors' results may provide a lower bound for more complicated tumor sites.
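The resampling protocol above can be sketched as follows. The "model" here is a deliberately trivial mean-dose predictor on synthetic numbers, and the plateau tolerance is an assumption; the point is only the structure of the experiment: for each n, draw many training sets with replacement, evaluate on a fixed validation set, and find the smallest n whose median error matches that of the largest n.

```python
import numpy as np

# Sketch of the sample-size experiment for a knowledge-based planning model.

rng = np.random.default_rng(1)
cohort = rng.normal(70.0, 5.0, size=215)       # toy "achievable dose" values
validation = rng.normal(70.0, 5.0, size=100)   # held-out validation patients

def median_error(n, n_resamples=100):
    """Median (over resamples) of the median absolute validation error
    of a toy predictor trained on n patients."""
    errs = []
    for _ in range(n_resamples):
        train = rng.choice(cohort, size=n, replace=True)
        pred = train.mean()                    # stand-in for a KBP model
        errs.append(np.median(np.abs(validation - pred)))
    return float(np.median(errs))

sizes = [10, 20, 30, 50, 75, 100, 150, 200]
errors = {n: median_error(n) for n in sizes}
ref = errors[200]
# smallest n whose median error is within an assumed tolerance of the n=200 reference
min_n = min(n for n in sizes if abs(errors[n] - ref) < 0.5)
```

The authors used formal statistical testing of median equality rather than a fixed tolerance; the tolerance here merely stands in for that test.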
- DIAGNOSTIC IMAGING (IONIZING AND NON-IONIZING)
43 (2016); http://dx.doi.org/10.1118/1.4941015
Purpose:
Image lag in the flat-panel detector used for linac-integrated cone beam computed tomography (CBCT) has a degrading effect on CBCT image quality. The most prominent visible artifact is the presence of a bright semicircular structure in the transverse view of the scans, also known as the radar artifact. Several correction strategies have been proposed, but until now the clinical introduction of such corrections has remained unreported. In November 2013, the authors clinically implemented a previously proposed image lag correction on all of their machines at their main site in Amsterdam. The purpose of this study was to retrospectively evaluate the effect of the correction on the quality of CBCT images and to evaluate the required calibration frequency.
Methods:
Image lag was measured in five clinical CBCT systems (Elekta Synergy 4.6) using an in-house developed beam-interrupting device that stops the x-ray beam midway through the data acquisition of an unattenuated beam for calibration. A triple exponential falling edge response was fitted to the measured data and used to correct image lag from the projection images with an infinite impulse response filter. This filter, including an extrapolation for saturated pixels, was incorporated in the authors' in-house developed clinical CBCT reconstruction software. To investigate the short-term stability of the lag and associated parameters, a series of five image lag measurements over a period of three months was performed. For quantitative analysis, the authors retrospectively selected ten patients treated in the pelvic region. The apparent contrast was quantified in polar coordinates for scans reconstructed using the parameters obtained from different dates, with and without saturation handling.
Results:
Visually, the radar artifact was minimal in scans reconstructed using the image lag correction, especially when saturation handling was used. In patient imaging, there was a significant reduction of the apparent contrast from 43 ± 16.7 to 15.5 ± 11.9 HU without saturation handling and to 9.6 ± 12.1 HU with saturation handling, depending on the date of the calibration. The image lag correction parameters were stable over a period of 3 months. The computational load increased by approximately 10%, not endangering the fast in-line reconstruction.
Conclusions:
The lag correction was successfully implemented clinically and removed most image lag artifacts, thus improving the image quality. The image lag correction parameters were stable for 3 months, indicating that infrequent calibration is sufficient.
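A multi-exponential falling-edge model admits a cheap recursive correction, since each exponential term can be tracked with a single state variable per pixel, one update per frame. The amplitudes and time constants below are made-up illustrations, not the fitted values from the paper, and saturation handling is omitted.

```python
import math

# Sketch of recursive image-lag correction for one detector pixel, with the
# lag modeled as a sum of three decaying exponentials (triple exponential
# falling edge response).

TERMS = [(0.04, 3.0), (0.02, 15.0), (0.01, 80.0)]  # (amplitude, tau in frames)

def simulate_lag(signal):
    """Forward model: each measured frame is the true frame plus a decaying
    memory of past frames, one exponential state per term."""
    states = [0.0] * len(TERMS)
    out = []
    for x in signal:
        out.append(x + sum(states))
        states = [math.exp(-1.0 / tau) * (s + a * x)
                  for s, (a, tau) in zip(states, TERMS)]
    return out

def correct_lag(measured):
    """Invert the forward model frame by frame (O(1) per frame per pixel)."""
    states = [0.0] * len(TERMS)
    out = []
    for y in measured:
        x = y - sum(states)                 # subtract predicted residual lag
        out.append(x)
        states = [math.exp(-1.0 / tau) * (s + a * x)
                  for s, (a, tau) in zip(states, TERMS)]
    return out
```

Because the correction mirrors the forward recursion exactly, it reconstructs the true signal when the model parameters match, which is why the calibration stability reported above matters.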
Scatter radiation intensities around a clinical digital breast tomosynthesis unit and the impact on radiation shielding considerations
43 (2016); http://dx.doi.org/10.1118/1.4940352
Purpose:
To measure the scattered radiation intensity around a clinical digital breast tomosynthesis (DBT) unit and to provide updated data for radiation shielding design for DBT systems with tungsten-anode x-ray tubes.
Methods:
The continuous distribution of scattered x-rays from a clinical DBT system (Hologic Selenia Dimensions) was measured within an angular range of 0°–180° using a linear-array x-ray detector (X-Scan 0.8f3-512, Detection Technology, Inc., Finland), which was calibrated for the x-ray spectrum range of the DBT unit. The effects of x-ray field size, phantom size, and x-ray kVp/filter combination were investigated. Following a methodology previously developed by Simpkin, the scatter fraction was determined for the DBT system as a function of angle around the phantom center. Detailed calculations of the scatter intensity from a DBT system were demonstrated using the measured scatter fraction data.
Results:
For the 30 and 35 kVp acquisitions, the scatter-to-primary ratio and scatter fraction data closely matched the data previously measured by Simpkin. However, the measured data from this study demonstrated the nonisotropic distribution of the scattered radiation around a DBT system, with two strong peaks around 25° and 160°. The majority of the scatter radiation (>70%) originated from the imaging detector assembly rather than the phantom. With a workload from a previous survey performed at MGH, the scatter air kerma at 1 m from the phantom center is 1.76 × 10−2 mGy patient−1 for the wall/door, 1.64 × 10−1 mGy patient−1 for the floor, and 3.66 × 10−2 mGy patient−1 for the ceiling.
Conclusions:
Compared with previously measured data for mammographic systems, the scatter air kerma from the Hologic DBT unit is at least two times higher. The main reasons include the harder primary beam with a higher workload (measured as total mAs/week), the added tomosynthesis acquisition, and strong small-angle forward scattering. Due to its highly conservative initial assumptions, the shielding recommendation from NCRP Report 147 is still sufficient for the Hologic DBT system given the workload from a previous survey at MGH. With the data provided by this study, accurate shielding calculations can be performed for Hologic DBT systems with specific workloads and barrier distances.
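A barrier check using the per-patient scatter air kerma quoted above reduces to an inverse-square scaling and a comparison against a weekly design goal. The wall-direction value is taken from the abstract; the weekly patient load, barrier distance, and design goal below are illustrative assumptions, not values from the paper.

```python
# Sketch of a shielding workload calculation from measured per-patient scatter.

K_WALL = 1.76e-2            # mGy per patient at 1 m, wall/door direction (abstract)
patients_per_week = 100     # assumed workload
d_m = 2.0                   # assumed distance from phantom center to barrier (m)
design_goal = 0.1           # assumed weekly design goal, mGy/week

# inverse-square scaling of the 1 m per-patient kerma to the barrier distance
weekly_kerma = K_WALL * patients_per_week / d_m**2
needs_shielding = weekly_kerma > design_goal
```

With these assumed numbers the unshielded weekly kerma exceeds the goal, so a barrier transmission factor (and hence lead or gypsum thickness) would then be chosen to bring it below the design goal.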
Contour interpolated radial basis functions with spline boundary correction for fast 3D reconstruction of the human articular cartilage from MR images
43 (2016); http://dx.doi.org/10.1118/1.4941076
Purpose:
The aim of this work is to demonstrate a new, user-friendly image processing technique that can provide a near real-time 3D reconstruction of the articular cartilage of the human knee from MR images. This would serve as a point-of-care 3D visualization tool to assist a consultant radiologist in visualizing the human articular cartilage.
Methods:
The authors introduce a novel fusion of an adaptation of the contour method known as "contour interpolation (CI)" with radial basis functions (RBFs), which they describe as "CI-RBFs." The authors also present a spline boundary correction that further enhances the volume estimation of the method. A subject cohort consisting of 17 right nonpathological knees (ten female and seven male) is assessed to validate the quality of the proposed method. The authors demonstrate how the CI-RBF method dramatically reduces the number of data points required for fitting an implicit surface to the entire cartilage, thus significantly improving the speed of reconstruction over the comparable RBF reconstruction method of Carr. The authors compare the CI-RBF method volume estimation to a typical commercial package (3D Doctor), Carr's RBF method, and a benchmark manual method for the reconstruction of the femoral, tibial, and patellar cartilages.
Results:
The authors demonstrate that the CI-RBF method significantly reduces the number of data points (p-value < 0.0001) required for fitting an implicit surface to the cartilage, by 48%, 31%, and 44% for the patellar, tibial, and femoral cartilages, respectively. This significantly improves the speed of reconstruction (p-value < 0.0001), by 39%, 40%, and 44% for the patellar, tibial, and femoral cartilages over the comparable RBF model of Carr, providing near real-time reconstructions of 6.49, 8.88, and 9.43 min for the patellar, tibial, and femoral cartilages, respectively. In addition, it is demonstrated that the CI-RBF method matches the volume estimation of a typical commercial package (3D Doctor), Carr's RBF method, and a benchmark manual method for the reconstruction of the femoral, tibial, and patellar cartilages. Furthermore, the performance of the segmentation method used for the extraction of the femoral, tibial, and patellar cartilages is assessed with Dice similarity coefficient, sensitivity, and specificity measures, showing high agreement with manual segmentation.
Conclusions:
The CI-RBF method provides a fast, accurate, and robust 3D model reconstruction that matches Carr's RBF method, 3D Doctor, and a manual benchmark method in accuracy and significantly improves upon Carr's RBF method in data requirements and computational speed. In addition, the visualization tool has been designed to quickly segment MR images, requiring only four mouse clicks per MR image slice.
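The Carr-style implicit surface fitting that CI-RBF accelerates can be sketched in 2D: points on the contour are assigned value 0, off-surface points a nonzero value, and an RBF interpolant is solved so that its zero level set is the surface. The circle geometry, the cubic polyharmonic kernel, and the single interior constraint point are toy choices (Carr's method uses additional polynomial terms and many normal-offset points, omitted here for brevity).

```python
import numpy as np

# Toy implicit-surface fit with radial basis functions on a 2D contour.

def fit_rbf(centers, values):
    """Solve for RBF weights with the polyharmonic kernel phi(r) = r^3."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=2)
    return np.linalg.solve(d ** 3, values)

def eval_rbf(centers, weights, pts):
    """Evaluate the fitted implicit function at query points."""
    d = np.linalg.norm(pts[:, None, :] - centers[None, :, :], axis=2)
    return (d ** 3) @ weights

# contour samples on the unit circle (value 0) plus one interior point (value -1)
theta = np.linspace(0.0, 2.0 * np.pi, 12, endpoint=False)
on_surface = np.c_[np.cos(theta), np.sin(theta)]
centers = np.vstack([on_surface, [[0.0, 0.0]]])
values = np.append(np.zeros(12), -1.0)
w = fit_rbf(centers, values)
```

The CI step described above reduces how many such constraint points are needed, which shrinks the linear system being solved and hence the reconstruction time.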
43 (2016); http://dx.doi.org/10.1118/1.4941012
Purpose:
To allow for purely image-based motion estimation and compensation in weight-bearing cone-beam computed tomography of the knee joint.
Methods:
Weight-bearing imaging of the knee joint in a standing position poses additional requirements for the image reconstruction algorithm. In contrast to supine scans, patient motion needs to be estimated and compensated. The authors propose a method that is based on 2D/3D registration of the left and right femur and tibia segmented from a prior, motion-free reconstruction acquired in the supine position. Each segmented bone is first roughly aligned to the motion-corrupted reconstruction of a scan in a standing or squatting position. Subsequently, a rigid 2D/3D registration is performed for each bone to each of K projection images, estimating 6 × 4 × K motion parameters. The motion of the individual bones is combined into global motion fields using thin-plate-spline extrapolation. These can be incorporated into a motion-compensated reconstruction in the backprojection step. The authors performed visual and quantitative comparisons between a state-of-the-art marker-based (MB) method and two variants of the proposed method using gradient correlation (GC) and normalized gradient information (NGI) as the similarity measure for the 2D/3D registration.
Results:
The authors evaluated their method on four acquisitions under different squatting positions of the same patient. All methods showed substantial improvement in image quality compared to the uncorrected reconstructions. Compared to NGI and MB, the GC method showed increased streaking artifacts due to misregistrations in lateral projection images. NGI and MB showed comparable image quality at the bone regions. Because the markers are attached to the skin, the MB method performed better at the surface of the legs, where the authors observed slight streaking with the NGI and GC methods. For a quantitative evaluation, the authors computed the universal quality index (UQI) for all bone regions with respect to the motion-free reconstruction. The authors' quantitative evaluation over regions around the bones yielded a mean UQI of 18.4 for no correction; 53.3 and 56.1 for the proposed method using GC and NGI, respectively; and 53.7 for the MB reference approach. In contrast to the authors' registration-based corrections, the MB reference method caused slight nonrigid deformations at bone outlines when compared to a motion-free reference scan.
Conclusions:
The authors showed that their method based on the NGI similarity measure yields reconstruction quality close to the MB reference method. In contrast to the MB method, the proposed method does not require any preparation prior to the examination which will improve the clinical workflow and patient comfort. Further, the authors found that the MB method causes small, nonrigid deformations at the bone outline which indicates that markers may not accurately reflect the internal motion close to the knee joint. Therefore, the authors believe that the proposed method is a promising alternative to MB motion management.
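The gradient correlation (GC) similarity that scores each candidate pose in the 2D/3D registration can be sketched directly: compute image gradients of the acquired projection and of the digitally reconstructed radiograph (DRR), then average the normalized cross-correlation of the two gradient components. The images below are synthetic stand-ins; NGI differs in how it normalizes and aggregates gradient information.

```python
import numpy as np

# Sketch of the gradient correlation similarity measure between two 2D images.

def ncc(a, b):
    """Zero-mean normalized cross-correlation of two arrays."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def gradient_correlation(img1, img2):
    """Average NCC of the x- and y-gradient images (1.0 = identical gradients)."""
    gx1, gy1 = np.gradient(img1)
    gx2, gy2 = np.gradient(img2)
    return 0.5 * (ncc(gx1, gx2) + ncc(gy1, gy2))

rng = np.random.default_rng(2)
fixed = rng.random((32, 32))     # stand-in for an acquired projection
```

In the registration loop, the optimizer perturbs each bone's rigid pose, re-renders the DRR, and keeps the pose that maximizes this similarity against the projection image.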
- Technical Notes
Technical Note: Method to correlate whole-specimen histopathology of radical prostatectomy with diagnostic MR imaging
43 (2016); http://dx.doi.org/10.1118/1.4941016
Purpose:
Validation of MRI-guided tumor boundary delineation for targeted prostate cancer therapy is achieved via correlation with gold-standard histopathology of radical prostatectomy specimens. Challenges to accurate correlation include matching the pathology sectioning plane with the in vivo imaging slice plane and correcting for the deformation that occurs between in vivo imaging and histology. A methodology is presented for matching the histological sectioning angle and position to the in vivo imaging slices.
Methods:
Patients (n = 4) with biochemical failure following external beam radiotherapy underwent diagnostic MRI to confirm localized recurrence of prostate cancer, followed by salvage radical prostatectomy. High-resolution 3-D MRI of the ex vivo specimens was acquired to determine the pathology sectioning angle that best matched the in vivo imaging slice plane, using matching anatomical features and implanted fiducials. A novel sectioning device was developed to guide sectioning at the correct angle and to assist the insertion of reference dye marks to aid histopathology reconstruction.
Results:
The percentage difference in the positioning of the urethra in the ex vivo pathology sections compared to the positioning in in vivo images was reduced from 34% to 7% through slicing at the best-match angle. Reference dye marks were generated, which were visible in ex vivo imaging, in the tissue sections before and after processing, and in histology sections.
Conclusions:
The method achieved an almost fivefold reduction in the slice-matching error and is readily implementable in combination with standard MRI technology. The technique will be employed to generate datasets for correlation of whole-specimen prostate histopathology with in vivo diagnostic MRI using 3-D deformable registration, allowing assessment of the sensitivity and specificity of MRI parameters for prostate cancer. Although developed specifically for prostate, the method is readily adaptable to other types of whole tissue specimen, such as mastectomy or liver resection.
Technical Note: Compact three-tesla magnetic resonance imager with high-performance gradients passes ACR image quality and acoustic noise tests
43 (2016); http://dx.doi.org/10.1118/1.4941362
Purpose:
A compact, three-tesla magnetic resonance imaging (MRI) system has been developed. It features a 37 cm patient aperture, allowing the use of commercial receiver coils. Its design simultaneously allows for sustained gradient amplitudes of 85 millitesla per meter (mT/m) and slew rates of 700 tesla per meter per second (T/m/s). The size of the gradient system allows these simultaneous performance targets to be achieved with little or no peripheral nerve stimulation, but also raises a concern about geometric distortion, as much of the imaging will be done near the system's maximum 26 cm field of view. Additionally, the fast switching capability raises acoustic noise concerns. This work evaluates the system against both the American College of Radiology's (ACR) MRI image quality protocol and the Food and Drug Administration's (FDA) nonsignificant risk (NSR) acoustic noise limits for MR. Passing these two tests is critical for clinical acceptance.
Methods:
In this work, the gradient system was operated at the maximum amplitude and slew rate of 80 mT/m and 500 T/m/s, respectively. The geometric distortion correction was accomplished by iteratively determining up to tenth-order spherical harmonic coefficients using a fiducial phantom and position-tracking software, with seventh-order correction utilized in the ACR test. Acoustic noise was measured with several standard clinical pulse sequences.
Results:
The system passes all the ACR image quality tests. The acoustic noise as measured when the gradient coil was inserted into a whole-body MRI system conforms to the FDA NSR limits.Conclusions:
The compact system simultaneously allows for high gradient amplitude and high slew rate. Geometric distortion concerns have been mitigated by extending the spherical harmonic correction to higher orders. Acoustic noise is within the FDA limits.
- QUANTITATIVE IMAGING AND IMAGE PROCESSING
- Research Articles
43 (2016); http://dx.doi.org/10.1118/1.4940792
Purpose:
The postoperative evaluation of scoliosis patients undergoing corrective treatment is an important task to assess the strategy of the spinal surgery. Using accurate 3D geometric models of the patient's spine is essential to measure longitudinal changes in the patient's anatomy. However, reconstructing the spine in 3D from postoperative radiographs is a challenging problem due to the presence of instrumentation (metallic rods and screws) occluding vertebrae on the spine.
Methods:
This paper formulates the reconstruction problem as a search for the optimal model within a manifold space of articulated spines learned from a training dataset of pathological cases that underwent surgery. The manifold structure is implemented as a multilevel manifold ensemble, incorporating connections between nodes within a single manifold in addition to connections between different multilevel manifolds representing subregions with similar characteristics.
Results:
The reconstruction pipeline was evaluated on x-ray datasets from both preoperative patients and patients with spinal surgery. By comparing the method to ground-truth models, a 3D reconstruction accuracy of 2.24 ± 0.90 mm was obtained from 30 postoperative scoliotic patients, while handling patients with highly deformed spines.
Conclusions:
This paper illustrates how this manifold model can accurately identify similar spine models by navigating in the low-dimensional space, as well as by computing nonlinear charts within local neighborhoods of the embedded space during the testing phase. This technique allows postoperative follow-up of spinal surgery using personalized 3D spine models and the assessment of surgical strategies for spinal deformities.
43 (2016); http://dx.doi.org/10.1118/1.4941011
Purpose:
Automatic brain image labeling is in high demand in the field of medical image analysis. Multiatlas-based approaches are widely used due to their simplicity and robustness in applications. The random forest technique is also recognized as an efficient method for labeling, although it has several limitations. In this paper, the authors address those limitations by proposing a novel framework based on the hierarchical learning of atlas forests.
Methods:
Their proposed framework aims to train a hierarchy of forests to better correlate voxels in the MR images with their corresponding labels. There are two specific novel strategies for improving brain image labeling. First, different from the conventional use of a single level of random forests for brain labeling, the authors design a hierarchical structure to incorporate multiple levels of forests. In particular, each atlas forest in the bottom level is trained in accordance with an individual atlas, and then the bottom-level forests are clustered based on their capabilities in labeling. For each clustered group, the authors retrain a new representative forest in the higher level by using all atlases associated with the lower-level atlas forests in the current group, as well as the tentative label maps yielded from the lower level. This clustering and retraining procedure is conducted iteratively to yield a hierarchical structure of forests. Second, in the testing stage, the authors present a novel atlas forest selection method to determine an optimal set of atlas forests from the constructed hierarchical structure (by disabling the nonoptimal forests) for accurately labeling the test image.
Results:
For validation, the authors evaluate their proposed framework on public datasets, including the Alzheimer's disease neuroimaging initiative, the Internet brain segmentation repository, and LONI LPBA40, and compare the results with conventional approaches. The experiments show that the use of the two novel strategies significantly improves the labeling performance. Note that when more levels are constructed in the hierarchy, the labeling performance can be further improved, but more computational time is also required.
Conclusions:
The authors have proposed a novel multiatlas-based framework for automatic labeling of brain anatomies, which achieves accurate labeling results for MR brain images.
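As background for what the hierarchical atlas forests improve on, a conventional multiatlas pipeline propagates each atlas's labels to the target image and fuses them per voxel, most simply by majority vote. A minimal sketch, assuming the atlases have already been registered to the target image (the function name and toy data are illustrative, not from the paper):

```python
from collections import Counter

def majority_vote_fusion(label_maps):
    """Fuse per-atlas label maps by per-voxel majority vote.

    label_maps: list of equal-length label sequences, one per
    registered atlas; returns one fused label per voxel.
    """
    fused = []
    for voxel_labels in zip(*label_maps):
        # most_common(1) gives the modal label for this voxel
        fused.append(Counter(voxel_labels).most_common(1)[0][0])
    return fused

# Three registered atlases labeling the same four voxels:
atlas_labels = [[1, 2, 2, 0],
                [1, 2, 3, 0],
                [1, 1, 2, 1]]
print(majority_vote_fusion(atlas_labels))  # [1, 2, 2, 0]
```

The paper's atlas forests replace this direct vote with learned predictors, but the per-voxel fusion of multiple atlas opinions is the shared starting point.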
A Bayesian spatial temporal mixtures approach to kinetic parametric images in dynamic positron emission tomography. 43 (2016); http://dx.doi.org/10.1118/1.4941010 Purpose:
Estimation of parametric maps is challenging for kinetic models in dynamic positron emission tomography. Since voxel kinetics tend to be spatially contiguous, the authors consider groups of homogeneous voxels together. The authors propose a novel algorithm to identify the groups and estimate kinetic parameters simultaneously. Uncertainty estimates for the kinetic parameters are also obtained. Methods:
Mixture models were used to fit the time activity curves. In order to borrow information from spatially nearby voxels, the Potts model was adopted. A spatial temporal model was built incorporating both the spatial and the temporal information in the data. Markov chain Monte Carlo was used to carry out parameter estimation. Evaluation and comparisons with existing methods were carried out on cardiac studies using both simulated data sets and data from a pig study. One-compartment kinetic modeling was used, in which K1, a measure of local perfusion, is the parameter of interest. Results:
Based on simulation experiments, the median standard deviations of the K1 estimates across all image voxels were 0, 0.13, and 0.16 for the proposed spatial mixture models (SMMs), standard curve fitting, and spatial K-means methods, respectively. The corresponding median mean squared biases for K1 were 0.04, 0.06, and 0.06 for the abnormal region of interest (ROI); 0.03, 0.03, and 0.04 for the normal ROI; and 0.007, 0.02, and 0.05 for the noise region. Conclusions:
SMM is a fully Bayesian algorithm that simultaneously determines the optimal number of homogeneous voxel groups, voxel group membership, parameter estimates, and parameter uncertainty estimates. The voxel membership can also be used for classification purposes. By borrowing information from spatially nearby voxels, SMM substantially reduces the variability of parameter estimates. In some ROIs, SMM also reduces the mean squared bias.
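The forward model being fitted for each voxel group is the one-tissue-compartment relation C_T(t) = K1·e^(−k2·t) ⊗ C_p(t), with K1 the perfusion parameter of interest. A minimal discrete-convolution sketch, assuming uniform sampling and a simple rectangular quadrature (an illustration, not the authors' implementation):

```python
import math

def one_compartment_tac(k1, k2, cp, dt):
    """Tissue time-activity curve for a one-tissue-compartment model:
        C_T(t_i) = K1 * sum_j Cp(t_j) * exp(-k2 * (t_i - t_j)) * dt

    cp: plasma input function sampled every dt; k1 (perfusion) and
    k2 (washout) are the kinetic parameters being estimated.
    """
    tac = []
    for i in range(len(cp)):
        acc = 0.0
        for j in range(i + 1):
            acc += cp[j] * math.exp(-k2 * (i - j) * dt) * dt
        tac.append(k1 * acc)
    return tac

# Constant input, no washout: activity accumulates linearly.
print(one_compartment_tac(0.5, 0.0, [1.0, 1.0, 1.0], 1.0))  # [0.5, 1.0, 1.5]
```

Curve fitting estimates (k1, k2) per voxel independently; SMM instead shares these parameters across a voxel group and lets the Potts prior decide the grouping.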
43 (2016); http://dx.doi.org/10.1118/1.4941307 Purpose:
High mammographic density is known to be associated with decreased sensitivity of mammography. Recent changes in the BI-RADS® density assessment address the effect of masking by densities, but the BI-RADS® assessment remains qualitative and achieves only moderate agreement between radiologists. An automated, quantitative algorithm that estimates the likelihood of masking of simulated masses in a mammogram by dense tissue has been developed. The algorithm considers both the loss of contrast due to density and the distracting texture or appearance of dense tissue. Methods:
A local detectability (dL) map is created by tessellating the mammogram into overlapping regions of interest (ROIs), for which the detectability by a non-prewhitening observer is computed using local estimates of the noise power spectrum and volumetric breast density (VBD). The dL calculation was validated in a 4-alternative forced-choice observer study on the ROIs of 150 craniocaudal digital mammograms. The dL metric was compared against the inverse threshold contrast (Δμ_T)⁻¹ from the observer study, the anatomic noise parameter β, the radiologist's BI-RADS® density category, and a validated measure of VBD (Cumulus). Results:
The mean dL had a high correlation of r = 0.915 and r = 0.699 with (Δμ_T)⁻¹ in the computerized and human observer studies, respectively. In comparison, the local VBD estimate had a low correlation of 0.538 with (Δμ_T)⁻¹. The mean dL had correlations of 0.663, 0.835, and 0.696 with BI-RADS® density, β, and Cumulus VBD, respectively. Conclusions:
The proposed dL metric may be useful in characterizing the potential for lesion masking by dense tissue. Because it uses information about the anatomic noise or tissue appearance, it is more closely linked to lesion detectability than VBD metrics.
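The non-prewhitening (NPW) observer used for the dL map has a standard closed form in the frequency domain: d′² = (Σ|ΔS(f)|²)² / Σ NPS(f)·|ΔS(f)|². A minimal sketch over discrete frequency bins, where the inputs are stand-ins for the paper's local NPS and simulated-mass signal spectrum:

```python
def npw_detectability(signal_spectrum, nps):
    """NPW observer detectability index d' over discrete frequency bins:
        d'^2 = (sum |dS|^2)^2 / sum NPS * |dS|^2

    signal_spectrum: |dS(f)| of the expected lesion signal per bin;
    nps: local noise power spectrum estimate per bin.
    """
    num = sum(s * s for s in signal_spectrum) ** 2
    den = sum(n * s * s for s, n in zip(signal_spectrum, nps))
    return (num / den) ** 0.5

# With a flat (white-noise) NPS the NPW form reduces to the ideal
# observer; two equal signal bins give d' = sqrt(2):
print(round(npw_detectability([1.0, 1.0], [1.0, 1.0]), 4))  # 1.4142
```

Dense tissue raises the low-frequency NPS (larger β), inflating the denominator and lowering d′, which is how the metric captures masking beyond a pure VBD measure.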
Effect of reconstruction methods and x-ray tube current–time product on nodule detection in an anthropomorphic thorax phantom: A crossed-modality JAFROC observer study. 43 (2016); http://dx.doi.org/10.1118/1.4941017 Purpose:
To evaluate nodule detection in an anthropomorphic chest phantom in computed tomography (CT) images reconstructed with adaptive iterative dose reduction 3D (AIDR3D) and filtered back projection (FBP) over a range of tube current–time products (mAs). Methods:
Two phantoms were used in this study: (i) an anthropomorphic chest phantom loaded with spherical simulated nodules of 5, 8, 10, and 12 mm diameter and densities of +100, −630, and −800 Hounsfield units, used to generate the CT images for the observer study; and (ii) a whole-body dosimetry verification phantom, used to estimate effective dose and risk according to the model of the BEIR VII committee. Both phantoms were scanned over a range of mAs settings (10, 20, 30, and 40) while all other acquisition parameters remained constant. Images were reconstructed with both AIDR3D and FBP. For the observer study, 34 normal cases (no nodules) and 34 abnormal cases (containing 1–3 nodules, mean 1.35 ± 0.54) were chosen. Eleven observers evaluated images from all mAs settings and reconstruction methods under the free-response paradigm. A crossed-modality jackknife alternative free-response receiver operating characteristic (JAFROC) analysis method was developed for data analysis, averaging data over the two factors influencing nodule detection in this study: mAs and image reconstruction method (AIDR3D or FBP). A Bonferroni correction was applied, and the threshold for declaring significance was set at 0.025 to maintain the overall probability of Type I error at α = 0.05. Contrast-to-noise ratio (CNR) was also measured for all nodules and evaluated by a linear least squares analysis. Results:
For the random-reader, fixed-case crossed-modality JAFROC analysis, there was no significant difference in nodule detection between AIDR3D and FBP when data were averaged over mAs [F(1, 10) = 0.08, p = 0.789]. However, when data were averaged over reconstruction methods, a significant difference was seen between multiple pairs of mAs settings [F(3, 30) = 15.96, p < 0.001]. Measurements of effective dose and effective risk showed the expected linear dependence on mAs. Nodule CNR was significantly higher for simulated nodules on images reconstructed with AIDR3D (p < 0.001). Conclusions:
No significant difference in nodule detection performance was demonstrated between images reconstructed with FBP and AIDR3D. mAs was found to influence nodule detection, though further work is required for dose optimization.
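The CNR evaluation can be illustrated with the common ROI-based definition, CNR = (μ_nodule − μ_background)/σ_background; the abstract does not spell out the exact variant used, so treat this as an assumed form:

```python
def contrast_to_noise(nodule_pixels, background_pixels):
    """CNR of a nodule ROI against a local background ROI:
        CNR = (mean(nodule) - mean(background)) / std(background)
    """
    mu_n = sum(nodule_pixels) / len(nodule_pixels)
    mu_b = sum(background_pixels) / len(background_pixels)
    var_b = sum((p - mu_b) ** 2 for p in background_pixels) / len(background_pixels)
    return (mu_n - mu_b) / var_b ** 0.5

# Hypothetical HU samples from a nodule ROI and its surround:
print(contrast_to_noise([120.0, 122.0], [100.0, 104.0]))  # 9.5
```

Iterative reconstruction such as AIDR3D lowers σ_background at matched dose, which is why CNR rose for AIDR3D even though observer detection performance did not differ significantly.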
- EMERGING IMAGING AND THERAPY MODALITIES
A simple method for determining the coagulation threshold temperature of transparent tissue-mimicking thermal therapy gel phantoms: Validated by magnetic resonance imaging thermometry. 43 (2016); http://dx.doi.org/10.1118/1.4941361 Purpose:
Tissue-mimicking thermal therapy phantoms that coagulate at specific temperatures are valuable tools for developing and evaluating treatment strategies related to thermal therapy. Here, the authors propose a simple and efficient method for determining the coagulation threshold temperature of transparent thermal therapy gel phantoms. Methods:
The authors used a previously published gel phantom recipe with 2% (w/v) bovine serum albumin as the temperature-sensitive protein. Using the programmable heating settings of a polymerase chain reaction (PCR) machine, the authors heated 50 μl gel samples to various temperatures for 3 min and then imaged them with the BioRad Gel Doc system to determine the coagulation temperature using an opacity quantification method. The estimated coagulation temperatures were then validated for gel phantoms prepared at different pH levels using high-intensity focused ultrasound (HIFU) heating and magnetic resonance imaging (MRI) thermometry on a clinical MR-HIFU system. Results:
The PCR heating method produced consistent and reproducible coagulation of gel samples in close correspondence with the set incubation temperatures. The resulting coagulation threshold temperatures for gel phantoms of varying pH were 44.1 ± 0.1, 53.4 ± 0.9, and 60.3 ± 0.9 °C for pH levels of 4.25, 4.50, and 4.75, respectively. These corresponded well with the coagulation threshold temperatures determined by MR thermometry (with coagulation defined as a 95% decrease in T2 relaxation time), which were estimated at 53.6 ± 1.9 and 62.9 ± 2.4 °C for pH 4.50 and 4.75, respectively. Conclusions:
The opacity quantification method provides a fast and reproducible estimate of the coagulation threshold temperature of transparent temperature-sensitive gel phantoms. The temperatures determined using this method were well within the range of temperatures estimated using MR thermometry. Owing to the specific heating capabilities of the PCR machine and the robust determination of coagulation threshold temperatures based on the statistically significant increase in the opacity of the gel samples, coagulation temperatures can be determined more precisely and with less variability than with MRI-based methods.
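One way to turn opacity measurements at a ladder of incubation temperatures into a threshold estimate is to interpolate the temperature where opacity first crosses a set fraction of its full rise. This is a sketch of an opacity-quantification rule under that assumption, not necessarily the authors' statistical criterion:

```python
def coagulation_threshold(temps, opacities, frac=0.5):
    """Temperature at which opacity first reaches `frac` of its full
    rise, linearly interpolated between adjacent heated samples.

    temps: incubation temperatures in ascending order;
    opacities: measured opacity of each sample after heating.
    """
    lo, hi = min(opacities), max(opacities)
    target = lo + frac * (hi - lo)
    for i in range(len(temps) - 1):
        o0, o1 = opacities[i], opacities[i + 1]
        if o0 < target <= o1:
            t0, t1 = temps[i], temps[i + 1]
            # Linear interpolation within the crossing interval
            return t0 + (target - o0) * (t1 - t0) / (o1 - o0)
    return None

# Opacity stays flat until coagulation begins between 50 and 60 °C:
print(coagulation_threshold([40.0, 50.0, 60.0], [0.0, 0.0, 10.0]))  # 55.0
```

The PCR machine's fine temperature control is what makes such a per-sample temperature ladder practical.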
- COMPUTATIONAL AND EXPERIMENTAL DOSIMETRY
Monte Carlo calculated correction factors for the PTW microDiamond detector in the Gamma Knife Model C. 43 (2016); http://dx.doi.org/10.1118/1.4940790 Purpose:
Accurate dose measurements in small fields require correction factors when sufficient charged particle equilibrium (CPE) is not present inside the field. These factors adjust for perturbation, volume averaging, and other effects; as such, they are field size, detector, and phantom dependent. In this work, Monte Carlo (MC) methods were used to calculate correction factors for PTW's microDiamond detector in Elekta's Gamma Knife Model C unit. These correction factors allow accurate measurement of output factors, even in the smallest field sizes where CPE is not present. Methods:
The small field correction factors were calculated according to the Alfonso formalism. The MC model of the Gamma Knife was built with the EGSnrc code system, using the BEAMnrc and DOSRZnrc user codes. The MC model was validated against experimental measurements: using the model, field output factors and measurement ratios for each of the four helmet sizes were simulated for an ABS plastic phantom and validated against film measurements, detector measurements, and treatment planning system (TPS) data. Once validated against the available ABS phantom, the model was applied to a more waterlike solid water phantom. Using the MC results from the solid water phantom, the final k correction factors were determined relative to the machine-specific reference field, the 18 mm helmet, which is the largest field size available on the unit. Results:
When validating against experimental measurements using the ABS phantom, all MC methods agreed with experiment within the stated uncertainties: MC-determined field output factors agreed within 0.6% of the TPS and 1.4% of film, and MC-simulated measurement ratios matched physically measured ratios within 1% for all helmet sizes. The correction factors for the PTW microDiamond in the solid water phantom approached unity to within 0.4% ± 1.7% for all helmet sizes except the 4 mm; the 4 mm helmet over-responded by 3.2% ± 1.7%, resulting in a correction factor of 0.969. Conclusions:
Similar to what has been found in the Gamma Knife Perfexion, the PTW microDiamond over-responds in the smallest 4 mm field. The over-response can be corrected via the Alfonso formalism using the correction factors determined in this work. Using the MC calculated correction factors, the PTW microDiamond detector is an effective dosimeter in all available helmet sizes.
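In the Alfonso small-field formalism cited here, the field output factor is the detector reading ratio multiplied by the MC-calculated correction factor, Ω = (M_clin/M_msr)·k. A minimal sketch; the reading values below are hypothetical, while 0.969 is the 4 mm correction factor reported above:

```python
def field_output_factor(m_clin, m_msr, k):
    """Alfonso formalism for small-field output factors:
        Omega = (M_clin / M_msr) * k

    m_clin: detector reading in the clinical field (e.g., 4 mm helmet);
    m_msr: reading in the machine-specific reference field (here the
    18 mm helmet); k: the MC-calculated correction factor.
    """
    return (m_clin / m_msr) * k

# Hypothetical 4 mm helmet reading ratio, corrected by k = 0.969
# to remove the microDiamond's 3.2% over-response:
print(field_output_factor(0.88, 1.0, 0.969))
```

For the larger helmets, k is unity within uncertainty, so the raw reading ratio is already the output factor.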
43 (2016); http://dx.doi.org/10.1118/1.4940791 Purpose:
Dosimetry for the model S700 50 kV electronic brachytherapy (eBT) source (Xoft, Inc., a subsidiary of iCAD, San Jose, CA) was simulated using Monte Carlo (MC) methods by Rivard et al. [“Calculated and measured brachytherapy dosimetry parameters in water for the Xoft Axxent x-ray source: An electronic brachytherapy source,” Med. Phys. 33, 4020–4032 (2006)] and recently by Hiatt et al. [“A revised dosimetric characterization of the model S700 electronic brachytherapy source containing an anode-centering plastic insert and other components not included in the 2006 model,” Med. Phys. 42, 2764–2776 (2015)] with improved geometric characterization. While these studies examined the dose distribution in water, there have been no prior reports of eBT source calibration methods beyond that recently reported by Seltzer et al. [“New national air-kerma standard for low-energy electronic brachytherapy sources,” J. Res. Natl. Inst. Stand. Technol. 119, 554–574 (2014)]. Therefore, the motivation for the current study was to provide an independent determination of the air-kerma rate at 50 cm in air using MC methods for the model S700 eBT source. Methods:
Using CAD information provided by the vendor and disassembled sources, an MC model was created for the S700 eBT source. Simulations were run using the MCNP6 radiation transport code for the NIST Lamperti air ionization chamber according to specifications by Boutillon et al. [“Comparison of exposure standards in the 10-50 kV x-ray region,” Metrologia 5, 1–11 (1969)], in air without the Lamperti chamber, and in vacuum without the Lamperti chamber. Air kerma was determined using the *F4 tally with NIST values for the mass energy-absorption coefficients for air. Photon spectra were evaluated over 2π azimuthal sampling for polar angles of 0° ≤ θ ≤ 180° every 1°. Volume averaging was averted through tight radial binning. Photon energy spectra were determined over all polar angles in both air and vacuum using the F4 tally with 0.1 keV resolution. A total of 10¹¹ simulated histories were run for the Lamperti chamber geometry (statistical uncertainty of 0.14%), with 10¹⁰ histories for the in-air and in-vacuum simulations (statistical uncertainty of 0.04%). The total standard uncertainty in the calculated air-kerma rate determination amounted to 6.8%. Results:
MC simulations determined the air-kerma rate at 50 cm from the source with the modeled Lamperti chamber to be (1.850 ± 0.126) × 10⁻⁴ Gy/s, which was within the range of values (1.67–2.11) × 10⁻⁴ Gy/s measured by NIST. The ratios of the photon spectra in air and in vacuum were in good agreement above 13 keV and for θ < 150°; beyond this angle, the influence of the Kovar sleeve and the Ag epoxy components caused increased scatter in air. Below 13 keV, the ratio of the photon spectra in air to vacuum exhibited a decrease attributed to increased attenuation of the photons in air. Across most of the energy range on the source transverse plane, there was good agreement between the authors' simulated spectra and those measured by NIST. Discrepancies were observed above 40 keV, where the NIST spectrum had a steeper fall-off towards 50 keV. Conclusions:
Through MC simulations of radiation transport, this study provided an independent validation of the measured air-kerma rate at 50 cm in air at NIST for the model S700 eBT source, with mean results in agreement within 3.3%. This difference was smaller than the range (i.e., 23%) of the measured values.
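The *F4-style air-kerma estimate folds the tallied photon fluence spectrum with E·(μ_en/ρ)_air per energy bin, K = Σ Φ(E_i)·E_i·(μ_en/ρ)(E_i). A minimal sketch with abstract units; the bin values below are placeholders, not NIST coefficient data:

```python
def air_kerma_from_spectrum(fluence, energies, mu_en_over_rho):
    """Collision-kerma estimate from a binned photon fluence spectrum:
        K = sum_i fluence_i * E_i * (mu_en/rho)_air(E_i)

    This is the quantity obtained when an MCNP *F4 (track-length
    fluence) tally is combined with mass energy-absorption
    coefficients for air, as described in the abstract.
    """
    return sum(f * e * mu
               for f, e, mu in zip(fluence, energies, mu_en_over_rho))

# Two placeholder bins: fluence, energy, and (mu_en/rho) per bin.
print(air_kerma_from_spectrum([2.0, 1.0], [10.0, 20.0], [0.5, 0.25]))  # 15.0
```

Dividing such a kerma tally by the simulated irradiation time per history yields the air-kerma rate compared against the NIST measurement.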