Volume 42, Issue 6, June 2015
Table of contents:
42(2015); http://dx.doi.org/10.1118/1.4916670
- MEDICAL PHYSICS LETTER
42(2015); http://dx.doi.org/10.1118/1.4921022
Purpose: Many expectations have been raised since conventional x-ray tubes were first used for grating-based x-ray phase-contrast imaging. Despite a reported increase in contrast-to-noise ratio (CNR) in many publications, doubt remains as to whether phase-contrast computed tomography (CT) would be advantageous in clinical CT scanners in vivo. The aim of this paper is to contribute to this discussion by analyzing the performance of a phase-contrast CT laboratory setup.
Methods: A phase-contrast CT performance analysis was performed. Projection images of a phantom were recorded, and image slices were reconstructed using standard filtered back projection methods. The resulting slices were analyzed by determining the CNRs in the attenuation and phase images. These results were compared to analytically calculated expectations according to the previously published phase-contrast CT performance analysis by Raupach and Flohr [Med. Phys. 39, 4761–4774 (2012)]. In that analysis, a severe mistake was found that leads to incorrect predictions of the performance of phase-contrast CT. The error was corrected, and with the new formulae the experimentally obtained results matched the analytical calculations.
Results: The squared ratios of the phase-contrast CNR to the attenuation CNR obtained in the authors’ experiment are five- to ten-fold higher than predicted by Raupach and Flohr [Med. Phys. 39, 4761–4774 (2012)]. The effective lateral spatial coherence length deduced here exceeds their already optimistic assumption by a factor of 3.
Conclusions: The authors’ results indicate that the assumptions made in former performance analyses are pessimistic. The break-even point, at which phase-contrast CT outperforms attenuation CT, is within reach even with realistic, nonperfect gratings. Further improvements to state-of-the-art clinical CT scanners, such as increased spatial resolution, could shift the balance further in favor of phase-contrast CT. This could be achieved by, e.g., quantum-counting pixel detectors with four-fold smaller pixel pitches.
- RADIATION THERAPY PHYSICS
Robotic real-time translational and rotational head motion correction during frameless stereotactic radiosurgery
42(2015); http://dx.doi.org/10.1118/1.4919279
Purpose: To develop a control system to correct both translational and rotational head motion deviations in real time during frameless stereotactic radiosurgery (SRS).
Methods: A novel feedback control with a feed-forward algorithm was utilized to correct for the coupling of translation and rotation present in serial kinematic robotic systems. Input parameters for the algorithm include the real-time 6DOF target position, the frame pitch pivot point to target distance constant, and the translational and angular Linac beam-off (gating) tolerance constants for patient safety. The algorithm was tested using a 4D (XYZ + pitch) robotic stage, an infrared head position sensing unit, and a control computer. The measured head position signal was processed, and a resulting command was sent to the interface of a four-axis motor controller, through which four stepper motors were driven to perform motion compensation.
Results: Control of the translation of a brain target was decoupled from control of the rotation. In a phantom study, the corrected position was within a translational displacement of 0.35 mm and a pitch displacement of 0.15° 100% of the time. In a volunteer study, the corrected position was within displacements of 0.4 mm and 0.2° 98.5% of the time, compared with 10.7% without correction.
Conclusions: The authors report a control design approach for both translational and rotational head motion correction. The experiments demonstrated that the control performance of the 4D robotic stage meets the submillimeter and subdegree accuracy required by SRS.
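The gating logic described above, holding the Linac beam whenever the residual translational or rotational error exceeds its tolerance, can be sketched as follows; the tolerance values and poses are hypothetical illustrations, not those of the study:

```python
import math

def beam_should_gate(pose, tol_mm=0.5, tol_deg=0.5):
    """Decide whether to gate (hold) the beam.

    pose: residual 6DOF error (dx, dy, dz in mm; rx, ry, rz in degrees)
    of the target relative to its planned position. Tolerances here are
    illustrative, not the constants used in the paper.
    """
    dx, dy, dz, rx, ry, rz = pose
    translation = math.sqrt(dx * dx + dy * dy + dz * dz)
    rotation = max(abs(rx), abs(ry), abs(rz))
    # Gate if the net translational displacement or any single
    # rotational displacement exceeds its tolerance.
    return translation > tol_mm or rotation > tol_deg

print(beam_should_gate([0.1, 0.1, 0.1, 0.05, 0.0, 0.1]))  # False: within tolerance
print(beam_should_gate([0.4, 0.3, 0.2, 0.0, 0.0, 0.0]))   # True: norm ≈ 0.54 mm
```

In a real controller this check would run on every position sample, with the feed-forward correction attempting to bring the pose back within tolerance before the beam is re-enabled.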
A revised dosimetric characterization of the model S700 electronic brachytherapy source containing an anode-centering plastic insert and other components not included in the 2006 model
42(2015); http://dx.doi.org/10.1118/1.4919280
Purpose: The model S700 Axxent electronic brachytherapy source by Xoft, Inc., was characterized by Rivard et al. in 2006. Since then, the source design has been modified to include a new insert at the source tip. The current study objectives were to establish an accurate source model for simulation purposes, to dosimetrically characterize the new source and obtain its TG-43 brachytherapy dosimetry parameters, and to determine dose differences between the original simulation model and the current model S700 source design.
Methods: Design information from measurements of dissected model S700 sources and from vendor-supplied CAD drawings was used to establish an updated Monte Carlo source model, which included the complex-shaped plastic source-centering insert intended to promote water flow for cooling the source anode. These data were used to create a model for subsequent radiation transport simulations in a water phantom. Compared to the 2006 simulation geometry, the influence of volume averaging close to the source was substantially reduced. A track-length estimator was used to evaluate collision kerma as a function of radial distance and polar angle for determination of TG-43 dosimetry parameters. Results for the 50 kV source were determined every 0.1 cm from 0.3 to 15 cm and every 1° from 0° to 180°. Photon spectra in water with 0.1 keV resolution were also obtained from 0.5 to 15 cm and polar angles from 0° to 165°. Simulations were run for 10¹⁰ histories, resulting in statistical uncertainties on the transverse plane of 0.04% at r = 1 cm and 0.06% at r = 5 cm.
Results: The dose-rate distribution ratio of the model S700 source to the 2006 model exceeded unity by more than 5% for roughly one quarter of the solid angle surrounding the source, i.e., θ ≥ 120°. The radial dose function diminished in a similar manner as for an ¹²⁵I seed, with values of 1.434, 0.636, 0.283, and 0.0975 at 0.5, 2, 5, and 10 cm, respectively. The radial dose function ratio between the current and the 2006 model had a minimum of 0.980 at 0.4 cm, close to the source sheath, and approached 1.014 at large distances. 2D anisotropy function ratios were close to unity for 50° ≤ θ ≤ 110°, but differed from unity by more than 5% for θ < 40° at close distances to the sheath and by more than 15% for θ > 140°, even at large distances. The photon energy fluence of the updated model as compared to the 2006 model showed a decrease in output with increasing distance; this effect was most pronounced at the lowest energies. A decrease in photon fluence with increasing polar angle was also observed and was attributed to the silver epoxy component.
Conclusions: Changes in source design influenced the overall dose rate and distribution by more than 2% in several regions. This discrepancy is greater than the dose calculation acceptance criteria recommended in the AAPM TG-56 report. The effect of the design change on the TG-43 parameters would likely not result in dose differences outside of patient applicators. Adoption of this new dataset is suggested for accurate depiction of model S700 source dose distributions.
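The radial dose function values quoted above can be used to estimate g(r) at intermediate distances. A minimal sketch, assuming log-linear interpolation between the quoted points and the TG-43 normalization g(1 cm) = 1; the paper tabulates results at 0.1 cm resolution, so this five-point subset is illustrative only:

```python
import math

# g(r) values quoted in the abstract for the model S700 source;
# g(1 cm) = 1 by the TG-43 normalization convention.
R_CM = [0.5, 1.0, 2.0, 5.0, 10.0]
G_R = [1.434, 1.0, 0.636, 0.283, 0.0975]

def radial_dose_function(r):
    """Log-linear interpolation of g(r) between tabulated radii."""
    if not R_CM[0] <= r <= R_CM[-1]:
        raise ValueError("r outside tabulated range")
    for i in range(len(R_CM) - 1):
        if R_CM[i] <= r <= R_CM[i + 1]:
            # Interpolate ln g linearly in r, since the dose fall-off
            # beyond the inverse-square term is roughly exponential.
            t = (r - R_CM[i]) / (R_CM[i + 1] - R_CM[i])
            ln_g = (1 - t) * math.log(G_R[i]) + t * math.log(G_R[i + 1])
            return math.exp(ln_g)

print(round(radial_dose_function(2.0), 3))  # 0.636 (tabulated point)
```

Clinical TG-43 implementations interpolate on the full published table rather than a coarse subset like this.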
Validating FMEA output against incident learning data: A study in stereotactic body radiation therapy
42(2015); http://dx.doi.org/10.1118/1.4919440
Purpose: Though failure mode and effects analysis (FMEA) is becoming more widely adopted for risk assessment in radiation therapy, to our knowledge its output has never been validated against data on errors that actually occur. The objective of this study was to perform FMEA of a stereotactic body radiation therapy (SBRT) treatment planning process and validate the results against data recorded within an incident learning system.
Methods: FMEA of the SBRT treatment planning process was carried out by a multidisciplinary group including radiation oncologists, medical physicists, dosimetrists, and IT technologists. Potential failure modes were identified through a systematic review of the process map. Failure modes were rated for severity, occurrence, and detectability on a scale of one to ten, and the risk priority number (RPN) was computed. Failure modes were then compared with historical reports identified as relevant to SBRT planning within a departmental incident learning system that had been active for two and a half years. Differences between the failure modes anticipated by FMEA and the existing incidents were identified.
Results: FMEA identified 63 failure modes. RPN values for the top 25% of failure modes ranged from 60 to 336. Analysis of the incident learning database identified 33 reported near-miss events related to SBRT planning. Combining both methods yielded a total of 76 possible process failures, of which 13 (17%) were missed by FMEA while 43 (57%) were identified by FMEA only. When scored for RPN, the 13 events missed by FMEA ranked within the lower half of all failure modes and exhibited significantly lower severity relative to those identified by FMEA (p = 0.02).
Conclusions: FMEA, though valuable, is subject to certain limitations. In this study, FMEA failed to identify 17% of actual failure modes, though these were of lower risk. Similarly, an incident learning system alone fails to identify a large number of potentially high-severity process errors. Using FMEA in combination with incident learning may render an improved overview of risks within a process.
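The RPN scoring used above is the product of the three 1-10 ratings. A minimal sketch with hypothetical failure modes (not those identified in the study):

```python
def risk_priority_number(severity, occurrence, detectability):
    """RPN as used in FMEA: the product of the three 1-10 ratings."""
    for score in (severity, occurrence, detectability):
        if not 1 <= score <= 10:
            raise ValueError("FMEA ratings are on a 1-10 scale")
    return severity * occurrence * detectability

# Hypothetical failure modes with (severity, occurrence, detectability).
failure_modes = {
    "wrong CT dataset loaded": (9, 2, 4),
    "contour transfer error": (7, 3, 2),
    "dose grid too coarse": (4, 5, 3),
}
# Rank failure modes by descending RPN, as done when prioritizing risks.
ranked = sorted(failure_modes.items(),
                key=lambda kv: risk_priority_number(*kv[1]),
                reverse=True)
for name, scores in ranked:
    print(name, risk_priority_number(*scores))
```

Note that RPN ranking weights the three factors equally, which is one reason (as the study's severity comparison suggests) it can miss low-RPN but real failure modes that an incident learning system catches.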
Quantification and comparison of visibility and image artifacts of a new liquid fiducial marker in a lung phantom for image-guided radiation therapy
42(2015); http://dx.doi.org/10.1118/1.4919616
Purpose: A new biodegradable liquid fiducial marker was devised to allow easy insertion into lung tumors using thin needles. The purpose of this study was to evaluate the visibility of the liquid fiducial marker for image-guided radiation therapy and to compare it with existing solid fiducial markers and with one commercially available liquid fiducial marker.
Methods: Fiducial marker visibility was quantified in terms of the contrast-to-noise ratio (CNR) on planar kilovoltage x-ray images in a thorax phantom for different concentrations of the radio-opaque component of the new liquid fiducial marker, four solid fiducial markers, and one existing liquid fiducial marker. Additionally, the image artifacts produced on computed tomography (CT) and cone-beam CT (CBCT) by all fiducial markers were quantified.
Results: The authors found that the new liquid fiducial marker with the highest concentration of the radio-opaque component had a CNR > 2.05 for 62/63 exposures, which compared favorably to the existing solid fiducial markers and to the existing liquid fiducial marker evaluated. On CT and CBCT, the new liquid fiducial marker with the highest concentration produced a lower streaking index artifact (30 and 14, respectively) than the solid gold markers (113 and 20, respectively) and the existing liquid fiducial marker (39 and 20, respectively). The image artifact was larger for all of the liquid fiducial markers than for the solid fiducial markers because of their larger physical size.
Conclusions: The visibility and the image artifacts produced by the new liquid fiducial marker were comparable to those of existing solid fiducial markers and the existing liquid fiducial marker. The authors conclude that the new liquid fiducial marker represents an alternative to the fiducial markers tested.
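A common planar-image CNR definition, the absolute difference of marker and background ROI means divided by the background noise, can be sketched as follows; the pixel values and ROIs are synthetic, and the paper's exact ROI and noise definitions may differ:

```python
import statistics

def contrast_to_noise_ratio(marker_roi, background_roi):
    """CNR = |mean(marker) - mean(background)| / std(background)."""
    contrast = abs(statistics.fmean(marker_roi)
                   - statistics.fmean(background_roi))
    noise = statistics.pstdev(background_roi)
    return contrast / noise

# Synthetic pixel values for a marker ROI and a nearby background ROI.
marker = [130, 128, 131, 129]
background = [100, 102, 98, 100]
print(round(contrast_to_noise_ratio(marker, background), 2))  # 20.86
```

On real projections the ROIs would be drawn on the image, and a visibility threshold (such as the CNR > 2.05 cited above) applied per exposure.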
42(2015); http://dx.doi.org/10.1118/1.4919742
Purpose: Volumetric modulated arc therapy (VMAT) optimization is a computationally challenging problem due to its large data size, high degrees of freedom, and many hardware constraints. High-performance graphics processing units (GPUs) have been used to speed up the computations. However, the relatively small memory of a GPU cannot handle cases with a large dose-deposition coefficient (DDC) matrix, e.g., those with a large target size, multiple targets, multiple arcs, and/or a small beamlet size. The main purpose of this paper is to report an implementation of a column-generation-based VMAT algorithm, previously developed in the authors’ group, on a multi-GPU platform to solve the memory limitation problem. While the column-generation-based VMAT algorithm has been developed previously, the GPU implementation details have not been reported; hence, another purpose is to present the detailed techniques employed for GPU implementation. The authors also use this particular problem as an example to study the feasibility of using a multi-GPU platform to solve large-scale problems in medical physics.
Methods: The column-generation approach generates VMAT apertures sequentially by solving a pricing problem (PP) and a master problem (MP) iteratively. In the authors’ method, the sparse DDC matrix is first stored on the CPU in coordinate list (COO) format. On the GPU side, this matrix is split into four submatrices according to beam angles, which are stored on four GPUs in compressed sparse row (CSR) format. Computation of the beamlet price, the first step in the PP, is accomplished using multiple GPUs. A fast inter-GPU data transfer scheme is accomplished using peer-to-peer access. The remaining steps of the PP and MP are implemented on the CPU or a single GPU owing to their modest problem scale and computational load. The Barzilai–Borwein algorithm with a subspace step scheme is adopted to solve the MP. A head and neck (H&N) cancer case is used to validate the authors’ method. The authors also compare their multi-GPU implementation with three single-GPU implementation strategies, i.e., truncating the DDC matrix (S1), repeatedly transferring the DDC matrix between CPU and GPU (S2), and porting computations involving the DDC matrix to the CPU (S3), in terms of both plan quality and computational efficiency. Two more H&N patient cases and three prostate cases are used to demonstrate the advantages of the authors’ method.
Results: The authors’ multi-GPU implementation can finish the optimization process within ∼1 min for the H&N patient case. S1 leads to inferior plan quality, although its total time was 10 s shorter than the multi-GPU implementation due to the reduced matrix size. S2 and S3 yield the same plan quality as the multi-GPU implementation but take ∼4 and ∼6 min, respectively. High computational efficiency was consistently achieved for the other five patient cases tested, with VMAT plans of clinically acceptable quality obtained within 23–46 s. By contrast, the optimization time needed by a commercial CPU-based treatment planning system to obtain clinically comparable or acceptable plans for all six of these VMAT cases was on the order of several minutes.
Conclusions: The results demonstrate that the multi-GPU implementation of the authors’ column-generation-based VMAT optimization can handle the large-scale VMAT optimization problem efficiently without sacrificing plan quality. The authors’ study may serve as an example to shed some light on other large-scale medical physics problems that require multi-GPU techniques.
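The column-wise split of the DDC matrix across GPUs preserves the beamlet-price matrix-vector product exactly. A minimal CPU-side sketch with a toy COO matrix; equal-sized column groups stand in for the paper's beam-angle grouping, and dense submatrices stand in for the per-GPU CSR storage:

```python
import numpy as np

# Toy DDC matrix in COO format: (voxel, beamlet, value) triplets on the CPU.
rng = np.random.default_rng(0)
n_voxels, n_beamlets, n_gpus, nnz = 8, 12, 4, 30
rows = rng.integers(0, n_voxels, nnz)
cols = rng.integers(0, n_beamlets, nnz)
vals = rng.standard_normal(nnz)

# Split column-wise: one submatrix per GPU (duplicate triplets are summed,
# as in a real COO-to-CSR conversion).
cols_per_gpu = n_beamlets // n_gpus
submatrices = []
for g in range(n_gpus):
    lo, hi = g * cols_per_gpu, (g + 1) * cols_per_gpu
    sub = np.zeros((n_voxels, cols_per_gpu))
    mask = (cols >= lo) & (cols < hi)
    np.add.at(sub, (rows[mask], cols[mask] - lo), vals[mask])
    submatrices.append(sub)

# Beamlet-price step: each "GPU" computes its partial product independently;
# concatenating the pieces reproduces the full-matrix product.
dual = rng.standard_normal(n_voxels)
prices = np.concatenate([m.T @ dual for m in submatrices])

full = np.zeros((n_voxels, n_beamlets))
np.add.at(full, (rows, cols), vals)
assert np.allclose(prices, full.T @ dual)
```

Because each piece only needs the dual vector (small) rather than its siblings' matrix blocks, the inter-GPU traffic in such a scheme stays modest, which is what makes the peer-to-peer transfer step cheap.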
42(2015); http://dx.doi.org/10.1118/1.4919847
Purpose: Clinical use of online adaptive replanning has been hampered by the impractically long time required to delineate volumes based on the image of the day. The authors propose a new replanning algorithm, named gradient maintenance (GM), which does not require delineation of organs at risk (OARs) and can enhance automation, drastically reducing planning time and improving the consistency and throughput of online replanning.
Methods: The proposed GM algorithm is based on the hypothesis that if the dose gradient toward each OAR in the daily anatomy is maintained the same as in the original plan, the intended quality of the original plan will be preserved in the adaptive plan. The algorithm requires a series of partial concentric rings (PCRs) to be generated automatically around the target toward each OAR on the planning and daily images. The PCRs are used in the daily optimization objective function. The PCR dose constraints are generated from dose–volume data extracted from the original plan. To demonstrate this idea, GM plans generated using daily images acquired with an in-room CT were compared to regular optimization and image guided radiation therapy repositioning plans for representative prostate and pancreatic cancer cases.
Results: Adaptive replanning using the GM algorithm, requiring only the target contour from the CT of the day, can be completed within 5 min without high-power hardware. The adaptive plans obtained were almost as good as the regular optimization plans and were better than the repositioning plans for the cases studied.
Conclusions: The newly proposed GM replanning algorithm, requiring only target delineation rather than full delineation of OARs, substantially increases planning speed for online adaptive replanning. The preliminary results indicate that the GM algorithm may be a solution for improving automation and may be especially suitable for sites with small-to-medium size targets surrounded by several critical structures.
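The PCR construction can be illustrated in 2D with full rings around a target centroid; the actual GM algorithm generates partial rings directed toward each OAR in 3D, with constraints taken from the original plan's dose-volume data. A minimal sketch with illustrative geometry:

```python
import numpy as np

def concentric_rings(shape, target_center, radii):
    """Generate concentric ring masks around a target centroid.

    A 2D illustrative sketch of the PCR idea: each ring collects pixels
    whose distance to the target center falls in one radial band, so a
    dose constraint per ring encodes a dose gradient away from the target.
    """
    yy, xx = np.indices(shape)
    dist = np.hypot(yy - target_center[0], xx - target_center[1])
    return [(dist >= r_in) & (dist < r_out)
            for r_in, r_out in zip(radii[:-1], radii[1:])]

rings = concentric_rings((64, 64), (32, 32), [4, 8, 12, 16])
# Rings are mutually disjoint by construction.
assert all(not np.any(a & b) for i, a in enumerate(rings)
           for b in rings[i + 1:])
```

Assigning a decreasing maximum dose to successive rings then reproduces, in the daily optimization, the fall-off the original plan achieved toward each OAR.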
Technical Note: Study of the electron transport parameters used in penelope for the Monte Carlo simulation of Linac targets
42(2015); http://dx.doi.org/10.1118/1.4916686
Purpose: Monte Carlo simulation of electron transport in Linac targets using the condensed history technique is known to be problematic owing to a potential dependence of absorbed dose distributions on the electron step length. In the penelope code, the step length is partially determined by the transport parameters C1 and C2. The authors have investigated the effect on the absorbed dose distribution of the values given to these parameters in the target.
Methods: A monoenergetic 6.26 MeV electron pencil beam from a point source was simulated impinging normally on a cylindrical tungsten target. Electrons leaving the tungsten were discarded. Radial absorbed dose profiles were obtained at 1.5 cm depth in a water phantom located at 100 cm, for values of C1 and C2 in the target both equal to 0.1, 0.01, or 0.001. A detailed simulation case was also considered and taken as the reference. Additionally, lateral dose profiles were estimated and compared with experimental measurements for a 6 MV photon beam of a Varian Clinac 2100 with C1 and C2 both set to 0.1 or 0.001 in the target.
Results: On the central axis, the dose obtained for the case C1 = C2 = 0.1 shows a deviation of (17.2% ± 1.2%) with respect to the detailed simulation. This difference decreases to (3.7% ± 1.2%) for the case C1 = C2 = 0.01. The case C1 = C2 = 0.001 produces a radial dose profile that is equivalent to that of the detailed simulation within the statistical uncertainty reached, 1%. The effect is also appreciable in the crossline dose profiles estimated for the realistic geometry of the Linac. In a further simulation, it was shown that the error made by choosing inappropriate transport parameters can be masked by tuning the energy and focal spot size of the initial beam.
Conclusions: The use of large path lengths for the condensed-history simulation of electrons in a Linac target with penelope leads to deviations of the dose in the patient or phantom. Based on the results obtained in this work, values of C1 and C2 larger than 0.001 should not be used in Linac targets without further investigation.
42(2015); http://dx.doi.org/10.1118/1.4921041
Purpose: The purpose of this work is to develop a clinically feasible method of calculating the actual delivered dose distributions for patients who have significant respiratory motion during the course of stereotactic body radiation therapy (SBRT).
Methods: A novel approach was proposed to calculate the actual delivered dose distribution for SBRT lung treatment. The approach has three steps. (1) At the treatment planning stage, a patient-specific motion model is created from planning 4DCT data. This model assumes that the displacement vector field (DVF) of any respiratory motion deformation can be described as a linear combination of some basis DVFs. (2) During the treatment procedure, 2D time-varying projection images (either kV or MV projections) are acquired, from which time-varying “fluoroscopic” 3D images of the patient are reconstructed using the motion model. The DVF of each timepoint in the time-varying reconstruction is an optimized linear combination of basis DVFs such that the 2D projection of the 3D volume at that timepoint matches the projection image. (3) The 3D dose distribution is computed for each timepoint in the set of 3D reconstructed fluoroscopic images, from which the total effective 3D delivered dose is calculated by accumulating the deformed dose distributions. This approach was first validated using two modified digital extended cardiac-torso (XCAT) phantoms with lung tumors and different respiratory motions. The estimated doses were compared to the dose that would be calculated for routine 4DCT-based planning and to the actual delivered dose calculated using “ground truth” XCAT phantoms at all timepoints. The approach was also tested on one set of patient data, demonstrating its application in a clinical scenario.
Results: For the first XCAT phantom, which has a mostly regular breathing pattern, the errors in the 95% volume dose (D95) are 0.11% and 0.83% for 3D fluoroscopic images reconstructed from kV and MV projections, respectively, compared to the ground truth, which is clinically comparable to 4DCT (0.093%). For the second XCAT phantom, which has an irregular breathing pattern, the errors are 0.81% and 1.75% for kV and MV reconstructions, both better than that of 4DCT (4.01%). For the real patient case, although it is impossible to obtain the actual delivered dose, the dose estimate is clinically reasonable and demonstrates differences between 4DCT- and MV reconstruction-based dose estimates.
Conclusions: With the availability of kV or MV projection images, the proposed approach is able to assess delivered doses for all respiratory phases during treatment. Compared to the planning dose based on 4DCT, dose estimation using reconstructed 3D fluoroscopic images was as good as 4DCT for the regular respiratory pattern and provided a better dose estimate for the irregular respiratory pattern.
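Step (3) above, accumulating deformed dose over timepoints, can be sketched in 1D. The basis DVFs, weights, and nearest-neighbor pull-back below are illustrative stand-ins for the paper's 3D motion model and clinical dose warping:

```python
import numpy as np

# Motion model: any respiratory DVF is a linear combination of basis DVFs
# (here 1D displacement profiles along SI; purely illustrative).
n_vox = 50
x = np.arange(n_vox)
basis = np.stack([np.sin(np.pi * x / n_vox),       # bulk SI motion mode
                  np.sin(2 * np.pi * x / n_vox)])  # secondary mode

def dvf_at(weights):
    """DVF as a linear combination of basis DVFs (displacement in voxels)."""
    return weights @ basis

def accumulate_dose(dose_per_timepoint, weights_per_timepoint):
    """Deform each timepoint's dose back to the reference anatomy and average.

    Nearest-neighbor pull-back for brevity; clinical dose accumulation
    uses energy/mass-conserving mapping.
    """
    total = np.zeros(n_vox)
    for dose, w in zip(dose_per_timepoint, weights_per_timepoint):
        src = np.clip(np.rint(x + dvf_at(w)).astype(int), 0, n_vox - 1)
        total += dose[src]
    return total / len(dose_per_timepoint)

doses = [np.exp(-0.5 * ((x - 25) / 5) ** 2)] * 3  # static dose field
weights = [np.array([0.0, 0.0]),                  # exhale (no motion)
           np.array([2.0, 0.5]),                  # mid-cycle
           np.array([4.0, 1.0])]                  # inhale
accumulated = accumulate_dose(doses, weights)
```

In the paper, the per-timepoint weights come from the optimization that matches each 2D kV/MV projection, so the accumulation reflects the motion actually observed during delivery.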
A fast GPU-based Monte Carlo simulation of proton transport with detailed modeling of nonelastic interactions
42(2015); http://dx.doi.org/10.1118/1.4921046
Purpose: Very fast Monte Carlo (MC) simulations of proton transport have recently been implemented on graphics processing units (GPUs). However, these MCs usually use simplified models for nonelastic proton–nucleus interactions. The authors’ primary goal is to build a GPU-based proton transport MC with detailed modeling of elastic and nonelastic proton–nucleus collisions.
Methods: Using the cuda framework, the authors implemented GPU kernels for the following tasks: (1) simulation of beam spots from possible scanning-nozzle configurations, (2) proton propagation through the CT geometry, taking into account nuclear elastic scattering, multiple scattering, and energy loss straggling, (3) modeling of the intranuclear cascade stage of nonelastic interactions when they occur, (4) simulation of nuclear evaporation, and (5) statistical error estimates on the dose. To validate the MC, the authors performed (1) secondary particle yield calculations for proton collisions with therapeutically relevant nuclei, (2) dose calculations in homogeneous phantoms, and (3) recalculations of complex head and neck treatment plans from a commercially available treatment planning system, and compared the results with geant 4.9.6p2/TOPAS.
Results: Yields, energy, and angular distributions of secondaries from nonelastic collisions on various nuclei are in good agreement with the geant 4.9.6p2 Bertini and Binary cascade models. The 3D-gamma pass rate at 2%/2 mm for treatment plan simulations is typically 98%. The net computational time on an NVIDIA GTX680 card, including all CPU–GPU data transfers, is ∼20 s for 1 × 10⁷ proton histories.
Conclusions: This GPU-based MC is the first of its kind to include a detailed nuclear model to handle nonelastic interactions of protons with any nucleus. Dosimetric calculations are in very good agreement with geant 4.9.6p2/TOPAS. The MC is being integrated into a framework to perform fast routine clinical QA of pencil-beam based treatment plans, and is being used as the dose calculation engine in a clinically applicable MC-based IMPT treatment planning system. The detailed nuclear modeling will allow very fast linear energy transfer and neutron dose estimates on the GPU.
Commissioning of a proton gantry equipped with dual x-ray imagers and a robotic patient positioner, and evaluation of the accuracy of single-beam image registration for this system
42(2015); http://dx.doi.org/10.1118/1.4921122
Purpose: To check the accuracy of a gantry equipped with dual x-ray imagers and a robotic patient positioner for proton radiotherapy, and to evaluate the accuracy and feasibility of single-beam registration using the robotic positioner.
Methods: One of the proton treatment rooms at the authors’ institution was upgraded to include a robotic patient positioner (couch) with 6 degrees of freedom and dual orthogonal kilovoltage x-ray imaging panels. The wander of the proton beam central axis, of the beamline, and of the orthogonal image panel crosswires from the gantry isocenter was measured for different gantry angles. The couch movement accuracy and couch wander from the gantry isocenter were measured for couch loadings of 50–300 lb with couch rotations from 0° to ±90°. The combined accuracy of the gantry, couch, and imagers was checked using a custom-made 30 × 30 × 30 cm³ Styrofoam phantom with Beekley markers embedded in it. A treatment in this room can be set up and registered at a setup field location, then moved precisely to any other treatment location without requiring additional image registration. The accuracy of this single-beam registration strategy was checked for treatments containing multiple beams with different combinations of gantry angles, couch yaws, and beam locations.
Results: The proton beam central axis wander from the gantry isocenter was within 0.5 mm for gantry rotations in both the clockwise (CW) and counterclockwise (CCW) directions. The maximum wanders of the beamline and orthogonal imager crosswire centers from the gantry isocenter were within 0.5 and 0.8 mm, respectively, for gantry rotations in the CW and CCW directions. Vertical and horizontal couch wanders from the gantry isocenter were within 0.4 and 1.3 mm, respectively, for couch yaws from 0° to ±90°. For a treatment with multiple beams with different gantry angles, couch yaws, and beam locations, the measured displacements of the treatment beam locations from the one based on the initial setup beam, registered at a gantry angle of 0°/180° and couch yaw of 0°, were within 1.5 mm in the three translations and 0.5° in the three rotations for a 200 lb couch loading.
Conclusions: The results demonstrate that the gantry equipped with a robotic patient positioner and dual imaging panels satisfies treatment requirements for proton radiotherapy. The combined accuracy of the gantry, couch, and imagers allows a patient to be registered at one setup position and then moved precisely to another treatment position by commanding the robotic patient positioner and delivering treatment without additional image registration.
- RADIATION IMAGING PHYSICS
Technical Note: Intrafractional changes in time lag relationship between anterior–posterior external and superior–inferior internal motion signals in abdominal tumor sites
42(2015); http://dx.doi.org/10.1118/1.4919446
Purpose: To investigate the constancy, within a treatment session, of the time lag relationship between implanted markers in abdominal tumors and an external motion surrogate.
Methods: Six gastroesophageal junction and three pancreatic cancer patients (IRB-approved protocol) each received two cone-beam CT (CBCT) scans, one before and one after treatment. The time between scans was less than 30 min. Each patient had at least one fiducial marker implanted near the tumor. In all scans, abdominal displacement (Varian RPM) was recorded as the external motion signal. Purpose-built software tracked the fiducials, representing the internal signal, in CBCT projection images. The time lag between the superior–inferior (SI) internal and anterior–posterior external signals was found by maximizing the correlation coefficient in each breathing cycle and averaging over all cycles. The time-lag-induced discrepancy between the internal SI position and that predicted from the external signal (external prediction error) was also calculated.
Results: The mean ± standard deviation time lag, over all scans and patients, was 0.10 ± 0.07 s (range 0.01–0.36 s). The external signal lagged the internal signal in 17/18 scans. The change in time lag between the pre- and post-treatment CBCTs was 0.06 ± 0.07 s (range 0.01–0.22 s), corresponding to 3.1% ± 3.7% (range 0.6%–10.8%) of the gate width (range 1.6–3.1 s). In only one patient did the change in time lag exceed 10% of the gate width. The external prediction error over all scans of all patients varied from 0.1 ± 0.1 to 1.6 ± 0.4 mm.
Conclusions: The time lag between internal motion along SI and the external signal is small compared to the treatment gate width for the abdominal patients examined in this study. The change in time lag within a treatment session, inferred from the pre- and post-treatment measurements, is also small, suggesting that a single measurement of time lag at the start of the session is adequate. These findings require confirmation in a larger number of patients.
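The lag estimation, maximizing the correlation coefficient over candidate shifts, can be sketched as follows on synthetic signals; the study applies this within each breathing cycle and averages, whereas this sketch uses one whole trace:

```python
import numpy as np

def time_lag(internal, external, dt, max_lag_s=0.5):
    """Estimate the lag (s) of `external` behind `internal` by maximizing
    the correlation coefficient over integer-sample shifts.

    A positive value means the external signal lags the internal one.
    """
    max_shift = int(round(max_lag_s / dt))
    a = internal[max_shift:len(internal) - max_shift]
    best_shift, best_r = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        b = external[max_shift + s:len(external) - max_shift + s]
        r = np.corrcoef(a, b)[0, 1]
        if r > best_r:
            best_shift, best_r = s, r
    return best_shift * dt

# Synthetic 4 s breathing cycle sampled at 25 Hz; external lags by 0.12 s.
dt = 0.04
t = np.arange(0, 30, dt)
internal = np.sin(2 * np.pi * t / 4.0)
external = np.sin(2 * np.pi * (t - 0.12) / 4.0)
print(time_lag(internal, external, dt))  # 0.12 (3 samples at 25 Hz)
```

The resolution of the estimate is one sample period; a per-cycle version with sub-sample interpolation would be closer to what the study reports.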
42(2015); http://dx.doi.org/10.1118/1.4919680View Description Hide DescriptionPurpose:
Compressed sensing (CS) is a new approach in medical imaging which allows a sparse image to be reconstructed from undersampled data. Total variation (TV) based minimization algorithms are the one CS technique that has achieved great success due to its virtue of preserving edges while reducing image noise. The purpose of this work is to implement and evaluate the performance of a TV minimization filter able to increase the signal difference to noise ratio (SDNR) of digital breast tomosynthesis (DBT) images.Methods:
Assuming a Poisson noise model, the authors present a practical methodology, based on Rudin, Osher, and Fatemi model, which directly applies a TV minimization filter to real phantom and clinical DBT images. Different moments of filter application (before and after image reconstruction) and the suitable Lagrange multiplier (λ) to be used in filter equation are studied. Also, the relationship between background standard deviation (σB ) of unfiltered images and optimal λ values is determined, in order to maximize the SDNR. Qualitative and quantitative analyses are conducted between unfiltered and filtered images and between the different moments of filter application. The proposed methodology is also tested with one clinical DBT data set.Results:
Using phantom data, when the filter is applied to the projections, the authors observed a decrease of 31.34% in TV and an increase of 5.29% and 5.44% in SDNR and full width at half maximum (FWHM), respectively. When applied after reconstruction, a decrease of 35.48% and 2.59% was achieved for TV and FWHM, respectively, and an increase of 8.32% for SDNR. For each moment of filter application, the optimal λ value found through a comprehensive study was λ = 85 and λ = 60 when the filter is applied before and after reconstruction, respectively. The best fit found for the relationship between σB and the corresponding λ values that allowed the highest filtered SDNR was the logarithmic adjustment. The difference between the λ values obtained by the first approach and the logarithmic adjustment ranges from 0.11% (filter applied before reconstruction) to 2.54% (filter applied after reconstruction). On the other hand, a decrease of 37.63% and 2.42% in TV and FWHM, respectively, and an increase of 24.39% in SDNR were obtained when the filter is applied to clinical data. This great minimization is present through a visual inspection of unfiltered and filtered clinical images, where areas with higher noise level become smoother while preserving edges and details of the structures.Conclusions:
An optimized digital filter for TV minimization in DBT imaging has been presented. The reliability of the logarithmic relation found between σB and λ values was confirmed and can be used in future work. Both quantitative and qualitative analyses performed on a clinical DBT image confirmed the relevance of this approach in improving image quality in DBT imaging. The results are encouraging: SDNR is increased in a short processing time while the principal variations in the image, the structures’ boundaries, are preserved.
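The ROF-style TV filtering described in this abstract can be illustrated with a minimal gradient-descent sketch. This is not the authors' implementation: the smoothing constant, step size, iteration count, and the convention that λ weights the TV term (conventions vary between papers) are all assumptions made here for illustration.

```python
import numpy as np

def tv_denoise(image, lam=0.1, step=0.05, iters=200):
    """Approximately minimize 0.5*||u - image||^2 + lam*TV(u) by
    gradient descent on a smoothed (differentiable) TV term."""
    u = image.astype(float).copy()
    eps = 1e-8  # keeps the TV gradient finite where the image is flat
    for _ in range(iters):
        # forward differences (last row/column padded by replication)
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps)
        px, py = ux / mag, uy / mag
        # divergence of the normalized gradient field (backward differences)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u -= step * ((u - image) - lam * div)
    return u

def total_variation(u):
    """Anisotropic total variation of a 2D image."""
    return float(np.abs(np.diff(u, axis=0)).sum() + np.abs(np.diff(u, axis=1)).sum())

# noisy piecewise-constant phantom (synthetic stand-in for a DBT slice)
rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
den = tv_denoise(noisy, lam=0.15)
print(round(total_variation(noisy), 1), round(total_variation(den), 1))
```

The edge-preserving behavior the abstract reports comes from the normalized gradient in the divergence term: large jumps contribute the same unit-magnitude flux as small ones, so edges are penalized far less than fine-grained noise.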
42(2015); http://dx.doi.org/10.1118/1.4919772
Purpose:
To help improve the efficacy of screening mammography by eventually establishing a new, optimal, personalized screening paradigm, the authors investigated the potential of quantitative multiscale texture and density feature analysis of digital mammograms to predict near-term breast cancer risk.
Methods:
The authors’ dataset includes digital mammograms acquired from 340 women. Among them, 141 were positive and 199 were negative/benign cases. The negative digital mammograms acquired from the “prior” screening examinations were used in the study. Based on the intensity value distributions, five subregions at different scales were extracted from each mammogram. Five groups of features, including density and texture features, were developed and calculated for each subregion. Sequential forward floating selection was used to search for effective feature combinations. Using the selected features, a support vector machine (SVM) was optimized using tenfold cross-validation to predict the risk of each woman having image-detectable cancer in the next sequential mammography screening. The area under the receiver operating characteristic curve (AUC) was used as the performance assessment index.
Results:
From a total of 765 features computed from the multiscale subregions, an optimal set of 12 features was selected. Applying this feature set, an SVM classifier yielded an AUC of 0.729 ± 0.021. The positive predictive value was 0.657 (92 of 140) and the negative predictive value was 0.755 (151 of 200).
Conclusions:
The study results demonstrated a moderately high positive association between risk prediction scores generated by the quantitative multiscale mammographic image feature analysis and the actual risk of a woman having image-detectable breast cancer in the next sequential examination.
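The cross-validated scoring and AUC evaluation described above can be sketched compactly. The features below are synthetic stand-ins (not the authors' texture/density features), and a simple nearest-class-mean scorer is used in place of the optimized SVM; only the tenfold cross-validation structure and the rank-based AUC computation are the point of this example.

```python
import numpy as np

def auc_score(y, s):
    """AUC via the Mann-Whitney rank-sum statistic (no ties assumed)."""
    order = np.argsort(s)
    ranks = np.empty(len(s))
    ranks[order] = np.arange(1, len(s) + 1)
    n_pos = int(y.sum()); n_neg = len(y) - n_pos
    return float((ranks[y == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg))

rng = np.random.default_rng(1)
# 141 positive / 199 negative cases, 12 selected features (synthetic values)
n_pos, n_neg, n_feat = 141, 199, 12
X = np.vstack([rng.normal(0.3, 1.0, (n_pos, n_feat)),
               rng.normal(0.0, 1.0, (n_neg, n_feat))])
y = np.array([1] * n_pos + [0] * n_neg)

# tenfold cross-validation: score each case with a model trained on the rest
idx = rng.permutation(len(y))
folds = np.array_split(idx, 10)
scores = np.empty(len(y))
for k in range(10):
    test = folds[k]
    train = np.concatenate([folds[j] for j in range(10) if j != k])
    w = X[train][y[train] == 1].mean(0) - X[train][y[train] == 0].mean(0)
    scores[test] = X[test] @ w  # linear score along the class-mean direction

auc = auc_score(y, scores)
print(round(auc, 3))
```

Scoring every case only with models that never saw it in training is what makes the reported AUC an out-of-sample estimate rather than a training-set figure.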
42(2015); http://dx.doi.org/10.1118/1.4921120
Purpose:
To provide a noninvasive technique for measuring the intensity profile of the fan beam in a computed tomography (CT) scanner that is cost-effective and easily implemented, without the need to access proprietary scanner information or service modes.
Methods:
The fabrication of an inexpensive aperture is described, which is used to expose radiochromic film in a rotating CT gantry. A series of exposures is made, each of which is digitized on a personal computer document scanner, and the resulting data set is analyzed to produce a self-consistent calibration of relative radiation exposure. The bow tie profiles were analyzed to determine the precision of the process and were compared to two other measurement techniques: direct measurements from the CT gantry detectors and a dynamic dosimeter.
Results:
The radiochromic film method presented here can measure radiation exposures with a precision of ∼6% root-mean-square relative error. The intensity profiles agree with those from the existing techniques to within a maximum of 25% root-mean-square relative error.
Conclusions:
The proposed radiochromic film method for measuring bow tie profiles is an inexpensive (∼$100 USD + film costs), noninvasive method to measure the fan beam intensity profile in CT scanners.
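The root-mean-square relative error figure used above to compare profiles is straightforward to compute; the following sketch shows the comparison on an invented bow tie transmission curve (the fan-angle range, Gaussian profile shape, and 6% noise level are all hypothetical stand-ins, not the paper's data).

```python
import numpy as np

def rms_relative_error(measured, reference):
    """Root-mean-square relative error between two intensity profiles."""
    measured = np.asarray(measured, float)
    reference = np.asarray(reference, float)
    rel = (measured - reference) / reference
    return float(np.sqrt(np.mean(rel ** 2)))

# idealized bow tie profile: transmission falls off toward the fan edges
angle = np.linspace(-25, 25, 101)          # fan angle in degrees (assumed)
reference = np.exp(-0.002 * angle ** 2)    # hypothetical reference profile
rng = np.random.default_rng(2)
film = reference * (1 + 0.06 * rng.standard_normal(angle.shape))  # ~6% noise
err = rms_relative_error(film, reference)
print(round(err, 3))
```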
42(2015); http://dx.doi.org/10.1118/1.4919849
Purpose:
Signal detection in 3D medical images depends on many factors, such as foveal and peripheral vision, the type of signal, background complexity, and the speed at which the frames are displayed. In this paper, the authors focus on the speed with which radiologists and naïve observers search through medical images. Prior to the study, the authors asked the radiologists to estimate the speed at which they scrolled through CT sets; they gave a subjective estimate of 5 frames per second (fps). The aim of this paper is to measure and analyze the speed with which humans scroll through image stacks, presenting a method to visually display the behavior of observers as the search is made as well as measuring the accuracy of their decisions. This information will be useful in the development of model observers, mathematical algorithms that can be used to evaluate diagnostic imaging systems.
Methods:
The authors performed a series of 3D four-alternative forced-choice lung nodule detection tasks on volumetric stacks of chest CT images iteratively reconstructed with a lung algorithm. The strategy used by three radiologists and three naïve observers was assessed using an eye-tracker in order to establish where their gaze was fixed during the experiment and to verify that, when a decision was made, a correct answer was not due only to chance. In a first set of experiments, the observers were restricted to reading the images at three fixed speeds of image scrolling and were allowed to see each alternative once. In the second set of experiments, the subjects were allowed to scroll through the image stacks at will, with no time or gaze limits. In both the fixed-speed and free-scrolling conditions, the four image stacks were displayed simultaneously. All trials were shown at two different image contrasts.
Results:
The authors determined a histogram of scrolling speeds in frames per second. The scrolling speed of both the naïve observers and the radiologists at the moment the signal was detected was measured at 25–30 fps. For the task chosen, the performance of the observers was not affected by the contrast or by the experience of the observer. However, the naïve observers exhibited a different scrolling pattern than the radiologists, with a tendency toward a higher number of direction changes and a larger number of slices viewed.
Conclusions:
The authors have determined a distribution of speeds for volumetric detection tasks. The speed at detection was higher than that subjectively estimated by the radiologists before the experiment. The speed information that was measured will be useful in the development of 3D model observers, especially anthropomorphic model observers which try to mimic human behavior.
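The scrolling-speed histogram described above reduces to converting the inter-slice display intervals into instantaneous frames-per-second values and binning them. The timestamps below are simulated (a hypothetical observer scrolling at roughly 24-32 fps); the binning choices are likewise assumptions for illustration.

```python
import numpy as np

def scroll_speeds(timestamps):
    """Instantaneous scrolling speed (frames per second) from the
    display times, in seconds, of consecutive slices."""
    dt = np.diff(np.asarray(timestamps, dtype=float))
    return 1.0 / dt[dt > 0]  # ignore repeated/zero-interval events

# hypothetical display times for 60 slices, ~24-32 fps scrolling
rng = np.random.default_rng(3)
t = np.cumsum(rng.uniform(1 / 32, 1 / 24, 60))
speeds = scroll_speeds(t)

# histogram of speeds in 5-fps bins, as one might plot per observer
hist, edges = np.histogram(speeds, bins=np.arange(0, 41, 5))
print(round(float(np.median(speeds)), 1))
```

Direction changes, another quantity the study tracks, would fall out of the same timestamp stream by attaching slice indices and counting sign changes of their differences.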
42(2015); http://dx.doi.org/10.1118/1.4921138
Purpose:
Fast-growing access to large databases and cloud-stored data presents a unique opportunity for multi-atlas based image segmentation, but it also presents challenges in heterogeneous atlas quality and computational burden. This work aims to develop a novel two-stage method tailored to large atlas collections of varied quality, so that high-accuracy segmentation can be achieved at low computational cost.
Methods:
An atlas subset selection scheme is proposed to substitute a low-cost alternative for a significant portion of the computationally expensive full-fledged registration in the conventional scheme. More specifically, the authors introduce a two-stage atlas subset selection method. In the first stage, an augmented subset is obtained based on a low-cost registration configuration and a preliminary relevance metric; in the second stage, the subset is further narrowed down to a fusion set of the desired size, based on full-fledged registration and a refined relevance metric. An inference model is developed to characterize the relationship between the preliminary and refined relevance metrics, and a proper augmented subset size is derived to ensure that the desired atlases survive the preliminary selection with high probability.
Results:
The performance of the proposed scheme was assessed with cross validation on two clinical datasets consisting of manually segmented prostate and brain magnetic resonance images, respectively. The proposed scheme demonstrates end-to-end segmentation performance comparable to that of the conventional single-stage selection method, but with a significant reduction in computation. Compared with an alternative computation reduction method, the authors’ scheme improves the mean and median Dice similarity coefficient values from (0.74, 0.78) to (0.83, 0.85) for prostate segmentation and from (0.82, 0.84) to (0.95, 0.95) for corpus callosum segmentation, with statistical significance.
Conclusions:
The authors have developed a novel two-stage atlas subset selection scheme for multi-atlas based segmentation. It achieves good segmentation accuracy with significantly reduced computation cost, making it a suitable configuration in the presence of extensive heterogeneous atlases.
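The two-stage selection idea can be sketched as "cheap metric on all atlases, expensive metric on the survivors." In this toy version (not the authors' pipeline), normalized cross-correlation on downsampled images stands in for the low-cost preliminary relevance metric, and full-resolution correlation stands in for the refined metric after full-fledged registration; the atlas images themselves are random arrays constructed so that atlas 0 resembles the target.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation, used here as a relevance metric."""
    a = a - a.mean(); b = b - b.mean()
    return float((a * b).sum() / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def two_stage_select(target, atlases, n_augmented, n_fusion):
    """Stage 1: rank all atlases with a cheap metric on 4x-downsampled
    images and keep an augmented subset. Stage 2: re-rank only that
    subset at full resolution and keep the final fusion set."""
    coarse = [ncc(target[::4, ::4], a[::4, ::4]) for a in atlases]
    augmented = np.argsort(coarse)[::-1][:n_augmented]
    refined = sorted(((int(i), ncc(target, atlases[i])) for i in augmented),
                     key=lambda p: -p[1])
    return [i for i, _ in refined[:n_fusion]]

rng = np.random.default_rng(4)
target = rng.standard_normal((32, 32))
# atlas 0 closely resembles the target; the other 19 are unrelated
atlases = [target + 0.1 * rng.standard_normal(target.shape)] + \
          [rng.standard_normal(target.shape) for _ in range(19)]
sel = two_stage_select(target, atlases, n_augmented=8, n_fusion=3)
print(sel)
```

The augmented-subset size is the quantity the paper's inference model tunes: it must be large enough that atlases ranked highly by the refined metric survive the coarse stage with high probability, yet small enough that the expensive stage stays cheap.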
Effect of color visualization and display hardware on the visual assessment of pseudocolor medical images
42(2015); http://dx.doi.org/10.1118/1.4921125
Purpose:
Even though the use of color in the interpretation of medical images has increased significantly in recent years, the ad hoc manner in which color is handled and the lack of standard approaches have been associated with suboptimal and inconsistent diagnostic decisions, with a negative impact on patient treatment and prognosis. The purpose of this study is to determine whether the choice of color scale and display device hardware affects the visual assessment of patterns that have the characteristics of functional medical images.
Methods:
Perfusion magnetic resonance imaging (MRI) was the basis for designing and performing the experiments. Synthetic images resembling brain dynamic contrast-enhanced MRI, consisting of scaled mixtures of white, lumpy, and clustered backgrounds, were used to assess the performance of a rainbow (“jet”), a heated black-body (“hot”), and a gray (“gray”) color scale with display devices of different quality on the detection of small changes in color intensity. The authors used a two-alternative forced-choice design in which readers were presented with 600 pairs of images. Each pair consisted of two images of the same pattern, flipped along the vertical axis, with a small difference in intensity. Readers were asked to select the image with the higher intensity. Three differences in intensity were tested on four display devices: a medical-grade three-million-pixel display, a consumer-grade monitor, a tablet device, and a phone.
Results:
The estimates of percent correct show that jet outperformed hot and gray in the high and low ranges of the color scales for all devices, with a maximum difference in performance of 18% (confidence interval: 6%, 30%). Performance with hot differed between high and low intensities: it was comparable to jet for the high range and worse than gray for lower intensity values. Similar performance was seen between devices using jet and hot, while gray performance was better on handheld devices. Response times were shorter with jet.
Conclusions:
These findings demonstrate that the choice of color scale and display hardware affects the visual comparative analysis of pseudocolor images. Follow-up studies in clinical settings are being considered to confirm the results with patient images.
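The percent-correct comparisons in this two-alternative forced-choice design come down to binomial proportions with confidence intervals. The sketch below uses a normal-approximation (Wald) interval and invented correct-response counts (the 600 trials per condition match the study design, but 540 and 432 are hypothetical numbers, not reported results).

```python
import numpy as np

def percent_correct_ci(n_correct, n_trials, z=1.96):
    """Proportion correct in a 2AFC task with a Wald 95% confidence
    interval (normal approximation to the binomial)."""
    p = n_correct / n_trials
    half = z * np.sqrt(p * (1 - p) / n_trials)
    return p, (p - half, p + half)

# hypothetical counts over 600 pairs per condition
p_jet, ci_jet = percent_correct_ci(540, 600)
p_gray, ci_gray = percent_correct_ci(432, 600)
print(round(p_jet, 2), round(p_gray, 2))
```

With 600 trials the half-width of the interval is only a few percent, which is what allows a performance difference on the order of 18% between color scales to be resolved cleanly.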
42(2015); http://dx.doi.org/10.1118/1.4921067
Purpose:
A quantitative and objective metric, the medical similarity index (MSI), has been developed for evaluating the accuracy of a medical image segmentation relative to a reference segmentation. The MSI uses the medical consideration function (MCF) as its basis.
Methods:
Currently, no indices provide quantitative evaluations of segmentation accuracy with medical considerations. Variations in segmentation can occur due to individual skill levels and medical relevance (curative or palliative intent); boundary uncertainty due to volume averaging, contrast levels, spatial resolution, and unresolved motion all affect the accuracy of a patient segmentation. Current accuracy-measuring indices are not medically relevant. For example, undercontouring the tumor volume is not differentiated from overcontouring it. The Dice similarity coefficient (DSC) and the Hausdorff distance (HD) are two similarity measures often used; however, these metrics consider only the geometric difference, without considering medical implications. Two segmentations (under- vs overcontouring the tumor) with similar DSC and HD measures could produce significantly different medical treatment results. The authors propose an MSI involving a user-defined MCF derived from an asymmetric Gaussian function. The shape of the MCF can be determined by the user, reflecting the anatomical location and characteristics of a particular tissue, organ, or tumor type. The peak of the MCF is set along the reference contour; the inner and outer slopes are selected by the user. The discrepancy between the test and reference contours is calculated at each pixel by using a bidirectional local distance measure. The MCF value corresponding to that distance is summed and averaged to produce the MSI. Synthetic segmentations and clinical data from a 15-institution trial for a head-and-neck case are scored and compared using MSI, DSC, and Hausdorff distance.
Results:
The MSI was shown to reflect medical considerations through the choice of MCF penalties for under- and overcontouring. Existing similarity scores were either insensitive to medical realities or simply inaccurate.
Conclusions:
The medical similarity index, a segmentation evaluation metric based on medical considerations, has been proposed, developed, and tested to incorporate clinically relevant considerations beyond geometric parameters alone.
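The asymmetry at the heart of the MSI can be shown with a small sketch. This is not the authors' implementation: toy contours in polar form replace real segmentations, and the per-angle radial difference stands in for the bidirectional local distance measure; only the asymmetric Gaussian MCF and the averaging step follow the description above. The sign convention (negative inside the reference, positive outside) and the slope values are assumptions.

```python
import numpy as np

def mcf(d, sigma_in, sigma_out):
    """Asymmetric Gaussian medical consideration function.
    The peak (value 1) sits on the reference contour (d = 0);
    d < 0 is undercontouring, d > 0 is overcontouring, and the
    two slopes are set independently by the user."""
    d = np.asarray(d, float)
    sigma = np.where(d < 0, sigma_in, sigma_out)
    return np.exp(-0.5 * (d / sigma) ** 2)

def msi(d, sigma_in, sigma_out):
    """Average MCF value over the per-point contour discrepancies."""
    return float(mcf(d, sigma_in, sigma_out).mean())

# toy contours: per-angle radial distance from the reference (in mm, say)
theta = np.linspace(0, 2 * np.pi, 180, endpoint=False)
under = -2.0 + 0.2 * np.sin(3 * theta)  # test contour ~2 mm inside
over = 2.0 + 0.2 * np.sin(3 * theta)    # test contour ~2 mm outside

# steep inner slope, gentle outer slope: undercontouring a tumor is
# penalized more than overcontouring it by the same distance
print(round(msi(under, 1.0, 3.0), 3), round(msi(over, 1.0, 3.0), 3))
```

A symmetric metric such as DSC or HD would score these two contours nearly identically; the MSI separates them, which is exactly the clinical distinction the abstract argues geometric measures miss.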