Index of content:
Volume 36, Issue 12, December 2009
36 (2009); http://dx.doi.org/10.1118/1.3253462
36 (2009); http://dx.doi.org/10.1118/1.3223359
- RADIATION THERAPY PHYSICS
Evaluation of similarity measures for use in the intensity-based rigid 2D-3D registration for patient positioning in radiotherapy
36 (2009); http://dx.doi.org/10.1118/1.3250843
Purpose:
Rigid 2D-3D registration is an alternative to 3D-3D registration in cases where largely bony anatomy can be used for patient positioning in external beam radiation therapy. In this article, the authors evaluated seven similarity measures for use in intensity-based rigid 2D-3D registration using a variation of Skerl’s similarity-measure evaluation protocol.
Methods:
The seven similarity measures are partitioned intensity uniformity, normalized mutual information (NMI), normalized cross correlation (NCC), entropy of the difference image, pattern intensity (PI), gradient correlation (GC), and gradient difference (GD). In contrast to traditional evaluation methods that rely on visual inspection or registration outcomes, the similarity-measure evaluation protocol probes the transform parameter space and computes a number of similarity-measure properties, making it objective and independent of the optimization method. The protocol variation improves the quantification of the capture range. The authors used this protocol to investigate how the performance of the similarity measures is affected by the downsampling ratio, the region of interest, and the method of digitally reconstructed radiograph (DRR) calculation [i.e., the incremental ray-tracing method implemented on a central processing unit (CPU) or the 3D texture rendering method implemented on a graphics processing unit (GPU)]. The studies were carried out using both the kilovoltage (kV) and the megavoltage (MV) images of an anthropomorphic cranial phantom and the MV images of a head-and-neck cancer patient.
Results:
Both the phantom and the patient studies showed that 2D-3D registration using the GPU-based DRR calculation was more robust than, and similarly accurate to, registration using the CPU-based calculation. The phantom study using kV imaging suggested that NCC has the best accuracy and robustness, but its slow change in function value near the global maximum requires a stricter termination condition for an optimization method. The phantom study using MV imaging indicated that PI, GD, and GC have the best accuracy, while NCC and NMI have the best robustness. The clinical study using MV imaging showed that NCC and NMI have the best robustness.
Conclusions:
The authors evaluated the performance of seven similarity measures for use in 2D-3D image registration using the variation of Skerl’s similarity-measure evaluation protocol. The generalized methodology can be used to select the best similarity measure, determine optimal or near-optimal parameter choices, and choose the appropriate registration strategy for specific registration applications in medical imaging.
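To make the intensity-based measures above concrete, the sketch below computes normalized cross correlation (NCC) between a DRR and an acquired 2D image. This is a minimal textbook formulation for illustration, not the authors' implementation; image names and shapes are assumptions.

```python
import numpy as np

def ncc(drr, xray):
    """Normalized cross correlation between a DRR and a 2D x-ray image.

    Both inputs are 2D arrays of identical shape; the result lies in
    [-1, 1], with 1 indicating a perfect linear intensity relationship.
    """
    a = drr.astype(float) - drr.mean()
    b = xray.astype(float) - xray.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom)
```

Because NCC is invariant to linear intensity rescaling, a DRR and an x-ray that differ only in gain and offset still score 1, which is one reason it suits kV/DRR comparison.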
36 (2009); http://dx.doi.org/10.1118/1.3250857
Purpose:
Previous Monte Carlo and experimental studies involving secondary neutrons in proton therapy have employed a number of phantom materials designed to represent human tissue. In this study, the authors determined the suitability of common phantom materials for dosimetry of secondary neutrons, specifically for pediatric and intracranial proton therapy treatments.
Methods:
This was achieved through comparison of the absorbed dose and dose equivalent from neutrons generated within the phantom materials and various ICRP tissues. The phantom materials chosen for comparison were Lucite, liquid water, solid water, and A150 tissue-equivalent plastic. These phantom materials were compared to brain, muscle, and adipose tissues.
Results:
The doses observed were smaller in magnitude than those reported in previous experimental and Monte Carlo studies, which incorporated neutrons generated in the treatment head. The results show that for both neutron absorbed dose and dose equivalent, no single phantom material agrees with tissue within 5% at all the points considered. Solid water gave the smallest mean variation from the tissues out of field, where neutrons are the primary contributor to the total dose.
Conclusions:
Of the phantom materials considered, solid water shows best agreement with tissues out of field.
An integral quality monitoring system for real-time verification of intensity modulated radiation therapy
36 (2009); http://dx.doi.org/10.1118/1.3250859
Purpose:
To develop an independent, on-line beam monitoring system that can validate the accuracy of segment-by-segment energy fluence delivery for each treatment field. The system is also intended for pretreatment dosimetric quality assurance of intensity modulated radiation therapy (IMRT), on-line image-guided adaptive radiation therapy, and volumetric modulated arc therapy.
Methods:
The system, referred to as the integral quality monitor (IQM), utilizes an area-integrating energy fluence monitoring sensor (AIMS) positioned between the final beam-shaping device [i.e., the multileaf collimator (MLC)] and the patient. The prototype AIMS consists of a novel spatially sensitive large-area ionization chamber with a gradient along the direction of MLC motion. The signal from the AIMS provides a simple output for each beam segment, which is compared in real time to the expected value. The prototype ionization chamber, with a physical area of , has been constructed out of aluminum with the electrode separations varying linearly from 2 to 20 mm. A calculation method has been developed to predict AIMS signals based on an elementwise integration technique, which takes into account various predetermined factors, including the spatial response function of the chamber, MLC characteristics, beam transmission through the secondary jaws, and field size factors. The influence of the ionization chamber on the beam has been evaluated in terms of transmission, surface dose, beam profiles, and depth dose. The sensitivity of the system was tested by introducing small deviations in leaf positions. A small set of IMRT fields for prostate and head-and-neck plans was used to evaluate the system. The ionization chamber and the data acquisition software were interfaced to two different types of linear accelerators: Elekta Synergy and Varian iX.
Results:
For a field, the chamber attenuates the beam intensity by 7% and 5% for 6 and 18 MV beams, respectively, without significantly changing the depth dose, surface dose, and dose profile characteristics. An MLC bank calibration error of 1 mm causes the IQM signal of a aperture to change by 3%. A positioning error of 3 mm in a single 5 mm wide leaf in a aperture causes a signal difference of 2%. Initial results for prostate and head-and-neck IMRT fields show average agreement between calculation and measurement to within 1%, with a maximum deviation for each of the smallest beam segments within 5%. When the beam segments of a prostate IMRT field were shifted by 3 mm from their original positions along the direction of MLC motion, the IQM signals varied, on average, by 2.5%.
Conclusions:
The prototype IQM system can validate the accuracy of beam delivery in real time by comparing precalculated and measured AIMS signals. The system is capable of capturing errors in MLC leaf calibration or malfunctions in the positioning of an individual leaf. The AIMS does not significantly alter the beam quality and therefore could be implemented without requiring recommissioning measurements.
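The elementwise integration idea behind the IQM signal prediction can be sketched as a weighted sum over a beamlet grid: each open beamlet contributes its relative fluence scaled by the chamber's spatial response at that location. The function and array names below are hypothetical; the published method also folds in jaw transmission and field size factors, which are omitted here for brevity.

```python
import numpy as np

def predict_signal(response_map, fluence_map, aperture_mask):
    """Predict a spatially weighted integrating-chamber signal for one segment.

    response_map:  2D array of the chamber's spatial response (e.g. varying
                   linearly along the leaf-travel direction).
    fluence_map:   2D array of relative energy fluence per beamlet.
    aperture_mask: boolean 2D array, True where the MLC aperture is open.
    All three arrays share the same beamlet grid.
    """
    return float((response_map * fluence_map * aperture_mask).sum())
```

Because the response varies along the leaf-travel direction, the same aperture area produces a different signal when shifted, which is what lets the sensor detect MLC positioning errors.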
36 (2009); http://dx.doi.org/10.1118/1.3250867
Purpose:
The tissue phantom ratio (TPR) is a common dosimetric quantity used in photon dose calculations. For small photon fields with side lengths less than , TPR data are scarce in the literature. In this work, a self-contained functional representation of the TPR is proposed, valid for the whole range of clinically relevant depths and field sizes. This is especially useful for small fields shaped by multileaf collimators.
Methods:
TPRs were measured for quadratic fields with side lengths between 0.4 and . The measured data were fitted to a physically meaningful function taking electron buildup, buildup of scattered photons, beam attenuation, and beam hardening into account. The achievable accuracy was tested against measurements and data from the literature.
Results:
A set of parameters for the proposed function was derived for 6 and beams. The comparison of the calculated and the measured data generally yielded a difference of less than 1%. For field sizes below , a systematic discrepancy between the authors’ data and those of Cheng et al. [Med. Phys. 34, 3149–3157 (2007)] was found.
Conclusions:
With the proposed model, TPRs can be calculated for the full range of field sizes and depths required by treatment planning system algorithms and monitor unit check programs with very high accuracy. The method is also useful in detecting and reducing errors in measurement.
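The abstract does not reproduce the published parametrization, but the generic structure of such a fit — an electron build-up term multiplied by exponential photon attenuation — can be sketched as below. This is an illustrative functional form only; the paper's actual function additionally models scatter build-up and beam hardening, and its parameters depend on field size.

```python
import math

def tpr_model(d_cm, mu, k, norm=1.0):
    """Generic TPR depth model: electron build-up times exponential attenuation.

    d_cm: depth in cm; mu: effective attenuation coefficient (1/cm);
    k: build-up rate constant (1/cm); norm: overall normalization.
    Illustrative form, not the parametrization published in the paper.
    """
    return norm * (1.0 - math.exp(-k * d_cm)) * math.exp(-mu * d_cm)
```

With physically sensible parameters the model rises through the build-up region and then decays with depth, which is the qualitative behavior any TPR fit must reproduce.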
Fast, accurate photon beam accelerator modeling using BEAMnrc: A systematic investigation of efficiency enhancing methods and cross-section data
36 (2009); http://dx.doi.org/10.1118/1.3253300
In this work, an investigation of efficiency enhancing methods and cross-section data in the BEAMnrc Monte Carlo (MC) code system is presented. Additionally, BEAMnrc was compared with VMC++, another special-purpose MC code system that has recently been enhanced for simulation of the entire treatment head. BEAMnrc and VMC++ were used to simulate a photon beam from a Siemens Primus linear accelerator (linac), and phase space (PHSP) files were generated at source-to-surface distance for the and field sizes. The BEAMnrc parameters/techniques under investigation were grouped by (i) photon and bremsstrahlung cross sections, (ii) approximate efficiency improving techniques (AEITs), (iii) variance reduction techniques (VRTs), and (iv) a VRT (bremsstrahlung photon splitting) in combination with an AEIT (charged particle range rejection). The BEAMnrc PHSP file obtained without the efficiency enhancing techniques under study or, when that was not possible, with their default values (e.g., the EXACT boundary crossing algorithm) and with the default cross-section data (PEGS4 and Bethe–Heitler) was used as the “base line” for accuracy verification of the PHSP files generated from the groups described previously. Subsequently, a selection of the PHSP files was used as input for DOSXYZnrc-based water phantom dose calculations, which were verified against measurements. The performance of the different VRTs and AEITs available in BEAMnrc and of VMC++ was specified by the relative efficiency, i.e., the efficiency of the MC simulation relative to that of the BEAMnrc base-line calculation. The highest relative efficiencies were ( on a single processor) and ( on a single processor) for the field size with 50 million histories and the field size with 100 million histories, respectively, using the VRT directional bremsstrahlung splitting (DBS) with no electron splitting.
When DBS was used with electron splitting and combined with augmented charged particle range rejection, a technique recently introduced in BEAMnrc, relative efficiencies were ( on a single processor) and ( on a single processor) for the and field sizes, respectively. Calculations of the Siemens Primus treatment head with VMC++ produced relative efficiencies of ( on a single processor) and ( on a single processor) for the and field sizes, respectively. BEAMnrc PHSP calculations with DBS alone, or DBS in combination with charged particle range rejection, were more efficient than the other efficiency enhancing techniques used. Using VMC++, accurate simulations of the entire linac treatment head were performed within minutes on a single processor. Noteworthy differences in the mean energy, planar fluence, and angular and spectral distributions were observed with the NIST bremsstrahlung cross sections compared with those of Bethe–Heitler (the BEAMnrc default bremsstrahlung cross section). However, MC calculated dose distributions in water phantoms (using combinations of VRTs/AEITs and cross-section data) agreed within 2% with measurements. Furthermore, MC calculated dose distributions in a simulated water/air/water phantom, using NIST cross sections, agreed within 2% with the BEAMnrc Bethe–Heitler default case.
36 (2009); http://dx.doi.org/10.1118/1.3253463
Purpose:
The aim of this work was to investigate the use of amorphous silicon electronic portal imaging devices (EPIDs) for regular quality assurance of linear accelerator asymmetric jaw junctioning.
Methods:
The method measures the beam central axis position on the EPID to subpixel accuracy using two EPID images acquired with collimator angles 180° apart. Individual zero-jaw-position (“half-beam blocked”) images are then acquired and the jaw position precisely determined for each using penumbra interpolation. The accuracy of determining jaw position with the EPID method was measured by translating a block (simulating a jaw) by known distances using a translation stage and then measuring each translation distance with the EPID. To establish the utility of EPID-based junction dose measurements, radiographic film measurements of junction dose maxima/minima as a function of jaw gap/overlap were made and compared to EPID measurements. Using the method, the long-term stability of zero jaw positioning was assessed for four linear accelerators over a 1–1.5 yr period. The stability at nonzero gantry angles was assessed over a shorter period.
Results:
The accuracy of determining jaw translations with the method, found using the translation stage, was within 0.14 mm [standard deviation (SD) of 0.037 mm]. The junction doses measured with the EPID differed from those measured with film because of the non-water-equivalent scattering properties of the EPID and hence its different penumbra profile. The doses were approximately linear with gap or overlap, and a correction factor was derived to convert EPID-measured junction dose to the film-measured equivalent. Over a 1 yr period, the zero jaw positions at the gantry zero position were highly reproducible, with an average SD of 0.07 mm for the 16 collimator jaws examined. However, the average jaw positions ranged from −0.7 to 0.9 mm relative to the central axis for the different jaws. The zero jaw position was also reproducible at the gantry 90° position, with 0.1 mm SD variation, and the mean jaw position was consistently offset from the gantry zero position by 0.3–0.4 mm for the jaws studied.
Conclusions:
The EPID-based method is efficient and yields more precise data on linear accelerator jaw positioning and reproducibility than previous methods. The results highlight that zero jaw positions are reproducible to a level much smaller than the displayed jaw resolution and that better methods are needed to calibrate jaw positioning.
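Sub-pixel edge localization of the kind used for the jaw measurements can be illustrated by linearly interpolating the 50% crossing of a field-edge profile. The sketch below is a minimal version of penumbra interpolation, not the authors' code; the profile, pitch, and threshold level are assumptions.

```python
import numpy as np

def edge_position(profile, pixel_pitch_mm, level=0.5):
    """Locate a field edge to sub-pixel accuracy in a 1D EPID profile.

    Finds the first crossing of `level` (as a fraction of the open-field
    maximum) and linearly interpolates between the two straddling pixels.
    Assumes a monotonically rising edge at the start of the profile.
    """
    p = np.asarray(profile, float)
    thresh = level * p.max()
    i = int(np.argmax(p >= thresh))      # first pixel at/above threshold
    if i == 0:
        return 0.0
    frac = (thresh - p[i - 1]) / (p[i] - p[i - 1])
    return (i - 1 + frac) * pixel_pitch_mm
```

Averaging such interpolated crossings over many profile rows is what pushes the precision well below the native pixel pitch, consistent with the sub-0.1 mm reproducibility reported above.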
36 (2009); http://dx.doi.org/10.1118/1.3253464
Purpose:
Intensity modulated radiation therapy (IMRT) treatment plan quality depends on the planner’s level of experience and the amount of time the planner invests in developing the plan. Planners often unwittingly accept plans when further sparing of the organs at risk (OARs) is possible. The authors propose a method of IMRT treatment plan quality control that helps planners evaluate the doses to the OARs upon completion of a new plan.
Methods:
This is achieved by comparing the geometric configurations of the OARs and targets of a new patient with those of prior patients, whose plans are maintained in a database. The authors introduce the concept of a shape relationship descriptor and, specifically, the overlap volume histogram (OVH) to describe the spatial configuration of an OAR with respect to a target. The OVH provides a way to infer the likely dose-volume histograms (DVHs) of the OARs by comparing the relative spatial configurations between patients. A database of prior patients is built to serve as an external reference. At the conclusion of a new plan, planners search through the database and identify related patients by comparing the OAR-target geometric relationships of the new patient with those of prior patients. The treatment plans of these related patients are retrieved from the database and guide planners in determining whether lower doses to the OARs in the new plan are feasible.
Results:
Preliminary evaluation is promising. In this evaluation, the authors applied the analysis to the parotid DVHs of 32 prior head-and-neck patients, whose plans are maintained in a database. Each parotid was queried against the other 63 parotids to determine whether a lower dose was possible. The 17 parotids that promised the greatest reduction in (DVH dose at 50% volume) were flagged. These 17 parotids came from 13 patients. The method also indicated that the doses to the other nine parotids of the 13 patients could not be reduced, so they were included in the replanning process as controls. Replanning with an effort to reduce was conducted on these 26 parotids. After replanning, the average reductions for the 17 flagged parotids and the nine unflagged parotids were 6.6 and 1.9 Gy, respectively. These results demonstrate that the quality control method accurately identified not only the parotids that required dose reductions but also those for which dose reductions were marginal. Originally, 11 of the 17 flagged parotids did not meet the Radiation Therapy Oncology Group sparing goal of ; replanning reduced this number to three. Additionally, PTV coverage and OAR sparing of the original plans were compared to those of the replans using a pairwise Wilcoxon test. The statistical comparisons show that replanning compromised neither PTV coverage nor OAR sparing.
Conclusions:
This method provides an effective quality control mechanism for evaluating the DVHs of the OARs. Adoption of such a method will advance the quality of current IMRT planning, providing better treatment plan consistency.
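The OVH concept described above can be sketched directly: for each query distance r, it records the fraction of OAR volume lying within distance r of the target. The brute-force nearest-neighbour version below is for clarity only (real implementations use distance transforms, and the full OVH uses signed distances, negative inside the target); all names are illustrative.

```python
import numpy as np

def overlap_volume_histogram(oar_pts, target_pts, distances_mm):
    """Overlap volume histogram (OVH) of an OAR with respect to a target.

    oar_pts, target_pts: (N, 3) arrays of voxel coordinates in mm.
    Returns, for each query distance r in distances_mm, the fraction of
    OAR voxels whose distance to the nearest target voxel is <= r.
    """
    oar = np.asarray(oar_pts, float)
    tgt = np.asarray(target_pts, float)
    # distance from every OAR voxel to its nearest target voxel
    d = np.sqrt(((oar[:, None, :] - tgt[None, :, :]) ** 2).sum(-1)).min(axis=1)
    return np.array([(d <= r).mean() for r in distances_mm])
```

Two patients whose OVH curves nearly coincide have geometrically comparable OAR-target configurations, which is what justifies comparing their achievable DVHs.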
Development of prototype shielded cervical intracavitary brachytherapy applicators compatible with CT and MR imaging
36 (2009); http://dx.doi.org/10.1118/1.3253967
Purpose:
Intracavitary brachytherapy (ICBT) is an integral part of the treatment regimen for cervical cancer and, generally, outcome in terms of local disease control and complications is a function of dose to the disease bed and critical structures, respectively. It is therefore paramount to accurately determine the dose given via ICBT to the tumor bed as well as to critical structures. This is greatly facilitated through the use of advanced three-dimensional imaging modalities, such as CT and MR, to delineate critical and target structures with an ICBT applicator inserted in vivo. These methods are not possible when using a shielded applicator because of the image artifacts generated by interovoid shielding. The authors present two prototype shielded ICBT applicators that can be utilized for artifact-free CT image acquisition. They also investigate the MR amenability and dosimetry of a novel tungsten-alloy shielding material to extend the functionality of these devices.
Methods:
To accomplish artifact-free CT image acquisition, a “step-and-shoot” (S&S) methodology was utilized, which exploits the prototype applicators’ movable interovoid shielding. Both prototypes were placed in imaging phantoms that positioned the applicators in clinically applicable orientations. CT image sets were acquired of the prototype applicators as well as of a shielded Fletcher–Williamson (sFW) ovoid. Artifacts present in each CT image set were qualitatively compared between the prototype applicators following the S&S methodology and the sFW. To test the MR amenability of the novel tungsten-alloy shielding material, the authors constructed a phantom applicator that mimics the basic components of an ICBT ovoid. This phantom applicator positions the MR-compatible shields in orientations equivalent to the sFW bladder and rectal shields. MR images were acquired within a gadopentetate dimeglumine-doped water tank using standard pulse sequences and examined for artifacts. In addition, Monte Carlo simulations were performed to match the attenuation due to the thickness of this new shield type with that of current, clinically utilized ovoid shields and an HDR/PDR source.
Results:
Artifact-free CT images could be acquired of both applicator generations in a clinically applicable geometry using the S&S method. MR images acquired of the phantom applicator containing shields exhibited minimal clinically relevant artifacts. The thickness required to match the dosimetry of the MR-compatible and sFW rectal shields was determined using Monte Carlo simulations.
Conclusions:
Utilizing an S&S imaging method in conjunction with prototype applicators that feature movable interovoid shields, the authors were able to acquire artifact-free CT image sets in a clinically applicable geometry. MR images were acquired of a phantom applicator that contained shields composed of a novel tungsten alloy; artifacts were largely limited to regions within the ovoid cap and are of no clinical interest. The second-generation applicator utilizes this material for interovoid shielding.
36 (2009); http://dx.doi.org/10.1118/1.3259729
Purpose:
To obtain an accurate simulation of the dose from the 6 and 18 MV x-ray beams of a Siemens Oncor linear accelerator by comparing simulation to measurement, and to constrain the simulation by independently determining parameters of the treatment head and incident beam, in particular the energy and spot size.
Methods:
Measurements were made with the treatment head in three configurations: (1) the clinical configuration, (2) with the flattening filter removed, and (3) with the target and flattening filter removed. Parameters of the incident beam and treatment head were measured directly: incident beam energy and spectral width were determined from the percent-depth ionization of the raw beam (as described previously), spot size was determined using a spot camera, and the densities of the flattening filters were determined by weighing them. Simulations were done with the EGSnrc/BEAMnrc code. An asymmetric simulation was used, including offsets of the spot, primary collimator, and flattening filter from the collimator rotation axis.
Results:
Agreement between measurement and simulation was obtained to the less restrictive of 1% or 1 mm at 6 MV, both with and without the flattening filter in place, except in the buildup region. At 18 MV, the agreement was 1.5%/1.5 mm with the flattening filter in place and 1%/1 mm with it removed, again except in the buildup region. In the buildup region, the discrepancy was 2%/2 mm at 18 MV and 1.5%/1.5 mm at 6 MV, with the flattening filter either removed or in place. The methodology for measuring the source and geometry parameters for the treatment head simulation is described. Except to determine the density of the flattening filter, no physical modification of the treatment head is necessary to obtain those parameters; in particular, the flattening filter does not need to be removed as was done in this work.
Conclusions:
Good agreement between measured and simulated dose distributions was obtained, even in the buildup region. The simulation was tightly constrained by independent measurements of parameters of the incident beam and treatment head. The method of obtaining the input parameters is described and can be carried out on a clinical linear accelerator.
Development and evaluation of an ultrasound-guided tracking and gating system for hepatic radiotherapy
36 (2009); http://dx.doi.org/10.1118/1.3250893
Purpose:
Respiratory motion must be accounted for daily in order to permit optimum radiotherapy of hepatic malignancies. However, existing tracking systems are often invasive or poorly tolerated by patients. The authors describe the development and validation of an ultrasound-guided tracking and gating system for stereotactic body radiation therapy of the liver.
Methods:
This noninvasive system is designed to determine the correlation between tumor and external fiducial motion and to verify the optimum gating level for treatment delivery each day. A tracked ultrasound probe moves with patient respiration, obtaining 2D ultrasound images of tumor motion throughout the respiratory cycle. The target volume is registered to the static radiotherapy treatment beams in order to verify optimum gating levels. These gating levels are then transferred to an existing gating system for treatment delivery. The authors examined the temporal and spatial accuracy of this system using a custom-built phantom and verified the accuracy of gating level transfer and delivery.
Results:
The temporal accuracy of the ultrasound-guided system was shown to be comparable to that of the existing clinical x-ray imaging system. Using ultrasound rather than x rays to image internal targets provides good soft-tissue contrast without the invasiveness of implanted fiducial markers. High frame rates enable continuous monitoring of the target throughout the respiratory cycle. The authors anticipate that this passive monitoring system should be well tolerated by patients.
Conclusions:
The system developed provides good quality video of the laboratory motion phantom and can be successfully used in gated beam delivery.
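One way to picture the tumor/fiducial correlation step described above is a linear fit of paired motion traces, from which a fiducial-amplitude gating window can be derived. This sketch is only an illustration of the idea under a linear-correlation assumption; the function name, the end-exhale reference choice, and the tolerance parameter are all hypothetical, not the published method.

```python
import numpy as np

def gating_window(fiducial_trace, tumor_trace, tumor_tol_mm):
    """Derive an external-fiducial gating window from paired motion traces.

    fiducial_trace, tumor_trace: simultaneous 1D position samples (mm) of
    the external marker and the ultrasound-tracked target.  Returns
    (low, high) fiducial amplitudes for which a linear fit predicts the
    target within tumor_tol_mm of its end-exhale position.
    """
    f = np.asarray(fiducial_trace, float)
    t = np.asarray(tumor_trace, float)
    slope, intercept = np.polyfit(f, t, 1)   # tumor ≈ slope * fiducial + b
    t_ref = t.min()                          # end-exhale target position
    lo = (t_ref - intercept) / slope
    hi = (t_ref + tumor_tol_mm - intercept) / slope
    return (min(lo, hi), max(lo, hi))
```

Verifying such a window daily against ultrasound, rather than assuming a fixed correlation, is the safeguard the abstract's "verify the optimum gating level" step provides.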
36 (2009); http://dx.doi.org/10.1118/1.3245886
Purpose:
Commercial EPIDs are normally used in indirect detection mode (iEPID), in which incident x-ray photons are converted to optical photons in a phosphor scintillator and then detected by a photodiode array. EPIDs are constructed from a number of non-water-equivalent materials that affect the dose response of the detector. So-called direct detection EPIDs (dEPIDs), operating without the phosphor layer, have been reported to display a dose response close to in-water data. In this study, the effect that different layers of materials in the EPID have on the dose response was experimentally investigated and evaluated with respect to changes in field size response and beam profiles.
Methods:
An iEPID was disassembled and the different layers of materials were removed or replaced with other materials. Data were also obtained on and off the support arm and with a sheet of opaque paper blocking the optical photons from the gadolinium oxysulfide phosphor layer. Field size response was measured for field sizes ranging from , and profiles for the beams were extracted from the data.
Results:
The iEPID configuration was found to be very sensitive to backscatter. The increases in output with solid water backscatter compared to the no-backscatter case were 14.7% and 6.6% at the largest field size investigated for the 6 and beams, respectively. The phosphor layer had a large influence on field size response as well as on beam profiles for photons, while no major effects were observed for the beam. For , large differences in dose response were found when the standard Cu buildup was exchanged for equivalent Cu or solid water buildup, indicating that head scatter largely influences dose response at this energy. When the optical photons originating in the layer were blocked from reaching the photodiodes, both field size output data and beam profiles corresponded well with data obtained in the dEPID configuration as well as with reference ion chamber data for both energies.
Conclusions:
As expected, changing the layers of material in the EPID had a dramatic, and often quite complex, effect on dose response. For , the complex dose response is mainly caused by the optical photons from the layer, while insufficient filtering of scattered radiation largely affects the dose response for the beam. The iEPID was also found to be very sensitive to backscatter at both energies. Blocking the optical photons created in the layer essentially changed the iEPID configuration into the dEPID configuration, demonstrating great potential for a system that can be optimized for both imaging and dosimetry.
- RADIATION IMAGING PHYSICS
36 (2009); http://dx.doi.org/10.1118/1.3250863
Purpose:
For dosimetry and for work in optimization of x-ray imaging of the breast, it is commonly assumed that the breast is composed of 50% fibroglandular tissue and 50% fat. The purpose of this study was to assess whether this assumption was realistic.
Methods:
First, data obtained from an experimental breast CT scanner were used to validate an algorithm that measures breast density from digitized film mammograms. Density results from a total of 2831 women, including 191 women who received CT and 2640 women from three other groups whose mammograms were analyzed, were then used to estimate breast compositions.
Results:
Mean compositions, expressed as percent fibroglandular tissue (including the skin), varied from 13.7% to 25.6% among the groups, with an overall mean of 19.3%. The mean compressed breast thickness for the mammograms was 5.9 cm. Eighty percent of the women in the study had a volumetric breast density of less than 27%, and 95% were below 45%.
Conclusions:
Based on the results obtained from the four groups of women in our study, the “50-50” breast is not a representative model of the breast composition.
36 (2009); http://dx.doi.org/10.1118/1.3250907
Purpose:
The recent introduction of digital tomosynthesis imaging into routine clinical use has enabled the acquisition of volumetric patient data within a standard radiographic examination. Tomosynthesis requires the acquisition of multiple projection views, requiring additional dose compared to a standard projection examination. Knowledge of the effective dose is needed to make an appropriate choice among standard projection, tomosynthesis, and CT for thoracic x-ray examinations. In this article, the effective dose to the patient from chest tomosynthesis is calculated and compared to that of a standard radiographic examination and to values published for thoracic CT.
Methods:
Radiographic technique data for posterior-anterior (PA) and left lateral (LAT) radiographic chest examinations of medium-sized adults were obtained from clinical sites. From these data, the average incident air kerma for the standard views was determined. A commercially available tomosynthesis system was used to define the acquisition technique and geometry for each projection view. Using Monte Carlo techniques, the effective dose of the PA, LAT, and each tomosynthesis projection view was calculated. The effective doses for all projections of the tomosynthesis sweep were summed and compared to the calculated PA and LAT values and to the published values for thoracic CT.
Results:
The average incident air kerma values for the PA and left lateral clinical radiographic examinations were found to be 0.10 and , respectively. The effective dose for the PA view of a patient the size of an average adult male was determined to be (ICRP 60) [ (ICRP 103)]. For the left lateral view of the same sized patient, the effective dose was determined to be (ICRP 60) [ (ICRP 103)]. The cumulative mA s for a tomosynthesis examination is recommended to be ten times the mA s of the PA image. With this technique, the effective dose for an average tomosynthesis examination was calculated to be (ICRP 60) [ (ICRP 103)]. This is less than 75% of that predicted by scaling the PA mA s ratio. This lower dose was due to changes in the focal-spot-to-skin distance, effective changes in collimation with projection angle, rounding down of the mA s step, and variations in organ exposure to the primary x-ray beam for each view. Large errors in dose estimation can occur if these factors are not accurately modeled.
Conclusions:
The effective dose of a chest examination with this chest tomosynthesis system is about twice that of a two-view chest examination and less than 2% of the published average values for thoracic CT. It is shown that complete consideration of the tomosynthesis acquisition technique and geometry is required for accurate determination of the effective dose to the patient. Tomosynthesis provides three-dimensional imaging at a dose level comparable to a two-view chest x-ray examination and may provide a low-dose alternative to thoracic CT for obtaining depth information in chest imaging.
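The dose bookkeeping described in this abstract (per-view effective doses summed over the tomosynthesis sweep) can be sketched as follows. This is an illustrative sketch, not the authors' Monte Carlo code: the tissue weighting factors are a subset of the published ICRP 103 values, and all organ doses are hypothetical placeholders.

```python
# Subset of ICRP 103 tissue weighting factors (remaining tissues omitted).
ICRP103_WEIGHTS = {
    "lung": 0.12, "breast": 0.12, "stomach": 0.12,
    "bone_marrow": 0.12, "liver": 0.04, "thyroid": 0.04,
}

def effective_dose(organ_doses_mSv, weights=ICRP103_WEIGHTS):
    """Effective dose E = sum over tissues T of w_T * H_T (mSv).

    Organs without a simulated equivalent dose contribute zero.
    """
    return sum(w * organ_doses_mSv.get(t, 0.0) for t, w in weights.items())

# A tomosynthesis examination sums the effective dose of every projection
# view; the per-view organ doses below are hypothetical, not study values.
views = [{"lung": 0.005, "breast": 0.003}] * 61
exam_dose = sum(effective_dose(v) for v in views)
```

Because each projection view has its own geometry (focal-spot-to-skin distance, collimation, organ coverage), the per-view organ doses differ in practice, which is why simple mA s scaling of the PA dose overestimates the sweep dose.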
Single x-ray absorptiometry method for the quantitative mammographic measure of fibroglandular tissue volume. 36(2009); http://dx.doi.org/10.1118/1.3253972. Purpose:
This study describes the design and characteristics of a highly accurate, precise, and automated single-energy method to quantify percent fibroglandular tissue volume (%FGV) and fibroglandular tissue volume (FGV) using digital screening mammography.Methods:
The method uses a breast tissue-equivalent phantom in the unused portion of the mammogram as a reference to estimate breast composition. The phantom is used to calculate breast thickness and composition for each image regardless of x-ray technique or the presence of paddle tilt. The phantom adheres to the top of the mammographic compression paddle and stays in place for both craniocaudal and mediolateral oblique screening views. We describe the automated method to identify the phantom and paddle orientation using a three-dimensional least-squares reconstruction technique. A series of test phantoms, with a breast thickness range of 0.5–8 cm and a %FGV of 0%–100%, were made to test the accuracy and precision of the technique.Results:
Using test phantoms, the estimated repeatability standard deviation equaled 2%, with ±2% accuracy over the entire thickness and density ranges. Without correction, paddle tilt was found to create large errors in the measured density values, up to 7% per mm of difference from the actual breast thickness. This new density measurement is stable over time, with no significant calibration drifts noted during a four-month period. Comparisons of %FGV to mammographic percent density and of left to right breast %FGV were highly correlated ( and 0.94, respectively).Conclusions:
An automated method for quantifying fibroglandular tissue volume has been developed. It exhibited good accuracy and precision for a broad range of breast thicknesses, paddle tilt angles, and %FGV values. Clinical testing showed high correlation to mammographic density and between left and right breasts.
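The paddle-tilt correction idea (estimating compressed thickness per location rather than assuming a single nominal thickness) can be illustrated with a least-squares plane fit. This is a minimal sketch under assumed inputs, not the authors' three-dimensional reconstruction; the sample coordinates and tilt values are hypothetical.

```python
import numpy as np

def fit_paddle_plane(x, y, z):
    """Least-squares fit of z = a*x + b*y + c to thickness samples."""
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs  # a, b, c

# Hypothetical phantom-derived thickness samples (mm) from a paddle tilted
# 0.01 mm/mm along x over a nominal 45 mm compressed thickness:
x = np.array([0.0, 100.0, 0.0, 100.0])
y = np.array([0.0, 0.0, 100.0, 100.0])
z = 45.0 + 0.01 * x
a, b, c = fit_paddle_plane(x, y, z)
```

With the fitted plane, the thickness entering the composition calculation varies across the image, which is what removes the up-to-7%/mm tilt error reported above.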
36(2009); http://dx.doi.org/10.1118/1.3254076. Purpose:
To evaluate the utility of the tests in a proposed protocol for constancy control of digital mammography systems.Methods:
The protocol contained tests for image acquisition, mechanical function and safety, monitors and printers, and viewing conditions. Nine sites with digital systems from four equipment manufacturers were recruited. Dedicated PMMA test objects and Excel spreadsheets were developed. Quantitative measurements were done on processed images for systems where these images were the ones most readily available. For daily assessment of the automatic exposure control system, a homogeneous PMMA phantom was exposed under clinical conditions. The mAs and signal to noise ratio (SNR) were recorded, the deviation from a target value calculated, and the resulting image inspected for artifacts. For thickness tracking, the signal difference to noise ratio obtained for three thicknesses was calculated. Detector uniformity was assessed through comparison of SNR values for regions of interest in the center and corners of an image of a homogeneous test object. Mechanical function and safety control included a compression test, a checklist for mechanical aspects, and control of field alignment. Monitor performance was evaluated by visual inspection of the AAPM TG 18 QC test image [E. Samei et al., “Assessment of display performance for medical imaging systems,” Task Group 18 (Madison, WI, April 2005)].Results:
For quantitative parameters, target values and tolerance limits were established. Test results exceeding the limits were registered. Most systems exhibited stable mAs values, indicating that the tolerance limit of was readily achievable. The SNR also showed little variation, indicating that the tolerance limit of was too wide. At one site, a defective grid caused artifacts that were visible in the test images. The monitor controls proved more difficult to implement, due both to difficulties in importing and displaying the test image and to the radiographic technologists not having the necessary access to the reading stations.Conclusions:
The proposed tests could easily be performed by trained radiographic technologists within a time frame comparable to similar programs for analog systems. Tests with quantitative measures were more readily performed than procedures requiring a subjective evaluation. Several of the proposed tests revealed equipment performance that required intervention and that would otherwise have gone unnoticed. They therefore merit a place in a vendor-independent constancy control protocol.
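The daily exposure check described in the Methods (SNR measured in a central region of a homogeneous phantom image, compared against a stored target) reduces to a few lines. This is a hedged sketch: the ROI size, target value, and tolerance are illustrative placeholders, not the limits defined by the protocol.

```python
import numpy as np

def roi_snr(image, size=100):
    """Mean divided by standard deviation in a central square ROI."""
    r0 = (image.shape[0] - size) // 2
    c0 = (image.shape[1] - size) // 2
    roi = image[r0:r0 + size, c0:c0 + size].astype(float)
    return roi.mean() / roi.std()

def constancy_check(value, target, tolerance=0.10):
    """Relative deviation from the target value and a pass/fail flag."""
    deviation = (value - target) / target
    return deviation, abs(deviation) <= tolerance

# Simulated homogeneous phantom image (mean signal 1000, noise sigma 10),
# standing in for the daily PMMA exposure:
rng = np.random.default_rng(0)
phantom = rng.normal(1000.0, 10.0, size=(300, 300))
deviation, passed = constancy_check(roi_snr(phantom), target=100.0)
```

The uniformity test in the protocol follows the same pattern, comparing `roi_snr` evaluated at the corners against the central value.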
Singular value decomposition based computationally efficient algorithm for rapid dynamic near-infrared diffuse optical tomography. 36(2009); http://dx.doi.org/10.1118/1.3261029. Purpose:
A computationally efficient algorithm (linear iterative type) based on singular value decomposition (SVD) of the Jacobian has been developed that can be used in rapid dynamic near-infrared (NIR) diffuse optical tomography.Methods:
Numerical and experimental studies have been conducted to prove the computational efficacy of this SVD-based algorithm over conventional optical image reconstruction algorithms.Results:
These studies indicate that the performance of linear iterative algorithms in terms of contrast recovery (quantitation of optical images) is better than that of nonlinear iterative (conventional) algorithms, provided the initial guess is close to the actual solution. The nonlinear algorithms can provide better quality images than the linear iterative type algorithms. Moreover, the analytical and numerical equivalence of the SVD-based algorithm to linear iterative algorithms was also established as part of this work. It is also demonstrated that the SVD-based image reconstruction typically requires operations per iteration, as contrasted with the linear and nonlinear iterative methods, which respectively require and operations, with being the number of unknown parameters in the optical image reconstruction procedure.Conclusions:
This SVD-based computationally efficient algorithm can make integration of the image reconstruction procedure with the data acquisition feasible, in turn making rapid dynamic NIR tomography viable in the clinic for continuously monitoring hemodynamic changes in tissue pathophysiology.
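The computational advantage claimed above comes from factorizing the Jacobian once and reusing it for every data frame. The sketch below shows one common way to realize this (a Tikhonov-filtered SVD pseudoinverse); it is not the authors' implementation, and the regularization value and problem sizes are hypothetical.

```python
import numpy as np

def svd_precompute(J):
    """One-time SVD of the Jacobian, amortized over all data frames."""
    return np.linalg.svd(J, full_matrices=False)  # U, s, Vt

def svd_update(U, s, Vt, delta_y, reg=1e-3):
    """Per-frame update: x = V diag(s / (s^2 + reg)) U^T delta_y.

    Only matrix-vector products remain once the SVD is available,
    so each new measurement frame is cheap to reconstruct.
    """
    filt = s / (s**2 + reg)
    return Vt.T @ (filt * (U.T @ delta_y))

rng = np.random.default_rng(1)
J = rng.normal(size=(64, 128))       # hypothetical sensitivity matrix
U, s, Vt = svd_precompute(J)
dx = svd_update(U, s, Vt, rng.normal(size=64))
```

This amortization is what allows reconstruction to keep pace with the data acquisition in a dynamic imaging setting.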
Coronary centerline extraction from CT coronary angiography images using a minimum cost path approach. 36(2009); http://dx.doi.org/10.1118/1.3254077. Purpose:
The application and large-scale evaluation of minimum cost path approaches for coronary centerline extraction from computed tomography coronary angiography (CTCA) data and the development and evaluation of a novel method to reduce the user-interaction time.Methods:
A semiautomatic method based on a minimum cost path approach is evaluated for two different cost functions. The first cost function is based on a frequently used vesselness measure and intensity information, and the second is a recently proposed cost function based on region statistics. User interaction is minimized to one or two mouse clicks distally in the coronary artery. The starting point for the minimum cost path search is automatically determined using a newly developed method that finds a point in the center of the aorta in one of the axial slices. This step ensures that all computationally expensive parts of the algorithm can be precomputed.Results:
The performance of the aorta localization procedure was demonstrated by a success rate of 100% in 75 images. The success rate and accuracy of centerline extraction were quantitatively evaluated on 48 coronary arteries in 12 images by comparing extracted centerlines with a manually annotated reference standard. The method was able to extract 88% and 47% of the vessel centerlines correctly using the vesselness/intensity and region statistics cost functions, respectively. For only the proximal part of the vessels, these values were 97% and 86%, respectively. Accuracy of centerline extraction, defined as the average distance from correctly extracted parts of the automatic centerline to the reference standard, was 0.64 mm for the vesselness/intensity cost function and 0.51 mm for the region statistics cost function. The interobserver variability was 99% for the success rate measure and 0.42 mm for the accuracy measure. Qualitative evaluation using the best performing cost function resulted in successful centerline extraction for 233 of the 252 coronaries (92%) in 63 additional CTCA images.Conclusions:
The presented results, in combination with minimal user interaction and low computation time, show that minimum cost path approaches can effectively be applied as a preprocessing step for subsequent analysis in clinical practice and biomedical research.
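A minimum cost path of the kind used here can be illustrated with Dijkstra's algorithm on a small cost grid, where low-cost cells stand in for vessel-like voxels (e.g. high vesselness or favorable region statistics). This toy 2D sketch is not the authors' 3D implementation; the grid and costs are invented.

```python
import heapq

def min_cost_path(cost, start, end):
    """Dijkstra search between two cells of a 2D cost grid.

    The accumulated distance includes the cost of every visited cell,
    start and end included.
    """
    rows, cols = len(cost), len(cost[0])
    dist = {start: cost[start[0]][start[1]]}
    prev, heap = {}, [(dist[start], start)]
    while heap:
        d, node = heapq.heappop(heap)
        if node == end:
            break
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols:
                nd = d + cost[nr][nc]
                if nd < dist.get((nr, nc), float("inf")):
                    dist[(nr, nc)] = nd
                    prev[(nr, nc)] = node
                    heapq.heappush(heap, (nd, (nr, nc)))
    # walk predecessors back from the end point
    path, node = [end], end
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1]

# Low-cost cells (1) trace a vessel-like corridor through high cost (9):
cost = [[1, 9, 9],
        [1, 1, 9],
        [9, 1, 1]]
path = min_cost_path(cost, (0, 0), (2, 2))
```

In the paper's setting, precomputing the cost volume and the distance map from the automatically detected aortic seed is what confines the user interaction to one or two distal mouse clicks.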
Registration of prone and supine CT colonography scans using correlation optimized warping and canonical correlation analysis. 36(2009); http://dx.doi.org/10.1118/1.3259727. Purpose:
In computed tomographic colonography (CTC), a patient is scanned twice, once supine and once prone, to improve the sensitivity of polyp detection. To assist radiologists in CTC reading, in this paper we propose an automated method for colon registration from supine and prone CTC scans.Methods:
We propose a new colon centerline registration method for prone and supine CTC scans using correlation optimized warping (COW) and canonical correlation analysis (CCA) based on the anatomical structure of the colon. Four salient anatomical points on the colon are first automatically distinguished. Then correlation optimized warping is applied to the segments defined by the anatomical landmarks to improve the global registration based on the local correlation of segments. The COW method was modified by embedding canonical correlation analysis so that multiple features along the colon centerline can be used in our implementation.Results:
We tested the COW algorithm on a CTC data set of 39 patients with 39 polyps (19 training and 20 test cases) to verify the effectiveness of the proposed registration method. Experimental results on the test set show that the COW method significantly reduces the average estimation error in polyp location between supine and prone scans by 67.6%, from to , compared to the normalized distance along the colon centerline algorithm.Conclusions:
The proposed COW algorithm is more accurate for colon centerline registration than the normalized distance along the colon centerline method and the dynamic time warping method. Comparison results showed that the feature combination of -coordinate and curvature achieved the lowest registration error among the feature combinations used by COW. The proposed method is tolerant to centerline errors because the anatomical landmarks help prevent the propagation of errors across the entire colon centerline.
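The core COW step, choosing for each landmark-delimited segment the warp that maximizes correlation with the reference, can be sketched in one dimension. This is a toy illustration of the idea only, not the authors' multi-feature CCA-embedded implementation; the slack parameter and the signals are invented.

```python
import numpy as np

def best_warp(reference, segment, slack=3):
    """Try candidate segment lengths within +/- slack samples, resample
    each candidate linearly to the reference length, and keep the length
    giving the highest Pearson correlation with the reference."""
    n = len(reference)
    best_corr, best_len = -2.0, None
    for m in range(n - slack, n + slack + 1):
        m = max(2, min(m, len(segment)))
        # linearly resample the first m samples of the segment to length n
        resampled = np.interp(np.linspace(0.0, m - 1.0, n),
                              np.arange(m), segment[:m])
        corr = np.corrcoef(reference, resampled)[0, 1]
        if corr > best_corr:
            best_corr, best_len = corr, m
    return best_len, best_corr

# A segment identical to the reference should be left unwarped:
reference = np.sin(np.linspace(0.0, np.pi, 20))
m, corr = best_warp(reference, reference.copy())
```

In the paper, a centerline feature vector (position, curvature, and so on) replaces the scalar signal, with CCA combining the features into the correlation being optimized.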