Volume 36, Issue 6, June 2009
Index of content:
- Therapy Scientific Session: Ballroom D
- Monte Carlo and General I
WE‐E‐BRD‐01: Investigating Effects of Pelvic Bone Marrow Radiation Dose On Acute Hematologic Toxicity Using High Dimensional Data Analysis. 36 (2009); http://dx.doi.org/10.1118/1.3182553
Purpose: To develop a new method to investigate the effect of local pelvic bone marrow (BM) radiation dose on acute hematologic toxicity (HT) in cervical cancer patients undergoing concurrent chemoradiation therapy (CRT), with the ultimate goal of optimizing BM‐sparing radiation techniques. Method and Materials: We analyzed 24 cervical cancer patients treated with concurrent cisplatin (40 mg/m2/week) and pelvic IMRT. The white blood cell count (WBC) nadir, defined as the lowest value occurring between the start of CRT and two weeks following IMRT, was used as the indicator of acute HT. The pelvic bone region included the os coxae, lower lumbar vertebrae, sacrum, acetabula, and proximal femora. BM doses were standardized in two steps: pelvic bone registration followed by dose remapping. Simulation CT images were registered to a common (canonical) template using the optical‐flow‐based deformable image registration developed by Yang et al. The deformation field was used to remap the dose distribution back to the deformed pelvic bones. We generated a data structure called a “dose‐array” that preserves the spatial information of each dose value: the position of an element inside the array can be used to trace its location in 3D. Results: Substantial variation in BM dose distribution among the 24 patients was observed. Patients were classified by WBC nadir (⩾ vs. < 2000/μL) into two groups (n=15 vs. n=9, respectively), and the average pelvic BM 3‐D dose distributions were compared visually. The results suggested that patients receiving higher doses to the lower lumbar spine, upper sacrum, and medial ilium were more likely to develop acute HT. Conclusion: We have developed a novel method to study the impact of BM radiation dose on acute HT. Our next step is to implement both unsupervised and supervised classification models to analyze radiation effects, for use in IMRT plan optimization.
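The "dose-array" described above can be pictured as a flat list of dose values, each tagged with its voxel index so its 3D location remains traceable after remapping. A minimal numpy sketch, under stated assumptions (all names, the toy grid, and the integer deformation field are hypothetical illustrations, not the authors' code):

```python
import numpy as np

def build_dose_array(dose, mask):
    """Flatten a 3D dose grid into a 'dose-array': for each voxel inside
    the mask, keep its (i, j, k) index alongside its dose value."""
    idx = np.argwhere(mask)          # voxel coordinates inside the bone mask
    values = dose[mask]              # dose at those voxels (same C-order)
    return {"index": idx, "dose": values}

def remap_dose(dose_array, deformation):
    """Apply an integer voxel-displacement field to carry each dose value
    to its corresponding location in the canonical template."""
    new_idx = dose_array["index"] + deformation[tuple(dose_array["index"].T)]
    return {"index": new_idx, "dose": dose_array["dose"]}

# toy example: 4x4x4 dose grid, two nonzero voxels, unit shift along i
dose = np.zeros((4, 4, 4)); dose[1, 1, 1] = 10.0; dose[2, 2, 2] = 20.0
mask = dose > 0
field = np.zeros((4, 4, 4, 3), dtype=int); field[..., 0] = 1  # +1 in i everywhere
da = build_dose_array(dose, mask)
remapped = remap_dose(da, field)
```

Real deformable registration produces a continuous displacement field and requires interpolation; the integer shift here only illustrates how spatial indices travel with their dose values.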
WE‐E‐BRD‐02: Development of a Software for Integrating the Medical Accelerator Model with Patient Phantoms Into Monte Carlo Based Dosimetry Platform. 36 (2009); http://dx.doi.org/10.1118/1.3182554
Purpose: To develop a new software package that integrates a detailed medical accelerator model with anatomically realistic phantoms in the Monte Carlo engine MCNPX, for the determination of organ doses from secondary photons and neutrons during treatment. Method and Materials: The accelerator models developed include 80 MLC leaves and two pairs of tungsten jaws, operating at 6 and 18 MV, respectively. The Pregnant Women, Adult Male, and Adult Female phantoms were utilized. The software was designed in Visual C#.NET, and the preliminary effort focused on a graphical user interface (GUI) to automatically generate the MCNPX input deck, consisting of accelerator and patient, according to user‐defined treatment plans. The MLC configuration files were parsed to collect positions of the jaws and MLCs before they were exported into MCNPX using the “TRCL” entry with the “LIKE BUT” card. Results: A number of user‐friendly GUI features have been developed that let a user select treatment parameters: choose an accelerator type and a phantom, specify the phantom origin position and the number of fields, parse the MLC configuration files, and save the MLC leaf positions. According to the positions of the jaws and MLCs, the software automatically outputs the MCNPX deck files, classified per field, per segment, and per MLC leaf position, with the accelerator model and computational phantoms. Conclusion: A new software package has been demonstrated. The preliminary results indicate that the GUI design and integration features are feasible and versatile. This software is designed to help a user carry out organ dose studies using the MCNPX input deck without having to handle complex accelerator and phantom information. Such a tool can facilitate studies assessing and comparing various new modalities in terms of the likelihood of secondary cancer induction.
WE‐E‐BRD‐03: Re‐Evaluation of the Product of W/e and the Graphite to Air Stopping Power Ratio for Co‐60 Air Kerma Standards. 36 (2009); http://dx.doi.org/10.1118/1.3182555
Purpose: To reanalyze experiments which determine (W/e)air·s̄g,air, the product of (W/e)air, the average energy deposited per coulomb of charge released in dry air, and s̄g,air, the Spencer‐Attix mass collision stopping‐power ratio for graphite to air, and to calculate an average value of this product for the BIPM air kerma standard. This value could be adopted for use with air kerma primary standards, along with corrections to account for variations due to cavity size. Methods and Materials: The experiments measured (W/e)air·s̄g,air by various methods, often involving calorimeters and ionization chambers. Correction factors, e.g., to account for gaps about a calorimeter core or perturbations due to a cavity's presence, are calculated as needed for each experiment using the EGSnrc user‐codes CAVRZnrc, DOSRZnrc, and CAVITY. Stopping‐power ratios are evaluated using SPRRZnrc for different choices of graphite density (bulk 1.70 g/cm3 or grain 2.265 g/cm3) in the density‐effect correction and of the mean excitation energy for graphite (I=78 or 87 eV). For each experiment, the corrected value of the product is multiplied by the quotient of the stopping‐power ratios for the BIPM chamber and the experiment in question. A least squares technique is used to compute an average value of (W/e)air·s̄g,air. Results: The correction factors generally decrease the value of the product for each experiment, often beyond the one standard deviation quoted with each experimental result. The stopping‐power‐ratio quotient varies by less than 0.1% for different choices of density correction and I‐value, and hence the product is also relatively insensitive to these choices.
Conclusion: The preliminary analysis suggests that the accepted value of (W/e)air·s̄g,air, 33.97 J/C ±0.15%, is 0.6% too high. This would have implications for primary air kerma standards worldwide and for the value of (W/e)air, which is used in low‐energy x‐ray standards.
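The least-squares average of several corrected determinations, each with its own quoted uncertainty, is typically the inverse-variance weighted mean. A minimal sketch with hypothetical values (the numbers below are illustrative, not the experimental data of this study):

```python
import numpy as np

def weighted_mean(values, rel_unc):
    """Inverse-variance weighted least-squares mean of independent
    determinations; rel_unc are relative standard uncertainties."""
    values = np.asarray(values, float)
    sigma = np.asarray(rel_unc, float) * values   # absolute uncertainties
    w = 1.0 / sigma**2
    mean = np.sum(w * values) / np.sum(w)
    unc = 1.0 / np.sqrt(np.sum(w))                # uncertainty of the mean
    return mean, unc

# hypothetical corrected determinations of (W/e)air*s_bar(g,air) in J/C
vals = [33.77, 33.80, 33.74]
m, u = weighted_mean(vals, [0.002, 0.003, 0.004])
```

The weighting means that the most precise experiments dominate the average, which is why recomputed correction factors that shift individual results can move the adopted value.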
36 (2009); http://dx.doi.org/10.1118/1.3182556
Purpose: To provide a fast and accurate dose calculation solution, a massively parallel version of a Monte Carlo superposition (MCS) method was developed on a computer with an Nvidia GTX260 graphics card. Method and Materials: Graphics processing units (GPUs) can be extremely powerful and economical in accelerating computationally intensive applications. However, the GPU memory structure can be too rigid for certain types of applications to realize its computational potential. The MCS method is, by nature, a memory‐intensive application, since most of its operations are based on linear interpolation and table look‐up. To leverage the power of the GPU, we modified a sequential MC algorithm and implemented a parallel version. The biggest challenge in our implementation is how to arrange memory access sequences and increase temporal and spatial data locality. We introduced several techniques to handle this challenge, including stage‐based processing and photon grouping with respect to kernel index. To match the hierarchy of GPU memory, data were carefully organized to minimize off‐chip memory accesses. A parallel thread‐safe random number generator was implemented to produce the randomness required by the MC algorithm. Results: Our GPU‐based implementation is fully optimized and demonstrates a speedup of 25∼40 times over the CPU‐based single‐thread implementation. Although the speedup is significant, the research does reveal some inherent limitations of the GPU‐based implementation of this particular MC algorithm. The most critical one is due to the conflict between the GPU's SIMD (single‐instruction‐multiple‐data) architecture and the stochastic nature of the MC algorithm. Conclusion: A massively parallel version of an MCS method was implemented on a computer powered by a GTX260 GPU, and a speedup of up to 40X was observed.
This indicates that a GPU‐based MC approach is viable and cost‐efficient for satisfying the increasing demands on computation power and accuracy by advanced radiation therapy technologies.
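The "photon grouping with respect to kernel index" idea can be illustrated on the CPU side: if photons are reordered so that those reading the same dose-kernel table sit together, threads in a group access contiguous table rows. A minimal numpy sketch (the names and toy data are hypothetical; the actual GPU memory layout is not described in the abstract):

```python
import numpy as np

def group_photons_by_kernel(kernel_idx):
    """Return a permutation that sorts photons by the kernel table they
    will read, improving spatial locality of the subsequent look-ups."""
    return np.argsort(kernel_idx, kind="stable")

rng = np.random.default_rng(0)
kidx = rng.integers(0, 4, size=10)    # which kernel table each photon needs
order = group_photons_by_kernel(kidx)
sorted_idx = kidx[order]              # now non-decreasing: same-kernel photons adjacent
```

On a SIMD device the payoff is that a warp processing consecutive photons touches one table instead of scattering reads across several, which is precisely the locality problem the abstract identifies.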
WE‐E‐BRD‐05: EGSnrc Benchmarking Against High‐Precision Lateral Distributions of X‐Rays Produced in Thick Targets. 36 (2009); http://dx.doi.org/10.1118/1.3182557
Purpose: The EGSnrc Monte Carlo code was benchmarked against newly measured angular distributions of x‐rays produced by electron beams stopping in thick targets. Method and Materials: The electron beam was generated using the National Research Council Canada (NRC) Vickers linear accelerator. Five materials (Be, Al, Cu, Ta, and Pb) were used as targets, and measurements were made at 20 MeV. The absolute absorbed dose was measured at different lateral positions with various ionization chambers (PTW 30013, Exradin A19, and NE2505/3) using build‐up caps made of PMMA, brass, Cu, Sn, and W alloy. The BEAMnrc code was used to simulate the experimental setup. The geometry, beam, and material properties were implemented in BEAMnrc, while the “cavity” user code was used to score the absorbed dose in the ionization chamber air cavity per incident electron up to angles of 25°. Results: For medium‐Z targets the shape agreement between measured and simulated distributions is better than 1% over the entire scanning range. In terms of absolute dose, differences of up to 4% were found, related to the build‐up caps used for the ionization chamber. Considering the five targets, a trend with the Z of the target material is noticed: for medium‐Z targets the ratio of calculated to measured dose varies by less than 1%, for the Be target the ratio increases by 12% with the scanning position, while for the Pb target the ratio decreases by 5%. Conclusion: The present study shows that the EGSnrc code predicts x‐ray angular distributions with shapes in agreement with the measured data at the 1% level for medium‐Z targets. There are still unresolved discrepancies in the absolute dose related to the build‐up cap used for the ionization chamber and the target material.
Partial support from NIH R01 CA104777‐01A2.
WE‐E‐BRD‐06: Validation of An MCNPX Monte Carlo Model of a Discrete‐Spot Scanning Beam Nozzle and Evaluation of the Beamlet Lateral Envelope of Low Doses. 36 (2009); http://dx.doi.org/10.1118/1.3182558
Purpose: To build and experimentally validate an accurate Monte Carlo (MC) model of a discrete‐spot scanning beam nozzle and to use this model to assess the dosimetric consequences of the “halo” dose (i.e., that due to scattering from beam‐line components and nuclear interactions). Method and Materials: An accurate model of the scanning nozzle at our institution was implemented in the MC system MCNPX version 2.5.0 according to the blueprints provided by the manufacturer of the proton therapy system (Hitachi Ltd., Tokyo, Japan). The MC model was validated against measurements of percentage depth doses (PDDs), lateral profiles in air (LPA) and in water (LPW), field size factors (FSFs), and spread‐out Bragg peaks. The validated MC model presented in this study was used to generate data for clinical configuration of our treatment planning system (TPS). Results: Distances to agreement between measured and simulated ranges, LPAs, and LPWs were within 0.16 cm, 0.05 cm, and 0.1 cm, respectively. Comparisons between measured FSFs and TPS predictions, which may not model the halo dose properly, revealed differences of up to 9.5%, whereas comparison with MC simulations showed maximum differences of 3.6%. We found that ion chambers with a radius as large as 4.2 cm are insufficient to measure the integral energy deposition. Conclusion: Because of the halo dose, which broadens the lateral size of the proton beamlets, care should be taken when measuring integral energy deposition for configuration of TPSs. Furthermore, the halo dose must be properly modeled in the dose algorithm used by the TPS for accurate prediction of scanned proton beam doses.
WE‐E‐BRD‐07: Algorithm for Accurate Computation of Doses From Large‐Angle Scattering of Scanned Proton Pencil Beams. 36 (2009); http://dx.doi.org/10.1118/1.3182559
Purpose: The purpose of this work was to find a technique to improve the accuracy of proton pencil beam dose computation for large scattering angles. Methods: The computed dose from an elementary pencil beam in the scattering medium, water in our study, comes from a convolution integral of the proton fluence in air and the scattering distribution function in the medium. The incident proton beam profile in air, just outside the target medium, was modeled with a sum of up to three Gaussians fitted to measured data. The proton scattering in the medium, assumed to proceed through Coulomb interactions, was described using the classic Molière distribution. The convolution integration was performed by a combination of analytical and numerical methods, leading to a numerically manageable expression. Results: We compared measured and computed dose profiles at 2 cm depth and at the Bragg peak depth for proton pencil beams with low and high energies. In all cases, profile calculations using just one Gaussian term for the beam fluence in air and the first Molière term for the scattering distribution were in large disagreement with measurements at lateral distances beyond the 10% profile falloff. At 2 cm depth in water, the second and third terms in the Molière distribution did not contribute significantly. On the other hand, the use of three Gaussians for the proton fluence in air produces nearly perfect agreement for lower energy protons and brings the agreement to within one order of magnitude for the higher proton energies. At Bragg peak depths, higher order Molière terms are again insignificant for lower energy protons, but become important contributors for higher energy protons. Conclusions: The proposed procedure was shown to achieve a dramatic improvement in the accuracy of computed doses from proton pencil beams at large proton scattering angles.
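A sum-of-Gaussians fluence model like the one described can be evaluated directly; the broad, low-weight terms dominate the lateral tail that a single Gaussian misses. A minimal sketch with hypothetical weights and widths (not the fitted parameters of this study):

```python
import math

def fluence(r, components):
    """Radial in-air fluence modeled as a sum of normalized 2D Gaussians.
    components: list of (weight, sigma_cm) pairs."""
    return sum(w / (2.0 * math.pi * s**2) * math.exp(-r**2 / (2.0 * s**2))
               for w, s in components)

# hypothetical narrow core plus two broad low-weight "halo" terms
comps = [(0.90, 0.5), (0.08, 1.5), (0.02, 4.0)]
with_halo = fluence(3.0, comps)        # far tail: broad terms dominate
core_only = fluence(3.0, comps[:1])    # single-Gaussian model underestimates it
```

At 3 cm off axis the 0.5 cm core contributes essentially nothing, so any single-Gaussian fit reproduces the measured tail poorly; this is the same effect the abstract reports at lateral distances beyond the 10% falloff.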
36 (2009); http://dx.doi.org/10.1118/1.3182560
Purpose: Whole‐body patient models of various sizes and postures are needed for the assessment of organ doses in CT imaging, internal nuclear medicine, and external‐beam radiation treatment procedures. This paper discusses a deformable mesh‐based modeling method to create patient‐specific phantoms that are morphed to the 5th‐ to 95th‐percentiles of body height and weight, as well as internal organ volume and mass. Method and Materials: The mesh‐based reference adult male and female phantoms were deformed using mainly two sets of percentile data: 1) whole‐body size percentile data defined by anthropometric parameters such as height and weight from the National Health and Nutrition Examination Survey (NHANES); 2) individual internal organ percentile data derived by cumulative pattern analysis based on the International Commission on Radiological Protection (ICRP) Publications 23 and 89 references. These mesh‐based percentile phantoms were converted into voxel‐based phantoms. The final step was to link the voxel phantom with the correct tissue density and elemental composition, so that radiation transport through the human‐body phantom was modeled correctly in a Monte Carlo code. Results: The whole‐body size percentile models have been created from the NHANES anthropometric data and the organ percentiles derived from ICRP references. The deformability of the RPI reference adult phantoms has been shown through the demonstration of percentile‐ and posture‐specific adult models. Conclusion: A next‐generation deformable patient modeling method has been demonstrated. With the mesh deformation algorithms, individual organs can be deformed to match the volumes and masses of desired organ percentiles. The flexible modeling allows patients to be represented in various sizes and postures for the purpose of Monte Carlo dose calculations. This study also identified the need for further research to develop methods to run Monte Carlo calculations in mesh geometry directly.
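The simplest way to match an organ mesh to a target percentile volume is isotropic scaling about the centroid by the cube root of the volume ratio; the actual mesh algorithms are certainly more elaborate, but this sketch shows the core geometric step (all names and the toy cube are illustrative):

```python
import numpy as np

def scale_to_volume(vertices, current_volume, target_volume):
    """Isotropically scale a closed organ mesh about its centroid so that
    its enclosed volume matches a target percentile volume."""
    c = vertices.mean(axis=0)
    s = (target_volume / current_volume) ** (1.0 / 3.0)  # linear scale factor
    return c + (vertices - c) * s

# unit cube (volume 1) scaled to volume 8: every edge doubles
cube = np.array([[x, y, z] for x in (0, 1) for y in (0, 1) for z in (0, 1)], float)
scaled = scale_to_volume(cube, 1.0, 8.0)
edge = scaled[:, 0].max() - scaled[:, 0].min()
```

Scaling about the centroid keeps the organ's position fixed, which matters when neighboring organs must not be pushed out of anatomical register.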
- Treatment Planning and Stereotactic II
36 (2009); http://dx.doi.org/10.1118/1.3182603
Purpose: Magnetically scanned proton beams (PBS) have lateral position, energy, spot size, and flux degrees of freedom that allow for optimal target dose distributions and tissue sparing. PBS requires fast and accurate control of these degrees of freedom during the treatment to deliver the desired dose. We describe how a PBS field is delivered and verified in the technology we developed. Methods and Materials: Treatment plan beam parameters are converted to equipment settings, sorted into energy layers, and sent to various measurement and control devices that check the parameters every 0.25 ms. The on‐line dosimetry system was adapted to support this speed and characterized over a range of beam positions and dose rates. These data were calibrated against measurements at isocenter using a variety of instrumentation techniques. Sources of potential error and noise were identified and reduced to a level that enables accurate clinical treatment. Results: The scanning system produces pencil beams smaller than 6–8 mm lateral width (1σ) with focusing magnets and 9–14 mm without. Dose accuracy is ±0.2 cGy in the Bragg peak for a single pencil beam and ±0.75% for our reference irradiation geometry. Measured 3‐dimensional dose distributions satisfy the (3 mm, 3%) gamma index criterion with 97% of points below 1. Conclusion: Our current system has proven to be accurate and safe. We developed the capability for treating large tumors, a purpose not usually considered for proton beam scanning but one we see as an important opportunity. We achieved clinically useful dose distributions with our beam and system parameters (including dose and position accuracy and spot size). Further refinements are planned.
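The (3 mm, 3%) gamma index quoted above combines a dose-difference and a distance-to-agreement tolerance into one pass/fail metric per point. A minimal 1D sketch of the standard (global-normalization) definition, with a toy profile (not the clinical data of this study):

```python
import numpy as np

def gamma_1d(x_eval, d_eval, x_ref, d_ref, dta_mm=3.0, dd_frac=0.03):
    """1D gamma index: for each evaluated point, minimize the combined
    distance / dose-difference metric over all reference points.
    Dose difference is normalized to the reference maximum (global gamma)."""
    dmax = d_ref.max()
    gam = np.empty_like(d_eval)
    for i, (x, d) in enumerate(zip(x_eval, d_eval)):
        dr = (x - x_ref) / dta_mm                 # spatial term, in units of DTA
        dd = (d - d_ref) / (dd_frac * dmax)       # dose term, in units of tolerance
        gam[i] = np.sqrt(dr**2 + dd**2).min()
    return gam

x = np.linspace(0, 50, 101)                        # positions in mm
ref = np.exp(-((x - 25) / 10.0) ** 2)              # reference profile
meas = ref * 1.01                                  # 1% uniform dose error
g = gamma_1d(x, meas, x, ref)
pass_rate = (g <= 1.0).mean()                      # fraction of points passing
```

A uniform 1% error passes a 3% criterion everywhere, illustrating why the quoted 97% pass rate is the conventional summary statistic for distribution agreement.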
36 (2009); http://dx.doi.org/10.1118/1.3182604
Purpose: Accuracy in monitor unit (MU) calculation is critical for patient treatment; in a proton beam the dose/MU is measured for each field, which takes significant beam time. An accurate, simple, and time‐saving method for MU calculation is therefore desired. Presented is a sector integration approach with constant correction for dose/MU calculation for any treatment in a uniform scanning proton beam. Materials and Methods: Dose measurements were made in a scanning proton beam with a parallel‐plate Markus ion chamber in a water phantom. Two methods were adopted to measure the dose/MU values: (i) varying beam energy and SOBP width with a fixed aperture radius (10 cm) at the reference point, and (ii) varying the beam energy and aperture radius with a fixed SOBP. The measured values were summarized into two tables, which were used to calculate the dose/MU for a given energy and SOBP with a certain aperture radius by linear interpolation, producing the initial results D(energy, SOBP)10cm and D(energy, radius)SOBP'. The dose/MU value for any field size and shape is then calculated using a sector integration method with a piecewise correction based on these two values. Results: Dose/MU values for a total of 412 beam fields without compensators were measured and calculated using the model above. The model parameters were derived from measurements in 90 fields with varying shapes and applied to compute the dose in the other 322 treatment fields. The difference between calculated and measured values in these 322 fields is −0.12%±1.36% for beam ranges in water <20 cm, and smaller, −0.06%±0.74%, for beam ranges >20 cm. Conclusion: An accurate, simple, and fast method for MU calculation in any proton treatment field is introduced. The sector integration method derived from the actual measured data produced very small errors when tested in 322 additional fields.
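Sector integration handles an irregular aperture by describing its edge as a radius that varies with angle and averaging a circular-field table over equal angular sectors. A minimal sketch, assuming a simple radius-vs-dose/MU table (the table values and sector count are hypothetical, not the measured data of this study):

```python
import math

def interp(table, x):
    """Linear interpolation in a sorted (x, y) table."""
    xs, ys = zip(*table)
    for i in range(len(xs) - 1):
        if xs[i] <= x <= xs[i + 1]:
            t = (x - xs[i]) / (xs[i + 1] - xs[i])
            return ys[i] + t * (ys[i + 1] - ys[i])
    raise ValueError("x outside table range")

def sector_integrated_output(radius_of_angle, table_radius, n_sectors=36):
    """Clarkson-style sector integration: average the radius-dependent
    dose/MU table over equal angular sectors of the aperture edge."""
    total = 0.0
    for k in range(n_sectors):
        theta = (k + 0.5) * 2.0 * math.pi / n_sectors
        total += interp(table_radius, radius_of_angle(theta))
    return total / n_sectors

# hypothetical dose/MU vs aperture radius table (cGy/MU)
tbl = [(2.0, 0.95), (5.0, 0.98), (10.0, 1.00)]
circular = sector_integrated_output(lambda th: 5.0, tbl)  # circle of radius 5 cm
```

For a circular aperture the sector average must reproduce the table entry exactly, which is a convenient sanity check before applying the method to irregular shapes.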
TH‐C‐BRD‐03: Kilovoltage Stereotactic Radiosurgery for Age Related Macula Degeneration: Assessment of Patient Effective Dose and Patient Specific Tissue Doses. 36 (2009); http://dx.doi.org/10.1118/1.3182605
Purpose: Age‐related macula degeneration (AMD) is a leading cause of vision loss in the United States. Radiation therapy was initially explored as a treatment option in the 1990s, but has since been abandoned in favor of intraocular drug injections. Interest continues in stereotactic radiosurgery (SRS), an option that provides a noninvasive treatment for the wet form of AMD. Method and Materials: Two adult heads, male and female, were computationally modeled with Rhinoceros 4.0 using CT slice segmentation scaled to ICRP Publication 89 reference values. The head phantoms were voxelized, and a three‐beam photon treatment (100 kVp) was modeled using the MCNPX radiation transport code to evaluate tissue dose and effective dose. Treatment was also simulated using the reference heads with changeable optic nerve positions based on individual patient variability seen in a head CT scan review. Results: A cumulative dose of 24 Gy to the macula (8 Gy per beam) yielded an effective dose of 0.28 mSv. The maximum doses to the most extreme patient‐specific optic nerve positions were evaluated using dose volume histograms and found to be below the thresholds for serious complications, as were other reference tissues. Conclusion: The results of this study show that SRS is a safe option, considering effective dose and tissue toxicity, for a reference individual. Patient‐specific models will be created from the CT review and treatment will be modeled using MCNPX to establish certainty that this is a safe treatment option for all individuals, considering other patient‐specific variations in anatomy.
This work was sponsored by Oraya Therapeutics.
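The effective dose reported above is, in the ICRP formalism, the tissue-weighted sum of equivalent doses, E = Σ wT·HT. A minimal sketch with a subset of ICRP 103 tissue weights and hypothetical equivalent doses (not the Monte Carlo results of this study):

```python
def effective_dose(equivalent_doses_mSv, tissue_weights):
    """Effective dose E = sum over tissues of w_T * H_T (ICRP formalism).
    The weight set shown below is a small illustrative subset, not the
    complete ICRP table."""
    assert set(equivalent_doses_mSv) <= set(tissue_weights)
    return sum(tissue_weights[t] * h for t, h in equivalent_doses_mSv.items())

# ICRP 103 weights for a few head-region tissues (remainder shown as a lump)
weights = {"brain": 0.01, "salivary_glands": 0.01, "skin": 0.01, "remainder": 0.12}
doses = {"brain": 10.0, "salivary_glands": 5.0, "skin": 8.0, "remainder": 0.5}  # hypothetical, mSv
E = effective_dose(doses, weights)
```

Because the eye region carries small tissue weights, even tens of mSv of local equivalent dose translate into a sub-mSv effective dose, which is consistent with the 0.28 mSv figure quoted for a 24 Gy macular treatment.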
36 (2009); http://dx.doi.org/10.1118/1.3182606
Purpose: Dose volume histogram (DVH) parameters of the lung have been reported to be risk factors for radiation pneumonitis. We evaluated different lung volumes using 4DCTs to determine the optimal data sets for lung contouring and to develop a model to convert DVH parameters between datasets. Methods and Materials: A retrospective analysis was performed on 10 patients' 4DCT and fast helical free‐breathing scans. The lungs were automatically contoured on the averaged (AVG_CT), end expiration (EE), end inspiration (EI), free breathing (FB), maximum intensity projection (MIP), and mean position (MP) images. Volumes and centers of mass were compared to those obtained with deformable image registration (AVG_def). AVG_def contours were created by averaging displacement vectors for each image pixel over the 4DCT, representing the averaged contour due to organ motion. To compare DVH parameters (V20, V12.5, and MLD), patients' plans were standardized as hypo‐fractionated. A population‐based model was tested for converting DVH parameters between datasets. Results: A paired Student's t‐test showed significant variation between AVG_def and the other volumes. The MP image volume was most statistically similar (0.479%). Centers of mass between image volumes showed little relevant variation. A marked volume effect was observed for dose‐volume relationships. MP image DVH parameters were most statistically similar to those of the AVG_def (<1%). An average percentage change of 3.95% was observed between V20 of the EI and AVG_def volumes. A model to convert from one DVH parameter to another was effective when comparing different volumes. Conclusion: There was a marked variation between the volumes and DVH parameters obtained from the EI, MIP, and EE images and those of AVG_def. The most statistically similar image, in terms of volume and DVH analysis, was the MP. A model can be used, with caution, to convert the DVH parameters from one image to another.
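The DVH parameters compared above have simple definitions: MLD is the mean dose over the lung voxels, and Vx is the percentage of lung volume receiving at least x Gy. A minimal sketch on a toy dose array (equal voxel volumes assumed; the numbers are illustrative):

```python
import numpy as np

def dvh_parameters(lung_dose_gy):
    """Compute mean lung dose (MLD) and Vx (percent of lung volume
    receiving at least x Gy) from a per-voxel dose array, assuming
    all voxels have equal volume."""
    d = np.asarray(lung_dose_gy, float)
    mld = d.mean()
    v20 = 100.0 * (d >= 20.0).mean()
    v125 = 100.0 * (d >= 12.5).mean()
    return mld, v20, v125

dose = np.array([0.0, 5.0, 13.0, 25.0])   # toy 4-voxel "lung"
mld, v20, v125 = dvh_parameters(dose)
```

Because Vx depends on which voxels fall inside the contour, the same plan evaluated on EI, MIP, or MP contours yields different values, which is exactly the volume effect the abstract quantifies.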
TH‐C‐BRD‐05: Amplitude Gated Deep‐Inspiration‐Breath‐Hold Treatment for Heart Dose Reduction in Left Breast Cancer Patients: Residual Motion and Breath‐Hold Threshold. 36 (2009); http://dx.doi.org/10.1118/1.3182607
Purpose: The amplitude‐gated deep‐inspiration‐breath‐hold (DIBH) treatment technique may greatly reduce cardiac dose for left‐breast cancer patients compared to irradiation under free breathing. This study investigates the correlation between inter/intrafractional motion and the amplitude‐gated breath‐hold threshold. The outcome of this study can be used to guide threshold setting and margin determination for patients to be treated with this technique. Method and Materials: 12 left‐sided breast cancer patients were studied in this investigation. An EPID was used for cine image acquisition. Real‐time Position Management® (RPM) was used in conjunction with the amplitude‐gated DIBH technique for motion management. Megavoltage cine images taken during the treatments were used to determine chestwall motion. Analysis of the DICOM cine images was performed using MATLAB® and ImageJ®. Threshold sizes were compared with maximum intrafractional chestwall motions in 282 sessions, separated into superior, middle, and inferior regions. Interfraction motion was measured by selecting cine images acquired during different sessions at the top and bottom of the threshold for each patient. Results: The breath‐hold level was found to vary widely among patients. The mean intrafractional motion as measured with cine EPID images was 0.68 mm, with a standard deviation of 0.54 mm. For each 1 mm increase in intrafraction chestwall motion, the threshold increased an average of 0.38, 0.39, and 0.14 mm for the superior, middle, and inferior regions, respectively, for the 12 patients. Interfraction chestwall motion increased with increasing threshold size. Mean interfractional motion was 3.34 mm, with a standard deviation of 2.31 mm. A 1 mm increase in threshold size led to an average 0.88 mm increase in mean interfraction motion. Conclusions: A correlation exists between the breath‐hold level and intrafractional chestwall motion. Threshold correlations with intra/interfraction motion were also found: increasing the threshold leads to increased inter‐ and intrafractional motion.
TH‐C‐BRD‐06: Analysis of the Mechanical Accuracy of Volumetric Modulated Arc Therapy (VMAT) Deliveries and Corresponding Dosimetric Effects. 36 (2009); http://dx.doi.org/10.1118/1.3182608
Purpose: Due to the continuous nature of VMAT deliveries, monitoring mechanical parameters and their effect on dosimetry is not straightforward. The purpose of this work is to present and evaluate a tool that compares real‐time VMAT delivery parameters with expected values, and to use the tool to evaluate the mechanical accuracy of VMAT plans and judge the effect of mechanical errors on dosimetric QA. Method and Materials: An application was designed to record real‐time VMAT delivery information such as gantry position, leaf positions, and dose rate. An analysis utility was created to compare the delivery data to the DICOM‐RT plan data. Several example and clinical plans, with and without simulated errors, were delivered to a cylindrical QA phantom and analyzed. Results: For test plans in which a 2 cm MLC gap oscillated during arc delivery at various speeds, mechanical accuracy was good; the leaf gap absolute error was 0.24±0.26 mm at maximum speed. For four clinical plans, the average mean gantry/leaf errors were 0.18°/−0.01 mm and the average standard deviations were 0.35°/0.52 mm. The analysis program correctly reported simulated errors in both test and clinical plans, including a 2 mm shift of one leaf, a missing control point, and gantry lag. Dosimetric results were not significantly affected by the simulated errors, but the phantom was sensitive enough to detect them. Conclusion: Software tools that monitor treatment machine parameters during VMAT and compare them to expected values have been developed. Clinical plans demonstrated good mechanical accuracy, and simulated errors were detected by the analysis software, although dosimetric QA was relatively insensitive to them. These tools can complement VMAT dosimetric QA by helping determine the cause of dosimetric discrepancies, and routine use may also identify potential problems before they lead to dosimetric or other issues.
Conflict of Interest: Supported in part by Elekta
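The core of such an analysis utility is a per-control-point comparison of recorded versus planned machine parameters, reporting summary statistics and flagging out-of-tolerance samples. A minimal sketch (the data structures, tolerance, and toy values are hypothetical, not the authors' software):

```python
import numpy as np

def compare_delivery(planned, delivered, tol_mm=1.0):
    """Compare planned vs recorded leaf positions (mm) sampled at the
    same control points; return the mean error, the standard deviation,
    and the indices of any samples exceeding the tolerance."""
    err = np.asarray(delivered, float) - np.asarray(planned, float)
    flagged = np.flatnonzero(np.abs(err) > tol_mm)
    return err.mean(), err.std(), flagged

plan = [10.0, 12.0, 14.0, 16.0]
log = [10.1, 11.9, 16.2, 16.0]   # third sample carries a simulated 2.2 mm error
mean_e, std_e, bad = compare_delivery(plan, log)
```

Separating the summary statistics (mean/std, as quoted in the abstract) from the flagged outliers mirrors the distinction between routine mechanical accuracy and discrete simulated errors such as a shifted leaf or a missing control point.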
TH‐C‐BRD‐07: Small Field Intracranial Radiosurgery Using Intermediate Energy X‐Rays (1 MV) to Improve Dose Gradient and Homogeneity. 36 (2009); http://dx.doi.org/10.1118/1.3182609
Purpose: The radiological penumbra of small radiosurgical dose fields is dictated by the range of secondary electrons, which in turn is determined by the primary photon energy. The purpose of this work is to experimentally compare the dose gradient and homogeneity of a multiple‐beam dose distribution in a radiosurgery head phantom for 6 MV versus 1 MV while minimizing geometrical penumbra. Methods and Materials: A commercial radiosurgery head phantom (LUCY™) containing Gafchromic EBT film was used for all irradiations. A digital microscopy imaging system resolved steep dose gradients in the films, and a Siemens linac was modified to produce 1 MV x‐rays. The XKnife™ RT3 TPS was modeled for both 1 and 6 MV to compare with measurements. Two‐beam (90° apart) and eighteen‐beam (10° apart) irradiations were done in the same plane as the film using a 5 mm tertiary collimator. The geometrical penumbra ranged from 0.2–0.4 mm, equivalent to a linac with a 1 mm focal spot and a collimator 20 cm from the isocenter. Dose was normalized at the isocenter at 7 cm depth in the phantom. Results: For the two‐beam irradiations, the 90%–50% and 90%–10% dose gradients at the beam intersection were 1.7 & 4.7 mm (6 MV) versus 0.5 & 1.3 mm (1 MV) for identical irradiation conditions. For the eighteen‐beam arc, the 90%–80% & 90%–60% dose gradients in the plane of irradiation were 0.84 & 1.7 mm (6 MV) versus 0.29 & 0.9 mm (1 MV). In all cases, the homogeneity across the isocenter was superior for 1 MV. The dose at the entrance of each beam was greater for 1 MV. Conclusions: A 1 MV x‐ray beam showed superior dose gradient and homogeneity compared to 6 MV for the irradiations examined, at the expense of an increase in dose at the beam entrance for the lower energy.
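The 90%–50% and 90%–10% gradient distances quoted above are the lateral separations between dose levels on a measured profile, found by interpolating the level crossings. A minimal sketch on a synthetic falling edge (the sigmoid profile is illustrative, not film data from this study):

```python
import numpy as np

def level_position(x, dose, level):
    """Position where a monotonically falling profile crosses `level`
    (fraction of maximum), found by linear interpolation."""
    d = dose / dose.max()
    i = np.flatnonzero(d <= level)[0]          # first sample at/below the level
    t = (d[i - 1] - level) / (d[i - 1] - d[i])
    return x[i - 1] + t * (x[i] - x[i - 1])

def gradient_distance(x, dose, hi=0.9, lo=0.5):
    """Lateral distance between the hi and lo dose levels (e.g. 90%-50%)."""
    return level_position(x, dose, lo) - level_position(x, dose, hi)

x = np.linspace(0, 10, 1001)                         # mm
profile = 1.0 / (1.0 + np.exp((x - 5.0) / 0.5))      # falling sigmoid edge
d90_50 = gradient_distance(x, profile, 0.9, 0.5)     # ~0.5*ln(9) mm analytically
```

For this logistic edge the 90%–50% distance is analytically 0.5·ln 9 ≈ 1.10 mm, which makes the routine easy to validate before applying it to digitized film.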
TH‐C‐BRD‐08: The Effect of Embolization Glue On Gamma Knife Radiosurgery of Arterio‐Venous Malformations. 36 (2009); http://dx.doi.org/10.1118/1.3182610
Purpose: Gamma Knife stereotactic radiosurgery, in combination with embolization or alone, is a widely recommended treatment option for patients with arterio‐venous malformations (AVM), with an estimated cure rate near 90%. A recently published report suggested that failures to obliterate the AVM nidus were caused by up to a 15% dose reduction in the target volume due to the embolization materials used prior to Gamma Knife radiosurgery. This work aimed to resolve the issue. Method and Materials: 1) The apparent linear attenuation coefficients for 120 to 140 keV x‐rays in the embolized regions were retrieved from CT scans of several patients with AVMs. Based on these coefficients and our virtual model of the Gamma Knife with basic ray tracing, we obtained the pathlengths and densities for the embolized regions. The attenuation of the Co‐60 beams was then calculated for various sizes and positions of the AVM embolized regions and numbers of beams used for treatment. 2) Published experiments for several high atomic number materials were used to estimate the effective Co‐60 beam attenuation coefficients for N‐butyl cyanoacrylate (doped with high‐Z material) and Onyx (ethylene‐vinyl alcohol copolymer doped with tantalum) used in AVM embolizations. The dose reduction during Gamma Knife radiosurgery was calculated based on the Co‐60 energy attenuation coefficient. Results: Based on the apparent CT (keV) attenuation coefficients, one might conclude that the cumulative effect of the glue would decrease the dose delivered in Gamma Knife radiosurgery by 8 to 15%. However, using the true attenuation coefficient for Co‐60 energies in the dose calculation leads one to conclude that there is a 0.2% dose reduction per beam and less than 0.01–0.2% dose reduction in total. Conclusion: Dose reduction due to attenuation of the Co‐60 beam by the AVM embolization glue material is negligible.
36 (2009); http://dx.doi.org/10.1118/1.3182611
Purpose: The absorbed dose for a clinical megavoltage photon beam can be separated into primary and scatter doses, expressed as a convolution of the point‐spread kernels with Terma (T). This expression requires an additional beam‐hardening function to account for the variation of photon attenuation with energy across the photon energy spectrum. It has been established that the beam‐hardening function can be eliminated if one rewrites the convolution using modified polyenergetic energy deposition kernels with the primary collision kerma Kc and the Scerma Sc, equal to T − Kc. Both Kc and Sc include beam‐hardening effects but have different depth dependences. However, no experimental method has been established to determine the depth dependence of Sc. In this study, we focused on developing a method to experimentally determine Sc/Kc on the central axis and off‐axis. Methods: We use an empirical expression for the scatter‐to‐primary ratio (SPR) as a function of photon energy to determine the Scerma at infinite field size and depth. Three parameters related to the SPR, a0, w0, and d0, need to be determined experimentally, and they can be directly correlated with the attenuation coefficient μ. Therefore, one can determine the central‐ and off‐axis Scerma if μ is known. To confirm the off‐axis Scerma obtained from the off‐axis μ(x), we propose an independent experiment to extract the off‐axis SPR, determined by fitting the experimentally measured TPR × Sp. Results: We have extracted the parameters from Monte Carlo simulated data on the central axis over a wide clinical energy range. We find that a0/w0 is proportional to the beam energy on the central axis and is the slope of the depth dependence of Sc/Kc. Our results also show that the ratio of primary Scerma to collision kerma has a linear depth dependence. Conclusion: An experimental method has been developed to determine the off‐axis Scerma, which is critical in convolution‐based scatter dose calculation.
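The reported linearity of Sc/Kc with depth can be illustrated with a toy fit. In the sketch below, the depth samples and ratio values are fabricated, and the recovered slope merely stands in for the a0/w0-dependent slope described in the abstract.

```python
import numpy as np

# Synthetic central-axis samples of Sc/Kc at several depths (cm); the
# abstract reports a linear depth dependence whose slope is set by the
# empirical SPR parameters a0/w0 (all numbers here are made up).
depths = np.array([5.0, 10.0, 15.0, 20.0, 25.0])
sc_over_kc = 0.02 * depths + 0.15

# Least-squares straight-line fit: the slope plays the role of a0/w0.
slope, intercept = np.polyfit(depths, sc_over_kc, 1)
print(slope, intercept)
```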
36 (2009); http://dx.doi.org/10.1118/1.3182612
Purpose: Our aim is to address the problem of unacceptably large dose variations at the junction of two adjoined fields in modulated electron radiation therapy (MERT) by optimizing the gap separation in the junction area. A secondary aim is to provide basic knowledge that guides the optimization toward the best MERT plan and clarifies the practical limits of future MERT treatment designs. Material and Method: In this work we used the MCBEAM and MCSIM codes for accelerator simulation and phantom/patient dose calculation, respectively. A prototype manually driven eMLC was accurately modeled, and simulation results were compared to measurements. Dose distributions inside a phantom for two adjoined electron fields were investigated for different gaps and different electron energies. Gaps were then evaluated by the quality of the resulting dose distributions and the sharpness of the target DVH fall‐off. The gap sizes that gave the best dose distributions and DVHs for all available energies were determined. Gap size as a function of energy and SSD, chosen to achieve optimal dose profiles at desired depths in the phantom, was also studied. Results: It is shown that as the energy increases the gap size needs to increase, which may be ascribed to the change in penumbra. For a given energy, the gap size shows a linear relationship with the SSD. As expected, the optimal gap is larger at larger SSD and higher energies. A series of empirical formulas has been developed for obtaining the optimum gap sizes for adjoined beams of different energies, which can improve the target dose uniformity significantly. Conclusion: For each electron beam energy an optimal gap can be chosen to minimize the dose inhomogeneity arising from adjoining two electron fields, which will facilitate the design of the leaf sequences for MERT dose delivery using an eMLC.
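The per-energy linear gap-SSD relationship can be sketched as a family of straight-line fits. The gap values, energies, and SSDs below are invented placeholders, and the fitted form gap(SSD) = a·SSD + b merely stands in for the abstract's empirical formulas.

```python
import numpy as np

# Hypothetical measured optimal junction gaps (cm) vs SSD (cm) for two
# electron energies; the abstract reports a linear gap-SSD relationship
# at each energy, with larger gaps at higher energy and longer SSD.
ssd = np.array([100.0, 105.0, 110.0, 115.0])
optimal_gap = {
    6.0:  np.array([0.30, 0.40, 0.50, 0.60]),   # MeV -> gaps (made up)
    12.0: np.array([0.50, 0.65, 0.80, 0.95]),
}

# Fit gap(SSD) = a*SSD + b independently for each energy.
fits = {e: np.polyfit(ssd, g, 1) for e, g in optimal_gap.items()}

def predicted_gap(energy_mev, ssd_cm):
    # Evaluate the fitted line for one of the tabulated energies.
    a, b = fits[energy_mev]
    return a * ssd_cm + b

print(predicted_gap(12.0, 108.0))
```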
- Radiobiology II and Informatics
TH‐D‐BRD‐01: New Developments in The Computational Environment for Radiotherapy Research (CERR) Software System. 36 (2009); http://dx.doi.org/10.1118/1.3182655
Purpose: CERR continues to be used across the world for radiotherapy research and was downloaded over 1,000 times in the last year. We present recent improvements and changes to the system in response to user requests and research needs. Methods: New developments in the last year include: (1) integration of more powerful image registration tools (a multi‐scale demons algorithm); (2) the development of an extensive plan robustness analysis module, used to statistically simulate the effect of random, systematic, and contouring variations over a course of treatment; (3) generalizations to the Java‐based DICOM input and output; (4) documentation now available via an extensive wiki; and (5) a new tool that indicates the location of cold spots in a target volume by plotting, for each voxel below a user‐selected dose threshold, the distance to the edge of the target volume. Results: The image registration module has been stabilized and is being used extensively. The plan robustness module provides the statistical effect of presumed uncertainties on DVH curves. The 'dose‐distance histograms' give the user a simple graphical method for understanding the location of cold spots in target volumes. Documentation is now much more extensive. Conclusion: The updates to CERR will enhance the user experience in applying image registration and plan QA to validate treatment plans.
Partially supported by NIH grants R01 CA11820 and R01 CA85181.
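Development (5), the dose-distance histogram, can be sketched as follows. The geometry, dose values, and threshold are synthetic, and SciPy's Euclidean distance transform stands in for whatever distance computation CERR actually uses.

```python
import numpy as np
from scipy import ndimage

# Sketch of a dose-distance histogram: for each target voxel below a
# dose threshold (a cold spot), record its distance to the target
# surface. All arrays and values here are synthetic.
target = np.zeros((40, 40, 40), dtype=bool)
target[10:30, 10:30, 10:30] = True          # cubic "target volume"

dose = np.full(target.shape, 60.0)
dose[18:22, 18:22, 18:22] = 45.0            # interior cold spot

# Distance (in voxels) from each inside voxel to the target edge.
dist_to_edge = ndimage.distance_transform_edt(target)

threshold = 50.0
cold = target & (dose < threshold)
cold_distances = dist_to_edge[cold]

# Histogram of edge distances for cold voxels: mass far from zero flags
# cold spots deep inside the target rather than at its surface.
hist, edges = np.histogram(cold_distances, bins=5)
print(cold_distances.min(), cold_distances.max())
```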
36 (2009); http://dx.doi.org/10.1118/1.3182656
Purpose: To study the impact, in terms of execution time and accuracy, of using graphics hardware for calculating the dose in a treatment planning system. The architecture of graphics processing units (GPUs) is well suited to numerical tasks that are intrinsically parallel, such as dose calculations. Method and Materials: This work was carried out within the framework of PlanUNC (PLUNC), a treatment planning system developed and maintained for research and development purposes by the Department of Radiation Oncology of the University of North Carolina at Chapel Hill. The objective was to transparently integrate a GPU dose calculation engine into PLUNC. The CUDA platform from NVIDIA was used for the GPU implementation. A convolution/superposition (CS) dose calculation algorithm was ported by developing programs (called kernels) that execute on the GPU. First, the CS engine of PLUNC was directly ported to the GPU, preserving the original code as much as possible. Second, parts of the original algorithm were redesigned to better exploit the massively parallel architecture of GPUs. The numerical experiments were conducted with an NVIDIA GeForce GTX 280 and an Intel Q6600 CPU. Results: The direct port of the CS algorithm achieved acceleration factors of 10× to 20× relative to the CPU version, and the numerical accuracy of the results was preserved. A 40× acceleration factor was obtained for the TERMA calculation subroutine, which was rewritten with the GPU architecture in mind. These acceleration factors were sufficient to significantly improve the responsiveness of the PLUNC graphical interface. Conclusion: This work demonstrates the potential of graphics hardware for dose calculation in treatment planning systems, which could in turn have a significant impact on optimization strategies for complex delivery techniques such as IMRT.
Research sponsored by Varian Medical Systems, Inc.
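The data parallelism that makes the TERMA step a good GPU candidate can be sketched in plain Python. The phantom, beam geometry, and attenuation coefficient below are assumptions for illustration: each voxel's TERMA depends only on the radiological path above it, so every voxel can be computed independently, mapping naturally onto one GPU thread per voxel.

```python
import numpy as np

mu = 0.05            # cm^-1, assumed linear attenuation coefficient
voxel_cm = 0.5       # voxel size along the beam axis
density = np.ones((8, 8, 16))   # water-equivalent phantom (rel. density)

def terma_voxel(i, j, k, fluence0=1.0):
    # TERMA at one voxel for a beam entering along +k: incident energy
    # fluence attenuated over the radiological path above the voxel,
    # times mu. Voxels are independent: one CUDA thread per voxel.
    rad_path = density[i, j, :k].sum() * voxel_cm
    return mu * fluence0 * np.exp(-mu * rad_path)

# Serial reference loop; a GPU kernel would launch one thread per (i,j,k).
terma = np.empty_like(density)
for i in range(density.shape[0]):
    for j in range(density.shape[1]):
        for k in range(density.shape[2]):
            terma[i, j, k] = terma_voxel(i, j, k)
print(terma[0, 0, 0], terma[0, 0, 4])
```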