Index of content:
Volume 36, Issue 6, June 2009
- Therapy Scientific Session: Ballroom D
- Monte Carlo and General I
WE‐E‐BRD‐01: Investigating Effects of Pelvic Bone Marrow Radiation Dose On Acute Hematologic Toxicity Using High Dimensional Data Analysis. 36(2009); http://dx.doi.org/10.1118/1.3182553
Purpose: To develop a new method to investigate the effect of local pelvic bone marrow (BM) radiation dose on acute hematologic toxicity (HT) in cervical cancer patients undergoing concurrent chemoradiation therapy (CRT), with the ultimate goal of optimizing BM‐sparing radiation techniques. Method and Materials: We analyzed 24 cervical cancer patients treated with concurrent cisplatin (40 mg/m2/week) and pelvic IMRT. The white blood cell count (WBC) nadir, defined as the lowest value occurring between the start of CRT and two weeks following IMRT, was used as the indicator of acute HT. The pelvic bone region included the os coxae, lower lumbar vertebrae, sacrum, acetabula, and proximal femora. BM doses were standardized in two steps: pelvic bone registration followed by dose remapping. Simulation CT images were registered to a common (canonical) template using the optical‐flow‐based deformable image registration developed by Yang et al. The deformation field was used to remap the dose distribution back to the deformed pelvic bones. We generated a data structure called a “dose‐array” that preserves the spatial information of each dose value: the position of an element inside the array can be used to trace its location in 3D. Results: Substantial variation of the BM dose distribution among the 24 patients was observed. Patients were classified based on their WBC nadir value (⩾ vs. < 2000/μL) into two groups (n=15 vs. n=9, respectively), and the average pelvic BM 3‐D dose distributions were compared visually. The results suggested that patients receiving higher doses to the lower lumbar spine, upper sacrum, and medial ilium were more likely to develop acute HT. Conclusion: We have developed a novel method to study the impact of BM radiation dose on acute HT. Our next step is to implement both unsupervised and supervised classification models to analyze radiation effects, for use in IMRT plan optimization.
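The "dose‐array" idea above (a flat array whose indices remain traceable back to template voxels) can be sketched as follows; the grid shape and dose values here are invented for illustration, not taken from the study.

```python
import numpy as np

# Sketch of the "dose-array" idea: remapped dose values stored flat so that
# patients can be compared element-wise, while each element's position in
# the canonical pelvic-bone template stays recoverable from its index.
# Grid shape and dose values are invented for illustration.
shape = (4, 5, 6)                      # toy canonical grid (z, y, x)
rng = np.random.default_rng(0)
dose = rng.uniform(0, 45, size=shape)  # remapped dose in Gy (illustrative)

dose_array = dose.ravel()              # the flat "dose-array"

# Any flat index can be traced back to its 3-D voxel in the template,
# e.g. to locate the hottest voxel:
flat_idx = int(np.argmax(dose_array))
voxel = np.unravel_index(flat_idx, shape)
assert dose[voxel] == dose_array[flat_idx]
```

Because every patient's dose is remapped to the same template grid, element‐wise averages and group comparisons across patients become simple array operations.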
WE‐E‐BRD‐02: Development of a Software for Integrating the Medical Accelerator Model with Patient Phantoms Into Monte Carlo Based Dosimetry Platform. 36(2009); http://dx.doi.org/10.1118/1.3182554
Purpose: To develop a new software package that integrates a detailed medical accelerator model with anatomically realistic phantoms in the Monte Carlo engine MCNPX, for the determination of organ doses from secondary photons and neutrons during treatment. Method and Materials: The accelerator models developed include 80 MLC leaves and two pairs of tungsten jaws, operating at 6 and 18 MV, respectively. The Pregnant Women, Adult Male, and Adult Female phantoms were utilized. The software was designed in Visual C#.NET, and the preliminary effort focused on a graphical user interface (GUI) to automatically generate the MCNPX input deck, consisting of the accelerator and patient, according to user‐defined treatment plans. The MLC configuration files were parsed to collect the positions of the jaws and MLC leaves before they were exported into MCNPX using the “TRCL” entry with the “LIKE BUT” card. Results: A number of user‐friendly GUI features have been developed that let a user select treatment parameters: choose an accelerator type and a phantom, specify the phantom origin position and the number of fields, parse the MLC configuration files, and save the MLC leaf positions. According to the positions of the jaws and MLC leaves, the software can automatically output the MCNPX deck files, classified per field, per segment, and per MLC leaf position, with the accelerator model and computational phantoms. Conclusion: A new software package has been demonstrated. The preliminary results indicate that the GUI design and integration features are feasible and versatile. This software is designed to help a user carry out organ dose studies using the MCNPX input deck without having to handle complex accelerator and phantom information. Such a tool can facilitate studies assessing and comparing various new modalities in terms of the likelihood of secondary cancer induction.
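The deck‐generation step can be sketched schematically: each MLC leaf cell copies a template cell via "LIKE n BUT" with a "TRCL" translation. The cell numbers and leaf positions below are invented, and the card text is only illustrative of the approach, not a validated MCNPX input.

```python
# Schematic sketch of the deck-generation step: each MLC leaf cell copies
# a template cell via "LIKE n BUT" with a "TRCL" translation.
# Cell numbers and leaf positions are invented for illustration.
template_cell = 100                                          # hypothetical template cell
leaf_positions_cm = [(-3.2, 0.0), (-2.9, 0.0), (1.5, 0.0)]   # (x, y) per leaf, invented

lines = []
for i, (x, y) in enumerate(leaf_positions_cm, start=1):
    lines.append(f"{200 + i} LIKE {template_cell} BUT TRCL=({x:.2f} {y:.2f} 0.00)")

deck_fragment = "\n".join(lines)
print(deck_fragment)
```

Generating one such card per leaf, per segment, and per field is what lets the GUI emit a complete deck without the user touching the accelerator geometry by hand.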
WE‐E‐BRD‐03: Re‐Evaluation of the Product of W/e and the Graphite to Air Stopping Power Ratio for Co‐60 Air Kerma Standards. 36(2009); http://dx.doi.org/10.1118/1.3182555
Purpose: To reanalyze experiments which determine (W/e)air·sc,a, the product of (W/e)air, the average energy deposited per coulomb of charge released in dry air, and sc,a, the Spencer‐Attix mass collision stopping‐power ratio for graphite to air, and to calculate an average value of this product for the BIPM air kerma standard. This value could be adopted for use with air kerma primary standards, along with corrections to account for variations due to cavity size. Methods and Materials: The experiments measured (W/e)air·sc,a by various methods, often involving calorimeters and ionization chambers. Correction factors, e.g., to account for gaps about a calorimeter core or perturbations due to a cavity's presence, are calculated as needed for each experiment using the EGSnrc user‐codes CAVRZnrc, DOSRZnrc, and CAVITY. Stopping‐power ratios are evaluated using SPRRZnrc for different choices of graphite density (bulk 1.70 g/cm3 or grain 2.265 g/cm3) for the density‐effect correction and of the mean excitation energy for graphite (I = 78 or 87 eV). For each experiment, the corrected value of (W/e)air·sc,a is multiplied by the quotient of the stopping‐power ratios for the BIPM chamber and the experiment in question. A least‐squares technique is used to compute an average value of (W/e)air·sc,a. Results: The correction factors generally decrease the value of (W/e)air·sc,a for each experiment, often outside the range of one standard deviation quoted with each experimental result. The stopping‐power‐ratio quotient varies by less than 0.1% for different choices of density correction and I‐value, and hence the product is also relatively insensitive to these choices.
Conclusion: The preliminary analysis suggests that the accepted value of (W/e)air·sc,a, 33.97 J/C ± 0.15%, is 0.6% too high. This would have implications for primary air kerma standards worldwide and for the value of (W/e)air, which is used in low‐energy x‐ray standards.
36(2009); http://dx.doi.org/10.1118/1.3182556
Purpose: To provide a fast and accurate dose calculation solution, a massively parallel version of a Monte Carlo superposition (MCS) method was developed on a computer with an NVIDIA GTX260 graphics card. Method and Materials: Graphics processing units (GPUs) can be extremely powerful and economical in accelerating computationally intensive applications. However, the GPU memory structure can be too rigid for certain types of applications to realize its computation potential. The MCS method, by nature, is a memory‐intensive application, since most of its operations are based on linear interpolation and table look‐up. To leverage the power of the GPU, we modified a sequential MC algorithm and implemented a parallel version. The biggest challenge in our implementation was how to arrange memory access sequences and increase temporal and spatial data locality. We introduced several techniques for handling this challenge, including stage‐based processing and photon grouping with respect to kernel index. To match the hierarchy of GPU memory, data were carefully organized to minimize off‐chip memory accesses. A parallel thread‐safe random number generator was implemented to produce the randomness required by the MC algorithm. Results: Our GPU‐based implementation is fully optimized and demonstrates a speedup of 25–40 times over the CPU‐based single‐thread implementation. Although the speedup is significant, the research does reveal some inherent limitations of a GPU‐based implementation of this particular MC algorithm. The most critical one is due to the conflict between the GPU's SIMD (single‐instruction‐multiple‐data) architecture and the stochastic nature of the MC algorithm. Conclusion: A massively parallel version of an MCS method was implemented on a computer powered by a GTX260 GPU, and a speedup of up to 40× was observed.
This indicates that a GPU‐based MC approach is viable and cost‐efficient for satisfying the increasing demands on computation power and accuracy by advanced radiation therapy technologies.
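The photon‐grouping idea mentioned above can be illustrated on the CPU side: photons are sorted by the dose‐kernel index they will look up, so that table accesses within a group touch contiguous memory. This is a sketch of the locality principle only, not the authors' GPU code; the kernel count and photon payload are invented.

```python
import numpy as np

# Illustrative sketch of photon grouping by kernel index (not the authors'
# GPU code): after a stable sort, each kernel's photons form a contiguous
# slice, which is the access pattern a SIMD work-group wants.
rng = np.random.default_rng(1)
kernel_idx = rng.integers(0, 8, size=1000)   # kernel table row per photon
energy = rng.uniform(0.1, 6.0, size=1000)    # MeV, invented payload

order = np.argsort(kernel_idx, kind="stable")
kernel_sorted = kernel_idx[order]
energy_sorted = energy[order]

# searchsorted finds the start of each kernel's contiguous slice, so a
# work-group can process one kernel table at a time.
starts = np.searchsorted(kernel_sorted, np.arange(8))
```

On a GPU the same grouping keeps threads in a warp reading neighboring table rows, reducing the off‐chip memory traffic the abstract identifies as the bottleneck.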
WE‐E‐BRD‐05: EGSnrc Benchmarking Against High‐Precision Lateral Distributions of X‐Rays Produced in Thick Targets. 36(2009); http://dx.doi.org/10.1118/1.3182557
Purpose: The EGSnrc Monte Carlo code was benchmarked against newly measured angular distributions of x‐rays produced by electron beams stopping in thick targets. Method and Materials: The electron beam was generated using the National Research Council Canada (NRC) Vickers linear accelerator. Five target materials were used: Be, Al, Cu, Ta, and Pb; measurements were made at 20 MeV. The absolute absorbed dose was measured at different lateral positions with various ionization chambers (PTW 30013, Exradin A19, and NE2505/3) using build‐up caps made of PMMA, brass, Cu, Sn, and W alloy. The BEAMnrc code was used to simulate the experimental setup. The geometry, beam, and material properties were implemented in BEAMnrc, while the “cavity” user code was used to score the absorbed dose in the ionization chamber air cavity per incident electron up to angles of 25°. Results: For medium‐Z targets the shape agreement between measured and simulated distributions is better than 1% over the entire scanning range. In terms of absolute dose, differences of up to 4% were found, related to the build‐up caps used for the ionization chamber. Across the five targets, a trend with the Z of the target material is noticed: for medium‐Z targets the ratio of calculated to measured dose varies by less than 1%, for the Be target the ratio increases by 12% with scanning position, while for the Pb target the ratio decreases by 5%. Conclusion: The present study shows that the EGSnrc code predicts x‐ray angular distributions with shapes in agreement with the measured data at the 1% level for medium‐Z targets. There are still unresolved discrepancies in the absolute dose related to the build‐up cap used for the ionization chamber and the target material.
Partial support from NIH R01 CA104777‐01A2.
WE‐E‐BRD‐06: Validation of An MCNPX Monte Carlo Model of a Discrete‐Spot Scanning Beam Nozzle and Evaluation of the Beamlet Lateral Envelope of Low Doses. 36(2009); http://dx.doi.org/10.1118/1.3182558
Purpose: To build and experimentally validate an accurate Monte Carlo (MC) model of a discrete‐spot scanning beam nozzle and to use this model to assess the dosimetric consequences of the “halo” dose (i.e., that due to scattering from beam line components and nuclear interactions). Method and Materials: An accurate model of the scanning nozzle at our institution was implemented in the MC system MCNPX version 2.5.0 according to the blueprints provided by the manufacturer of the proton therapy system (Hitachi Ltd., Tokyo, Japan). The MC model was validated against measurements of percentage depth doses (PDDs), lateral profiles in air (LPA) and in water (LPW), field size factors (FSFs), and spread‐out Bragg peaks. The validated MC model presented in this study was used to generate data for clinical configuration of our treatment planning system (TPS). Results: Distances to agreement between measured and simulated ranges, LPAs, and LPWs were within 0.16 cm, 0.05 cm, and 0.1 cm, respectively. Comparisons between measured FSFs and TPS predictions, which may not model the halo dose properly, revealed differences of up to 9.5%, whereas comparison with MC simulations showed maximum differences of 3.6%. We found that ion chambers with a radius as large as 4.2 cm are insufficient to measure the integral energy deposition. Conclusion: Because of the halo dose, which broadens the lateral sizes of the proton beamlets, care should be taken when measuring integral energy deposition for configuration of a TPS. Furthermore, the halo dose must be properly modeled in the dose algorithm used by the TPS for accurate prediction of scanned proton beam doses.
WE‐E‐BRD‐07: Algorithm for Accurate Computation of Doses From Large‐Angle Scattering of Scanned Proton Pencil Beams. 36(2009); http://dx.doi.org/10.1118/1.3182559
Purpose: The purpose of this work was to find a technique to improve the accuracy of proton pencil beam dose computation for large scattering angles. Methods: The computed dose from an elementary pencil beam in the scattering medium, like water in our study, comes from a convolution integral of the proton fluence in air and the scattering distribution function in the medium. The incident proton beam profile in air, just outside the target medium, was modeled with a sum of up to three Gaussians fitted with measured data. The proton scattering in the medium, assumed to proceed through Coulomb interactions, was described using the classic Moliere distribution. The convolution integration was performed by a combination of analytical and numerical methods leading to a numerically manageable expression. Results: We compared measured and computed dose profiles at the 2 cm and at the Bragg peak depths for proton pencil beams with low and high energies. In all cases, profile calculations using just one Gaussian term for the beam fluence in air and the first Moliere term for the scattering distribution were in large disagreement with measurements at lateral distances beyond 10% profile falloff. At 2 cm depth in the water, the second and the third terms in Moliere distribution did not contribute significantly. On the other hand, the use of three Gaussians for the proton fluence in air produces nearly perfect agreement for lower energy protons and brings the agreement to within one order of magnitude for the higher proton energies. At Bragg peak depths, higher order Moliere terms are again insignificant for lower energy protons, but become important contributors for higher energy protons.Conclusions: The proposed procedure was shown to achieve a dramatic improvement in the accuracy of computed doses from proton pencil beams at large proton scattering angles.
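The in‐air fluence model described above, a weighted sum of up to three Gaussians, can be sketched as follows. The weights and widths below are invented for illustration, not fitted to any measurement.

```python
import numpy as np

# Minimal sketch of the multi-Gaussian in-air fluence model: the lateral
# profile is a weighted sum of up to three radial Gaussians. The weights
# and sigmas below are invented, not fitted parameters from the study.
def fluence(r, weights, sigmas):
    r = np.asarray(r, dtype=float)
    return sum(w / (2 * np.pi * s**2) * np.exp(-r**2 / (2 * s**2))
               for w, s in zip(weights, sigmas))

weights = [0.90, 0.08, 0.02]   # hypothetical: core, mid, large-angle halo
sigmas = [0.4, 1.2, 4.0]       # cm, hypothetical

r = np.linspace(0, 10, 500)
phi = fluence(r, weights, sigmas)
# The broad third Gaussian dominates beyond the ~10% profile falloff,
# which is where a single-Gaussian model fails against measurement.
```

Convolving this fluence with the Moliere scattering distribution of the medium, as the abstract describes, then yields the in‐water lateral dose profile.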
36(2009); http://dx.doi.org/10.1118/1.3182560
Purpose: Whole‐body patient models of various sizes and postures are needed for the assessment of organ doses in CT imaging, internal nuclear medicine, and external‐beam radiation treatment procedures. This paper discusses a deformable mesh‐based modeling method to create patient‐specific phantoms that are morphed to the 5th through 95th percentiles of body height and weight, as well as internal organ volume and mass. Method and Materials: The mesh‐based reference adult male and female phantoms were deformed using mainly two sets of percentile data: 1) whole‐body size percentile data, defined by anthropometric parameters such as height and weight from the National Health and Nutrition Examination Survey (NHANES); and 2) individual internal organ percentile data, derived by cumulative pattern analysis based on the International Commission on Radiological Protection (ICRP) Publications 23 and 89 references. These mesh‐based percentile phantoms were converted into voxel‐based phantoms. The final step was to link the voxel phantom with the correct tissue density and elemental composition, so that radiation transport through the human‐body phantom was modeled correctly in a Monte Carlo code. Results: The whole‐body size percentile models have been created from the NHANES anthropometric data and the organ percentiles derived from the ICRP references. The deformability of the RPI reference adult phantoms has been shown through the demonstration of percentile‐ and posture‐specific adult models. Conclusion: A next‐generation deformable patient modeling method has been demonstrated. With the mesh deformation algorithms, individual organs can be deformed to match the volumes and masses of desired organ percentiles. The flexible modeling allows patients to be represented in various sizes and postures for the purpose of Monte Carlo dose calculations.
This study also identified the need for further research to develop methods to run Monte Carlo calculations in mesh geometry directly.
- Radiobiology II and Informatics
TH‐D‐BRD‐01: New Developments in The Computational Environment for Radiotherapy Research (CERR) Software System. 36(2009); http://dx.doi.org/10.1118/1.3182655
Purpose: CERR continues to be used across the world for radiotherapy research and was downloaded over 1,000 times in the last year. We present recent improvements and changes to the system in response to user requests and research needs. Methods: New developments in the last year include: (1) integration of more powerful image registration tools (a multi‐scale demons algorithm); (2) the development of an extensive plan robustness analysis module, used to statistically simulate the effect of random, systematic, and contouring variations over a course of treatment; (3) generalizations to the Java‐based DICOM input and output; (4) documentation now available via an extensive wiki page; and (5) a new tool that indicates the location of cold spots in a target volume by plotting the distance to the edge of the target volume for each voxel below a user‐selected dose threshold. Results: The image registration module has been stabilized and is being used extensively. The plan robustness module provides the statistical effect of presumed uncertainties on DVH curves. The ‘dose‐distance histograms’ give the user a simple graphical method for understanding the location of cold spots in target volumes. Documentation is now much more extensive. Conclusion: The updates to CERR will enhance the user experience in using image registration and plan QA to validate treatment plans.
Partially supported by NIH grants R01 CA11820 and R01 CA85181.
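The cold‐spot tool described in item (5) can be sketched on a toy 2‐D grid: for every target voxel below a dose threshold, record its distance to the target edge. The grid, target mask, dose field, and threshold below are all invented for illustration.

```python
import numpy as np

# Toy sketch of the 'dose-distance histogram' idea: for each target voxel
# below a dose threshold, compute the distance to the target edge.
# Grid, mask, dose, and threshold are invented for illustration.
target = np.zeros((20, 20), dtype=bool)
target[5:15, 5:15] = True                       # square "target volume"

yy, xx = np.mgrid[0:20, 0:20]
dose = 60.0 - 0.8 * np.hypot(yy - 10, xx - 10)  # dose falling off radially

# Edge voxels: in the target but with at least one 4-neighbor outside it.
interior = (np.roll(target, 1, 0) & np.roll(target, -1, 0)
            & np.roll(target, 1, 1) & np.roll(target, -1, 1))
edge_pts = np.argwhere(target & ~interior)

threshold = 57.0                                # user-selected dose level
cold = np.argwhere(target & (dose < threshold))

# Distance from each cold voxel to its nearest edge voxel; a histogram of
# d is the dose-distance histogram for this threshold.
d = np.min(np.linalg.norm(cold[:, None, :] - edge_pts[None, :, :], axis=2),
           axis=1)
```

Cold spots deep inside the target (large d) are harder to explain by edge effects, which is the information the histogram surfaces.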
36(2009); http://dx.doi.org/10.1118/1.3182656
Purpose: To study the impact, in terms of execution time and accuracy, of using graphics hardware for calculating the dose in a treatment planning system. The architecture of graphics processing units (GPUs) is well suited for numerical tasks that are intrinsically parallel, such as dose calculations. Method and Materials: This work was made within the framework of PlanUNC, or PLUNC, a treatment planning system developed and maintained by the Department of Radiation Oncology of the University of North Carolina at Chapel Hill for research and development purposes. The objective was to transparently integrate a GPU dose calculation engine into PLUNC. The CUDA platform from NVIDIA was used for the GPU implementation. A convolution/superposition (CS) dose calculation algorithm was ported by developing programs (called kernels) that are executed on the GPU. First, the CS engine of PLUNC was directly ported to the GPU, with the original code preserved as much as possible. Second, parts of the original algorithm were redesigned to better exploit the massively parallel architecture of GPUs. The numerical experiments were conducted with an NVIDIA GeForce GTX280 and an Intel Q6600 CPU. Results: Acceleration factors of 10× to 20× were achieved with the GPU implementation relative to the CPU version with the direct port of the CS algorithm. The numerical accuracy of the results was preserved with the GPU implementation. A 40× acceleration factor was obtained for the TERMA calculation subroutine, which was rewritten with the GPU architecture in mind. These acceleration factors were sufficient to significantly improve the responsiveness of the PLUNC graphical interface. Conclusion: This work demonstrates the potential of graphics hardware for dose calculation in treatment planning systems. This could in turn have a significant impact on optimization strategies for complex delivery techniques such as IMRT.
Research sponsored by Varian Medical Systems, Inc.
TH‐D‐BRD‐03: Experience with Error Reporting and Tracking Database Tool for Process Improvement in Radiation Oncology. 36(2009); http://dx.doi.org/10.1118/1.3182657
Purpose: To present long‐term results of a systematic near‐miss and actual‐error reporting and analysis system based on a web‐based tool, and the effects of a formal process improvement structure on error rates and safety culture in radiation therapy (RT). Materials and Methods: An internally developed web‐based system was used to report, track, and analyze errors and near‐misses in a large RT department for almost two years. The system was designed as an efficient and effective process for collecting, storing, and analyzing failure rate data in individual RT facilities. The aim of the system was to support process improvement in patient care and safety. The reporting tool was designed so that individual events could be reported in as little as two minutes. Events were categorized based on functional area, type, and severity of failure. The events were processed and analyzed by a formal process improvement group, which used the data and statistics collected through the web‐based tool for guidance in reengineering clinical processes. The results of the first nineteen months of clinical use of the system are presented. Results: The collected data and the process improvement structure resulted in measurable safety and error rate improvements in several clinical areas. The collected data were also very effective in identifying ineffective measures and efforts that did not produce improvements in clinical processes. The overall process demonstrated that it is possible to establish and maintain a high‐functioning safety culture in radiation oncology. Error reporting compliance, though voluntary, was very high and consistent from the inception of the process through the date of this report. Conclusions: A near‐miss and actual‐error collection process in RT can result in quantifiable safety and error rate improvements and, more importantly, in a sustainable safety culture.
TH‐D‐BRD‐04: Increased Tumor Radioresistance by Imaging Doses From Volumetric Image Guided Radiation Therapy. 36(2009); http://dx.doi.org/10.1118/1.3182658
Purpose: The radiation dose delivered from volumetric imaging guidance is normally between 1 and 10 cGy, depending on the imaging modality, tumor site, and patient thickness. To correctly compensate for such doses, their biological effects on tumor and endothelial cells were investigated. Methods and Materials: A lung carcinoma cell line (H460), a breast cancer cell line (MCF‐7), a prostate cancer cell line (PC3), and a human umbilical vein endothelial cell line (HUVEC) were studied for their responses to small doses of radiation ranging from 5 cGy to 50 cGy. Specifically, an MTT assay was used to quantify their proliferation over time. To measure the impact of image guidance doses on cell survival, 5 cGy or 10 cGy was delivered to H460 cells before a 200 cGy therapeutic dose. Tumor cell survival was measured by clonogenic assay and compared with cells that received 200 cGy only. Results: Accelerated proliferation in response to low‐dose irradiation was observed in all tumor cell lines but not in the HUVECs. The acceleration was statistically significant (p<0.05) for H460 and MCF‐7 cells but not for the PC3 cells. A 12.6% increase in H460 cell survival was observed for cells receiving 5 cGy before 200 cGy radiation compared to those that received 200 cGy alone (p<0.02), while cells receiving 10 cGy prior to 200 cGy had the same survival as cells receiving 200 cGy alone. Conclusions: Small pre‐treatment doses of radiation may increase the radioresistance of tumor cells. In this study, we demonstrated that the imaging dose from image guided radiation therapy is sufficient to trigger accelerated proliferation when delivered alone and increased tumor survival when delivered prior to a therapeutic dose. Our results suggest that subtracting the imaging dose from the therapeutic dose may adversely affect the tumor control probability.
36(2009); http://dx.doi.org/10.1118/1.3182659
Purpose: Phenomenological normal tissue complication probability (NTCP) models use generalized equivalent uniform dose (gEUD) as a summary measure that converts an inhomogeneous dose distribution into an "equivalent" uniform dose. We have investigated adapting gEUD models to brachytherapy dose distributions, which are characterized by small focal hotspots, and have found that small hotspots dominate the behavior of the gEUD measure. As it is unlikely that clinical NTCP values are determined to this extent by small focal hotspots, we have developed an alternative to gEUD‐based dose‐volume histogram reduction for adapting the phenomenological NTCP models to combined external‐beam and brachytherapy treatments. Our revised metric allows for local saturation in biological effectiveness in volume‐limited high‐dose regions. Materials & Methods: Three summary measures were computed on rectal DVHs. The gBEUD is an equivalent of gEUD, but with the physical dose replaced by biologically equivalent dose (BED). The tESD and vESD measures compute an ESD index (the equivalent uniform dose leading to the same cell survival fraction as the inhomogeneous dose distribution) on high‐BED portions of rectal DVHs. The tESD measure uses a common BED threshold for all DVHs, which becomes an adjustable parameter of the measure. The vESD measure uses a common fraction of the rectal volume as an adjustable parameter, and thus computes a BED threshold for each DVH. Properties of the three measures are compared. A database of 9 prostate patients with 7‐field IMRT boost plans and HDR brachytherapy boost plans delivering biologically equivalent D98 doses was used for the study. Results: For 6 of the 9 patients the gBEUD increased by approximately 100% due to doses confined to volumes <1% of the organ volume. The gEUD/gBEUD measures could be replaced by ESD‐based metrics, which explicitly incorporate saturation of the biological effectiveness of high doses.
Acknowledgments: Supported by NIH Grant P01 CA11602.
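The hotspot‐domination behavior described above follows directly from the usual gEUD definition, gEUD = (Σᵢ vᵢ dᵢᵃ)^(1/a), with vᵢ the fractional volume of DVH bin i and dᵢ its dose. This is a hedged sketch using that standard formula with an invented DVH, not the authors' code or data.

```python
import numpy as np

# Standard gEUD definition (a hedged sketch, not the authors' exact code):
# gEUD = (sum_i v_i * d_i**a) ** (1/a); large 'a' makes tiny hot volumes
# dominate, which is the behavior the abstract describes for brachytherapy.
def gEUD(doses, volumes, a):
    v = np.asarray(volumes, dtype=float)
    v = v / v.sum()                       # fractional volumes
    return float((v * np.asarray(doses, float) ** a).sum() ** (1.0 / a))

# Invented rectal DVH: mostly ~40 Gy with a 0.5%-volume focal hotspot.
doses = [40.0] * 199 + [90.0]
volumes = [1.0] * 200

g = gEUD(doses, volumes, a=8)   # the 0.5% hotspot pulls gEUD well above 40 Gy
```

With a = 1 the measure reduces to the mean dose (40.25 Gy here), while a large a lets the half‐percent hotspot dominate; an ESD‐style metric instead caps the biological effectiveness of such volume‐limited high‐dose regions.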
TH‐D‐BRD‐06: Tumor Cell Survival Dependence On the Dose Delivery Modalities and a Statistical Model to Bridge in Vitro Results and the Clinical Outcome. 36(2009); http://dx.doi.org/10.1118/1.3182660
Purpose: To study the influence of state‐of‐the‐art radiation therapy techniques on tumor cell survival and to model tumor cell survival after a 30‐fraction, 60 Gy treatment under the assumption of intrinsic tumor cell heterogeneity. Methods and Materials: Fractions of 2 Gy radiation were delivered to H460, PC3, and MCF‐7 cells by helical TomoTherapy (HT), 7‐field radiation therapy (7F) simulating conventional IMRT delivery, and 2‐minute continuous dose delivery (CDD) simulating recently developed volumetric rotational arc therapy (i.e., Elekta VMAT and Varian RapidArc). Cell survival was determined by clonogenic assay. A statistical model that assumes a normal distribution of the relative cell survival was developed to predict cell survival after 30 fractions of treatment. Results: The H460 and MCF‐7 cells showed strong dependency on the treatment modality. The number of viable H460 cell colonies was 23.2% and 27.7% lower in the group irradiated by CDD compared with HT and 7F, respectively; the corresponding values were 36.8% and 35.3% for MCF‐7 cells. The differences between CDD and HT or 7F were not significant for PC3 cells. Statistical modeling indicates that, instead of the several orders of magnitude of survival difference predicted by simple exponential extrapolation, the reduced tumor cell killing due to spatial and temporal modulation is in the range of 10% to 30% for a wide range of means and standard deviations in the model. Conclusions: Tumor cell survival may be strongly dependent on the dose delivery pattern in a single 2 Gy fraction, but the difference does not expand exponentially with increasing numbers of fractions if tumor cell heterogeneity is considered. The survival difference plateaus in such a way that it may be difficult to observe in patient populations with large heterogeneity. Continuous dose delivery may still improve tumor control relative to intensity modulated radiation therapy.
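The plateau effect can be illustrated with a small Monte Carlo sketch: when radiosensitivity is heterogeneous, resistant subclones dominate survival after many fractions, so a per‐fraction difference between modalities does not compound exponentially. All parameters below (sensitivity distribution, 10% kill difference) are invented; this is not the authors' model.

```python
import numpy as np

# Hedged sketch of the heterogeneity argument (all parameters invented):
# with a truncated-normal distribution of radiosensitivity alpha, the
# resistant tail dominates survival after 30 x 2 Gy, so a 10% harder-
# killing modality does not compound to the naive exponential difference.
rng = np.random.default_rng(2)
alpha = np.clip(rng.normal(0.35, 0.10, 200_000), 0.05, None)  # Gy^-1

def sf_30fx(alpha_scale):
    """Population surviving fraction after 30 x 2 Gy, linear kill model."""
    return float(np.mean(np.exp(-alpha * alpha_scale * 2.0) ** 30))

ratio_hetero = sf_30fx(1.10) / sf_30fx(1.00)   # modality kills 10% harder
ratio_naive = float(np.exp(-0.10 * 0.35 * 2.0 * 30))  # homogeneous case
# ratio_hetero stays much closer to 1 than ratio_naive: the modality
# difference plateaus instead of compounding over 30 fractions.
```

The comparison of ratio_hetero with ratio_naive mirrors the abstract's contrast between the statistical model and simple exponential extrapolation.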
36(2009); http://dx.doi.org/10.1118/1.3182661
Purpose: One of the uncertainties in proton therapy is the depth dependence of the cell survival RBE. Nanodosimetry is an experimental gas‐based technique that can also be simulated with Monte Carlo radiation transport codes. Agreement between experimental and simulated nanodosimetric data has previously been presented. Here we explore the use of simulated nanodosimetric distributions for prediction of RBE in a spread‐out Bragg peak (SOBP) of 200 MeV protons. Method and Materials: A previously published model converting nanodosimetric data to frequency distributions of double strand breaks (DSBs) of different complexity (number of associated breaks) was used as the starting point. A radiobiological model of cell survival using these frequency distributions and assigning different relative lethality to simple and complex DSBs was developed, and model parameters were chosen to reproduce the characteristics of LET‐dependent RBE for V79 cell survival. GEANT4 was used to calculate proton spectra at 8 different depths within a 200 MeV SOBP (3 cm width), serving as input to a dedicated Monte Carlo simulation code for nanodosimetry. Cell survival RBE values as a function of proton dose were determined at each SOBP depth. Results: With an SOBP proton dose of 1.63 Gy (corresponding to a dose of 1.8 GyE for an assumed RBE of 1.1), the predicted RBE for V79 cell survival had an entry value of 1.23, decreasing to a minimum of 1.17 in the SOBP plateau, then increasing to about 1.25 in the proximal SOBP, followed by an avalanche‐type increase to a value of 1.39 at the distal SOBP end. Conclusions: The depth dependence of the V79 survival RBE was predicted using simulated nanodosimetric data within a proton SOBP. We found that the predicted RBE decreased slightly in the SOBP plateau, increased by about 10% in the proximal SOBP, and increased steeply by 20% in the distal SOBP.
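The survival model's structure, lethality as a weighted count of simple and complex DSBs, can be caricatured in a toy form. All yields and lethality weights below are invented, not the published model parameters; the point is only that a higher complex‐DSB yield at the distal edge raises the predicted RBE.

```python
import numpy as np

# Toy version of the idea above: lethal events per Gy are a weighted sum
# of simple and complex DSB yields, with complex DSBs far more lethal.
# All yields and weights are invented, not the authors' fitted parameters.
def survival(dose_gy, dsb_simple_per_gy, dsb_complex_per_gy,
             p_lethal_simple=0.01, p_lethal_complex=0.20):
    lethal = dose_gy * (dsb_simple_per_gy * p_lethal_simple
                        + dsb_complex_per_gy * p_lethal_complex)
    return np.exp(-lethal)

# RBE at equal survival: with this log-linear model it reduces to the
# ratio of lethal-event rates (reference radiation vs. protons).
rate_ref = 25.0 * 0.01 + 1.0 * 0.20      # hypothetical reference spectrum
rate_protons = 25.0 * 0.01 + 2.0 * 0.20  # more complex DSBs at distal edge
rbe = rate_protons / rate_ref
```

Doubling the complex‐DSB yield while leaving simple DSBs unchanged already raises the RBE substantially, echoing the steep distal‐edge increase reported above.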
36(2009); http://dx.doi.org/10.1118/1.3182662
Purpose: The presence of high‐Z implants inside the patient causes streaking artifacts in kVCT (kilovoltage computed tomography) scans. The purpose of this study is to evaluate the variation in the computed dose distribution due to these artifacts in kVCT scans as compared with megavoltage CT (MVCT), and to determine a suitable scanning and planning protocol for such situations.
Methods and Materials: High‐Z materials produce more artifacts in kVCT images than in MVCT images (acquired on a TomoTherapy unit). To quantify the difference in dose distributions due to these artifacts, dose was computed on both datasets for the same plan. Pinnacle 8.1u, which provides the option of using different datasets for the same plan, was used for this study. With the plan parameters kept the same, we derived the variation in dose attributable to the high‐Z artifacts. Two sites with high‐Z implants were selected: a prostate with hip prostheses (HP) and a breast with a silicone breast expander. For example, the average density of the HP object is 3.55 g/cc. kVCT and MVCT scans were obtained for both patients. The dose distributions on both datasets were evaluated using isodose and DVH (dose‐volume histogram) parameters typically used by the physician for plan evaluation.
Results: For the prostate plan, PTV coverage on the kVCT and MVCT datasets was 98.95% and 93.57%, respectively, for a 100% prescription. For the breast plan, the respective values were 97.40% and 94.36%.
Conclusion: A comparison of the PTV coverage calculated on the kVCT and MVCT datasets in the presence of high‐Z materials shows that dose computed from the kVCT dataset underestimates dose by ∼3% in the presence of hip prostheses, and by 4.6% when a silicone breast expander is present. Our preliminary studies show that MVCT provides a relatively realistic dose estimate in the presence of high‐Z materials.
TH‐D‐BRD‐09: Dependence of CHO Cell Survival On IUdR Uptake for 35‐KeV Photoactivated Auger Electron Therapy36(2009); http://dx.doi.org/10.1118/1.3182663
Purpose: To characterize sensitization enhancement ratios (SER) obtainable using monochromatic x‐ray activated Auger electron radiotherapy as a function of radiosensitizer concentration for a 35‐keV x‐ray beam and compare those results to measurements made using conventional 4 MV x‐rays in order to separate effects due to dose enhancement from effects due to other (chemical) mechanisms. Methods and Materials: IUdR was incorporated into CHO cell DNA through incubation in growth media containing 0, 5, 10, or 20 μM IUdR concentrations for 27 hours. Percent thymidine replacement was determined in separate tests using radiolabeled 125I‐IUdR. IUdR‐loaded cells were irradiated to 1–8 Gy with 35 keV x‐rays, generated at LSU's CAMD synchrotron, using a 2.8×2.5‐cm2 effective field size. Dose was determined from ionization chamber‐measured dose rates (∼18 cGy⋅min−1 at 100 mA) and verified with GAFCHROMIC® EBT film. 4 MV irradiations were performed using a Varian Clinac 21EX (30×30‐cm2 field, 0.5‐cm depth). Irradiated cells were incubated for 1 week, then fixed and stained with crystal violet. Colonies of 50 or more cells were scored as survivors. Survival fraction (SF) was plotted versus dose with results fit to a linear quadratic model. SER10 was calculated as the ratio of dose required to achieve 10% SF for cells without and with DNA‐incorporated IUdR. Results: SERs of 2.7 at 16.6±1.9% thymidine replacement (20 μM), 2.3 at 12.0±1.4% (10 μM), and 1.6 at 9.2±1.3% (5 μM) following 4‐MV irradiations illustrate IUdR's effect as a chemical radiosensitizer. Following 35‐keV irradiations, SERs of 4.3, 3.1, and 2.1 at 16.6%, 12.0%, and 9.2% replacement, respectively, indicate dose enhancement due to increased local DNA dose resulting from photoelectric interactions with DNA‐incorporated iodine. Conclusions: SER depends on percent thymidine replacement by IUdR. Compared to 4 MV x‐rays, 35 keV photons produce an additional SER, linear with percent thymidine replacement.
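The SER10 calculation described above reduces to simple algebra on the linear‐quadratic fits: solve αD + βD² = ln(10) for the dose giving 10% survival, then take the ratio of that dose without and with IUdR. A minimal sketch; the α/β values below are invented for illustration, not the study's fits:

```python
import math

def d10(alpha, beta):
    """Dose giving 10% survival under the LQ model: alpha*D + beta*D^2 = ln(10)."""
    return (-alpha + math.sqrt(alpha ** 2 + 4 * beta * math.log(10))) / (2 * beta)

def ser10(alpha_ctrl, beta_ctrl, alpha_iudr, beta_iudr):
    """Sensitization enhancement ratio at 10% survival: D10(control)/D10(IUdR)."""
    return d10(alpha_ctrl, beta_ctrl) / d10(alpha_iudr, beta_iudr)

# Hypothetical LQ fits for control and IUdR-loaded cells (Gy^-1, Gy^-2).
print(round(ser10(0.15, 0.02, 0.45, 0.06), 2))
```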
- Treatment Planning and Stereotactic II
36(2009); http://dx.doi.org/10.1118/1.3182603
Purpose: Magnetically scanned proton beams (PBS) have lateral position, energy, spot size, and flux degrees of freedom that allow for optimal target dose distributions and tissue sparing. PBS requires fast and accurate control of these degrees of freedom during the treatment to deliver the desired dose. We describe how a PBS field is delivered and verified in the technology we developed. Methods and Materials: Treatment plan beam parameters are converted to equipment settings, sorted into energy layers, and sent to various measurement and control devices that check the parameters every 0.25 ms. The on‐line dosimetry system was adapted to support this speed and characterized over a range of beam positions and dose rates. These data were calibrated against measurements at isocenter using a variety of instrumentation techniques. Sources of potential error and noise were identified and reduced to a level which enabled accurate clinical treatment. Results: The scanning system produces pencil beams smaller than 6–8 mm in lateral width (1σ) with focusing magnets and 9–14 mm without. Dose accuracy is ±0.2 cGy in the Bragg peak for a single pencil beam and ±0.75% for our reference irradiation geometry. Measured 3‐dimensional dose distributions satisfy the (3 mm, 3%) gamma index criterion with 97% of points below 1. Conclusion: Our current system has proven to be accurate and safe. We developed the capability for treating large tumors, a purpose not usually considered for proton beam scanning but which we see as an important opportunity. We achieved clinically useful dose distributions with our beam and system parameters (including dose and position accuracy and spot size). Further refinements are planned.
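The (3 mm, 3%) gamma‐index criterion quoted in the Results can be sketched in one dimension: for each reference point, take the minimum combined distance‐to‐agreement / dose‐difference metric over all evaluated points, with global dose normalization. The profiles below are toy data, not the system's measurements:

```python
import math

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose, dta_mm=3.0, dd_pct=3.0):
    """1-D gamma index with global normalization to the reference maximum."""
    d_max = max(ref_dose)
    gammas = []
    for rp, rd in zip(ref_pos, ref_dose):
        # Minimum over all evaluated points of the combined DTA/DD metric.
        g = min(
            math.sqrt(((ep - rp) / dta_mm) ** 2
                      + ((ed - rd) / (dd_pct / 100.0 * d_max)) ** 2)
            for ep, ed in zip(eval_pos, eval_dose)
        )
        gammas.append(g)
    return gammas

# Toy profiles: a uniform 1% dose scaling easily passes (all gamma < 1).
xs = [float(i) for i in range(10)]
ref = [100.0 if 3 <= i <= 6 else 20.0 for i in range(10)]
meas = [v * 1.01 for v in ref]
print(all(v < 1.0 for v in gamma_1d(xs, ref, xs, meas)))  # → True
```

The pass rate ("97% of points below 1") is then simply the fraction of reference points whose gamma is below 1.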
36(2009); http://dx.doi.org/10.1118/1.3182604
Purpose: Accuracy in monitor unit (MU) calculation is critical for patient treatment. In proton beams, dose/MU is measured for each field, which takes a significant amount of beam time, so an accurate, simple, and time‐saving method for MU calculation is desired. Presented is a sector integration approach with a constant correction for dose/MU calculation for any treatment in a uniform scanning proton beam. Materials and Methods: Dose measurements were made in a scanning proton beam with a parallel plate Markus ion chamber in a water phantom. Two methods were adopted to measure the dose/MU values: (i) varying beam energy and SOBP width with a fixed aperture radius (10 cm) at the reference point, and (ii) varying beam energy and aperture radius with a fixed SOBP. These measured values were summarized into two tables, which were used to calculate dose/MU for a given energy, SOBP, and aperture radius by linear interpolation, producing the initial results D(energy, SOBP)10cm and D(energy, radius)SOBP'. The dose/MU value for any field size and shape is then calculated using the sector integration method with a piecewise correction based on these two values. Results: Dose/MU values for a total of 412 beam fields without compensators were measured and calculated using the model above. The model parameters were derived from the measurements in 90 fields with varying shapes. These parameters were applied to compute the dose in the other 322 treatment fields. The difference between calculated and measured values in these 322 fields was −0.12%±1.36% for beam ranges in water <20 cm. The difference was smaller, −0.06%±0.74%, for beam ranges >20 cm. Conclusion: An accurate, simple, and fast method for MU calculation in arbitrary proton treatment fields is introduced. The sector integration method derived from the measured data produced very small errors when tested in 322 additional fields.
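The sector‐integration idea can be sketched as follows: average the aperture‐edge radius over equally spaced sectors to obtain an effective radius, then linearly interpolate a measured dose/MU table at that radius. The table and the elliptical aperture below are hypothetical, and the piecewise correction from the abstract is omitted:

```python
import math

def sector_integrated_radius(boundary, n_sectors=36):
    """Effective aperture radius by Clarkson-style sector integration:
    average the center-to-edge radius over equally spaced sectors.
    `boundary` maps angle (rad) -> edge radius (cm)."""
    total = sum(boundary(2.0 * math.pi * k / n_sectors) for k in range(n_sectors))
    return total / n_sectors

def interp_dose_per_mu(radius, table):
    """Linear interpolation of a measured (radius -> dose/MU) table."""
    pts = sorted(table.items())
    for (r0, d0), (r1, d1) in zip(pts, pts[1:]):
        if r0 <= radius <= r1:
            t = (radius - r0) / (r1 - r0)
            return d0 + t * (d1 - d0)
    raise ValueError("radius outside table range")

# Hypothetical dose/MU table for one energy/SOBP combination (cGy/MU vs cm).
table = {2.0: 0.90, 5.0: 0.95, 10.0: 1.00}

# Elliptical aperture with 6 cm / 4 cm semi-axes as an example irregular field.
ellipse = lambda th: 1.0 / math.sqrt((math.cos(th) / 6.0) ** 2
                                     + (math.sin(th) / 4.0) ** 2)
r_eff = sector_integrated_radius(ellipse)
print(round(interp_dose_per_mu(r_eff, table), 3))
```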
TH‐C‐BRD‐03: Kilovoltage Stereotactic Radiosurgery for Age Related Macula Degeneration: Assessment of Patient Effective Dose and Patient Specific Tissue Doses36(2009); http://dx.doi.org/10.1118/1.3182605
Purpose: Age‐related macular degeneration (AMD) is a leading cause of vision loss in the United States. Radiation therapy was initially explored as a treatment option in the 1990s, but has since been abandoned in favor of intraocular drug injections. Interest continues in stereotactic radiosurgery (SRS), an option that provides a noninvasive treatment for the wet form of AMD. Method and Materials: Two adult heads, male and female, were computationally modeled with Rhinoceros 4.0 using CT slice segmentation scaled to ICRP Publication 89 reference values. The head phantoms were voxelized, and a three‐beam photon treatment (100 kVp) was modeled using the MCNPX radiation transport code to evaluate tissue dose and effective dose. Treatment was also simulated using the reference heads with changeable optic nerve positions based on individual patient variability seen in a head CT scan review. Results: A cumulative dose of 24 Gy to the macula (8 Gy per beam) yielded an effective dose of 0.28 mSv. The maximum doses to the most extreme patient specific optic nerve positions were evaluated using dose volume histograms and found to be below the thresholds for serious complications, as were other reference tissues. Conclusion: The results of this study show that SRS is a safe option considering effective dose and tissue toxicity for a reference individual. Patient specific models will be created from the CT review and treatment will be modeled using MCNPX to establish certainty that this is a safe treatment option for all individuals considering other patient specific variations in anatomy.
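Effective dose is the ICRP tissue‐weighted sum of equivalent doses, E = Σ_T w_T·H_T. A minimal sketch of that bookkeeping, using a hypothetical subset of tissue weights and invented doses (not the study's MCNPX results):

```python
def effective_dose(tissue_doses_msv, weights):
    """E = sum over tissues T of w_T * H_T (equivalent dose, mSv)."""
    return sum(weights[t] * h for t, h in tissue_doses_msv.items())

# Subset of ICRP 103 tissue weighting factors relevant to a head irradiation;
# the equivalent doses are invented for illustration only.
weights = {"brain": 0.01, "salivary_glands": 0.01, "skin": 0.01, "remainder": 0.12}
doses_msv = {"brain": 5.0, "salivary_glands": 2.0, "skin": 8.0, "remainder": 1.0}
print(round(effective_dose(doses_msv, weights), 2))  # → 0.27
```

A full calculation would sum over every ICRP‐listed tissue with Monte Carlo tallied equivalent doses, which is what the MCNPX simulation provides.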
This work was sponsored by Oraya Therapeutics.