Volume 34, Issue 6, June 2007
Index of content:
- Therapy Symposium: Ballroom A
- Hypofractionated RT: Biological Basis and Clinical Application in the Prostate and Lung
MO‐D‐BRA‐01: Hypofractionated RT: Biological Basis and Clinical Application in the Prostate and Lung
34 (2007); http://dx.doi.org/10.1118/1.2761251
Hypo‐fractionation, the delivery of radiation therapy with a dose per fraction >2.0 Gy, was introduced into curative radiotherapy in many centers around the world between WWII and the mid‐1970s, mainly for health‐economic reasons. Clinical studies published in the late 1970s and early 1980s showed that these schedules were often associated with excessive late toxicity compared to standard fractionation schedules, and hypo‐fractionation was abandoned in most centers. In hindsight, this negative experience largely resulted from the over‐estimation of tolerance doses in hypo‐fractionated schedules arising from the Ellis NSD formula. Logically, however, this historical clinical experience does not exclude that hypo‐fractionation can be acceptable, or even advantageous, under certain defined circumstances.
The current status of the linear‐quadratic bio‐effect model in clinical practice is reviewed. Two clinical settings where hypo‐fractionation is considered are presented and discussed: (1) definitive radiotherapy for non‐small cell lung cancer (NSCLC); (2) definitive radiotherapy for prostate cancer.
For both NSCLC and prostate cancer, the current interest in the development and clinical testing of safe hypo‐fractionation regimens springs partly from the improved physical dose distribution achievable with 3D conformal radiotherapy or IMRT. This provides a window of opportunity for escalating dose per fraction. But there is also a biological rationale for hypo‐fractionation in these two tumor types, and a slightly different one in each case. For NSCLC there is strong evidence that shortening the overall treatment time improves the efficacy‐to‐toxicity ratio with respect to late toxicity: hypo‐fractionation is a convenient way of delivering accelerated radiotherapy. In other words, we are trading time for dose per fraction. For prostate cancer, there are no good reasons to believe that there is a strong time factor. However, there is increasingly convincing evidence that the α/β ratio for this tumor type is low, perhaps even lower than for the dose‐limiting rectal side‐effects. This alone creates a case for exploring hypo‐fractionation in this disease.
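The "trading time for dose per fraction" argument can be made concrete with the linear‐quadratic biologically effective dose (BED), including a simple overall‐treatment‐time correction. The sketch below uses illustrative proliferation parameters (kick‐off day `Tk` and a dose‐equivalent loss `k` in Gy/day) and example schedules that are assumptions of this sketch, not values from the talk.

```python
def bed(n, d, alpha_beta, T=None, Tk=28.0, k=0.6):
    """Linear-quadratic biologically effective dose for n fractions of d Gy,
    optionally penalized for overall treatment time T (days).
    Tk (proliferation kick-off day) and k (Gy lost per day) are illustrative."""
    b = n * d * (1.0 + d / alpha_beta)
    if T is not None and T > Tk:
        b -= k * (T - Tk)          # time-factor penalty after kick-off
    return b

# NSCLC-like tumor (alpha/beta = 10): same 60 Gy physical dose, two schedules.
conventional = bed(30, 2.0, 10.0, T=40)   # 2 Gy x 30 over ~6 weeks
accelerated  = bed(20, 3.0, 10.0, T=26)   # 3 Gy x 20, hypofractionated/accelerated
```

With these assumed parameters the shorter hypofractionated schedule avoids most of the proliferation penalty while the larger fraction size adds biological effect, which is exactly the trade described above.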
Hypo‐fractionation schedules are being tested in controlled clinical trials in several tumor types at the moment. These schedules should not be introduced in the clinic without appropriate evidence that they are safe and effective. However, based on our improved knowledge of clinical radiobiology, it appears that hypo‐fractionation schedules may yield a beneficial therapeutic ratio and/or a superior cost‐effectiveness in some clinical indications.
After this session the participants should be able to:
1. Recognize the limitations of the traditional linear‐quadratic model.
2. Summarize the changes in biological understanding of dose fractionation that have occurred over the last 10–15 years.
3. Explain the rationale behind the current interest in hypo‐fractionation in NSCLC and prostate cancer.
34 (2007); http://dx.doi.org/10.1118/1.2761253
Prostate cancer exhibits slow growth, with a potential doubling time ranging from weeks to months (median 42 days). From these data emerged the hypothesis that adenocarcinoma of the prostate may behave more like a late‐reacting tissue. Brenner and colleagues (1) used data from prostate low‐dose‐rate permanent seed implants and external beam radiotherapy series and derived an α/β of approximately 1.5. Many other groups have also calculated the α/β ratio to be in the <3.0 range; yet, hyperfractionation does not seem to compromise outcome after radiotherapy (2). There are many potential pitfalls in these analyses, and some investigators have concluded that the α/β for prostate cancer is closer to that for late effects of the surrounding normal tissues (>3).
Understanding the α/β ratio for prostate cancer is key to designing clinical trials that maximize the efficacy of radiotherapy. If the α/β for prostate cancer is lower than that of the surrounding normal tissues, hypofractionation will afford an advantage, because prostate cancer will be more sensitive to this strategy than the bladder and rectum. Brenner et al (3) have estimated the α/β for the rectum to be over 5.0. Recently, Fiorino and Valdagni (4) have argued that these estimates may be inaccurate because of variation in, and dependence of toxicity on, the proportion of rectum receiving higher radiation doses.
Clinical results in the PSA/IMRT era using hypofractionation show that this strategy is well tolerated by the surrounding normal tissues, with outcomes consistent with a low α/β. Kupelian et al. (5) have treated a large series of men to 70 Gy at 2.5 Gy per fraction with excellent results. As an offshoot of this strategy, RTOG 04‐15 contrasts this hypofractionation regimen with 73.8 Gy in 1.8 Gy fractions.
The Cleveland Clinic data also prompted us at Fox Chase Cancer Center to devise a randomized hypofractionation trial comparing 76 Gy at 2.0 Gy per fraction to 70.2 Gy at 2.7 Gy per fraction. The latter hypofractionation regimen is equivalent to 84.4 Gy at 2.0 Gy per fraction, assuming an α/β of 1.5. A total of 307 patients were entered from 2002 to 2006, and the trial has completed accrual. Acute toxicity in the first 100 men entered shows minor differences between the two treatment groups (6). Analysis of late toxicity during the first year of follow‐up also reveals little difference. The encouraging results thus far with hypofractionation suggest that more significant hypofractionation (e.g., stereotactic radiotherapy) might be cost‐effective and potentially advantageous.
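The quoted equivalence can be checked with the standard equieffective‐dose formula from the linear‐quadratic model, EQD2 = D·(d + α/β)/(2 + α/β). This is a generic sketch of that formula, not the trial's own calculation.

```python
def eqd2(total_dose, dose_per_fraction, alpha_beta):
    """Equieffective total dose delivered in 2-Gy fractions (linear-quadratic model)."""
    return total_dose * (dose_per_fraction + alpha_beta) / (2.0 + alpha_beta)

# Hypofractionated arm: 70.2 Gy at 2.7 Gy per fraction, alpha/beta = 1.5
hypo_eqd2 = eqd2(70.2, 2.7, 1.5)
# Conventional arm: 76 Gy at 2.0 Gy per fraction is, by construction, 76 Gy in EQD2
conv_eqd2 = eqd2(76.0, 2.0, 1.5)
```

The formula gives about 84.2 Gy, consistent with the 84.4 Gy quoted above to within rounding, and confirms that the hypofractionated arm is the dose‐escalated one for a low α/β.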
1. To understand the alpha/beta ratio for prostate cancer.
2. To appreciate the efficacy of hypofractionation and resultant toxicity.
34 (2007); http://dx.doi.org/10.1118/1.2761255
- Robustness of IMRT Treatments
MO‐E‐BRA‐01: IGRT and Treatment Planning: Geometric Uncertainties and Individualized Patient Treatment
34 (2007); http://dx.doi.org/10.1118/1.2761294
Widespread use of precision conformal and intensity‐modulated treatment techniques puts extreme emphasis on targeting. While the cloud of uncertainty associated with defining and localizing targeted tissue has been recognized for quite some time, standards for working with this uncertainty are hardly widespread. When looking at the range of variations in target position across a population, it becomes clear that there is quite a range for any given body site, with some patients exhibiting very small variations, and others at extremes far beyond the population mean. The ability to characterize individual patients early in treatment permits modification of the population assumptions used in planning, providing potential benefit to a subset of patients. Making plans robust to expected variations, especially at the start of treatment, may further aid in this individualization process. One critical tool in these endeavors is the ability to estimate the potential dosimetric consequences of various levels of uncertainty, as opposed to using geometric margins as approximations.
The educational objectives of this talk are:
1. Gain an understanding of the range of geometric uncertainties in a population.
2. Look at various methods of assessing individual variations and their impact.
3. Compare geometric and dosimetric means of assessing the impact of variations.
4. Introduce the topic of robust planning.
34 (2007); http://dx.doi.org/10.1118/1.2761295
Local control by radiation therapy relates directly to the dose delivered to the diseased tissue, but can be limited by the dose tolerated by adjacent structures. This axiom has guided clinical practice into a number of technological shifts over the past several decades. Intensity‐modulated radiotherapy (IMRT) has emerged as an important means to achieve higher doses and to intensify treatment, while simultaneously decreasing the dose in normal tissues. However, the proximity of critical normal organs to disease and geometric uncertainties arising from organ movement continue to present significant challenges in some anatomical sites.
Advances in image‐guided radiation therapy (IGRT) permit more frequent soft‐tissue imaging in the course of treatment delivery, creating opportunities to enhance the accuracy and precision of treatment. A framework for considering image‐guidance strategies and their implications for treatment planning is required. For example, target localization can improve geometric accuracy in the on‐line setting, i.e., during treatment delivery. An off‐line statistical analysis of a patient's images can also enhance precision by supporting the adaptation of margins. Even a complete re‐optimization of the plan is possible, in response to systematic anatomical changes accumulated with the progression of treatment. Frequent re‐optimization is potentially inefficient and expensive, within the constraints of current technologies, clinical practices, and QA standards. Clearly, there are trade‐offs between the level of effort required to exploit IGRT for adaptive re‐planning and the potential benefits of re‐planning. The concept of robust optimization points to the possibility of designing treatment plans that are tolerant to uncertainties in treatment delivery. Robust plans may reduce the need for routine re‐planning in response to variations in patient setup, organ movement, or progressive changes leading to organ deformation.
This presentation reviews clinical experience with IGRT, and explores the implications, opportunities and challenges for treatment planning in the era of IGRT. The central principles of using IGRT in IMRT treatment planning are discussed with respect to requirements for adaptive re‐planning and the design of robust IMRT treatments.
1. To review illustrative clinical applications of IGRT.
2. To describe how the adoption of IGRT can influence external beam treatment planning.
3. To outline how IGRT information is used for re‐planning of IMRT treatments, and in the design of plans that are tolerant to uncertainties in treatment delivery.
34 (2007); http://dx.doi.org/10.1118/1.2761296
For lung tumors, the presence of motion due to breathing is a key source of uncertainty. Motion essentially blurs the static dose distribution, which can be thought of mathematically as a convolution of the static dose distribution with a probability density function (PDF) describing the motion. The 4D IMRT optimization/inverse planning method tries to undo this blurring effect by taking the motion PDF into account during the optimization of the intensity map. Such an approach is effective as a motion compensation technique, but only when the motion is highly reproducible over the entire treatment course. In terms of the PDF, “reproducibility” corresponds to witnessing the same PDF over the course of treatment that was observed in the treatment planning stage. However, if the realized PDF during treatment differs from the planning PDF, the subsequent convolution of the static dose distribution with the realized PDF may produce undesirable hot and cold spots.
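The blurring described above can be sketched numerically as a 1D convolution of a static dose profile with a motion PDF. The grid, the flat target profile, and the Gaussian motion amplitude below are all illustrative assumptions.

```python
import numpy as np

# 1D sketch: static dose profile blurred by a breathing-motion PDF.
x = np.linspace(-50, 50, 501)                        # position (mm), 0.2 mm grid
static_dose = np.where(np.abs(x) <= 20, 1.0, 0.0)    # flat dose over a 40 mm target

sigma = 5.0                                          # assumed motion amplitude (mm)
pdf = np.exp(-x**2 / (2 * sigma**2))
pdf /= pdf.sum()                                     # normalize to unit probability

# Time-averaged (motion-blurred) dose = convolution of static dose with the PDF.
blurred_dose = np.convolve(static_dose, pdf, mode="same")
# Motion washes out the field edges: the dose at the edge drops to ~50%.
```

If the realized PDF during treatment differs from the planning one (e.g. a different sigma), repeating the convolution with the new PDF exposes the hot and cold spots the paragraph warns about.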
Robust optimization is a concept that has gained prominence in the optimization community for its wide applicability to problems with uncertain data. Real‐world problems are rarely, if ever, accompanied by noiseless data; hence, there is a natural motivation to incorporate this uncertainty into any optimization process. The robust framework we present builds on the 4D approach by explicitly accounting for uncertain motion, represented as uncertainty in the motion PDF. Instead of basing the optimization on one PDF, robust optimization uses a family of PDFs to create a static dose distribution that is less sensitive to variations in the motion.
The robust framework allows us to craft solutions in the entire spectrum between the idealized 4D method, and a conservative, ITV‐like margin approach. A given intensity map that results from the robust optimization method will balance intensity‐modulation with intensity‐homogeneity in order to effectively trade off the sparing of healthy tissues with ensuring sufficient tumor coverage. Accordingly, the robust optimization method implicitly performs multi‐objective optimization on these competing objectives.
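A minimal numerical sketch of this idea: a 1D fluence profile is optimized so that the motion‐blurred target dose meets prescription under every PDF in an assumed family (here two Gaussian motion widths), posed as a linear program. The toy geometry, the PDF family, and the total‐fluence objective are illustrative assumptions, not the authors' formulation.

```python
import numpy as np
from scipy.optimize import linprog

def blur_matrix(n, sigma):
    """Row-stochastic matrix modelling motion blur with a Gaussian PDF."""
    idx = np.arange(n)
    a = np.exp(-(idx[:, None] - idx[None, :]) ** 2 / (2.0 * sigma ** 2))
    return a / a.sum(axis=1, keepdims=True)

n = 21                     # toy 1D voxel/beamlet grid
target = slice(7, 14)      # voxels that must receive at least the prescription (1.0)
pdfs = [blur_matrix(n, s) for s in (1.0, 2.0)]   # assumed family of motion PDFs

# Minimize total fluence (a crude proxy for integral dose) subject to target
# coverage under EVERY PDF in the family: A_k @ w >= 1 on the target voxels.
A_ub = np.vstack([-A[target] for A in pdfs])
b_ub = -np.ones(A_ub.shape[0])
res = linprog(c=np.ones(n), A_ub=A_ub, b_ub=b_ub, bounds=(0, None))
fluence = res.x
```

A 4D‐style plan would impose the coverage constraint for only one PDF; enlarging the family pushes the solution toward the conservative, ITV‐like end of the spectrum described above.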
1. Understand the concept of robust optimization.
2. Understand the construction of robust treatment plans based on breathing motion PDFs.
3. Understand the mathematical and dosimetric differences between treatment plans of varying levels of robustness.
4. Understand the multi‐objective viewpoint of robust optimization.
34 (2007); http://dx.doi.org/10.1118/1.2761297
Knowledge of the patient geometry at the exact time of radiation delivery is rarely complete, yet the result of radiotherapy should not depend sensitively on inevitable uncertainties. This quality of robustness can be enforced during dose optimization by a variety of means, which can be classified by the frequency with which patient information is acquired, the time span between acquisition and delivery, and the nature of the image information. For prostate radiotherapy, random target and normal tissue motion poses the greatest challenge, as it requires high‐quality volume imaging and the information content may decay quickly.
Even with today's on‐board imaging systems, a fair amount of uncertainty about the patient geometry remains, which is best described by probability distributions (PDs) of pointwise displacements. Here, the various off‐line and quasi on‐line image‐guided protocols differ mostly in how these PDs are constructed and how frequently they are updated. The PDs can be used to compute the expected values of dose or dose effect at each point of the patient model. This model may either be defined in the treatment room (and dose) coordinate system (TCS) or may be associated with the patient reference geometry and deform along with the anatomy. While the former is the traditional model for dose planning, the latter shifts the focus to the accumulation of dose in the tissue, hence the term tissue‐eye‐view (TEV).
The most basic probabilistic patient model in TCS is the coverage probability model, where each volume element in a rigid reference patient geometry is weighted with the cumulative probability that some volume of interest can be found there. This information quantifies the relevance of a point in the CTV‐to‐PTV margin. Despite its apparent simplicity, it is possible to alleviate the common PTV‐overlaps‐organ paradox to an extent that allows iso‐toxic dose escalation by about 10 per cent. Moving to a probabilistic patient model in TEV abandons the PTV concept altogether, at the price of more image information and the need for deformable registration. The potential for iso‐toxic dose escalation lies at more than 20 per cent.
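The coverage probability idea can be illustrated in one dimension: sample rigid displacements from an assumed setup‐error distribution and, for each voxel, estimate the probability that the CTV covers it. The CTV size, error magnitude, and sample count below are illustrative assumptions.

```python
import numpy as np

# Coverage probability of a 1D CTV under random rigid shifts (sketch).
# CTV spans [-10, 10] mm; per-fraction setup error ~ N(0, 3 mm), assumed.
rng = np.random.default_rng(0)
x = np.arange(-20, 21)                  # voxel centers (mm)
shifts = rng.normal(0.0, 3.0, 10_000)   # sampled displacements of the CTV

# For each voxel, the fraction of sampled fractions in which the shifted CTV
# (half-width 10 mm) covers that voxel:
coverage = np.mean(np.abs(x[None, :] - shifts[:, None]) <= 10.0, axis=0)
```

Voxels in the CTV core come out with coverage near 1, while voxels in the CTV‐to‐PTV margin region carry fractional weights, which is exactly the relevance weighting of margin points described above.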
Both optimization concepts rely on an a‐priori estimate of the pointwise displacement probabilities. A bias or time trend in these PDs would be potentially fatal. Hence, it is essential to update the PDs during the course of treatment to minimize the “uncertainty in the estimates of uncertainties”. In consequence, robust off‐line adaptive protocols require some extent of monitoring, while on‐line protocols require basically off‐line probabilistic models predicated (in a Bayesian sense) on the geometry of the day. Apart from insufficiency in the input image data, another risk arises from the high specificity with which individual sources of error influence the optimized dose distribution: a large margin could compensate for many uncertainties, while margin‐less optimization schemes need to quantify all of them. This limits the theoretical benefit of the most sophisticated models (daily on‐line imaging, Bayesian TEV) significantly. The specific cost‐benefit ratio of the various protocols remains to be evaluated in practice, in larger populations of patients.
- The Good and the Not‐So‐Good of Proton Therapy?
34 (2007); http://dx.doi.org/10.1118/1.2761360
Proton therapy is making the move out of the research laboratory and into the clinic. New hospital‐based facilities in the US, Asia and Europe testify to the growing interest in this treatment modality. Protons have the advantageous characteristic that the energy from a mono‐energetic proton beam is deposited in a small region known as the Bragg peak, beyond which the deposited dose is almost, but not quite, zero. Numerous comparative treatment planning studies have shown the theoretical advantage for protons in a number of indications, and the existing and new facilities are working towards translating this theoretical advantage into a real clinical advantage.
In order to make the essentially mono‐energetic, and narrow, pencil beams that are emitted from proton accelerators useful for therapy, the method most widely used is the so‐called passive scattering technique. In this, the narrow beam is widened through the use of scattering elements, whilst the narrow Bragg peak is extended in depth through the application of a series of depth shifted and modulated Bragg peaks in order to form a so‐called ‘Spread‐Out‐Bragg Peak’ (SOBP). The final form of the delivered field is defined by the use of field specific collimators and compensators, the latter of which match the distal end of the field to the distal extent of the target volume.
Coupled with the development of the new proton facilities is a growing interest in more sophisticated delivery techniques. One such technique is active scanning, in which narrow, mono‐energetic pencil beams are magnetically scanned throughout the target volume under computer control. This approach has a number of potential advantages over the passive approach. It is very flexible, makes more efficient use of the available protons, is more conformal than passive scattering, results in lower secondary irradiation of the patient (i.e. neutron background) and, last but certainly not least, allows for the delivery of Intensity Modulated Proton Therapy (IMPT), the proton equivalent of IMRT. Currently only one centre (PSI in Switzerland) is clinically using the scanning approach and IMPT, but all new facilities are currently planning to implement scanning technology in the near future.
However, the advantages of protons don't come for free. There are a number of challenges (and perhaps even worries) about their introduction, resulting from the very characteristics of protons that bring their main advantages. It is the role of this symposium to present both the good and the perhaps not‐so‐good of protons. The advantages have been outlined above, but we will also take a closer look at a number of issues that can make effective proton therapy quite challenging. These include the effects of density heterogeneities and range uncertainties, the effects of organ motion, and the issue of secondary neutron doses resulting from interactions of protons with atomic nuclei.
1. Understand the basic principles of proton therapy.
2. Understand the main advantages, both physical and clinical, of proton therapy.
3. Understand the main challenges of proton therapy.
34 (2007); http://dx.doi.org/10.1118/1.2761361
An important characteristic of protons is that they are expected to stop at a well‐defined depth and, being heavy, deviate minimally from a straight path. Proton dose distributions computed with the aid of commonly used treatment planning systems, especially for intensity and energy‐modulated proton therapy (IMPT), exhibit exquisite target dose conformality and dose homogeneity and normal tissue sparing. However, the dose distribution actually received by the patient may be significantly different from what is seen on the original treatment plan. This is, in part, due to various sources of uncertainties and approximations. Examples of these sources include: daily treatment setup; inter‐fractional anatomical variations; intra‐fractional internal movements of organs; dose calculation approximations, especially in the presence of complex tissue heterogeneities; and CT number variability and the uncertainty in their conversion to stopping powers. These uncertainties lead to a lack of full confidence in computed dose distributions. Furthermore, proximal, distal and lateral margins are larger than what would ultimately be achievable. Often, desirable beam directions towards sensitive organs just beyond the intended range of protons are avoided as is the use of protons in the thorax and abdomen. Therefore, the optimum dose distributions possible with protons are not often achieved. Another consequence of limited accuracy is that the response of tumors and normal tissues cannot be reliably correlated with dose distributions. Most of the same uncertainties are also present in photon dose distributions; however, their impact is greater on proton dose distributions due to greater sensitivity of protons to perturbations caused by these uncertainties. Improvement in accuracy of both computed and delivered dose distributions, necessary to exploit the full potential of proton therapy, can be achieved through a variety of means. 
Examples include (a) mitigation of the impact of intra‐fraction motion through respiratory gating; (b) computation of composite (“4D”) dose distributions taking respiratory motion into consideration; (c) computation of cumulative dose distributions using repeat 3D and 4D CT to take inter‐fractional changes into consideration; (d) reduction in CT number uncertainty and in image artifacts caused by high Z materials through improved CT calibration, novel imaging techniques and reconstruction methods. In addition, if the repeat CT during the proton therapy course reveals that the inter‐fractional anatomic changes are significant, adaptive replanning of treatments at appropriate times may be performed to assure that the original intent of the treatment plan is met or exceeded. Clinical examples will be presented to illustrate the impact of selected uncertainties in the current state of the art and the gains to be made with the improvement in accuracy in each of the areas identified above.
1. Understand the sources of uncertainties in proton therapy.
2. Comprehend the extent of the impact of these uncertainties on the differences between computed and delivered dose distributions.
3. Become aware of the strategies to improve accuracy and learn about the resulting dosimetric and potential clinical gains.
Research sponsored in part by Varian.
34 (2007); http://dx.doi.org/10.1118/1.2761362
The principal feature and physical advantage of proton radiation therapy is the finite range of protons in the patient. In a pristine proton pencil beam (ignoring effects of finite energy spectrum and finite source size), the distal dose gradient is, for all relevant energies, approximately twice as steep as the lateral gradient. What is not so good is that the localization of the distal dose gradient in the patient can be quite uncertain. The advantage of the distal dose gradient is therefore not currently used in clinical practice for tight conformation. Rather, conformation through lateral dose shaping is preferred. Uncertainties arise from several sources: dose calculation approximations, biological considerations, setup and anatomical variations, and internal movements of low and high density organs into the beam path. Organ motion also has a major impact on the range, which is managed by adding a distal safety margin. These margins reduce the benefit of proton therapy in treatment sites where the physical properties of protons could make a significant difference, such as lung cancer. Altogether, the physical advantage of proton therapy is not fully translated into a maximized dosimetric benefit in the patient. Furthermore, tangential avoidance of critical structures and use of patch fields, as currently practiced, increases the complexity of treatment and the number of beams.
To fully utilize the finite proton range for clinical treatments, developments in three areas are necessary:
1. Management and reduction of organ motion. Organ motion can have a more severe effect on proton dose distributions than on photon dose distributions. This is because, to first order, photon therapy produces a static “dose cloud”, and organs move within this fixed dose cloud. This assumption is not valid in proton therapy.
2. Improved dose calculation. Proton dose distributions and the end of the proton range are strongly affected by, for example, metal implants and their resulting CT artifacts. Hence, a careful CT‐to‐stopping‐power conversion and correction of artifacts are required. Monte Carlo calculations can improve the dose calculation accuracy near the end of range and model the range degradation effect more accurately. Some form of in vivo dosimetry is also highly useful in proton therapy. One option is PET imaging of the positron emitters that are produced through nuclear interactions of the proton beam in the patient.
3. Reduction of the impact of residual uncertainties through robust treatment planning and intensity modulated proton therapy. Through a careful design of intensity modulated proton therapy plans the dosimetric effect of range uncertainties can be reduced.
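The CT‐to‐stopping‐power conversion from item 2 is typically implemented as a piecewise‐linear calibration curve; the water‐equivalent path length (WEPL) along a ray then follows by summing relative stopping power (RSP) times step length. The calibration points and ray samples below are illustrative placeholders, not a clinical curve (clinical curves are measured per scanner via stoichiometric calibration).

```python
import numpy as np

# Piecewise-linear CT-number to relative-stopping-power conversion (sketch).
hu_points  = np.array([-1000, -200,   0,  200, 1000, 3000])   # assumed HU nodes
rsp_points = np.array([ 0.00, 0.80, 1.00, 1.10, 1.50, 2.50])  # assumed RSP values

def hu_to_rsp(hu):
    """Linearly interpolate RSP between assumed calibration points."""
    return np.interp(hu, hu_points, rsp_points)

# Water-equivalent path length along a ray: sum of RSP x step length.
hu_along_ray = np.array([-1000, -50, 0, 40, 60, 900])  # toy voxel samples
step_mm = 2.0
wepl = float(np.sum(hu_to_rsp(hu_along_ray)) * step_mm)
```

A shift of only a few percent in the RSP values propagates directly into the predicted proton range, which is why CT number variability is listed among the dominant uncertainties.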
1. Estimate the magnitude of range uncertainties in various sites such as lung and prostate.
2. Be able to explain the “static dose cloud” assumption and why it breaks down in proton therapy.
3. Name at least three methods to reduce range uncertainties.
4. Explain the concept of robust proton treatment planning and tangential avoidance.
34 (2007); http://dx.doi.org/10.1118/1.2761363
Advanced treatment techniques (e.g., IMRT, protons) are able to deliver more conformal dose distributions. However, they may also carry an increased risk of second malignancies due to increased scattered radiation.
Protons deposit secondary dose outside the treatment volume mainly via neutrons (generated either in the patient or in the treatment head). In particular for passive scattered proton beams, a general statement about this dose cannot be made, because the yield and energy of these neutrons depend on several factors, e.g. the characteristics of the beam entering the treatment head, the material in the double scattering system and the modulator wheel, and the field size incident on the patient‐specific collimator. The latter alone can cause neutron dose variations of up to two orders of magnitude (the dose delivered by treatment‐head‐generated neutrons decreases with increasing collimator opening). Several experimental and simulated data sets from passive scattered proton beams show that the biologically effective neutron dose (weighted by a quality factor) can be higher or lower than scattered doses in photon therapy; depending on the beam line design and the field size used for a specific treatment, it might be significantly higher in some rare cases. Because the neutron dose is dominated by the contribution from the treatment head, proton beam scanning produces a much lower neutron background than passive scattering. Presumably, it delivers the lowest scattered dose of all treatment modalities.
The likelihood of developing secondary cancer depends on both the scattered dose to the whole body and the high‐dose volume. Depending upon the dose response relationship, a main concern may not be the dose far away from the field (e.g. from neutrons), but the dose delivered to, or directly adjacent to, the target. The integral dose with any type of photon beams is higher than with proton beams.
1. To understand the determinants of neutron radiation in proton beam therapy.
2. To understand the risks associated with neutron doses.
- Novel Particle Acceleration Techniques
34 (2007); http://dx.doi.org/10.1118/1.2761398
This presentation will open the symposium with a brief description of recent developments in particle acceleration techniques, with a focus on the acceleration of protons and light ions and their impact on radiation therapy of cancer. Proton/ion therapy has great potential for improving local control and normal tissue sparing because of its superior dose distributions. However, the large cost of a proton/ion facility based on conventional accelerator technology has prevented its widespread use. Significant efforts have been made in recent years to develop compact particle accelerators in order to make proton or ion therapy a commonly available treatment modality. Compact particle accelerators based on dielectric wall accelerator, laser‐particle acceleration, and superconductor techniques will be discussed. The educational objectives of this presentation are (1) to describe the physical properties of proton and ion beams and their therapeutic advantages, (2) to analyze the cost‐effectiveness of conventional proton/ion therapy versus intensity‐modulated x‐ray therapy, and (3) to introduce recent innovations in particle acceleration and their potential for radiation oncology.
34 (2007); http://dx.doi.org/10.1118/1.2761399
Proton beam therapy of cancer has been practiced with good therapeutic results. However, it remains a costly treatment, primarily because of the huge accelerator facility needed to produce the proton beams. The introduction of laser acceleration promises to greatly reduce the size, and possibly the cost, of a medically usable therapy system. Not only the compact acceleration section, but also the reduced radiation shielding and beam handling section (a portion similar to the gantry), contribute to the compactness of the overall therapy machine. A series of innovations, such as adiabatic acceleration, the double‐layer target, and the optimized target thickness, combine to provide a new paradigm of laser‐driven compact therapy. We envision that verification of dosage via PET imaging of beam‐induced activation, combined with pencil‐beam scanning, amounts to a new feedback therapy of cancer.
34 (2007); http://dx.doi.org/10.1118/1.2761400
Laser plasma accelerators provide electron beams with parameters of interest in many fields, and in particular for radiotherapy. A short review of recently achieved progress, including the bubble and colliding‐pulse schemes, will be presented. Using the latest improvements of laser‐plasma accelerators, we performed dose deposition simulations using a quasi‐monoenergetic electron beam in the 200 MeV range. Electron beams of this energy offer advantageous dosimetric characteristics compared to those calculated with high‐energy photons. The depth‐dose curve shows a broad maximum at large depths (>20 cm), and the lateral penumbra of treatment fields for focused electron beams is smaller than that of 6 MV photons at depths smaller than 10 cm. These advantages result in an improvement in the quality of a clinically approved prostate treatment plan. While the target coverage is the same or even slightly better for 250 MeV electrons compared to photons, the dose sparing of sensitive structures is improved: for example, the dose to the rectum is reduced by 19% for 250 MeV focused electrons. These findings agree with previous results regarding very high energy electrons as a treatment modality [4, 5, 6].
The lack of compact and cost‐efficient electron accelerators could be overcome by laser‐plasma systems.
1. Understand the origin electron injection in plasma.
2. Understand the acceleration process and the motivation for this approach, which uses extremely high electric fields.
3. Understand the issues related to clinical application for radiotherapy.
34(2007); http://dx.doi.org/10.1118/1.2761401
Recent developments in the field of laser engineering, specifically the invention of the chirped pulse amplification (CPA) technique, have made it possible to achieve laser light intensities in the 10²² W/cm² range. In this review, we will show that such laser intensities are sufficient to accelerate protons to therapeutic energy ranges, provided that proper laser‐target parameters are chosen. In the majority of recent laser‐matter interaction experiments, the proton energy spectra coming out of the interaction chamber are thermal, making it impossible to use these protons in hadron therapy. This necessitates the development of a particle selection device that would deliver quasi‐monoenergetic particles suitable for medical applications. We will discuss an earlier proposed particle selection system and show the dosimetric characteristics of protons coming out of this device. Using real patient data and the physical characteristics (phase‐space distributions) of the selected particles, we will discuss inter‐comparison studies between photon intensity‐modulated plans on one hand and intensity‐modulated plans based on laser‐accelerated protons on the other. In concluding remarks, we will describe the current challenges facing the project and ways to resolve them.
34(2007); http://dx.doi.org/10.1118/1.2761402
Purpose: Proton beam radiation therapy has been clinically investigated for over 40 years. Despite obvious physical dose‐deposition advantages and compelling clinical results, the considerable financial cost of existing accelerator designs has hindered widespread use of this evidently superior treatment modality. Within the last few years, however, materials have been developed that enable high concentrations of electromagnetic energy to be harnessed. These materials have opened the way for reducing the size and cost of accelerators for proton beam radiation therapy. Method and Materials: Two such materials, high current density superconducting wires and high field gradient dielectric elements, have led to the respective developments of compact superconducting cyclotrons and dielectric wall accelerators. Existing analytical tools for simulating the performance of circular and linear accelerators have been applied to guide the development of these designs in the new energy‐density regimes. These tools are essential for predicting the beam dynamics of accelerated protons in compact devices with high electromagnetic field gradients. Results: Each of these accelerators has been incorporated into single‐room proton therapy treatment system designs with a size and cost that are, or are projected to be, significantly below those of existing alternatives. Prototypes of these systems are now under construction and elemental prototype evaluation. Superconducting current density performance specifications have been met or exceeded in the development of the compact superconducting cyclotron. A working cyclotron has been prototyped and shown to accelerate an intense proton beam of more than 100 nA over the first stages of the cyclotron acceleration cycle. Likewise, critical elements of the dielectric wall accelerator have met specifications for the final accelerator configuration, showing standoff field gradients in excess of 100 MV/m.
Conclusion: At least one of these new systems is expected to be completed and in use for patient treatment before the end of 2008. With the significant reduction in complexity and cost, the successful demonstration of these systems will likely lead to more widespread adoption of proton beam radiation therapy. Conflict of Interest: Kenneth Gall is a founder and Chief Technology Officer of Still River Systems Incorporated, a company involved in the design, production and clinical implementation of proton beam radiation therapy systems.
- Radiobiological Models and Treatment Planning
34(2007); http://dx.doi.org/10.1118/1.2761532
Advances in technology and in computing have given us computer‐controlled linear accelerators equipped with multileaf collimators, and wonderful 3D graphics workstations on which to perform treatment planning; additionally we have conceptual advances such as stereotaxy (cranial and extra‐cranial), intensity modulation (IMRT), helical tomotherapy and protons. But the bottom line in radiotherapy is radiobiology, radiobiology, radiobiology. If we don't know how to convert ‘physics’, i.e. dose distributions, into estimates of clinical outcome, then these wonderful technological advances will remain ‘toys’ for physicists to play with.
Radiobiology has traditionally concerned itself with determining surviving fraction vs. (uniform) dose curves for human tumour cell lines. However, in the 3D era we need models which connect dose distributions (and fractionation regimens) in tumours and normal tissues (generally in the form of dose‐volume histograms) with the probabilities of tumour (local) control — TCP — and of complications — NTCP. Such models now exist and their active use in treatment planning ushers in the era of Conformal Radiobiology.
1. Appreciate the limitations of technology‐driven, dose‐based radiotherapy.
2. Appreciate the limitations of ‘classical’ radiobiology in the conformal era.
3. Understand what is meant by ‘Conformal Radiobiology’.
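As a concrete illustration of the kind of model that connects a dose distribution to an outcome probability, the sketch below computes a Poisson TCP from a differential tumour DVH. This is a minimal sketch, not the speaker's actual model: the linear‐quadratic beta term and inter‐patient heterogeneity are omitted, and the radiosensitivity and clonogen density values are illustrative assumptions.

```python
import math

def tcp_poisson(dvh, alpha=0.26, clonogen_density=1e7):
    """Poisson TCP from a differential DVH given as (dose_Gy, volume_cc)
    bins. Each bin contributes surviving clonogens independently, with
    per-cell survival exp(-alpha*D) (beta term omitted for brevity);
    TCP is the Poisson probability of zero survivors.
    alpha (Gy^-1) and clonogen_density (cells/cc) are illustrative."""
    expected_survivors = sum(
        clonogen_density * v * math.exp(-alpha * d) for d, v in dvh
    )
    return math.exp(-expected_survivors)
```

With these assumed parameters, a uniform 80 Gy to a 10 cc tumour yields a high control probability, while 2 Gy yields essentially zero, reproducing the familiar sigmoid dose response when swept over dose.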
34(2007); http://dx.doi.org/10.1118/1.2761533
Delivery of adequate tumor dose without causing excessive normal tissue complications is the driving principle of modern radiation therapy. Resulting dose distributions in normal tissues are very different from the “partial organ irradiation” distributions that characterize the simple beam arrangements of earlier days. Further, the growing popularity of hypofractionation drastically widens the range of biologically effective doses within organs at risk (OAR). The clinical physicist is faced with uncertainty as to what aspects of an OAR dose distribution require special consideration in treatment plan design and evaluation.
Normal tissue complication probability (NTCP) models are one way to account for the full dose distribution. For most serious toxicities, statistical models from various outcomes analyses help direct the planner toward dose‐volume limits. There are also several semi‐mechanistic models, each with a set of parameters that must be adjusted to describe existing data. But for conditions that differ greatly from those under which they were ‘commissioned’, two models for the same endpoint and starting from the same input dose distribution do not necessarily predict the same NTCP. A working group, with joint participation from AAPM, ASTRO and RTOG, is being established to, among other things, reconcile model differences and provide clinical guidelines in the near future.
In this presentation, common NTCP models for several major dose‐limiting toxicities will be described, including parameter sets gleaned from a literature review. Problems and pitfalls of integrating and interpreting diverse studies and applying them to an individual clinic's practice will be discussed, and real‐world examples of application to clinical decisions will be presented.
1. Understand the important features of the most widely‐used NTCP models.
2. Understand how the model parameters affect predictions of normal tissue dose‐volume responses.
3. Understand some of the complexities of implementing these models into routine clinical practice.
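One widely used member of the NTCP family discussed above is the Lyman (probit) model driven by a generalized EUD. The sketch below is a hedged illustration, not a clinical recommendation: the volume parameter n, TD50, and slope m shown in the usage line are placeholder values, and real endpoint‐specific parameters must be taken from the literature.

```python
import math

def geud(dvh, n):
    """Generalized EUD from a differential DVH [(dose_Gy, frac_volume)].
    Uses exponent a = 1/n, where n is the Lyman volume-effect parameter
    (n near 1: parallel-like organ; n near 0: serial-like organ)."""
    a = 1.0 / n
    return sum(v * d ** a for d, v in dvh) ** (1.0 / a)

def lkb_ntcp(eud, td50, m):
    """Lyman probit NTCP: t = (EUD - TD50) / (m * TD50), then the
    standard normal cumulative distribution evaluated at t."""
    t = (eud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))
```

A typical call might look like `lkb_ntcp(geud(dvh, n=0.7), td50=40.0, m=0.14)`, where the DVH is a list of (dose in Gy, fractional volume) bins; by construction, a uniform‐dose DVH returns that dose as its EUD, and NTCP is exactly 0.5 at EUD = TD50.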
34(2007); http://dx.doi.org/10.1118/1.2761534
The Tumor Control Probability function (TCP, Webb & Nahum) models radiation‐induced cell kill and uses Poisson statistics to estimate the probability of local control. Its parameters may be derived by correlating archived plan data with the treatment outcome results of clinical trials. We will exploit the data of a large randomized prostate trial (68 Gy versus 78 Gy, 600+ patients) of patients treated between 1999 and 2003. For these patients the planning CT scan, the organ delineations, and the 3D dose distribution as generated by the treatment planning system are electronically available.
However, we have no patient specific information on the location of tumor tissue inside the prostate (as could nowadays be imaged using MRI). Furthermore, the dose absorbed by the clonogen cells will have been influenced by errors in daily set‐up and by organ motion. Although portal images were acquired for an off‐line bony set‐up protocol, no in‐room soft tissue imaging was available to monitor organ motion. An additional uncertainty is introduced by the fact that the primary method of clinical follow‐up is based on blood PSA levels, which means a detected failure may not be local.
To describe the interplay between the location of clonogen cells and the varying day‐to‐day position of the prostate, we use Monte Carlo treatment plan evaluation software that was developed in‐house. This software samples population distributions of random and systematic errors to simulate many possible treatment histories. Maximum likelihood methods are then applied to determine the most probable TCP model parameters α and σα. Inspired by surveys of pathological specimens, the assumed density distribution of clonogen cells inside the prostate is modulated, and a body of clonogens located posteriorly outside the CTV is introduced to model extracapsular extension. The effects of such modulations on the TCP parameters and on the likelihood of the fit are studied.
In future trials, additional imaging will increase the amount of patient specific data on geometric variations and cell distributions, leading to a more accurate TCP model.
The TCP parameters thus acquired may be used for treatment planning purposes. If MRI is available to provide knowledge about the location of tumor tissue inside the gland, treatment planning may be performed by optimizing the TCP function using a heterogeneous clonogen cell distribution. By using probability‐based optimization techniques (in which the effect of geometric errors on tumor cell kill is modeled in the same way as in the TCP fitting procedure above), no PTVs need to be defined, and the optimization procedure can directly aim for the largest expected TCP (for a given expected rectum NTCP).
1. Identify sources of uncertainty when basing a TCP model on clinical data.
2. Understand a method to determine TCP parameters in the face of such uncertainties.
3. Understand how these parameters may be used for treatment planning.
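The maximum‐likelihood step described above can be illustrated in miniature. The sketch below fits a single TCP parameter by grid search over a Bernoulli likelihood of (dose, controlled) outcome pairs. The dose‐response form, the clonogen number, and the grid range are all illustrative assumptions; the real analysis additionally marginalizes over simulated geometric errors and fits a population spread σα.

```python
import math

def tcp(dose_gy, alpha, n0=1e6):
    # Poisson TCP with a simple exponential cell-survival term;
    # n0 (initial clonogen number) is an illustrative assumption.
    return math.exp(-n0 * math.exp(-alpha * dose_gy))

def log_likelihood(alpha, outcomes):
    # Bernoulli log-likelihood of observed (dose, controlled) pairs;
    # probabilities are clipped away from 0 and 1 for numerical safety.
    ll = 0.0
    for dose, controlled in outcomes:
        p = min(max(tcp(dose, alpha), 1e-12), 1.0 - 1e-12)
        ll += math.log(p) if controlled else math.log(1.0 - p)
    return ll

def fit_alpha(outcomes):
    # Grid search for the most likely alpha (Gy^-1); a real fit would
    # use a continuous optimizer and also estimate sigma_alpha.
    grid = [0.05 + 0.005 * i for i in range(80)]
    return max(grid, key=lambda a: log_likelihood(a, outcomes))
```

For example, a cohort in which 9 of 10 patients are controlled at 80 Gy pulls the fitted alpha toward the value at which the model predicts TCP ≈ 0.9 at that dose.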
34(2007); http://dx.doi.org/10.1118/1.2761535
Phase I clinical trials seek to determine the maximum tolerated dose (MTD) of the investigational treatment. Similarly, traditional radiation oncology dose escalation trials assign groups of patients to increasing dose levels until an unacceptable level of complications appears. This generally transpires on a sequential basis, regardless of tumor size or the distribution pattern of radiation dose to surrounding normal tissues (beyond the specification of a few well‐accepted dose constraints such as maximum spinal cord dose). This can be a poor strategy for treatments limited primarily by complications to so‐called volume‐effect normal tissues which encompass the tumors, as may be the case for tumors located in the liver or lung. A better scheme for Phase I/II dose escalation trials limited by these volume‐effect organs would attempt to treat sequential groups of patients with dose “distributions” expected to lead to similar anticipated levels of complications (but, of course, with different tumor doses), with sequential escalation of each potential iso‐complication level until an MTD profile is realized (which would inherently include the volume effect). The use of normal tissue complication probability (NTCP) models prospectively, in the treatment planning process, facilitates this type of normal‐tissue iso‐complication based dose escalation.
Given the desire for iso‐NTCP based dose escalation, clinical trials were developed and carried out at the University of Michigan for tumors located in the liver and lungs. In the 3‐D conformal therapy era, these trials took place via recognition of one particular aspect of the effective volume (Veff) dose volume histogram (DVH) reduction scheme (due to Kutcher and Burman) often employed in order to use the Lyman NTCP model for non‐uniformly irradiated organs: computation of a normal tissue Veff for a particular dose distribution does not depend on the units of dose in the treatment plan (e.g., Gy, cGy/hr, or of greatest interest here, % dose). Given this, we recognized that treatment planning could proceed in the normal manner of that time (dose distributions generated in relative dose (%) with respect to an ICRU reference point prescription dose, most often the isocenter), while at the same time attempts could be made to minimize the Veff of the dose‐limiting normal tissue, with the ultimate physical isocenter normalization dose (Dnorm in Gy) prescribed after planning. That is, each Veff has a corresponding Dnorm leading to a fixed iso‐NTCP level. Thus, reductions in Veff generated during treatment planning led to individualized increases in prescription isocenter dose after planning (a perceived benefit/goal for the treatment planner), all at a fixed perceived level of NTCP. In the IMRT/optimization era, biological cost functions have been developed to accomplish these same goals.
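The units‐independence of Veff noted above is easy to verify numerically. The sketch below is a hedged illustration of the Kutcher‐Burman effective‐volume reduction (using an illustrative volume parameter n, with the maximum DVH dose as the reference dose), not the Michigan trial software.

```python
def effective_volume(dvh, n):
    """Kutcher-Burman effective volume of a differential DVH given as
    (dose, fractional_volume) bins, referenced to the maximum dose:
    Veff = sum_i v_i * (D_i / D_ref)^(1/n).
    Because only dose ratios enter, Veff is unchanged whether the plan
    is expressed in Gy or in % of the prescription dose."""
    d_ref = max(d for d, _ in dvh)
    return sum(v * (d / d_ref) ** (1.0 / n) for d, v in dvh)
```

Scaling every dose bin by a constant (for instance, converting Gy to % dose) leaves the result unchanged, which is exactly the property that allowed relative‐dose planning with post‐hoc prescription of Dnorm; a uniformly irradiated organ returns Veff = 1.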
This talk will summarize experiences in iso‐NTCP dose escalation and planning at the University of Michigan for tumors in the liver and lungs, including current, ongoing, functional‐imaging‐based adaptive treatment trials. Work supported in large part by NIH grant P01‐CA59827.
Understand the basis and ongoing use of iso‐NTCP based dose escalation at the University of Michigan.