Index of content:
Volume 34, Issue 6, June 2007
- Therapy Symposium: Ballroom A
- Hypofractionated RT: Biological Basis and Clinical Application in the Prostate and Lung
MO‐D‐BRA‐01: Hypofractionated RT: Biological Basis and Clinical Application in the Prostate and Lung. 34 (2007); http://dx.doi.org/10.1118/1.2761251
Hypo‐fractionation, the delivery of radiation therapy with a dose per fraction >2.0 Gy, was introduced in curative radiotherapy in many centers around the world in the period from World War II to the mid‐1970s, mainly for health‐economic reasons. Clinical studies published in the late 1970s and early 1980s showed that these schedules were often associated with excessive late toxicity compared with standard fractionation schedules, and hypo‐fractionation was abandoned in most centers. In hindsight, this negative experience largely resulted from the over‐estimation of tolerance doses in hypo‐fractionated schedules arising from the Ellis NSD formula. Logically, this historical clinical experience does not exclude that hypo‐fractionation can be acceptable, or even advantageous, under certain defined circumstances.
The current status of the linear‐quadratic bio‐effect model in clinical practice is reviewed. Two clinical settings where hypo‐fractionation is considered are presented and discussed: (1) definitive radiotherapy for non‐small cell lung cancer (NSCLC); (2) definitive radiotherapy for prostate cancer.
For both NSCLC and prostate cancer the current interest in the development and clinical testing of safe hypo‐fractionation regimens springs partly from the improved physical dose distribution achievable with 3D conformal radiotherapy or IMRT. This provides a window of opportunity for escalating dose per fraction. But there is also a biological rationale for hypo‐fractionation in these two tumor types — and a slightly different one in the two cases! For NSCLC there is strong evidence that shortening the overall treatment time creates a favorable efficacy:toxicity ratio with respect to late toxicity: hypo‐fractionation is a convenient way of delivering accelerated radiotherapy. In other words, we are trading in time for dose per fraction. For prostate cancer, there are no good reasons to believe that there is a strong time factor. However, there is increasingly convincing evidence that the α/β ratio for this tumor type is low, perhaps even lower than for the dose‐limiting rectal side‐effects. This alone creates a case for exploring hypo‐fractionation in this disease.
Hypo‐fractionation schedules are being tested in controlled clinical trials in several tumor types at the moment. These schedules should not be introduced in the clinic without appropriate evidence that they are safe and effective. However, based on our improved knowledge of clinical radiobiology, it appears that hypo‐fractionation schedules may yield a beneficial therapeutic ratio and/or a superior cost‐effectiveness in some clinical indications.
After this session the participants should be able to:
1. Recognize the limitations of the traditional linear‐quadratic model.
2. Summarize the changes in biological understanding of dose fractionation that have occurred over the last 10–15 years.
3. Explain the rationale behind the current interest in hypo‐fractionation in NSCLC and prostate cancer.
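The linear‐quadratic relations underlying this session can be sketched numerically; the α/β values, doses and fraction numbers below are illustrative assumptions, not figures from the talk.

```python
import math

def surviving_fraction(d, n, alpha, beta):
    """LQ model: surviving fraction after n fractions of d Gy each."""
    return math.exp(-n * (alpha * d + beta * d * d))

def bed(d, n, alpha_beta):
    """Biologically effective dose for n fractions of d Gy."""
    return n * d * (1.0 + d / alpha_beta)

# With a low alpha/beta (e.g. ~1.5 Gy, as discussed for prostate),
# increasing the dose per fraction raises BED sharply:
print(bed(2.0, 35, 1.5))   # 70 Gy in 2 Gy fractions -> BED ~163.3 Gy
print(bed(3.0, 20, 1.5))   # 60 Gy in 3 Gy fractions -> BED 180 Gy
```

This is why a tumor with a low α/β gains proportionally more from hypo‐fractionation than one with the more typical α/β of ~10 Gy.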
34 (2007); http://dx.doi.org/10.1118/1.2761253
Prostate cancer exhibits slow growth with a potential doubling time ranging from weeks to months (median 42 days). From these data emerged a hypothesis that adenocarcinoma of the prostate may behave more like a late‐reacting tissue. Brenner and colleagues (1) used data from prostate low‐dose‐rate permanent seed implants and external beam radiotherapy series and derived an α/β of approximately 1.5. Many other groups have also calculated the α/β ratio to be in the <3.0 range; yet, hyperfractionation does not seem to compromise outcome after radiotherapy (2). There are many potential pitfalls of these analyses, and some investigators have concluded that the α/β for prostate cancer is closer to that for late effects of the surrounding normal tissues (>3).
Understanding the α/β ratio for prostate cancer is key to designing clinical trials that maximize the efficacy of radiotherapy. If the α/β for prostate cancer is lower than that of the surrounding normal tissues, hypofractionation will afford an advantage in terms of greater sensitivity of prostate cancer to this strategy, as compared to the bladder and rectum. Brenner et al (3) have estimated the α/β for the rectum to be over 5.0. Recently, Fiorino and Valdagni (4) have argued that these estimates may be inaccurate because of variation in, and dependence of toxicity on, the proportion of rectum receiving higher radiation doses.
Clinical results in the PSA/IMRT era using hypofractionation show that this strategy is well tolerated by the surrounding normal tissues, with outcomes consistent with a low α/β. Kupelian et al. (5) have treated a large series of men to 70 Gy at 2.5 Gy per fraction with excellent results. As an offshoot of this strategy, RTOG 04‐15 contrasts this hypofractionation regimen with 73.8 Gy in 1.8 Gy fractions.
The Cleveland Clinic data also prompted us at Fox Chase Cancer Center to devise a randomized hypofractionation trial comparing 76 Gy at 2.0 Gy per fraction to 70.2 Gy at 2.7 Gy/Fx. The latter hypofractionation regimen is equivalent to 84.4 Gy at 2.0 Gy/Fx, assuming an α/β of 1.5. A total of 307 patients were entered from 2002 to 2006. The trial has completed accrual. Acute toxicity in the first 100 men entered shows minor differences between the two treatment groups (6). Analysis of late toxicity during the first year of follow‐up also is revealing little difference. The encouraging results thus far with hypofractionation suggest that more significant hypofractionation (e.g., stereotactic radiotherapy) might be cost‐effective and potentially advantageous.
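The stated equivalence can be checked with the standard LQ equieffective‐dose formula (EQD2); this is a sketch of the arithmetic, not the trial's own calculation.

```python
def eqd2(total_dose, dose_per_fx, alpha_beta):
    """Equieffective dose in 2-Gy fractions under the LQ model."""
    return total_dose * (dose_per_fx + alpha_beta) / (2.0 + alpha_beta)

# Fox Chase hypofractionated arm: 70.2 Gy at 2.7 Gy/fraction, alpha/beta = 1.5
print(eqd2(70.2, 2.7, 1.5))  # ~84.2 Gy, in line with the quoted ~84.4 Gy
```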
1. To understand the alpha/beta ratio for prostate cancer.
2. To appreciate the efficacy of hypofractionation and resultant toxicity.
34 (2007); http://dx.doi.org/10.1118/1.2761255
- Novel Particle Acceleration Techniques
34 (2007); http://dx.doi.org/10.1118/1.2761398
This presentation will start the symposium with a brief description of recent developments in particle acceleration techniques with a focus on the acceleration of protons and light ions and its impact on radiation therapy of cancer. Proton/ion therapy has great potential for improving local control and normal tissue sparing because of its superior dose distributions. However, the large cost of a proton/ion facility based on conventional accelerator technology has prevented its widespread use. Significant efforts have been made in recent years to develop compact particle accelerators in order to make proton or ion therapy a commonly available treatment modality. Compact particle accelerators based on dielectric wall accelerator, laser‐particle acceleration, and superconductor techniques will be discussed. The educational objectives of this presentation include (1) to describe the physical properties of proton and ion beams and their therapeutic advantages, (2) to analyze the cost‐effectiveness of conventional proton/ion therapy versus intensity modulated x‐ray therapy, and (3) to introduce recent innovations in particle acceleration and their potential for radiation oncology.
34 (2007); http://dx.doi.org/10.1118/1.2761399
Proton beam therapy of cancer has been practiced with good therapeutic results. However, it remains a costly treatment, primarily because of the large accelerator facility needed to produce proton beams. The introduction of laser acceleration promises to greatly reduce the size, and possibly the cost, of a medically usable therapy system. Not only the compact acceleration section, but also the compactness of the necessary radiation shielding and beam handling section (a portion similar to the gantry), contribute to the compactness of the overall therapy machine. A series of innovations, such as adiabatic acceleration, the double‐layer target, and the optimized target thickness, combine to provide a new paradigm of laser‐driven compact therapy. We envision that verification of dosage via self‐auto‐activation imaged by PET, combined with pencil‐beam scanning characteristics, amounts to a new feedback therapy of cancer.
34 (2007); http://dx.doi.org/10.1118/1.2761400
Laser‐plasma accelerators provide electron beams with parameters of interest in many fields, and in particular for radiotherapy. A short review of recent progress, including the bubble and colliding schemes, will be presented. Using the latest improvements of laser‐plasma accelerators, we performed dose‐deposition simulations using a quasi‐monoenergetic electron beam in the 200 MeV range. It is shown that the electron beam properties offer advantageous dosimetric characteristics compared with those calculated for high‐energy photons. The depth‐dose curve shows a broad maximum at large depths (>20 cm). The lateral penumbra of treatment fields for focused electron beams is smaller than that of 6 MV photons at depths smaller than 10 cm. These advantages result in an improvement of the quality of a clinically approved prostate treatment plan. While the target coverage is the same or even slightly better for 250 MeV electrons compared with photons, the dose sparing of sensitive structures is improved; for example, the dose to the rectum is reduced by 19% for 250 MeV focused electrons. These findings agree with previous results regarding very high energy electrons as a treatment modality [4, 5, 6].
The lack of compact and cost‐efficient electron accelerators could be overcome by laser‐plasma systems.
1. Understand the origin of electron injection in plasma.
2. Understand the acceleration process and the motivation for this approach, which uses extremely high electric fields.
3. Understand the issues related to clinical application for radiotherapy.
34 (2007); http://dx.doi.org/10.1118/1.2761401
Recent developments in the field of laser engineering, specifically the invention of the chirped‐pulse amplification technique, have made it possible to achieve laser light intensities in the 10²² W/cm² range. In this review, we will show that such laser intensities are sufficient to accelerate protons to therapeutic energy ranges, provided that proper laser‐target parameters are chosen. In the majority of recent laser‐matter interaction experiments, the proton energy spectra coming out of the interaction chamber are thermal, making it impossible to use these protons in hadron therapy. This necessitates the development of a particle selection device that would deliver quasi‐monoenergetic particles suitable for medical applications. We will discuss an earlier proposed particle selection system and show the dosimetric characteristics of protons coming out of this device. Using “real” patient data and physical characteristics (phase‐space distribution) of the selected particles, we will discuss inter‐comparison studies between photon intensity‐modulated plans on one hand and intensity‐modulated plans based on laser‐accelerated protons on the other. In concluding remarks, we will describe the current challenges facing the project and ways to resolve them.
34 (2007); http://dx.doi.org/10.1118/1.2761402
Purpose: Proton beam radiation therapy has been clinically investigated for over 40 years. Despite obvious physical dose deposition advantages and compelling clinical results, the considerable financial cost of existing accelerator designs has hindered widespread use of this evidently superior treatment modality. Within the last few years, however, materials have been developed that enable high concentrations of electromagnetic energy to be harnessed. These materials have opened the way for reducing the size and cost of accelerators for proton beam radiation therapy. Method and Materials: Two such materials, high‐current‐density superconducting wires and high‐field‐gradient dielectric elements, have led to the respective developments of compact superconducting cyclotrons and dielectric wall accelerators. Existing analytical tools for simulating the performance of circular accelerators and linear accelerators have been applied to guide the development of these designs in the new energy density regimes. These tools are essential for predicting the performance of accelerated proton beam dynamics in compact devices with high electromagnetic field gradients. Results: Each of these accelerators has been incorporated into single‐room proton therapy treatment system designs with a size and cost that are, or are projected to be, significantly below those of existing alternatives. Prototypes of these systems are now under construction and elemental prototype evaluation. Superconducting current density performance specifications have been met or exceeded in the development of the compact superconducting cyclotron. A working cyclotron has been prototyped and shown to accelerate an intense proton beam of more than 100 nA over the first stages of the cyclotron acceleration cycle. Likewise, critical elements of the dielectric wall accelerator have met specifications for the final accelerator configuration, showing standoff fields in excess of 100 MV/m.
Conclusion: At least one of these new systems is expected to be completed and in use for patient treatment before the end of 2008. With the significant reduction in complexity and cost, the successful demonstration of these systems will likely lead to a more widespread adoption of proton beam radiation therapy. Conflict of Interest: Kenneth Gall is a founder and Chief Technology Officer of Still River Systems Incorporated, a company involved in the design, production and clinical implementation of proton beam radiation therapy systems.
- Radiobiological Models and Treatment Planning
34 (2007); http://dx.doi.org/10.1118/1.2761532
Advances in technology and in computing have given us computer‐controlled linear accelerators equipped with multileaf collimators and wonderful 3D graphics workstations to perform treatment planning; additionally, we have conceptual advances such as stereotaxy (cranial and extra‐cranial), intensity modulation (IMRT), helical tomotherapy and protons. But the bottom line in radiotherapy is radiobiology, radiobiology, radiobiology. If we don't know how to convert ‘physics’, i.e. dose distributions, into estimates of clinical outcome, then these wonderful technological advances will remain ‘toys’ for physicists to play with.
Radiobiology has traditionally concerned itself with determining surviving fraction vs. (uniform) dose curves for human tumour cell lines. However, in the 3D era we need models which connect dose distributions (and fractionation regimens) in tumours and normal tissues (generally in the form of dose‐volume histograms) with the probabilities of tumour (local) control — TCP — and of complications — NTCP. Such models now exist and their active use in treatment planning ushers in the era of Conformal Radiobiology.
1. Appreciate the limitations of technology‐driven, dose‐based radiotherapy.
2. Appreciate the limitations of ‘classical’ radiobiology in the conformal era.
3. Understand what is meant by ‘Conformal Radiobiology’.
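The Poisson TCP formalism named above (Webb & Nahum) can be sketched as follows; the clonogen number and radiosensitivity below are placeholder assumptions, not values from the talk, and only the linear (α) term is kept for simplicity.

```python
import math

def tcp_poisson(dose, n0, alpha):
    """Poisson TCP: probability that no clonogen survives a uniform dose.
    n0 = initial clonogen number, alpha = radiosensitivity (1/Gy)."""
    surviving = n0 * math.exp(-alpha * dose)  # expected surviving clonogens
    return math.exp(-surviving)               # P(zero survivors)

# Illustrative dose-response (assumed n0 = 1e7 clonogens, alpha = 0.3/Gy):
for d in (60, 70, 80):
    print(d, round(tcp_poisson(d, 1e7, 0.3), 3))
```

The steep sigmoid this produces is the reason small dose differences near the curve's shoulder translate into large local-control differences.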
34 (2007); http://dx.doi.org/10.1118/1.2761533
Delivery of adequate tumor dose without causing excessive normal tissue complications is the driving principle of modern radiation therapy. Resulting dose distributions in normal tissues are very different from the “partial organ irradiation” distributions that characterize the simple beam arrangements of earlier days. Further, the growing popularity of hypofractionation drastically widens the range of biologically effective doses within organs at risk (OAR). The clinical physicist is faced with uncertainty as to what aspects of an OAR dose distribution require special consideration in treatment plan design and evaluation.
Normal tissue complication probability (NTCP) models are one way to account for the full dose distribution. For most serious toxicities, statistical models from various outcomes analyses help direct the planner toward dose‐volume limits. There are also several semi‐mechanistic models, each with a set of parameters that must be adjusted to describe existing data. But for conditions that differ greatly from those under which they were ‘commissioned’, two models for the same endpoint and starting from the same input dose distribution do not necessarily predict the same NTCP. A working group, with joint participation from AAPM, ASTRO and RTOG, is being established to, among other things, reconcile model differences and provide clinical guidelines in the near future.
In this presentation, common NTCP models for several major dose‐limiting toxicities will be described, including parameter sets gleaned from literature review. Problems and pitfalls of integrating and interpreting diverse studies and applying them to an individual clinic's practice will be discussed, and real‐world examples of application to clinical decisions will be presented.
1. Understand the important features of the most widely‐used NTCP models.
2. Understand how the model parameters affect predictions of normal tissue dose‐volume responses.
3. Understand some of the complexities of implementing these models into routine clinical practice.
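One of the most widely used semi‐mechanistic models in this family is the Lyman model; a minimal sketch follows, with placeholder parameter values (the TD50, m and n below are illustrative assumptions, not clinical recommendations).

```python
import math

def lyman_ntcp(dose, veff, td50_whole=40.0, m=0.15, n=0.5):
    """Lyman NTCP for an effective uniformly irradiated volume veff (0..1).
    td50_whole: dose giving 50% complications for whole-organ irradiation;
    m: slope parameter; n: volume-effect exponent (all illustrative)."""
    td50 = td50_whole * veff ** (-n)          # volume dependence of TD50
    t = (dose - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))  # normal CDF

print(lyman_ntcp(40.0, 1.0))   # whole organ at its TD50 -> NTCP = 0.5
print(lyman_ntcp(40.0, 0.25))  # quarter of the organ -> much lower NTCP
```

Note how the same physical dose yields a very different predicted NTCP once the irradiated volume fraction changes, which is exactly why two models 'commissioned' under different conditions can disagree.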
34 (2007); http://dx.doi.org/10.1118/1.2761534
The Tumor Control Probability function (TCP, Webb & Nahum) models radiation‐induced cell kill and uses Poisson statistics to estimate the probability of local control. Its parameters may be derived by correlating archived plan data and treatment outcome results of clinical trials. We will exploit the data of a large randomized prostate trial (68 Gy against 78 Gy, 600+ patients) of patients treated between 1999 and 2003. For these patients the planning CT scan and organ delineations, and the 3D dose distribution as generated by the treatment planning system, are electronically available.
However, we have no patient‐specific information on the location of tumor tissue inside the prostate (as could nowadays be imaged using MRI). Furthermore, the dose absorbed by the clonogen cells will have been influenced by errors in daily set‐up and by organ motion. Although portal images were acquired for an off‐line bony set‐up protocol, no in‐room soft‐tissue imaging was available to monitor organ motion. An additional uncertainty is introduced by the fact that the primary method of clinical follow‐up is based on blood PSA levels, which means a detected failure may not be local.
To describe the interplay between the location of clonogen cells and the varying day‐to‐day position of the prostate, we use Monte Carlo treatment plan evaluation software that was developed in‐house. This software samples population distributions of random and systematic errors to simulate many possible treatment histories. Maximum likelihood methods are then applied to determine the most probable TCP model parameters α and σα. Inspired by surveys of pathological specimens, the assumed density distribution of clonogen cells inside the prostate is modulated, and a body of clonogens located posteriorly outside the CTV is introduced to model extracapsular extension. The effects of such modulations on the TCP parameters and on the likelihood of the fit are studied.
In future trials, additional imaging will increase the amount of patient specific data on geometric variations and cell distributions, leading to a more accurate TCP model.
The TCP parameters thus acquired may be used for treatment planning purposes. If MR imaging is available to gain knowledge about the location of tumor tissue inside the gland, treatment planning may be performed by optimizing the TCP function using a heterogeneous clonogen cell distribution. By using probability‐based optimization techniques (in which the effect of geometric errors on the tumor cell kill is modeled in the same way as in the TCP fitting procedure above), no PTVs need to be defined, and the optimization procedure can directly aim for the largest expected TCP (for a given expected rectum NTCP).
1. Identify sources of uncertainty when basing a TCP model on clinical data.
2. Understand a method to determine TCP parameters in the face of such uncertainties.
3. Understand how these parameters may be used for treatment planning.
34 (2007); http://dx.doi.org/10.1118/1.2761535
Phase I clinical trials seek to determine the maximum tolerated dose (MTD) of the investigational treatment. Similarly, traditional radiation oncology dose escalation trials assign groups of patients to increasing dose levels until an unacceptable level of complications appears. This generally transpires on a sequential basis, regardless of tumor size or the distribution pattern of radiation dose to surrounding normal tissues (beyond the specification of a few well‐accepted dose constraints, such as maximum spinal cord dose). This can be a poor strategy for treatments limited primarily by complications to so‐called volume‐effect normal tissues which encompass the tumors, as may be the case for tumors located in the liver or lung. A better scheme for Phase I/II dose escalation trials limited by these volume‐effect organs would attempt to treat sequential groups of patients with dose “distributions” that might be expected to lead to similar anticipated levels of complications (but, of course, with different tumor doses), with sequential escalation of each potential iso‐complication level until an MTD profile is realized (which would inherently include the volume effect). The prospective use of normal tissue complication probability (NTCP) models in the treatment planning process facilitates this type of normal tissue iso‐complication based dose escalation.
Given the desire for iso‐NTCP based dose escalation, clinical trials were developed and carried out at the University of Michigan for tumors located in the liver and lungs. In the 3‐D conformal therapy era, these trials took place via recognition of one particular aspect of the effective volume (Veff) dose volume histogram (DVH) reduction scheme (due to Kutcher and Burman) often employed in order to use the Lyman NTCP model for non‐uniformly irradiated organs. That is, computation of a normal tissue Veff for a particular dose distribution does not depend on the units of dose in the treatment plan (e.g., Gy, cGy/hr, or, of greatest interest here, % dose). Given this, we recognized that treatment planning could proceed in the normal manner of that time (dose distributions generated in relative dose (%) with respect to an ICRU reference point prescription dose, most often the isocenter), while at the same time attempts could be made to minimize the Veff of the dose‐limiting normal tissue, with the ultimate physical isocenter normalization dose (Dnorm, in Gy) prescribed after planning. That is, each Veff has a corresponding Dnorm leading to a fixed iso‐NTCP level. Thus, reductions in Veff generated during treatment planning led to individualized increases in prescription isocenter dose after planning (a perceived benefit/goal for the treatment planner), all at a fixed perceived level of NTCP. In the IMRT/optimization era, biological cost functions have been developed to accomplish these same goals.
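The unit‐independence of Veff that this escalation scheme relies on can be verified directly; the Kutcher‐Burman reduction is sketched below on a toy DVH (the dose bins, volume fractions and n value are illustrative assumptions).

```python
def veff(doses, volumes, n):
    """Kutcher-Burman effective volume: each sub-volume is weighted by
    (D_i / D_max)^(1/n). doses and volumes are parallel lists; volumes
    are fractions of the whole organ."""
    dmax = max(doses)
    return sum(v * (d / dmax) ** (1.0 / n) for d, v in zip(doses, volumes))

# The same toy DVH expressed in Gy and in % of the prescription dose:
vols = [0.2, 0.3, 0.5]
in_gy = [60.0, 30.0, 10.0]
in_pct = [100.0, 50.0, 100.0 / 6]   # same plan, relative dose
print(veff(in_gy, vols, 0.5), veff(in_pct, vols, 0.5))  # identical Veff
```

Because only the ratio D_i/D_max enters, Veff computed on a relative (%) plan equals Veff on the absolute plan, which is what allowed Dnorm to be prescribed after planning at a fixed iso‐NTCP level.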
This talk will summarize experiences in iso‐NTCP dose escalation and planning at the University of Michigan for tumors in the liver and lungs; including current, ongoing, functional imaging based, adaptive treatment trials. Work supported in great part by NIH grant P01‐CA59827.
Understand the basis and ongoing use of iso‐NTCP based dose escalation at the University of Michigan.
34 (2007); http://dx.doi.org/10.1118/1.2761536
How can we use TCP and NTCP models in optimising radiotherapy outcome? The most basic, or Level‐I, optimisation is to take an existing ‘standard’ treatment plan with its standard total dose and fraction size, compute the NTCP and then adjust the total dose until an acceptable NTCP value is obtained: isotoxic customised dose prescription. This will yield immediate benefits in, for example, lung‐tumour radiotherapy. Level‐II optimisation uses the TCP and NTCP functions ‘upfront’ in the optimisation process to determine the beam weights and even angles, or, in the case of IMRT, to perform ‘inverse planning’. A typical criterion might be ‘Maximise the TCP for NTCP = 2.5%’ or ‘Minimise the NTCP for TCP = 90%’. In this Level‐II mode no constraints need be set regarding uniform dose in the target volume — the TCP model will take care of this. However, maximum doses may well need to be set outside the target volume.
Another exciting area is the connection between fractionation sensitivity and dose distributions in normal tissues. The ‘classical’ LQ‐based Withers isoeffect formula can be easily modified to reflect increasingly conformal dose distributions in organs at risk. The validity of this modification has been effectively demonstrated by the safe use of very large fractions in treating lung tumours with (highly conformal) body stereotaxy; the oversimplified BED concept would have forbidden such effective regimens.
We need to start using radiobiologically based optimisation, without waiting until the TCP and NTCP models are ‘perfect’. The clinician may still use conventional tools such as single‐CT slice isodoses and DVHs to approve a radiobiologically optimised plan but she/he is going to find a marked improvement in the quality of such a plan.
1. Appreciate the potential of using TCP and NTCP models in treatment plan optimisation.
2. Understand what is meant by ‘isotoxic’ dose prescription.
3. Understand the limitations of the Withers' isoeffect formula and how it can be modified to approximately account for dose distributions in organs at risk.
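Level‐I ‘isotoxic’ prescription amounts to a one‐dimensional root find on total dose. A minimal sketch, assuming a placeholder logistic NTCP curve (the td50 and gamma50 values are invented for illustration, not clinical parameters):

```python
def ntcp(dose, td50=55.0, gamma50=2.0):
    """Placeholder logistic NTCP curve, increasing with total dose."""
    return 1.0 / (1.0 + (td50 / dose) ** (4.0 * gamma50))

def isotoxic_dose(target_ntcp, lo=20.0, hi=120.0, tol=1e-6):
    """Bisect on total dose until the NTCP equals the prescribed level."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if ntcp(mid) < target_ntcp:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

d = isotoxic_dose(0.025)   # escalate until NTCP = 2.5%
print(round(d, 1))
```

The clinical point survives the toy model: the prescribed dose becomes an output of the (fixed) toxicity constraint, customised per patient, rather than a fixed input.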
- Robustness of IMRT Treatments
MO‐E‐BRA‐01: IGRT and Treatment Planning: Geometric Uncertainties and Individualized Patient Treatment. 34 (2007); http://dx.doi.org/10.1118/1.2761294
Widespread use of precision conformal and intensity modulated treatment techniques puts extreme emphasis on targeting. While the cloud of uncertainty associated with defining and localizing targeted tissue has been realized for quite some time, standards for working with this uncertainty are hardly widespread. When looking at the range of variations in target position across a population, it becomes clear that there is quite a range for any given body site, with some patients exhibiting very small variations, and others at extremes far beyond the population mean. The ability to characterize individual patients early on in treatment permits modification of the population assumptions used in planning, providing potential benefit to a subset of patients. Making plans robust to expected variations, especially at the start of treatment, may further aid in this individualization process. One critical tool in these endeavors is the ability to estimate potential dosimetric consequences of various levels of uncertainty, as opposed to the use of geometric margins as approximations.
The educational objectives of this talk are:
1. Gain an understanding of the range of geometric uncertainties in a population.
2. Look at various methods of assessing individual variations and their impact.
3. Compare geometric and dosimetric means of assessing the impact of variations.
4. Introduce the topic of robust planning.
34 (2007); http://dx.doi.org/10.1118/1.2761295
Local control by radiation therapy relates directly to the dose delivered to the diseased tissue, but can be limited by the dose tolerated by adjacent structures. This axiom has guided clinical practice into a number of technological shifts over the past several decades. Intensity‐modulated radiotherapy (IMRT) has emerged as an important means to achieve higher doses and to intensify treatment, while simultaneously decreasing the dose in normal tissues. However, the proximity of critical normal organs to disease and geometric uncertainties arising from organ movement continue to present significant challenges in some anatomical sites.
Advances in image‐guided radiation therapy (IGRT) permit more frequent soft‐tissue imaging in the course of treatment delivery, creating opportunities to enhance the accuracy and precision of treatment. A framework for considering image‐guidance strategies and their implications for treatment planning is required. For example, target localization can improve geometric accuracy in the on‐line setting, i.e., during treatment delivery. An off‐line statistical analysis of a patient's images can also enhance precision by supporting the adaptation of margins. Even a complete re‐optimization of the plan is possible, in response to systematic anatomical changes accumulated with the progression of treatment. Frequent re‐optimization is potentially inefficient and expensive within the constraints of current technologies, clinical practices, and QA standards. Clearly, there are trade‐offs between the level of effort required to exploit IGRT for adaptive re‐planning and the potential benefits of re‐planning. The concept of robust optimization points to the possibility of designing treatment plans that are tolerant to uncertainties in treatment delivery. Robust plans may reduce the need for routine re‐planning in response to variations in patient setup, organ movement, or progressive changes leading to organ deformation.
This presentation reviews clinical experience with IGRT, and explores the implications, opportunities and challenges for treatment planning in the era of IGRT. The central principles of using IGRT in IMRT treatment planning are discussed with respect to requirements for adaptive re‐planning and the design of robust IMRT treatments.
1. To review illustrative clinical applications of IGRT.
2. To describe how the adoption of IGRT can influence external beam treatment planning.
3. To outline how IGRT information is used for re‐planning of IMRT treatments, and in the design of plans that are tolerant to uncertainties in treatment delivery.
34 (2007); http://dx.doi.org/10.1118/1.2761296
For lung tumors, the presence of motion due to breathing is a key source of uncertainty. Motion essentially blurs the static dose distribution, which can be thought of mathematically as a convolution of the static dose distribution with a probability density function (PDF) describing the motion. The 4D IMRT optimization/inverse planning method tries to undo this blurring effect by taking the motion PDF into account during the optimization of the intensity map. Such an approach is effective as a motion compensation technique, but only when the motion is highly reproducible over the entire treatment course. In terms of the PDF, “reproducibility” corresponds to witnessing the same PDF over the course of treatment that was observed in the treatment planning stage. However, if the realized PDF during treatment differs from the planning PDF, the subsequent convolution of the static dose distribution with the realized PDF may produce undesirable hot and cold spots.
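The blurring described above can be sketched as a discrete 1‐D convolution; the dose profile and the three‐point motion PDF below are toy assumptions chosen only to make the edge‐smearing visible.

```python
def blur(static_dose, motion_pdf):
    """Convolve a 1-D static dose profile with a motion PDF defined on the
    same grid; contributions from outside the grid are dropped."""
    half = len(motion_pdf) // 2
    out = [0.0] * len(static_dose)
    for i in range(len(static_dose)):
        for k, p in enumerate(motion_pdf):
            j = i + k - half            # shifted voxel index
            if 0 <= j < len(static_dose):
                out[i] += static_dose[j] * p
    return out

# Flat 'tumour' dose with sharp edges, blurred by a 3-point motion PDF:
static = [0, 0, 100, 100, 100, 100, 0, 0]
pdf = [0.25, 0.5, 0.25]
print(blur(static, pdf))  # edges smear: [0, 25, 75, 100, 100, 75, 25, 0]
```

If the delivered PDF differs from this planning PDF, the intensity map that was sharpened to 'pre‐compensate' the blur no longer cancels it, producing the hot and cold spots mentioned above.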
Robust optimization is a concept that has gained prominence in the optimization community for its wide applicability to problems with uncertain data. Real‐world problems are rarely, if ever, accompanied by noiseless data, hence, there is a natural motivation to incorporate this uncertainty into any optimization process. The robust framework we present builds on the 4D approach by explicitly accounting for the uncertain motion represented by uncertainty in the motion PDF. Instead of basing the optimization on one PDF, robust optimization uses a family of PDFs to create a static dose distribution that is less sensitive to variations in the motion.
The robust framework allows us to craft solutions in the entire spectrum between the idealized 4D method and a conservative, ITV‐like margin approach. A given intensity map that results from the robust optimization method will balance intensity‐modulation with intensity‐homogeneity in order to effectively trade off the sparing of healthy tissues with ensuring sufficient tumor coverage. Accordingly, the robust optimization method implicitly performs multi‐objective optimization on these competing objectives.
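A minimal toy sketch of the min‑max idea (not the authors' algorithm) is to evaluate each candidate fluence profile under a whole family of motion PDFs and score it by its worst‑case target coverage; the candidate profiles and the Gaussian uncertainty set below are assumptions for illustration:

```python
import numpy as np

# Toy robust evaluation: pick the fluence whose WORST-case minimum
# target dose over an assumed family of motion PDFs is best.
x = np.linspace(-30, 30, 121)
target = np.abs(x) <= 10                                 # 1-D target region

def gaussian_pdf(sigma):
    p = np.exp(-0.5 * (x / sigma) ** 2)
    return p / p.sum()

pdf_family = [gaussian_pdf(s) for s in (2.0, 4.0, 6.0)]  # assumed uncertainty set

def blurred(fluence, pdf):
    return np.convolve(fluence, pdf, mode="same")

def worst_case_min_target_dose(fluence):
    # Worst case over the PDF family of the minimum dose inside the target.
    return min(blurred(fluence, p)[target].min() for p in pdf_family)

# Candidate plans spanning the spectrum described in the text:
tight = np.where(target, 60.0, 0.0)                           # 4D-like, tuned field
margin = np.where(np.abs(x) <= 14, 60.0, 0.0)                 # conservative ITV-like expansion
edge_boost = tight + np.where((np.abs(x) > 6) & target, 20.0, 0.0)  # edge-enhanced hedge

candidates = {"tight": tight, "margin": margin, "edge_boost": edge_boost}
robust_choice = max(candidates, key=lambda k: worst_case_min_target_dose(candidates[k]))
print(robust_choice, worst_case_min_target_dose(candidates[robust_choice]))
```

By this coverage‑only criterion the margin plan wins the worst case, at the price of irradiating more healthy tissue; a full robust optimization would trade off exactly these competing objectives rather than enumerate fixed candidates.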
1. Understand the concept of robust optimization.
2. Understand the construction of robust treatment plans based on breathing motion PDFs.
3. Understand the mathematical and dosimetric differences between treatment plans of varying levels of robustness.
4. Understand the multi‐objective viewpoint of robust optimization.
34(2007); http://dx.doi.org/10.1118/1.2761297
Knowledge of the patient geometry at the exact time of radiation delivery is rarely complete. The result of radiotherapy should not depend sensitively on inevitable uncertainties. This quality of robustness of a treatment can be enforced during dose optimization by a variety of means, which can be classified by the frequency with which patient information is acquired, the time span between acquisition and delivery, and the nature of the image information. For prostate radiotherapy, random target and normal tissue motion poses the greatest challenge, as it requires high‐quality volume imaging and the information content may decay quickly.
Even with today's on‐board imaging systems, a fair amount of uncertainty about the patient geometry remains, which is best described by probability distributions (PDs) of pointwise displacements. Here, various off‐line and quasi on‐line image‐guided protocols differ mostly in how these PDs are constructed and how frequently they are updated. The PDs can be used to compute the expected values of dose or dose effect at each point of the patient model. This model may either be defined in the treatment room (and dose) coordinate system (TCS) or may be associated with the patient reference geometry and deform along with the anatomy. While the former is the traditional model for dose planning, the latter shifts the focus to the accumulation of dose in the tissue, hence the term tissue‐eye‐view (TEV).
The most basic probabilistic patient model in TCS is the coverage probability model, where each volume element in a rigid reference patient geometry is weighted with the cumulative probability that some volume of interest can be found there. This information quantifies the relevance of a point in the CTV‐to‐PTV margin. Despite its apparent simplicity, it is possible to alleviate the common PTV‐overlaps‐organ paradox to an extent that allows iso‐toxic dose escalation by about 10 per cent. Moving to a probabilistic patient model in TEV abandons the PTV concept altogether, at the price of more image information and the need for deformable registration. The potential for iso‐toxic dose escalation then exceeds 20 per cent.
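The coverage probability idea can be sketched in one dimension: weight each point of a rigid reference grid by the probability that the CTV occupies it under an assumed distribution of rigid displacements (the Gaussian shift model below is an assumption for this sketch, not taken from the abstract):

```python
import numpy as np

# Hypothetical 1-D coverage probability map: the fraction of sampled
# rigid displacements for which the CTV covers each reference point.
rng = np.random.default_rng(0)
x = np.linspace(-30, 30, 121)                  # rigid reference grid (mm)

def ctv(shift):
    """CTV occupies [-10, 10] mm, rigidly shifted by `shift`."""
    return np.abs(x - shift) <= 10

# Assumed inter-fraction displacement distribution (mm).
shifts = rng.normal(loc=0.0, scale=3.0, size=5000)

coverage = np.mean([ctv(s) for s in shifts], axis=0)

# Points deep inside the CTV are covered nearly always; points in the
# CTV-to-PTV margin carry an intermediate probability, which can weight
# the optimization objective instead of an all-or-nothing PTV membership.
print(coverage[int(np.argmin(np.abs(x)))])     # near the CTV center
print(coverage[int(np.argmin(np.abs(x - 12)))])  # in the margin region
```

In a planning objective, multiplying each voxel's penalty by its coverage probability softens the hard PTV boundary, which is what allows the PTV‑overlaps‑organ conflict described above to be relaxed.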
Both optimization concepts rely on an a‐priori estimate of the pointwise displacement probabilities. A bias or time trend in these PDs would be potentially fatal. Hence, it is essential to update the PDs during the course of treatment to minimize the “uncertainty in the estimates of uncertainties”. In consequence, robust off‐line adaptive protocols require some extent of monitoring, while on‐line protocols basically require off‐line probabilistic models predicated (in a Bayesian sense) on the geometry of the day. Apart from insufficiencies in the input image data, another risk arises from the high specificity with which individual sources of error influence the optimized dose distribution: a large margin could compensate for many uncertainties, while margin‐less optimization schemes need to quantify all of them. This limits the theoretical benefit of the most sophisticated models (daily on‐line imaging, Bayesian TEV) significantly. The specific cost–benefit ratio of the various protocols remains to be evaluated in practice in larger populations of patients.
- The Challenges Associated with Differential Dose Delivery using IMRT
34(2007); http://dx.doi.org/10.1118/1.2761570
IMRT, by virtue of the increased degrees of delivery freedom available, requires the utilization of advanced optimization approaches (so‐called inverse planning techniques). While such inverse planning optimization approaches make possible the solution of these otherwise intractably large problems, they impose unique challenges with regard to the treatment of multiple targets with different desired total delivered dose levels. In this session we will characterize these challenges and discuss various “workarounds” to these problems, along with their potential implications and ramifications.
34(2007); http://dx.doi.org/10.1118/1.2761571
Helical tomotherapy is a rotational radiation therapy delivery technique that combines pre‐treatment CT‐based image guidance and Intensity‐Modulated Radiation Therapy (IMRT) using a fan‐beam delivery. Helical tomotherapy treatments are delivered by a continually rotating gantry mounted on a slip‐ring system, which permits power and communications to be passed between the rotating and stationary sub‐systems. Treatments are delivered with both the gantry and the couch in continuous simultaneous motion. Thus, helical tomotherapy can continuously deliver intensity‐modulated radiation from 360° in a transverse plane about the patient.
In addition to delivering IMRT, helical tomotherapy systems can obtain Megavoltage CT (MVCT) images of the patient in the treatment position prior to each treatment fraction. Daily positional uncertainties and anatomical changes (such as weight loss, tumor response, etc.) can be minimized by utilizing CT‐based pre‐treatment imaging. As such, the position of the tumor relative to the treatment beam can be corrected prior to treatment by moving the patient with appropriate superior‐inferior, anterior‐posterior, lateral, pitch, roll, and/or yaw offsets.
The combination of precise positioning of the target volume and the rotational nature of the delivery makes helical tomotherapy well suited for simultaneously delivering differential doses to multiple targets. Treatment sites that are large in the superior‐inferior direction are easily treated with helical tomotherapy. Multiple target volumes up to a meter in length can be treated in a single helical tomotherapy delivery sequence. This can be useful for treating multiple metastases throughout the body while avoiding nearby critical structures. Head and neck patients can be treated with integrated boosts with minimal dose spillover, dumping, hot spots, or critical structure problems. Shrinking lung tumors can be treated with a simultaneous integrated volume‐adapted boost (SIVAB) that allows dose escalation to the residual tumor mass without compromising normal tissue tolerance and dose to areas at risk for microscopic tumor spread. For prostate patients that are at risk for nodal involvement, the regional lymph nodes, the seminal vesicles, and the prostate can be simultaneously targeted and treated to different doses.
This lecture will provide an overview of clinical scenarios utilizing differential dose delivery with helical tomotherapy. The advantages and disadvantages of helical tomotherapy will be discussed for each clinical example.
1. Provide a brief overview of the helical tomotherapy system's imaging and treatment capabilities.
2. Present clinical scenarios for utilizing differential dose delivery with helical tomotherapy.
3. Discuss potential applications of adaptive differential dose delivery.
34(2007); http://dx.doi.org/10.1118/1.2761572
Purpose: The purpose of this presentation is to show, in selected case studies, where differential dose planning is used in Cyberknife treatments and which challenges are associated with each approach. Method and Materials: Three case studies were selected to demonstrate different situations where differential dose planning would be used in Cyberknife SRS. Case 1: Multiple, anatomically close cranial lesions of varying sizes. As experience with Gammaknife SRS has shown, the risk of complications in treating brain metastases is correlated with dose and lesion size. Lesions larger than 2 cm in diameter are typically treated with a lower dose. In the situation where multiple lesions of different sizes are located close together in the brain, creating separate treatment plans on the Cyberknife would lead to longer treatment times, higher whole‐body dose due to scatter and leakage, and difficulty in assessing multiple plan dose overlays.
Case 2: The second situation involves mostly head and neck cancers, where the GTV and CTV are treated to different doses as a boost after conventional IMRT treatments. Typically, the GTV encompasses the PET‐positive areas. Around the GTV, a CTV is defined that encloses the microscopic extension of the disease, as determined by the physician based on the case history.
Case 3: Unusual cases. A patient with a pituitary adenoma was also diagnosed with Graves' disease. Because of the proximity of the lesion and the orbital muscles affected by Graves' disease, the tumor and orbital muscles were treated at the same time with different doses. Results: All three categories could be planned to the physicians' satisfaction. In Case 1 patients, attention has to be paid to minimizing the dose to the healthy brain between the lesions. This can be achieved by using tuning structures or tuning shells around the tumors. In Case 2, the limiting factor is the dose fall‐off outside the GTV. A fraction of the CTV will be treated at a higher dose than planned. In Case 3, the combined planning of the Graves' disease and the pituitary adenoma achieved good dose sparing of the optic apparatus. Conclusion: The Cyberknife software is very flexible and allows relatively easy planning for complex situations.