Volume 27, Issue 5, May 2000
 POINT/COUNTERPOINT


Candidacy for board certification in radiological/medical physics should be restricted to graduates of accredited training programs

 RADIATION PROTECTION

 Educational Treatise

Calculation of effective dose
The concept of “effective dose” was introduced in 1975 to provide a mechanism for assessing the radiation detriment from partial-body irradiations in terms of data derived from whole-body irradiations. The effective dose is the mean absorbed dose from a uniform whole-body irradiation that results in the same total radiation detriment as the nonuniform, partial-body irradiation in question. The effective dose is calculated as the weighted average of the mean absorbed dose to the various body organs and tissues, where the weighting factor is the radiation detriment for a given organ (from a whole-body irradiation) as a fraction of the total radiation detriment. In this review, effective dose equivalent and effective dose, as established by the International Commission on Radiological Protection in 1977 and 1990, respectively, are defined and various methods of calculating these quantities are presented for radionuclides, radiography, fluoroscopy, computed tomography, and mammography. In order to calculate either quantity, it is first necessary to estimate the radiation dose to individual organs. One common method of determining organ doses is through Monte Carlo simulations of photon interactions within a simplified mathematical model of the human body. Several groups have performed these calculations and published their results in the form of data tables of organ dose per unit activity or exposure. These data tables are specified according to particular examination parameters, such as radiopharmaceutical, x-ray projection, x-ray beam energy spectrum, or patient size. Sources of these organ-dose conversion coefficients are presented and differences between them are examined. The estimates of effective dose equivalent or effective dose calculated using these data, although not intended to describe the dose to an individual, can be used as a relative measure of stochastic radiation detriment.
The calculated values, in units of sievert (or rem), indicate the amount of whole-body irradiation that would yield the equivalent radiation detriment as the exam in question. In this manner, the detriment associated with partial or organ-specific irradiations, as are common in diagnostic radiology, can be assessed.
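The weighted-average calculation described above can be sketched in a few lines. This is a hedged illustration of the ICRP 60 formalism E = Σ_T w_T H_T using the published tissue weighting factors; the organ doses in the example are made-up numbers, not data from this review.

```python
# Sketch of effective dose as the tissue-weighted sum of mean organ
# doses, E = sum_T w_T * H_T (ICRP 60 formalism). The example organ
# doses are illustrative only.

ICRP60_WEIGHTS = {          # tissue weighting factors w_T (sum to 1.0)
    "gonads": 0.20, "red_marrow": 0.12, "colon": 0.12, "lung": 0.12,
    "stomach": 0.12, "bladder": 0.05, "breast": 0.05, "liver": 0.05,
    "esophagus": 0.05, "thyroid": 0.05, "skin": 0.01,
    "bone_surface": 0.01, "remainder": 0.05,
}

def effective_dose(organ_doses_mSv):
    """Weighted average of mean organ doses; organs outside the beam
    contribute zero, which is how partial-body exposures are handled."""
    return sum(ICRP60_WEIGHTS[t] * h for t, h in organ_doses_mSv.items())

# A partial-body (chest) exposure irradiating mainly three tissues:
partial_body = {"lung": 8.0, "breast": 6.0, "esophagus": 4.0}
E = effective_dose(partial_body)  # 0.12*8 + 0.05*6 + 0.05*4 = 1.46 mSv
```

The result is the uniform whole-body dose (in mSv) carrying the same stochastic detriment as the partial-body exposure.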

Effective doses to patients undergoing thoracic computed tomography examinations
The purpose of this study was to investigate how x-ray technique factors and effective doses vary with patient size in chest CT examinations. Technique factors (kVp, mAs, section thickness, and number of sections) were recorded for 44 patients who underwent a routine chest CT examination. Patient weights were recorded together with dimensions and mean Hounsfield unit values obtained from representative axial CT images. The directly irradiated portion of each patient was modeled as a cylinder of water to permit the computation of the mean patient dose and total energy imparted for each chest CT examination. Computed values of energy imparted during the chest CT examination were converted into effective doses taking into account the patient weight. Patient weights ranged from 4.5 to 127 kg, and half the patients in this study were children under 18 years of age. All scans were performed at 120 kVp with a 1 s scan time. The selected tube current showed no correlation with patient weight, indicating that chest CT examination protocols do not take the size of the patient into account. Energy imparted increased with increasing patient weight, with values of energy imparted for 10 and 70 kg patients being 85 and 310 mJ, respectively. The effective dose decreased with increasing patient weight, however, with values of effective dose for 10 and 70 kg patients being 9.6 and 5.4 mSv, respectively. Current CT technique factors (kVp/mAs) used to perform chest CT examinations result in relatively high patient doses, which could be reduced by adjusting technique factors based on patient size.
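The size dependence reported above can be made concrete by taking the two quoted (energy imparted, effective dose) pairs and computing effective dose per unit energy imparted. The numbers come directly from the abstract; the ratio calculation is a simple illustration, not part of the paper's method.

```python
# Reported (energy imparted in mJ, effective dose in mSv) at two
# patient weights, taken from the abstract above.
reported = {10: (85.0, 9.6), 70: (310.0, 5.4)}  # kg: (mJ, mSv)

for weight, (energy_mJ, e_mSv) in sorted(reported.items()):
    ratio = e_mSv / energy_mJ  # mSv of effective dose per mJ imparted
    print(f"{weight} kg patient: {ratio:.3f} mSv/mJ")
# The 10 kg child receives roughly 6.5x more effective dose per unit
# energy imparted than the 70 kg adult (about 0.113 vs 0.017 mSv/mJ),
# which is why fixed-mAs protocols penalize small patients.
```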
 IMAGING PHYSICS


An edge spread technique for measurement of the scatter-to-primary ratio in mammography
An experimental technique that directly measures the magnitude and spatial distribution of scatter in relation to primary radiation is presented in this work. The technique involves the acquisition of magnified edge spread function (ESF) images with and without scattering material present. The ESFs are normalized and subtracted to yield scatter-to-primary ratios (SPRs), along with the spatial distributions of scatter and primary radiation. Mammography is used as the modality to demonstrate the ESF method, which is applicable to all radiographic environments. Sets of three images were acquired with a modified clinical mammography system employing a flat panel detector for 2, 4, 6, and 8 cm thick breast-tissue-equivalent phantoms composed of 0%, 43%, and 100% glandular tissue at four different kV settings. Beam stop measurements of scatter were used to validate the ESF methodology. There was good agreement of the mean SPRs between the beam stop and ESF methods. There was good precision in the ESF-determined SPRs, with a coefficient of variation on the order of 5%. SPRs ranged from 0.2 to 2.0 and were effectively independent of energy for clinically realistic kVps. The measured SPRs for 2, 4, and 6 cm 0% glandular phantoms imaged at 28 kV were 0.21±0.01, 0.39±0.01, and 0.57±0.02, respectively. The measured SPRs for 2, 4, and 6 cm 43% glandular phantoms imaged at 28 kV were 0.20±0.01, 0.35±0.02, and 0.53±0.02, respectively. The measured SPRs for 2, 4, and 6 cm 100% glandular phantoms imaged at 28 kV were 0.22±0.02, 0.42±0.03, and 0.88±0.08, respectively.
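The normalize-and-subtract idea behind the ESF method can be sketched with synthetic edge profiles: one acquisition with the scattering phantom present (primary plus scatter) and one without (primary only); their difference estimates the scatter signal, and scatter over primary gives the SPR. The profiles below are invented for illustration, not measured data.

```python
import numpy as np

# Synthetic edge profiles illustrating the normalize-and-subtract step.
x = np.linspace(-20, 20, 401)              # mm across the edge
primary = np.where(x < 0, 1.0, 0.05)       # sharp edge, primary only
scatter = 0.4 * np.ones_like(x)            # broad, slowly varying scatter
with_phantom = primary + scatter           # what the detector records

scatter_est = with_phantom - primary       # subtract the normalized ESFs
spr_open_field = scatter_est[x < 0].mean() / primary[x < 0].mean()
# With these synthetic profiles SPR = 0.4/1.0 = 0.4, inside the 0.2-2.0
# range reported above.
```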

Using light sensitometry to evaluate mammography film performance
The performance of commercially available light sensitometers was compared with two other methods of x-ray sensitometry to determine whether commercially available sensitometers are viable for evaluating the clinical performance of mammography film. X-ray sensitometry was performed using mammography screens that were modified to accommodate a graded optical step tablet (screen sensitometry). In addition, a means for performing intensity-scale x-ray sensitometry was configured (inverse-square sensitometry). Clinical mammography x-ray exposure conditions were used and film processing quality was closely monitored during the study. Statistical results for chi-square probabilities on the resulting contrast curves yielded good agreement for most of the configurations investigated. Comparison of film gradient versus optical density curves showed good agreement for maximum contrast values and the corresponding optical density for maximum contrast for three of the four screen–film combinations used when comparing light sensitometry to screen sensitometry. A similar comparison of light sensitometers to inverse-square sensitometry showed good agreement for maximum contrast, but less agreement for the corresponding optical density of maximum contrast. Based on these results, the authors concluded that commercially available sensitometers could be used to estimate clinical film performance for the screen–film systems tested. In particular, they can be used to determine the range of optical densities that provide optimal film contrast.

High temporal resolution for multislice helical computed tomography
Multislice helical computed tomography (CT) substantially reduces scanning time. However, the temporal resolution of individual images is still insufficient for imaging rapidly moving organs such as the heart and adjacent pulmonary vessels. It may, in some cases, be worse than with current single-slice helical CT. The purpose of this study is to describe a novel image reconstruction algorithm to improve temporal resolution in multislice helical CT, and to evaluate its performance against existing algorithms. The proposed image reconstruction algorithm uses helical interpolation followed by data weighting based on the acquisition time. The temporal resolution, the longitudinal (z-axis) spatial resolution, the image noise, and the in-plane image artifacts created by a moving phantom were compared with those from the basic multislice helical reconstruction (helical filter interpolation, HFI) algorithm and the basic single-slice helical reconstruction algorithm (180° linear interpolation, 180LI) using computer simulations. Computer simulation results were verified with CT examinations of the heart and lung vasculature using a 0.5 s multislice scanner. The temporal resolution of the HFI algorithm varies from 0.28 to 0.86 s, depending on helical pitch. The proposed method improves the resolution to a constant value of 0.29 s, independent of pitch, allowing moving objects to be imaged with reduced blurring or motion artifacts. The spatial (z) resolution was slightly worse than with the HFI algorithm; the image noise was worse than with the HFI algorithm but was comparable to axial (step-and-shoot) CT. The proposed method provided sharp images of the moving objects, portraying the anatomy accurately. The proposed algorithm for multislice helical CT allowed us to obtain CT images with high temporal resolution. It may improve the image quality of clinical cardiac, lung, and vascular CT imaging.
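The acquisition-time weighting step can be illustrated generically: after helical interpolation, each contributing projection is given a weight that favors data acquired close to the reconstruction time, shrinking the effective temporal aperture. The triangular window below is an assumed illustration of this idea, not the paper's actual weighting function.

```python
import numpy as np

# Generic time-based projection weighting (assumed form, for
# illustration): a triangular window centered on the reconstruction
# time, normalized so the weights sum to one.

def time_weights(t_acq, t_recon, half_width):
    """Weight each projection by its temporal distance from t_recon;
    projections farther than half_width away get zero weight."""
    w = np.clip(1.0 - np.abs(t_acq - t_recon) / half_width, 0.0, None)
    return w / w.sum()

t_acq = np.linspace(0.0, 1.0, 11)   # s, acquisition times of projections
w = time_weights(t_acq, t_recon=0.5, half_width=0.3)
# Projections more than 0.3 s from t_recon contribute nothing, so the
# effective temporal window is ~0.3 s rather than the full rotation.
```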

A gain nonuniformity correction for multislice volumetric CT scanners
This paper presents a calibration and correction method for detector-cell gain variations. A key functionality of current CT scanners is to offer variable slice thickness to the user. To provide this capability in multislice volumetric scanners, while minimizing costs, it is necessary to combine the signals of several detector cells when the desired slice thickness is larger than the minimum provided by a single cell. These combined signals are then preamplified, digitized, and transmitted to the system for further processing. The process of combining the output of several detector cells with nonuniform gains can introduce numerical errors when the impinging x-ray signal varies over the range of combined cells. These numerical errors, which by nature are scan dependent, can lead to artifacts in the reconstructed images, particularly when the numerical errors vary from channel to channel (as the filtered-backprojection filter includes high-pass filtering along the channel direction, within a given slice). A projection data correction algorithm has been developed to subtract the associated numerical errors. It relies on the ability to calibrate the individual cell gains. For effectiveness and data flow reasons, the algorithm works on a single-slice basis, without slice-to-slice exchange of information. An initial error vector is calculated by applying a high-pass filter to the projection data. The essence of the algorithm is to correlate that initial error vector with a calibration vector obtained by applying the same high-pass filter to various combinations of the cell gains (each combination representing a basis function for an expansion). The solution of the least-squares problem, obtained via singular value decomposition, gives the coefficients of a polynomial expansion of the signal slope and curvature. From this information, and given the cell gains, the final error vector is calculated and subtracted from the projection data.
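The core fitting step, solving a least-squares problem by singular value decomposition to expand the error vector on calibration basis vectors, can be sketched as follows. The matrices here are random stand-ins; the real basis vectors would come from high-pass-filtered gain combinations as described above.

```python
import numpy as np

# Sketch of the SVD-based least-squares fit: expand an error vector on a
# set of calibration basis vectors, then subtract the fitted error.
rng = np.random.default_rng(0)
n_channels, n_basis = 128, 4
B = rng.standard_normal((n_channels, n_basis))   # calibration basis vectors
coeffs_true = np.array([0.8, -0.3, 0.1, 0.05])   # slope/curvature expansion
error_vec = B @ coeffs_true                      # initial error vector

# Least squares via SVD (the decomposition underlying np.linalg.lstsq):
U, s, Vt = np.linalg.svd(B, full_matrices=False)
coeffs = Vt.T @ ((U.T @ error_vec) / s)

corrected = error_vec - B @ coeffs               # residual after subtraction
```

Because the synthetic error vector lies exactly in the span of the basis, the fit recovers the coefficients and the residual vanishes; with real projection data a nonzero residual would remain.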

A localization algorithm and error analysis for stereo xray image guidance
Stereo x-ray radiography is attracting increasing attention in major clinical applications. The purpose of this paper is to analyze the 3D localization error for breast biopsy procedures and provide guidelines for improving its accuracy. Our prototype is a CCD-based digital stereo x-ray imaging system. The mathematical model consists of two x-ray sources and one stationary detector plane. A closed-form least-squares solution is derived for 3D localization of feature points, particularly a biopsy needle tip, from a pair of 2D digital radiographs. Based on the least-squares formula and its first-order approximation, the 3D localization error is analyzed in terms of object location, measurement error, separation between the two x-ray sources, and distance from the source to the detector. The stereo imaging and error estimation formulas are numerically simulated and experimentally validated. The data are in agreement with theoretical prediction. These results can be used for the purposes of system design and protocol optimization.
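The geometry lends itself to a compact sketch: each source and its image point on the detector define a ray, and the 3D feature location is the least-squares point closest to both rays. This is a generic closed-form triangulation under an assumed geometry, not necessarily the paper's exact formula; the source positions and target are invented numbers.

```python
import numpy as np

def closest_point_to_rays(origins, directions):
    """Least-squares point minimizing summed squared distance to rays:
    solve sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) o_i."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

s1 = np.array([-50.0, 0.0, 600.0])       # x-ray source 1 (mm, assumed)
s2 = np.array([+50.0, 0.0, 600.0])       # x-ray source 2
target = np.array([5.0, 3.0, 100.0])     # e.g., a biopsy needle tip
# Image points: where each source-to-target ray hits the detector (z = 0)
i1 = s1 + (target - s1) * (s1[2] / (s1[2] - target[2]))
i2 = s2 + (target - s2) * (s2[2] / (s2[2] - target[2]))

p = closest_point_to_rays([s1, s2], [i1 - s1, i2 - s2])  # recovers target
```

With noise-free image points the two rays intersect exactly; adding measurement noise to i1 and i2 is the natural way to reproduce the error analysis described in the abstract.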

Anthropomorphic versus geometric chest phantoms: A comparison of scatter properties
Previously, we have used an anthropomorphic chest phantom to study scatter reduction in digital chest radiography. Image metrics, such as scatter fractions, contrast, noise, and resolution, are not easily measured due to the anatomical structure in the phantom. A geometric chest phantom, recently developed for quality control purposes, offers the possibility of being used to calculate image quality measurements. Here, we compare the scatter properties of the two phantoms to determine if the geometric phantom can be used in our studies of scatter compensation techniques. A calibrated photostimulable phosphor system was used to acquire images of the two phantoms. An array of beam stops was placed in front of each phantom to calculate scatter fractions. Each phantom had approximately 2 in. of polystyrene material added to the posterior to increase scatter fractions to those normally seen in patients. Exposure parameters were 300 mA for 0.009 s with a source-to-image distance of 100 cm. Energies were varied from 60 to 130 kVp. Scatter fractions were determined for different areas of anatomy for each energy and each phantom. For all energies examined, the two phantoms compare well for scatter fractions in each of six regions. For example, at 95 kVp, the geometric phantom had average scatter fractions of 0.72 and 0.88 in the lung and mediastinum regions, respectively. These values were 0.74 and 0.90 for the anatomic phantom. For comparison, measurements of scatter fractions in patients at these values have been reported as 0.65 and 0.90 for the lung and mediastinum regions. The geometric phantom is an excellent tool that can be used in place of the anthropomorphic phantom for studies involving scatter compensation. In addition to having a gray level histogram typical of a human chest, this phantom has uniform regions where image quality measurements can be calculated.
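The beam-stop measurement behind this comparison reduces to a one-line ratio: behind a small lead stop the detector records only scatter S, and without the stop it records scatter plus primary S + P, so the scatter fraction is SF = S / (S + P). The signal values below are illustrative numbers chosen to reproduce the reported lung-region value.

```python
# Scatter fraction from a beam-stop measurement: the signal behind the
# stop is scatter only; the open signal is scatter plus primary.
def scatter_fraction(signal_behind_stop, signal_open):
    return signal_behind_stop / signal_open

# Illustrative signals (arbitrary units, not measured data):
sf_lung = scatter_fraction(signal_behind_stop=360.0, signal_open=500.0)
# 360/500 = 0.72, matching the geometric phantom's lung-region value
# reported above.
```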

Characterization of a fluoroscopic imaging system for kV and MV radiography
An online kilovoltage (kV) imaging system has been implemented on a medical linear accelerator to verify radiotherapy field placement. A kV x-ray tube is mounted on the accelerator at 90° to the megavoltage (MV) source and shares the same isocenter. Nearly identical CCD-based fluoroscopic imagers are mounted opposite the two x-ray sources. These systems are being used in a clinical study of patient setup error that examines the advantage of kV imaging for online localization. In the investigation reported here, the imaging performance of the kV and MV systems is characterized to support the conclusions of the studies of setup error. A spatial-frequency-dependent linear systems model is used to predict the detective quantum efficiencies (DQEs) of the two systems. Each is divided into a series of gain and spreading stages. The parameters of each stage are either measured or obtained from the literature. The model predicts the system gain to within 7% of the measured gain for the MV system and to within 10% for the kV system. The systems’ noise power spectra (NPSs) and modulation transfer functions (MTFs) are measured to construct the measured DQEs. X-ray fluences are calculated using modeled polyenergetic spectra. Measured DQEs agree well with those predicted by the model. The model reveals that the MV system is well optimized, and is x-ray quantum noise limited at low spatial frequencies. The kV system is suboptimal, but for purposes of patient positioning yields images superior to those produced by the MV system. This is attributed to the kV system’s higher DQE and to the inherently higher contrasts present at kV energies.
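Assembling a measured DQE from MTF, NPS, mean signal, and incident fluence follows a standard recipe, sketched here with one common convention, DQE(f) = S² MTF²(f) / (q NPS(f)). The curves and numbers below are synthetic; conventions differ (e.g., normalized NPS), so treat this as an illustration rather than the paper's exact formulation.

```python
import numpy as np

# Synthetic DQE assembly: DQE(f) = S^2 * MTF(f)^2 / (q * NPS(f)),
# with S the mean signal and q the incident x-ray fluence.
f = np.linspace(0.0, 2.0, 50)        # spatial frequency (cycles/mm)
S = 1000.0                            # mean detector signal (arb. units)
q = 2.5e5                             # incident fluence (photons/mm^2)
mtf = np.exp(-1.2 * f)                # synthetic MTF
nps = 5.0 * np.ones_like(f)          # synthetic (white) NPS

dqe = (S**2) * mtf**2 / (q * nps)
# dqe[0] = 1e6 / 1.25e6 = 0.8; a zero-frequency DQE near the detector's
# quantum efficiency indicates x-ray quantum noise limited operation.
```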

Lens distortion in optically coupled digital xray imaging
The objectives of this research are to analyze geometrical distortions introduced by relay lenses in optically coupled digital x-ray imaging systems and to introduce an algorithm to correct such distortions. Methods: The radial and tangential errors introduced by a relay lens in digital x-ray imaging were experimentally measured using a lens-coupled CCD (charge-coupled device) prototype. An algorithm was introduced to correct these distortions. Based on an x-ray image of a standard calibration grid, the algorithm first identified the location of the optical axis, then corrected the radial and tangential distortions using a polynomial transformation technique. Results: Lens distortions were classified, and both radial and tangential distortions introduced by lenses were corrected using the polynomial transformation. For the specific lens-CCD prototype investigated, the mean positional error caused by the relay lens was reduced by the correction algorithm from about eight pixels (0.69 mm) to less than 1.8 pixels (0.15 mm). Our investigation also shows that a fourth-order polynomial in the correction algorithm provided the best correction result. Conclusions: Lens distortions should be considered in position-dependent, quantitative x-ray imaging, and such distortions can be minimized in CCD x-ray imaging by an appropriate algorithm, as demonstrated in this paper.
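A common way to express radial distortion correction about a located optical axis is a polynomial in the radius, r_corrected = r (1 + k₂r² + k₄r⁴). The sketch below uses that standard model with made-up coefficients; the paper's fourth-order polynomial fit (and its tangential term) may differ in form.

```python
import numpy as np

# Radial distortion correction with an even polynomial in the radius
# about the optical axis (standard model; coefficients are invented).
def undistort(points, center, k2, k4):
    """Map distorted detector coordinates to corrected ones."""
    d = points - center
    r2 = np.sum(d**2, axis=1, keepdims=True)
    return center + d * (1.0 + k2 * r2 + k4 * r2**2)

center = np.array([512.0, 512.0])              # located optical axis (px)
pts = np.array([[512.0, 512.0], [812.0, 512.0]])
out = undistort(pts, center, k2=1e-7, k4=1e-14)
# A point on the optical axis is unchanged; off-axis points shift
# radially, by more the farther they are from the axis.
```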

 RADIATION TREATMENT PHYSICS


Photon scatter in portal images: Accuracy of a fluence based pencil beam superposition algorithm
The accuracy of a pencil beam algorithm to predict scattered-photon fluence into portal imaging systems was studied. A database of pencil beam kernels describing scattered-photon fluence behind homogeneous water slabs (1–50 cm thick) at various air gap distances (0–100 cm) was generated using the EGS Monte Carlo code. Scatter kernels were partitioned according to particle history: singly scattered, multiply scattered, and bremsstrahlung and positron annihilation photons. Mean energy and mean angle with respect to the incident photon pencil beam were also scored. These data allow fluence, mean energy, and mean angular data for each history type to be predicted using the pencil beam algorithm. Pencil beam algorithm predictions for 6 and 24 MV incident photon beams were compared against full Monte Carlo simulations for several inhomogeneous phantoms, including approximations to a lateral neck and a mediastinum treatment. The accuracy of predicted scattered-photon fluence, mean energy, and mean angle was investigated as a function of air gap, field size, photon history, incident beam resolution, and phantom geometry. Maximum errors in mean energies were 0.65 and 0.25 MeV for the higher and lower energy spectra, respectively, and 15° for mean angles. The ability of the pencil beam algorithm to predict scatter fluence decreases with decreasing air gap, with the largest error for each phantom occurring at the exit surface. The maximum predictive error was found to be 6.9% with respect to the total fluence on the central axis. By maintaining even a small air gap (∼10 cm), the error in predicted scatter fluence may be kept under 3% for the phantoms and beam energies studied here. It is concluded that this pencil beam algorithm is sufficiently accurate (using International Commission on Radiation Units and Measurements Report No. 24 guidelines for absorbed dose) over the majority of clinically relevant air gaps for further investigation in a portal dose prediction algorithm.
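The pencil beam superposition itself amounts to a 2D convolution of the incident fluence map with a thickness- and air-gap-dependent scatter kernel drawn from the database. The sketch below shows the superposition step with a synthetic Gaussian kernel standing in for a Monte Carlo kernel; the field, kernel shape, and 10% scatter-to-primary normalization are all assumptions for illustration.

```python
import numpy as np

# 2D superposition of an incident fluence map with a pencil beam
# scatter kernel (direct convolution, fine for small kernels).
def scatter_fluence(incident, kernel):
    ni, nj = incident.shape
    ki, kj = kernel.shape
    out = np.zeros((ni, nj))
    pad = np.pad(incident, ((ki // 2,), (kj // 2,)))
    for i in range(ni):
        for j in range(nj):
            out[i, j] = np.sum(pad[i:i + ki, j:j + kj] * kernel[::-1, ::-1])
    return out

incident = np.ones((16, 16))                 # open-field incident fluence
y, x = np.mgrid[-3:4, -3:4]
kernel = np.exp(-(x**2 + y**2) / 4.0)        # synthetic scatter kernel
kernel *= 0.1 / kernel.sum()                 # assume 10% scatter-to-primary
scatter = scatter_fluence(incident, kernel)
# In the field center, the predicted scatter is ~0.1 * incident fluence;
# near the field edge it falls off as the kernel overlaps less beam.
```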

Slit x-ray beam primary dose profiles determined by analytical transport of Compton recoil electrons
Accurate measurement of radiation beam penumbras is essential for conformal radiotherapy. For this purpose a detailed knowledge of the dosimeter’s spatial response is required. However, experimental determination of detector spatial response is cumbersome and restricted to the specific detector type and beam spectrum used. A model has therefore been developed to calculate, in slit beam geometry, both dose profiles and detector response profiles. Summations over representative photon beam spectra yield profiles for polyenergetic beams. In the present study the model is described and the resulting dose profiles are verified. The model combines Compton scattering of incident photons, transport of the resulting electrons by Fermi–Eyges small-angle multiple scattering theory, and functions to limit electron transport. This analytic model thus yields line spread kernels of primary dose in a water phantom. It is shown that the spatial response of an ideal point detector to a primary photon beam can be well described by the model; the calculations are verified by measurements with a diamond detector in a telescopic slit geometry in which all dose contributions except for the primary dose can be excluded. Effects of photon detector behavior, source size of the linear accelerator (linac), and detector size are studied. Measurements show that slit dose profiles calculated by means of the kernel are accurate to within 0.1 mm of the full-width at half-maximum. For a theoretical point source and point detector combined with a 0.2 mm wide slit, the full-width at half-maximum values of the slit beam dose profiles are calculated as 0.37 mm and 0.42 mm in a 6 MV and 25 MV x-ray beam, respectively. The present study shows that the model is adequate to calculate local dose effects that are dominated by approximately monodirectional, primary photon fluence. The analytic model further provides directional electron fluence information and is designed to be applied to various detectors and linac beam spectra.

Patient-dependent beam-modifier physics in Monte Carlo photon dose calculations
Pencil-beam-on-slab model calculations, as well as a series of detailed calculations of photon and electron output from commercial accelerators, are used to quantify the level(s) of physics required for the Monte Carlo transport of photons and electrons in treatment-dependent beam modifiers, such as jaws, wedges, blocks, and multileaf collimators, in photon teletherapy dose calculations. The physics approximations investigated comprise (1) not tracking particles below a given kinetic energy, (2) continuing to track particles, but performing simplified collision physics, particularly in handling secondary particle production, and (3) not tracking particles in specific spatial regions. Figures of merit needed to estimate the effects of these approximations are developed, and these estimates are compared with full-physics Monte Carlo calculations of the contribution of the collimating jaws to the on-axis depth-dose curve in a water phantom. These figures of merit are next used to evaluate various approximations used in coupled photon/electron physics in beam modifiers. Approximations for tracking electrons in air are then evaluated. It is found that knowledge of the materials used for beam modifiers, of the energies of the photon beams used, and of the length scales typically found in photon teletherapy plans allows a number of simplifying approximations to be made in the Monte Carlo transport of secondary particles from the accelerator head and beam modifiers to the isocenter plane.

Intensity modulation delivery techniques: “Step & shoot” MLC autosequence versus the use of a modulator
Two intensity modulation radiotherapy (IMRT) delivery systems, the “step & shoot” multileaf collimator (MLC) autosequence and the use of an intensity modulator, are compared with emphasis on dose-optimization quality and treatment irradiation time. The intensity modulation (IM) was created by a dose gradient optimization algorithm which maximizes the target dose uniformity while maintaining dose to critical structures below a set tolerance defined by the user in terms of either a single dose value or a dose volume histogram curve for each critical structure. Two clinical cases were studied with and without dose optimization: a three-field sinus treatment and a six-field nasopharyngeal treatment. The optimization goal of the latter case included the sparing of several nearby normal structures in addition to target dose uniformity. In both cases, the target dose uniformity initially improved quickly as the IM level increased to 5, then started to approach saturation when the MLC technique was used. In the absence of both the spatial and intensity discreteness intrinsic to the MLC technique, the modulator technique produced greater tumor dose uniformity and normal structure sparing. The latter showed no systematic improvement with increasing IM level using the MLC technique. For the sinus tumor treatment of 2 Gy, the treatment irradiation time of the modulator technique is no more than that of the conventional treatment. For the MLC technique, the irradiation time increased rapidly from 4.4 min to 12.4 min as the IM level increased from 2 to 10. Both clinical cases suggested that an IM level of 5 offered a good compromise between dose-optimization quality and treatment irradiation time. We showed that a realistic photon source model is necessary for dose computation accuracy in the MLC-IM treatments.

Dose calculation and verification of intensity modulation generated by dynamic multileaf collimators
While the development of inverse planning tools for optimizing dose distributions has come to a level of maturity, intensity modulation has not yet been widely implemented in clinical use because of problems related to its practical delivery and a lack of verification tools and quality assurance (QA) procedures. One of the prerequisites is a dose calculation algorithm that achieves good accuracy. The purpose of this work was twofold. First, a primary-scatter separation dose model was extended to account for intensity modulation generated by a dynamic multileaf collimator (MLC). Then the calculation procedures were tested by comparison with carefully performed experiments. Intensity modulation is accounted for by means of a 2D (two-dimensional) matrix of correction factors that modifies the spatial fluence distribution incident on the patient. The dose calculation for the corresponding open field is then modified by those correction factors, which are used to weight separately the primary and the scatter component of the dose at a given point. In order to verify that the calculated dose distributions are in good agreement with measurements on our machine, we designed a set of test intensity distributions and performed measurements with 6 and 20 MV photons on a Varian Clinac 2300C/D linear accelerator equipped with a 40-leaf-pair dynamic MLC. Comparison between calculated and measured dose distributions for a number of representative cases shows, in general, good agreement (within 3% of the normalization in low dose gradient regions and within 3 mm distance-to-dose in high dose gradient regions). For absolute dose calculations (monitor unit calculations), comparison between calculation and measurement reveals good agreement (within 2%) for all tested cases (with the condition that the prescription point is not located in a high dose gradient region).
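The separate weighting of primary and scatter components can be sketched in miniature. This is a hedged illustration of the idea only: the local fluence correction factor scales the primary dose directly, while the scatter dose responds to an effective factor averaged over the contributing field area. The function name, the use of a simple field-mean factor, and all numbers are assumptions, not the paper's model.

```python
# Illustrative primary/scatter weighting by fluence correction factors.
# c_local scales the primary component at the calculation point;
# c_field_mean is an assumed effective factor for the scatter component.
def modulated_dose(primary_open, scatter_open, c_local, c_field_mean):
    return c_local * primary_open + c_field_mean * scatter_open

# Open-field dose components at a point: 1.6 Gy primary, 0.4 Gy scatter.
# Local fluence modulated to 50%, field-averaged fluence at 80%:
d = modulated_dose(1.6, 0.4, c_local=0.5, c_field_mean=0.8)  # 1.12 Gy
```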

A dosimetric leaf-setting strategy for shaping radiation fields using a multileaf collimator
A dosimetric leaf-setting strategy for shaping radiation fields with multileaf collimators (MLCs) has been developed. Existing MLC leaf-setting strategies are all based upon geometric criteria. This new approach, however, matches a prescribed field contour with an MLC using clinically consistent dosimetric criteria. The leaf positions are determined using an iterative optimization algorithm. An empirical dose model was developed to compare the dosimetric leaf-setting strategy with geometric leaf-setting strategies. Differences of up to half a centimeter in the leaf positions and isodose lines were found between setting the MLC geometrically and setting the MLC dosimetrically. The dosimetric leaf-setting strategy provides the ability to achieve better dose conformation for a clinically desired isodose line. Since the desired isodose line that covers a treatment volume is typically higher than 50% of the maximum dose, the scalloping effects at the leaf edges due to the finite leaf width are much reduced relative to the 50% isodose line. Another benefit of dosimetric leaf setting is that it separates the leaf-setting process from the treatment planning process, which frees treatment planning vendors from developing detailed dose models for the various existing types and future upgrades of MLC systems.

A practical technique for verification of three-dimensional conformal dose distributions in stereotactic radiosurgery
The trend toward conformal techniques in stereotactic radiosurgery necessitates an accurate and practical method for verification of irregular three-dimensional dose distributions. This work presents the design and evaluation of a phantom system facilitating the measurement of conformal dose distributions using one or more arrays of up to 20 radiographic films separated by 3.2 mm thick tissue-equivalent spacers. Using Electron Gamma Shower version 4 (EGS4) Monte Carlo simulation, we show that for 6 MV radiosurgical photon beams this arrangement preserves tissue equivalence to within 1%. The phantom provides 0.25 mm in-plane spatial resolution, and multiple sets of films may be used to resample the dose volume in orthogonal planes. Dedicated software has been developed to automate the process of ordering and orienting scanned film images, conversion of scanned pixel value to dose, resampling of one or more sets of film images, and subsequent export of images in DICOM format for coregistration of planned and measured dose volumes. Calculated and measured isodose surfaces for a simple, circular-beam treatment agree to within 1.5 mm throughout the dose volume. For conformal radiosurgical applications, the measured and planned dose distributions agree to within the uncertainty of the manufacture of irregularly shaped collimators. The sensitivity of this technique to minor spatial inaccuracies in beam shaping is also demonstrated.

Truncation of blood curves to enhance imaging and therapy with monoclonal antibodies
Targeting of monoclonal antibody (Mab) to solid tumor sites is a function of the blood curve of activity versus time. It has been suggested that the blood curve be artificially reduced to approach zero so that the contrast between tumor and blood uptake is maximized. We analyzed tumor uptake as a function of the time of blood curve truncation. By using a convolution approach, we were able to find the optimal times for setting the blood curve to zero in either diagnostic or therapeutic animal examples. Two iodinated anti-CEA engineered fragments, diabody and minibody, were considered using previous data from nude mouse studies involving the LS174T colorectal tumor model. Figures of merit (FOMs) were used to compare ordinary and truncated blood curves and their associated tumor accumulations. With a diagnostic label, it was seen that the appropriate time for diagnostic truncation occurred when tumor uptake, as measured, was a maximum. The corresponding point for therapy (with a therapeutic label) was at infinite time. We also demonstrated that the use of traditional indices led to ambiguities in the choice of truncation times. The traditional therapy index, the ratio of the integral of the tumor uptake to the integral of the blood uptake, was found to be a numerical constant independent of the truncation time. This ratio was proved to be the integral of the tumor impulse response function. Use of such convolution techniques to assess truncation of the perfused material is probably also applicable to multistep processes as well as to lesion targeting with other tumor-specific pharmaceuticals.
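The convolution model underlying this analysis can be sketched directly: tumor uptake is the convolution of the blood activity curve b(t) with a tumor impulse response h(t), and truncating b(t) to zero after a cutoff time changes the predicted tumor curve from that point on. The exponential curves and the 12 h cutoff below are synthetic choices for illustration, not the mouse data.

```python
import numpy as np

# Tumor uptake as the convolution of the blood curve with a tumor
# impulse response; truncation sets the blood curve to zero after t_c.
dt = 0.1                                   # h, time step
t = np.arange(0.0, 48.0, dt)
blood = np.exp(-t / 6.0)                   # synthetic blood activity curve
h = 0.05 * np.exp(-t / 24.0)               # synthetic tumor impulse response

def tumor_uptake(blood_curve):
    return np.convolve(blood_curve, h)[: len(t)] * dt

full = tumor_uptake(blood)
truncated_blood = np.where(t <= 12.0, blood, 0.0)   # cutoff at t_c = 12 h
trunc = tumor_uptake(truncated_blood)
# Before t_c the two tumor curves coincide; after t_c the truncated-blood
# curve falls below the full one, while the blood background is zero.
```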

Optimization of radiosurgery treatment planning via mixed integer programming
An automated optimization algorithm based on mixed integer programming techniques is presented for generating high-quality treatment plans for LINAC radiosurgery treatment. The physical planning in radiosurgery treatment involves selecting, among a large collection of beams with different physical parameters, an optimal beam configuration (geometries and intensities) to deliver the clinically prescribed radiation dose to the tumor volume while sparing nearby critical structures and normal tissue. The proposed mixed integer programming models incorporate strict dose restrictions on the tumor volume, and constraints on the desired number of beams, isocenters, couch angles, and gantry angles. The model seeks to deliver full prescription dose coverage and uniform radiation dose to the tumor volume while minimizing the excess radiation to the peripheral normal tissue. In particular, it ensures that proximal normal tissues receive minimal dose via rapid dose falloff. Preliminary numerical tests on a single patient case indicate that this approach can produce exceptionally high-quality plans in a fraction of the time required using the procedure currently employed by clinicians. The resulting plans provide a highly uniform prescription dose to the tumor volume while drastically reducing the irradiation received by the proximal critical normal tissue.
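A generic skeleton of such a mixed integer program can be written down compactly. The notation below is assumed for illustration and is not the paper's exact model: x_b ≥ 0 is the intensity of candidate beam b, y_b ∈ {0,1} selects it, D_vb is the dose to voxel v per unit intensity of beam b, L and U are the prescription bounds, M is a big-M bound on beam intensity, and N caps the number of beams used.

```latex
\begin{align*}
\min_{x,\,y}\quad & \sum_{v \in \text{normal}} \sum_{b} D_{vb}\, x_b \\
\text{s.t.}\quad
  & L \le \sum_{b} D_{vb}\, x_b \le U && \forall\, v \in \text{tumor},\\
  & 0 \le x_b \le M\, y_b,\qquad y_b \in \{0,1\} && \forall\, b,\\
  & \sum_{b} y_b \le N.
\end{align*}
```

The binary variables are what make beam-count, isocenter, and angle restrictions expressible as hard constraints rather than penalty terms.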
