Table of Contents
Volume 25, Issue 1, January 1998
25 (1998); http://dx.doi.org/10.1118/1.598167
The Los Alamos code MCNP4A (Monte Carlo N-Particle, version 4A) is currently used to simulate a variety of problems ranging from nuclear reactor analysis to boron neutron capture therapy. A graphical user interface has been developed that automatically sets up the MCNP4A geometry and radiation source requirements for a three-dimensional Monte Carlo simulation using computed tomography data. The major drawback of this dosimetry system is the time required to obtain a statistically significant answer. A specialized patch file has been developed that optimizes photon particle transport and dose scoring within the standard MCNP4A lattice geometry. The transport modifications produce a performance increase (number of histories per minute) of approximately 4.7, based upon a 6 MV point source centered within a voxelized lattice water phantom. The dose scoring modifications produce a performance increase of approximately 470 for a tally section spanning a large number of lattice elements. Homogeneous and heterogeneous benchmark calculations produce good agreement with measurements using a standard water phantom and a high- and low-density heterogeneity phantom. The dose distribution from a typical mediastinum treatment planning setup is presented for qualitative analysis and comparison against a conventional treatment planning system.
25 (1998); http://dx.doi.org/10.1118/1.598169
The contribution from contaminant electrons in the buildup region of a photon beam must be separated out when calculating the dose using a photon convolution kernel. Their contribution can be extrapolated from fractional depth dose (FDD) data using the fractional depth kerma (or the “equilibrium dose”) derived from measured quantities such as beam attenuation with depth, the phantom scatter factor as a function of field size and depth, and the inverse-square law for the incident photon beam. Good agreement is observed between the extrapolated and the EGS4 Monte Carlo simulated primary dose-to-kerma ratios in the surface region for the photon beams, excluding electron contamination. The FDD was measured using a Scanditronix photon diode and was normalized to a reference depth far beyond the maximum range of contaminant electrons. An analysis of the 8 and 18 MV photon beams from a Varian 2100CD indicates that at a source-to-surface distance (SSD) of 100 cm, the maximum electron contaminant dose (relative to its maximum FDD) varies from 1% to 33% for 8 MV and from 2% to 44% for 18 MV, for square collimator settings ranging from 5 to 40 cm (defined at 100 cm from the source). This value at the depth of maximum dose (2 cm for 8 MV and 3.5 cm for 18 MV) can reach 1% for 8 MV and 2.3% for 18 MV. The contaminant electron dose is almost independent of SSD for 8 MV and starts to fall off for 18 MV at SSDs larger than 120 cm. Compared with the open beam, the contaminant electron dose increases when a solid tray is used, and the magnitude of the increase grows with field size, reaching 19% and 16% for 8 and 18 MV photons, respectively. The contaminant electron dose increases slightly for a blocked beam compared with an open beam of the same field size if a tray is used in both cases. The contaminant electron dose for a wedged field is less than that for an open field. However, the reduction is less significant at larger collimator settings and may become a slight increase for 8 MV photons.
25 (1998); http://dx.doi.org/10.1118/1.598164
Percutaneous transluminal coronary angioplasty (PTCA) is currently one of the most common treatments for obstructive coronary artery disease. The long-term success of the treatment, however, is severely limited by restenosis. Recently, different investigators have begun to study the possibility of radiation therapy for restenosis prevention and have shown promising results. However, an optimal radiation delivery device for delivering a highly localized radiation dose to the arterial medial layer while preserving the viability of the artery has yet to be established. In this article, we discuss the development of a unique mixed gamma/beta brachytherapy source capable of delivering a high radiation dose to a 0.5 mm thick vessel wall by proton-beam activation of an existing nickel titanium stent to produce vanadium-48. The dose distribution of the activated stent is determined by computer simulation using the MCNP Monte Carlo code and is verified by radiochromic film measurement.
25 (1998); http://dx.doi.org/10.1118/1.598434
Using an applicator for brachytherapy to reduce recurrence rates after pterygium excision has been an effective therapeutic procedure. Accurate knowledge of the dose applied to the affected area of the sclera has been lacking, and for decades inaccurate estimates of lens dose have thus been made. Small errors in the assumptions required to make these estimates lead to exponentially varying dose rates because of the attenuation of beta particles. Monte Carlo simulations have been used to evaluate the assumptions now used for calculating the surface dose rate and the corresponding lens dose. For an ideal applicator, results from this study indicate dose rates to the most radiosensitive areas of the lens ranging from 8.8 to 15.5 cGy/s. This range is based on different eye dimensions that ultimately correspond to a range in distance between the applicator surface and the germinative epithelium of the lens of 2–3 mm. Furthermore, the conventional 200 cGy threshold for whole-lens cataractogenesis is questioned for predicting complications from scleral brachytherapy. The dose to the germinative epithelium should be used instead for studying radiocataractogenesis.
25 (1998); http://dx.doi.org/10.1118/1.598171
In our previous study we used the linear-quadratic model [J. Nucl. Med. 35, 1861 (1994)] to confirm our initial finding, based on the time-dose-fractionation model [J. Nucl. Med. 34, 1801 (1993)], that longer-lived radionuclides can offer a substantial therapeutic advantage over the shorter-lived radionuclides presently used in radioimmunotherapy (RIT). The original calculations using the linear-quadratic (LQ) model did not account for proliferation of the tumor and critical bone marrow tissues. It has been suggested that inclusion of a proliferation term in the LQ model can have a substantial impact on the biologically effective dose (BED). With this in mind, we have reexamined the therapeutic efficacy of longer- versus shorter-lived radionuclides using the LQ model replete with proliferation terms for tumor and bone marrow. Relative advantage factors (RAF), which quantify the overall therapeutic advantage of a long-lived compared to a short-lived radionuclide, were calculated accordingly. While the extrapolated initial dose rate required to achieve a given BED can be affected by the inclusion of proliferation terms for both the tumor and the marrow, the relative advantage factors for the longer-lived radionuclides were not significantly affected. Longer-lived radionuclides are about three times more therapeutically effective than the shorter-lived radionuclides currently used in RIT. In other words, for a given therapeutic effect in the tumor, a longer-lived radionuclide can result in a lower deleterious effect on the bone marrow than a short-lived radionuclide. Given that bone marrow is generally considered to be the dose-limiting organ, these results have important implications for radioimmunotherapy.
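For readers unfamiliar with the formalism, one widely used form of the BED for an exponentially decaying dose rate (Dale's linear-quadratic formulation, here with a proliferation correction) can be sketched as follows; the symbols are generic and not taken from the paper:

```latex
\mathrm{BED} \;=\; \frac{\dot D_0}{\lambda}
\left[\,1 + \frac{\dot D_0}{(\mu+\lambda)\,(\alpha/\beta)}\,\right]
\;-\; \frac{\ln 2}{\alpha}\,\frac{T_{\mathrm{eff}}}{T_{\mathrm{pot}}}
```

where $\dot D_0$ is the extrapolated initial dose rate, $\lambda$ the effective decay constant of the radionuclide, $\mu$ the sublethal-damage repair rate, $\alpha/\beta$ the tissue sensitivity ratio, $T_{\mathrm{pot}}$ the potential doubling time of the proliferating tissue, and $T_{\mathrm{eff}}$ the effective treatment time. The proliferation term (the second term) lowers the BED for rapidly proliferating tissue, which is why its inclusion can change the initial dose rate required for a given effect.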
25 (1998); http://dx.doi.org/10.1118/1.598168
The theory of electron penetration as predicted by the Fokker–Planck equation is first reviewed within a restricted context that considers the multiple scattering and transport of charged particles. We then broaden the context and show that range-straggling effects also fit successfully into this framework, which completes an electron model initiated by Yang. We introduce those effects with a superposition of Fokker–Planck solutions, i.e., by using an incident beam that contains a spectrum of initial energies or, equivalently, a set of CSDA (continuous slowing down approximation) ranges. Straggling effects appear to be a beam property in this approach but are effectively attributed back to the material when the model is applied. All the information needed to construct the spectrum is obtained from a measurement of the electron rest charge distribution in polystyrene. To illustrate the correctness of this procedure, we consider the case of a 20 MeV electron beam incident on water. We predict the absorbed dose distribution as a function of depth and also measure it with an ionization chamber in a water tank. We find nearly perfect agreement between calculation and experiment in this case, where all the results derive from and apply to a clinically operational machine.
Calculating dose distributions and wedge factors for photon treatment fields with dynamic wedges based on a convolution/superposition method
25 (1998); http://dx.doi.org/10.1118/1.598173
A convolution/superposition-based method was developed to calculate dose distributions and wedge factors in photon treatment fields generated by dynamic wedges. This algorithm used a dual-source photon beam model that accounted for both primary photons from the target and secondary photons scattered from the machine head. The segmented treatment tables (STT) were used to calculate realistic photon fluence distributions in the wedged fields. The inclusion of the extra-focal photons resulted in more accurate dose calculation in high-dose-gradient regions, particularly in the beam penumbra. The wedge factors calculated using the convolution method were also compared to the measured data and showed good agreement, within 0.5%. The wedge factor varied significantly with the field width along the moving-jaw direction, but not along the static-jaw or depth directions. This variation was found to be determined by the ending position of the moving jaw, i.e., by the STT of the dynamic wedge. In conclusion, the convolution method proposed in this work can be used to accurately compute dose for a dynamic or an intensity-modulated treatment based on the fluence modulation in the treatment field.
25 (1998); http://dx.doi.org/10.1118/1.598161
The output factor for an enhanced dynamic wedge (EDW) field, like that for a dynamic wedge field, is a complex function of the field dimension in the wedge direction. The large change in output for different field sizes (varying by more than 40%) is due to rescaling of the golden segmented treatment table (GSTT), which specifies the cumulative monitor-unit weighting as a function of moving-jaw position. The rescaling of the GSTT results in increased output on the central axis as the final moving-jaw position in the wedge plane decreases. The output factor (in air or in water) on the central axis of an EDW field can be predicted to within 1% by multiplying the output factor (in air or in water) of an open field of the same size by the ratio of the normalized GSTT (NGSTT) value at 4.5 cm, which corresponds to a 10 cm×10 cm square field, to the NGSTT value at the final moving-jaw position for the EDW field. Once the NGSTT factor is separated from the output for EDW fields, the field-size-dependent wedge factor varies by less than 1%. This approach allows a simple and accurate determination of the output factor for rectangular and asymmetric EDW fields. The equivalent-square method for determining output for rectangular fields applies to EDW fields with the same precision as it does to open fields. The output for EDW fields depends strongly on the final moving-jaw position: every 5 mm change in the final moving-jaw position causes a 3.5%–5.4% error in the monitor-unit calculation.
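The prediction rule described above reduces to a single ratio of table values. A minimal sketch, assuming NGSTT values are supplied by the user from the machine's golden table (the numeric values in the example are purely illustrative):

```python
def edw_output_factor(open_field_of, ngstt_ref, ngstt_final):
    """Predict the EDW output factor by scaling the open-field output
    factor with a ratio of normalized GSTT values, per the abstract.

    open_field_of -- output factor of the open field of the same size
    ngstt_ref     -- NGSTT value at the 4.5 cm reference jaw position
                     (corresponding to a 10 cm x 10 cm field)
    ngstt_final   -- NGSTT value at the final moving-jaw position
    """
    return open_field_of * ngstt_ref / ngstt_final


# Illustrative (made-up) NGSTT values: with ngstt_final > ngstt_ref the
# predicted EDW output factor is smaller than the open-field value.
print(edw_output_factor(1.000, 0.60, 0.80))
```

Because the field-size dependence is absorbed entirely by the NGSTT ratio, the residual wedge factor is nearly constant (under 1% variation), which is what makes this decomposition practical for monitor-unit calculations.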
25 (1998); http://dx.doi.org/10.1118/1.598172
Results of a validation study of a commercial virtual wedge device recently installed at our institution are presented. The wedge simulation produces an energy fluence from the treatment head that is equivalent to the primary energy fluence attenuated through a wedge-shaped slab of water with the central axis fluence set to unity. A simple exponential formula used to compute off-axis wedge factors is compared to beam profiles measured in a water phantom. A fast Fourier transform (FFT) convolution dose calculation is compared to measured dose profiles. Measured and calculated central axis wedged/open field ratios as a function of depth are also compared.
Magnetic resonance quantification of the myocardial perfusion reserve with a Fermi function model for constrained deconvolution
25 (1998); http://dx.doi.org/10.1118/1.598163
The myocardial perfusion reserve, defined as the ratio of hyperemic to basal myocardial blood flow, is a useful indicator of the functional significance of a coronary artery lesion. Rapid magnetic resonance (MR) imaging for the noninvasive detection of a bolus-injected contrast agent as an MR tracer is applied to the measurement of regional tissue perfusion during rest and hyperemia in patients with microvascular dysfunction. A Fermi function model for the distribution of tracer residence times in the myocardium is used to fit the MR signal curves. The myocardial perfusion reserve is calculated from the impulse-response amplitudes for rest and hyperemia. The assumptions of the model are tested with Monte Carlo simulations, using a multiple-path, axially distributed mathematical model of blood–tissue exchange, which allows systematic variation of blood flow, vascular volume, and capillary permeability. For a contrast-to-noise ratio of 6:1, and over a range of flows from 0.5 to 4.0 ml/min per g of tissue, the ratio of the impulse-response amplitudes for hyperemic and basal flows is linearly proportional to the ratio of model blood flows, provided the mean transit time of the input function is shorter than approximately 9 s. The uncertainty in the blood flow reserve estimates grows at both low and high flows. The predictions of the Monte Carlo simulations agree with the results of MR first-pass studies in patients without significant coronary artery lesions or microvascular dysfunction, where the perfusion reserve in the territory of the left anterior descending coronary artery (LAD) correlates linearly with the intracoronary Doppler ultrasound flow reserve in the LAD, in agreement with previous PET studies.
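As a rough illustration of the constrained-deconvolution idea, the tissue signal can be modeled as the arterial input convolved with a Fermi-shaped impulse response whose amplitude tracks flow. The parameterization below is a generic sketch (the function and parameter names are illustrative, not the paper's exact formulation):

```python
import numpy as np

def fermi_impulse_response(t, amplitude, t0, width):
    # Fermi (logistic) form: approximately flat near t = 0, rolling
    # off around t0. The amplitude at t = 0 serves as the flow estimate.
    return amplitude / (1.0 + np.exp((t - t0) / width))

def model_tissue_curve(t, input_curve, amplitude, t0, width):
    # Tissue signal modeled as the arterial input function convolved
    # with the Fermi impulse response (discrete convolution scaled by
    # the sampling interval).
    dt = t[1] - t[0]
    h = fermi_impulse_response(t, amplitude, t0, width)
    return np.convolve(input_curve, h)[: len(t)] * dt

# The perfusion reserve is then approximately the ratio of the fitted
# amplitudes for the hyperemic and resting acquisitions.
```

Fitting this model to the rest and hyperemia curves (e.g., by nonlinear least squares) constrains the deconvolution, which is what makes the amplitude ratio robust at modest contrast-to-noise ratios.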
25 (1998); http://dx.doi.org/10.1118/1.598162
Two side-by-side energy windows, one at the photopeak and one at lower energy, are sometimes employed in quantitative SPECT studies. We measured the count-rate losses at moderately high activities for two multihead Anger cameras in such a dual-window-acquisition mode by imaging a decaying source, composed of two hot spheres within a warm cylinder, successively over a total of 23 days. The window locations were kept fixed and the paralyzable dead-time model was assumed. In addition, for the Picker Prism 3000 XP camera, the source was viewed from three different angles separated by 120°, and the final results are an average over these three angles. For the Picker camera, the fits to the data from the individual windows are good (the mean of the squared correlation coefficient equals 0.98), while for the Siemens Multispect camera the fits to the data from head 1 and from the lower-energy monitor window are relatively poor. Therefore, with the Siemens camera the data from the two windows are combined for dead-time computation. Repeated autopeaking might improve the fits. At the maximum count rate, corresponding to a total activity of 740 MBq (20 mCi) in the phantom, the multiplicative dead-time correction factor is considerably larger for the Picker than for the Siemens camera. For the Picker camera, it is 1.11, 1.12, and 1.12 for heads 1–3 with the photopeak window and 1.10 for all heads with the lower-energy monitor window. For the Siemens camera, the combined-window dead-time correction factor is 1.02 for head 1 and 1.03 for head 2. Differences between the dead-time correction factor for focal activity and for the total activity do not support the hypothesis of count misplacement between foci of activity at these count rates. Therefore, the total-image dead-time correction is recommended for all parts of the image.
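For context, the paralyzable model assumed above relates the observed count rate m to the true rate n by m = n·exp(−n·τ), where τ is the dead time. A minimal sketch of inverting this relation to obtain the multiplicative correction factor n/m (the rates and dead time used below are illustrative, not the cameras' actual values):

```python
import math

def observed_rate(true_rate, tau):
    # Paralyzable dead-time model: m = n * exp(-n * tau).
    return true_rate * math.exp(-true_rate * tau)

def correction_factor(measured_rate, tau, iters=100):
    # Recover n from m = n * exp(-n * tau) by fixed-point iteration
    # n <- m * exp(n * tau), which converges on the low-rate branch
    # (n * tau < 1); return the multiplicative factor n / m.
    n = measured_rate
    for _ in range(iters):
        n = measured_rate * math.exp(n * tau)
    return n / measured_rate
```

In this model the correction factor equals exp(n·τ), so a factor of about 1.11, like that reported for the Picker photopeak window, corresponds to n·τ ≈ 0.10.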
Fast image reconstruction for optical absorption tomography in media with radially symmetric boundaries
25 (1998); http://dx.doi.org/10.1118/1.598160
In this paper we present a reconstruction algorithm to invert the linearized problem in optical absorption tomography for objects with radially symmetric boundaries. This is a relevant geometry for functional volume imaging of body regions that are sensitive to ionizing radiation, e.g., the breast and testis. From the principles of diffuse light propagation in scattering media we derive the governing integral equations describing the effects of absorption variations on changes in the measurement data. Expansion of these equations into a Neumann series and truncation of higher-order terms yields the linearized forward imaging operator. For the proposed geometry we exploit an invariance property of this operator, which greatly reduces the dimensionality of the problem. This allows us to compute the inverse by singular value decomposition and consequently to apply regularization techniques based on knowledge of the singular value spectrum. The inversion algorithm is highly efficient, computing slice images as fast as convolution-backprojection algorithms in computed tomography (CT). To demonstrate the capability of the inversion scheme we present reconstruction results for synthetic and phantom measurement data.
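The SVD-based regularization described above can be sketched with a truncated-SVD pseudo-inverse, in which singular values below a relative threshold are discarded. This is a generic illustration of the technique, not the paper's specific forward operator:

```python
import numpy as np

def tsvd_solve(A, y, rcond=1e-2):
    # Regularized inversion via truncated singular value decomposition:
    # singular values smaller than rcond * s_max are zeroed out, which
    # suppresses the noise amplification typical of ill-posed problems.
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.where(s > rcond * s[0], 1.0 / s, 0.0)  # s is descending
    return Vt.T @ (s_inv * (U.T @ y))
```

Knowing the singular value spectrum of the forward operator, as the abstract emphasizes, is exactly what lets one choose the truncation threshold (`rcond` here) in a principled way.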
25 (1998); http://dx.doi.org/10.1118/1.598165
The modulation transfer function (MTF) of radiographic systems is frequently evaluated by measuring the system's line spread function (LSF) using narrow slits. The slit method requires precise fabrication and alignment of a slit and a high radiation exposure. An alternative method for determining the MTF uses a sharp, attenuating edge device. We have constructed an edge device from a 250-μm-thick lead foil laminated between two thin slabs of acrylic. The device is placed near the detector and aligned, with the aid of a laser beam and a holder, such that a polished edge is parallel to the x-ray beam. A digital image of the edge is processed to obtain the presampled MTF. The image processing includes automated determination of the edge angle, reprojection, sub-binning, smoothing of the edge spread function (ESF), and spectral estimation. This edge method has been compared to the slit method using measurements on standard and high-resolution imaging plates of a digital storage phosphor (DSP) radiography system. The results of the two methods agree, with a mean MTF difference of 0.008. The edge method provides a convenient measurement of the presampled MTF for digital radiographic systems, with good response at low frequencies.
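At its core, the edge-method pipeline differentiates the edge spread function and Fourier-transforms the result. A bare-bones sketch, omitting the angle estimation, reprojection, sub-binning, and smoothing steps the abstract describes:

```python
import numpy as np

def mtf_from_esf(esf, dx=1.0):
    # Differentiate the (oversampled) edge spread function to obtain
    # the line spread function, then take the magnitude of its Fourier
    # transform, normalized to unity at zero frequency, as the MTF.
    lsf = np.gradient(esf, dx)
    mtf = np.abs(np.fft.rfft(lsf))
    mtf /= mtf[0]
    freqs = np.fft.rfftfreq(len(lsf), d=dx)
    return freqs, mtf
```

In practice the ESF must be smoothed before differentiation, since the derivative step amplifies noise; this is why the authors include smoothing and spectral estimation in the processing chain.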
Diagnostic x-ray spectra: A comparison of spectra generated by different computational methods with a measured spectrum
25 (1998); http://dx.doi.org/10.1118/1.598170
A number of computer codes, developed using semi-empirical models, are available to compute x-ray spectra from a tungsten target for different tube parameters. In this study, x-ray spectra measured with a high-purity germanium detector are compared with spectra computed using these semi-empirical models and with previously published measured data. The computer codes used to generate the spectra are based on the models proposed by Birch et al. and Tucker et al. The measured x-ray spectra agreed well with those computed using the model of Tucker et al., whereas the model of Birch et al. produced a "harder" x-ray spectrum than the measured spectra. Our measured x-ray spectra also compared well with the previously published measured spectral data of Fewell et al.
25 (1998); http://dx.doi.org/10.1118/1.598166
This paper presents a new algorithm for frame registration. Our algorithm requires only that the frame be composed of straight rods, as opposed to the special structures or accurate frame model required by existing algorithms. The algorithm utilizes the full 3D information in the frame, as well as a least-squares weighting scheme, to achieve highly accurate registration. We use simulated CT data to assess the accuracy of our algorithm, and we compare its performance to that of two commonly used algorithms. Simulation results show that the proposed algorithm is comparable to the best existing techniques with knowledge of the exact mathematical frame model. For CT data corrupted with an unknown in-plane rotation or translation, the proposed technique is also comparable to the best existing techniques. However, in situations where there is a discrepancy of more than 2 mm (0.7% of the frame dimension) between the frame and the mathematical model, the proposed technique is significantly better than the existing techniques. The proposed algorithm can be applied to any existing frame without modification. It provides better registration accuracy and is robust against model mismatch. It allows greater flexibility in the frame structure. Lastly, it reduces the frame construction cost, as adherence to a concise model is not required.
25 (1998); http://dx.doi.org/10.1118/1.598174
This thesis is directed at the development, evaluation, and application of a novel scatter measurement technique for digital radiography. The use of an aperture for scatter measurement is examined from both a theoretical and an experimental perspective. The theoretical consideration is based on a broad-beam analytical model, which predicts that scatter signals should be negligible at narrow beam sizes. The experimental examination is based on analyzing features in images of narrow apertures using a digital fluoroscopy system under simulated clinical imaging conditions. It is established that a narrow aperture can generate a scatter-free signal that can be related to the open-field signal to determine the scatter signal, and that this method has advantages over the conventional approach in terms of practical simplicity, accuracy, and dose efficiency. This is achieved through examination of the signature of skirt data in images of apertures 0.5 to 10 mm in diameter, and through determination of the air-gap dependence of aperture signals and the densitometric fidelity of image data. The approach is extended to the development and evaluation of a computerized scatter correction scheme based on two-dimensional interpolation of scatter samples determined using an array of apertures. The evaluation is directed at the physical imaging performance of the scheme and encompasses the image acquisition and processing stages of the scatter correction process. It is established that the scheme generates substantial improvement in broad-area contrast, densitometric linearity, square-wave response factor, and spatial uniformity; has no direct effect on edge sharpness and limiting resolution; and gives rise to a substantial increase in image mottle. This conclusion is valid for any correction scheme that involves subtraction of a smooth background image, be it based on spatial-domain convolution filtering, interpolated scatter sampling, or another process. The performance of the scheme is also compared with that of a high-ratio grid and with spatial-domain convolution filtering.