Volume 21, Issue 8, August 1994
Index of contents:
21 (1994); http://dx.doi.org/10.1118/1.597205
Analytic models for calculation of scatter distributions from flattening filters in therapy photon beams are presented. It is shown that the amount of scatter with high atomic number filters can vary from 2% in 4‐MV beams to 10% for 24‐MV beams. The use of low atomic number filters can increase the amount of scatter by a factor of 2. The dependence on the opening angle of the primary collimator is quite large since a larger opening angle requires a thicker filter, which increases the scattered fraction of the filtered beam. The scatter makes the filter act as an extended source of extra‐focal radiation. The source distribution for monomedia filters is shown to be almost triangular. Integration in the calculation‐point’s eye view over the visible part of the filter scatter source yields the scatter fraction of the total energy fluence incident upon the patient. The experimentally well‐known ‘‘tilt’’ of dose profiles for asymmetrical fields is explained by the model. For complete modeling of head scatter distributions in treatment planning, the model presented must be combined with models also describing the scatter from the collimators, auxiliary modulators such as wedges and compensating filters, and collimator backscatter to the beam monitor.
Dose calculation for photon beams with intensity modulation generated by dynamic jaw or multileaf collimations
21 (1994); http://dx.doi.org/10.1118/1.597206
A dose calculation algorithm has been developed for photon beams with intensity modulation generated by dynamic jaw or multileaf collimations. First, an in‐air fluence distribution is constructed based on the dynamic motion of the jaws or leaves, taking into account the variation of output with field size defined by the jaws. The fluence distribution is then convolved with the appropriate pencil beam kernel to give correction factors which are used to calculate the dose distribution for an intensity‐modulated photon field. The proposed algorithm is strictly valid in homogeneous media only; patient heterogeneity correction is accounted for in an approximate manner. Dose distributions at several depths and for several field sizes were calculated for 6‐ and 15‐MV x‐ray beams for a set of standard wedges produced by dynamic jaws. Measurements were made with film and an ion chamber. Comparisons between calculated and measured data show good agreement (within 2%) for both dose profiles and wedge factors. Similar calculations and measurements were also made for a 25‐MV intensity‐modulated photon field produced by dynamic motion of a multileaf collimator. Agreement between calculations and measurements is also good (within 3%). The ‘‘tongue‐and‐groove’’ effect associated with a multileaf collimator design is also examined using a ring‐shaped field produced by matching two component fields. The computation time for a dynamic‐collimated field is the same as that for an irregular field shaped by conventional blocks. The algorithm is applicable to any pattern of jaw or multileaf motions. The strengths and remaining problems of the algorithm are discussed.
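The fluence‐convolution step described above can be sketched as follows. This is a generic illustration of convolving a 2‐D in‐air fluence map with a pencil‐beam kernel, not the authors' implementation; the array sizes, the linearly ramped "wedge" fluence, and the normalized 3×3 toy kernel are all assumptions for the example.

```python
import numpy as np

def pencil_beam_dose(fluence, kernel):
    """Convolve a 2-D in-air fluence map with a 2-D pencil-beam
    kernel (odd dimensions assumed) by direct, zero-padded
    summation; the result is a relative dose distribution."""
    fy, fx = fluence.shape
    ky, kx = kernel.shape
    py, px = ky // 2, kx // 2
    padded = np.pad(fluence, ((py, py), (px, px)))
    flipped = kernel[::-1, ::-1]           # convolution flips the kernel
    dose = np.empty_like(fluence, dtype=float)
    for i in range(fy):
        for j in range(fx):
            dose[i, j] = np.sum(padded[i:i + ky, j:j + kx] * flipped)
    return dose

# A linearly ramped fluence mimics a dynamic-wedge field; the kernel
# is a toy normalized spread function, not a measured one.
wedge = np.tile(np.linspace(0.2, 1.0, 9), (9, 1))
kernel = np.full((3, 3), 1.0 / 9.0)
dose = pencil_beam_dose(wedge, kernel)
```

Because the kernel is normalized to unit sum, a uniform (open‐field) fluence reproduces itself in the field interior, which is the sanity check such a correction‐factor scheme relies on.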
21 (1994); http://dx.doi.org/10.1118/1.597207
In electron beams, the dose in phantom under the central shielding depends on electron‐beam energy, depth in phantom, and shield area and thickness. In our experiments, all shield thicknesses were larger than the range of electrons in the shield material. At a given depth, the dose under the shield never exceeds the open field value; however, it can attain quite a large proportion of the open field value despite a shield thickness which exceeds the range of electrons in the shield material. The effects of shield area on the dose under the central shields were studied in detail and dose distributions are given as a function of shield lateral dimensions and electron‐beam energy. It is shown that in clinical use of central shielding, the best approach to dose estimation under the shield is direct measurement in phantom under conditions of the actual clinical setup.
The calibration and use of plane‐parallel ionization chambers for dosimetry of electron beams: An extension of the 1983 AAPM protocol. Report of AAPM Radiation Therapy Committee Task Group No. 39
21 (1994); http://dx.doi.org/10.1118/1.597359
This report is an extension of the 1983 AAPM protocol, popularly known as the TG‐21 protocol. It deals with the calibration of plane‐parallel ionization chambers and their use in calibrating therapy electron beams. A hierarchy of methods is presented. The first is to calibrate the plane‐parallel chamber in a high‐energy electron beam against a cylindrical chamber whose N_gas value has been obtained from a NIST‐traceable 60Co beam calibration. The second method, which is recommended for implementation by the ADCLs, is an in‐air calibration against a NIST‐traceable calibrated cylindrical chamber in a cobalt‐60 beam to obtain a plane‐parallel‐chamber calibration factor in terms of exposure or air kerma. The third method places the two chambers in a phantom in a cobalt‐60 beam and leads to an N_gas value for the plane‐parallel chamber. This report also gives N_gas ratios for five commonly used commercially available plane‐parallel chambers: the Capintec PS‐033, the Exradin P‐11, the Holt, the NACP, and the PTW‐Markus. The calculation of these N_gas ratios introduces a K_comp factor, which is also calculated for the five parallel‐plate chambers. The use of the plane‐parallel chambers follows the 1983 AAPM protocol for absorbed‐dose calibrations of electrons, except that new energy‐dependent P_repl values are given for the Capintec PS‐033 and PTW‐Markus chambers, consistent with the consensus of reports in the literature. For all the chambers, however, P_repl is unity for 20‐MeV electrons. This report does not address the issue of the use of plane‐parallel chambers in calibrating photon beams.
21 (1994); http://dx.doi.org/10.1118/1.597360
The limit at which quantization noise becomes dominant in video‐based real‐time portal imaging has been studied. Quantization noise due to truncation in integer frame averaging is shown to be dominant over the input analog‐to‐digital converter (A/D) quantization noise, unless image addition is used in video‐based real‐time portal imaging systems. Portal images acquired with the Newvicon camera by averaging more than 64 frames are found to be dominated by the quantization noise due to truncation. It is shown that the signal‐to‐noise ratio (SNR) is limited to 886.8 when using an 8‐bit A/D with digital frame averaging, but higher values can be achieved with digital frame addition. It is also shown that digital frame addition together with 16‐bit processing can achieve higher contrast resolution than digital frame averaging and 8‐bit processing.
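The quoted SNR ceiling of 886.8 is consistent with the standard uniform quantization‐error model: a full‐scale b‐bit signal spans 2^b levels while the rms quantization error is one LSB over √12. A small sketch of that arithmetic follows; the √N gain for frame addition assumes uncorrelated per‐frame quantization noise, which is the textbook model rather than a figure taken from this abstract.

```python
import math

def quantization_snr(bits):
    # Peak signal (2**bits quantization levels) divided by the rms
    # quantization error of one LSB / sqrt(12) for a uniform quantizer.
    return 2 ** bits * math.sqrt(12)

def snr_frame_addition(bits, n_frames):
    # Adding N digitized frames scales the signal by N and the
    # uncorrelated quantization noise by sqrt(N), so the SNR grows as
    # sqrt(N); truncated integer averaging instead stays pinned at the
    # single-frame ceiling.
    return quantization_snr(bits) * math.sqrt(n_frames)

print(round(quantization_snr(8), 1))   # the 8-bit limit: 886.8
```

With 16‐bit accumulation, 64‐frame addition of an 8‐bit signal fits without overflow (64 × 255 < 2^16), which is why frame addition plus 16‐bit processing beats 8‐bit averaging.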
21 (1994); http://dx.doi.org/10.1118/1.597209
Gd2O2S phosphor screens between 250 and 1000 mg/cm2 thick were evaluated for use in megavoltage imaging systems. The phosphor layers were placed on brass plates ranging from 1 to 5 mm thick, each with and without an optical back reflector (white paint). Light output and spatial resolution were measured at 6‐ and 23‐MV x‐ray energies. Light output was found to increase linearly with phosphor thickness up to 500 mg/cm2, reaching a plateau at 1000 mg/cm2. Spatial resolution [modulation transfer function (MTF)] decreased exponentially with phosphor thickness up to 750 mg/cm2, where a minimum was reached. The variation of MTF with phosphor thickness was found to obey a simple empirical relation.
21 (1994); http://dx.doi.org/10.1118/1.597236
Using fiber‐optic manufacturing techniques, it is possible to produce a radiographic grid that discriminates against scattered radiation in two dimensions. Such grids consist of septa composed of glass with a high lead content; the interspace material is air, so that approximately 80% of the grid area is open. In this way, effective high‐ratio grids can be produced with relatively low Bucky factors. The performance of samples of such grid material is characterized in terms of both scatter rejection and dose efficiency for application in digital mammography in both slot‐beam and area‐beam geometry. For area beams, five‐ to tenfold improved scatter rejection relative to conventional grids was observed. In slot configurations, such grids could provide improved SNR/dose performance and more effective utilization of the heat loading capability of the x‐ray source.
21 (1994); http://dx.doi.org/10.1118/1.597210
Several authors have proposed variations of the iterative filtered backprojection (IFBP) reconstruction algorithms claiming fast initial convergence rates. We have found that these algorithms are trying to minimize an unusual squared‐error criterion in a suboptimal way. As a result, existing IFBP algorithms are inefficient in the minimization of the criterion, and may become unstable at higher iteration numbers. We show that existing IFBP algorithms can be modified to use the steepest descent technique by simply optimizing the step size at each iteration. Further gains in convergence rates can be achieved with conjugate gradient IFBP algorithms derived from the same criterion. The steepest descent and conjugate gradient IFBP algorithms are guaranteed to converge, unlike some IFBP algorithms, and will do so in fewer iterations than existing IFBP algorithms.
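The step‐size optimization described above can be illustrated on a generic least‐squares criterion ½‖Ax − b‖². The sketch below is the textbook exact line search for steepest descent on that criterion, not the authors' IFBP‐specific filtered‐backprojection operator; A, b, and the stopping rule are assumptions for the example.

```python
import numpy as np

def steepest_descent_ls(A, b, n_iter=200):
    """Minimize 0.5 * ||A x - b||^2 by steepest descent with the step
    size optimized exactly at each iteration:
        alpha = ||g||^2 / ||A g||^2,
    which minimizes the criterion along the direction -g."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = A.T @ (A @ x - b)          # gradient of the criterion
        Ag = A @ g
        denom = float(Ag @ Ag)
        if denom == 0.0:               # already at a minimizer
            break
        x = x - (float(g @ g) / denom) * g
    return x
```

With the exact step size the criterion can never increase, which is the stability property the abstract contrasts against existing IFBP iterations; a conjugate‐gradient variant would reuse `g` to build conjugate directions and converge in still fewer iterations.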
The measurement of radiation dose profiles for electron‐beam computed tomography using film dosimetry
21 (1994); http://dx.doi.org/10.1118/1.597200
The unique geometry of electron‐beam CT (EBCT) scanners produces radiation dose profiles with widths which can be considerably different from the corresponding nominal scan width. Additionally, EBCT scanners produce both complex (multiple‐slice) and narrow (3 mm) radiation profiles. This work describes the measurement of the axial dose distribution from EBCT within a scattering phantom using film dosimetry methods, which offer increased convenience and spatial resolution compared to thermoluminescent dosimetry (TLD) techniques. Therapy localization film was cut into 8×220 mm strips and placed within specially constructed light‐tight holders for placement within the cavities of a CT Dose Index (CTDI) phantom. The film was calibrated using a conventional overhead x‐ray tube with spectral characteristics matched to the EBCT scanner (130 kVp, 10 mm Al HVL). The films were digitized at five samples per mm and calibrated dose profiles plotted as a function of z‐axis position. Errors due to angle‐of‐incidence and beam hardening were estimated to be less than 5% and 10%, respectively. The integral exposure under film dose profiles agreed with ion‐chamber measurements to within 15%. Exposures measured along the radiation profile differed from TLD measurements by an average of 5%. The film technique provided acceptable accuracy and convenience in comparison to conventional TLD methods, and allowed high spatial‐resolution measurement of EBCT radiation dose profiles.
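The quantity such a z‐axis profile feeds into is the CT Dose Index, the profile integral normalized by the total nominal beam width. A minimal sketch follows; the trapezoidal rule, the toy triangular profile, and the sample spacing are illustrative assumptions, not values from the measurements above.

```python
def ctdi(profile, dz, n_slices, slice_width):
    """CT Dose Index: the integral of the z-axis dose profile
    (trapezoidal rule, sample spacing dz) normalized by the total
    nominal beam width n*T of n slices of width T."""
    integral = dz * (profile[0] / 2 + sum(profile[1:-1]) + profile[-1] / 2)
    return integral / (n_slices * slice_width)

# A toy profile wider than its nominal slice: the tails raise the
# CTDI above the peak dose over the nominal width alone.
profile = [0.0, 0.4, 1.0, 0.4, 0.0]   # dose samples along z
value = ctdi(profile, dz=2.0, n_slices=1, slice_width=3.0)
```

The high sampling rate of digitized film (five samples per mm here) is what makes this integral well resolved for profiles as narrow as 3 mm.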
21 (1994); http://dx.doi.org/10.1118/1.597402
Most radiologists do not use texture information contained in the trabecular patterns of hand radiographs to diagnose erosive changes and demineralization due to systemic inflammatory diseases that affect the skeletal system. However, high‐resolution digitization achievable by a laser digitizer now makes it possible to access texture information that may not be perceived visually. We are studying the feasibility of computer‐assisted early detection of these processes with particular attention to patients with hyperparathyroidism. In this paper the methods used to extract a region of interest (ROI) for texture analysis are discussed. The techniques include multiresolution sensing, automatic adaptive thresholding, detection of orientation angle, and projection taken perpendicular to the line of least second moment. The methods were tested on a database of 50 pairs of hand radiographs. We segmented the middle and the index fingers with an average success rate of 83% per hand. For the segmented finger strips, we located ROIs on both the middle and the proximal phalanges correctly over 84% of the time. Texture information was collected in the form of a co‐occurrence matrix within the ROI. This study is a prelude to evaluating the correlation between classification based on texture analysis and diagnosis made by experienced radiologists.
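A gray‐level co‐occurrence matrix for a fixed pixel offset can be computed as below. This is a generic sketch of the technique, not the paper's implementation; the offset convention, the number of gray levels, and the handling of image borders are assumptions.

```python
import numpy as np

def cooccurrence_matrix(image, levels, dx=1, dy=0):
    """Count, for every pixel with gray level i whose neighbour at
    offset (dx, dy) has gray level j, one entry in C[i, j]; texture
    statistics (contrast, energy, etc.) are then derived from C."""
    C = np.zeros((levels, levels), dtype=int)
    rows, cols = image.shape
    for y in range(rows):
        for x in range(cols):
            y2, x2 = y + dy, x + dx
            if 0 <= y2 < rows and 0 <= x2 < cols:
                C[image[y, x], image[y2, x2]] += 1
    return C
```

On a checkerboard‐like trabecular pattern, counts concentrate off the diagonal (frequent level changes between neighbours), while a smooth region concentrates them on the diagonal; that difference is what texture classification exploits.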
21 (1994); http://dx.doi.org/10.1118/1.597403
The problem of accurate stereotactic localization and registration of targets in computed tomography (CT) data sets is addressed, in particular the effect of using a single transformation matrix (STM) to map voxel coordinates onto stereotactic coordinates. An algebraic approach to the calculation of stereotactic target coordinates in tomographic data acquired with conventional stereotactic localizers is presented. The volume transformation matrix (VTM) is discussed, which is useful for the registration of volumetric data sets, and also corresponds to the rigid body transformation matrix used in many so‐called frameless registration methods. The VTM can lead to accuracy degradation, in particular due to patient movement during scanning. Simulations were performed and CT data sets acquired with patients fitted with the CRW or the GTC stereotactic localizer were analyzed. Comparison of STM‐ and VTM‐derived stereotactic coordinates shows an average overall registration error of 0.1 mm for anesthetized patients and in the range 0.6–1.4 mm for nonanesthetized patients. Accuracy maps are described that enable the user to visualize the registration error in relation to the data. It is shown that the effect of fiducial point localization error and patient movement for VTM‐based localization is minimized when all available fiducials in the region of interest are used. The significance of these results is discussed, and methods are proposed to minimize these effects for frame‐based and frameless registration methods.
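The rigid‐body fit underlying such a transformation matrix can be computed in closed form from matched fiducial coordinates. The SVD (Kabsch/Procrustes) solution sketched below is a standard method consistent with, but not taken from, the paper; the fiducial coordinates in the example are invented.

```python
import numpy as np

def rigid_fit(src, dst):
    """Least-squares rotation R and translation t mapping fiducial
    points src (n x 3) onto dst (n x 3), via SVD of the
    cross-covariance matrix (the Kabsch solution, with a guard
    against reflections)."""
    sc, dc = src.mean(axis=0), dst.mean(axis=0)
    H = (src - sc).T @ (dst - dc)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # reject improper rotations
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, dc - R @ sc
```

Because the fit is least‐squares over all points, adding more fiducials averages down both localization error and motion‐induced error, matching the observation above that using all available fiducials minimizes the effect.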
Photon propagation and detection in single‐photon emission computed tomography—An analytical approach
21 (1994); http://dx.doi.org/10.1118/1.597201
An analytical theory of photon propagation and detection in single‐photon emission computed tomography (SPECT) for collimated detectors is developed from first principles. The total photon detection kernel is expressed as a sum of terms due to the primary and the Compton‐scattered photons. The primary as well as contributions due to every order of Compton scattering are calculated separately. The model accounts for the three‐dimensional depth dependence of the collimator holes as well as for nonhomogeneous attenuation. No specific assumptions about the boundary or the homogeneity of the attenuating medium are made. The energy response of the detector is also modeled by the theory. Analytical expressions are obtained for various contributions to the photon detection kernel, and the multidimensional integrals involved are calculated using standard numerical integration methods. Theoretically calculated projections and scatter fractions for the primary and for the first and second scattering orders are compared with our own experimental results for a small cylindrical primary radiation source immersed at various positions in a uniform cylindrical phantom. Also, theoretically calculated scatter fractions for a small spherical (pointlike) source in a uniform elliptic phantom are compared with experimental and Monte Carlo simulation results taken from the recent literature. The results from the analytical method are essentially exact and are free from the inaccuracies inherent in the numerical simulation methods used to deal with the photon propagation and detection problem in SPECT so far. The method developed here is unique in the sense that it provides accurate theoretical predictions of results averaged over an infinite number of simulations or experiments.
We believe that our theory enhances an intuitive understanding of the complex image formation process in SPECT and is an important step toward solving the inverse problem, that of reconstructing the primary radiation source distribution from the measured gamma camera projections.
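The primary‐photon term of such a detection kernel reduces, for a single ray, to an attenuated line integral of the activity. A discretized sketch of that integral follows; the detector position at the end of the ray, the half‐step inside the emitting element, and the sample values are all assumptions for illustration, far simpler than the full 3‐D collimated kernel above.

```python
import numpy as np

def attenuated_projection(activity, mu, ds):
    """Primary-photon projection along one ray: each source element
    contributes its activity weighted by exp(-integral of mu) over
    the path from that element to the detector, which is taken to
    sit just past the last element of the ray."""
    proj = 0.0
    for i in range(len(activity)):
        # attenuation path: half of this element plus all elements
        # between it and the detector
        path = (0.5 * mu[i] + np.sum(mu[i + 1:])) * ds
        proj += activity[i] * np.exp(-path) * ds
    return proj
```

With mu set to zero this reduces to the plain (unattenuated) line integral, and nonzero attenuation strictly reduces the projection, as expected of a primary‐photon term.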
Laser‐induced thermoelastic deformation: A three‐dimensional solution and its application to the ablation of biological tissue
21 (1994); http://dx.doi.org/10.1118/1.597202
Under certain conditions, laser light incident on a target material can induce an explosive removal of some material, a process called laser ablation. The photomechanical model of laser ablation asserts that this process is initiated when the laser‐induced stresses exceed the strength of the material in question. Although one‐dimensional calculations have shown that short pulsed lasers can create significant transient tensile stresses in target materials, the stresses last for only a few nanoseconds and the spatial location of the peak stresses is not consistent with experimental observations of material failure in biological tissues. Using the theory of elasticity, analytical expressions have been derived for the thermoelastic stresses and deformations in an axially symmetric three‐dimensional solid body caused by the absorption of laser light. The full three‐dimensional solution includes three stresses, radial, circumferential and shear, which are necessarily absent in the simple one‐dimensional solution. These stresses have long‐lived components that exist for eight orders of magnitude longer in time than the acoustic transients, an important point when the details of dynamic fracture are considered. Many important qualitative features are revealed including the spatial location of the peak stresses, which is more consistent with experimental observations of failure.
21 (1994); http://dx.doi.org/10.1118/1.597405
A fully automated two‐dimensional image registration technique based on cross correlation is presented. This technique is evaluated using magnetic resonance images of human brain. Results indicate that this algorithm is capable of accurately estimating both linear and angular offsets.
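For pure translations, the peak of the cross correlation gives the offset directly; computing the correlation via FFT makes this fast enough for fully automated use. The sketch below handles integer shifts only and, unlike the algorithm above, does not recover angular offsets; the image sizes and wrap convention are assumptions.

```python
import numpy as np

def estimate_shift(ref, moved):
    """Locate the peak of the circular cross correlation of two
    images, computed via FFT, and return the (row, col) shift d for
    which moved is approximately np.roll(ref, d, axis=(0, 1))."""
    corr = np.fft.ifft2(np.fft.fft2(moved) * np.conj(np.fft.fft2(ref))).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the array midpoint correspond to negative shifts.
    return tuple(int(p) if p <= s // 2 else int(p) - s
                 for p, s in zip(peak, corr.shape))
```

Angular offsets are commonly recovered by the same peak search applied in a rotated or polar‐resampled coordinate system; subpixel accuracy requires interpolating around the correlation peak.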