Index of contents:
Volume 18, Issue 5, September 1991

The design of megavoltage projection imaging systems: Some theoretical aspects
This study investigates factors associated with the imaging of a patient using a high‐energy radiotherapy treatment beam. Both single‐stage (e.g., solid‐state detector) and two‐stage (e.g., scintillation screen plus TV) systems are considered. First, an expression is derived that relates dose at the buildup depth in the object to the structure of the object, the scatter‐to‐primary signal‐variance ratio, and the differential‐signal‐to‐noise ratio in the image. Second, the number of bits required to digitize the image is derived. Third, the effect of scattered radiation is investigated for photon‐counting, photopeak, and Compton detector types. Fourth, the effect of noise in the detection process is considered. Finally, the relationship between x‐ray source size, detector aperture, and image magnification is derived. The optimum magnification for a given source size and detector aperture is discussed in terms of the system transfer function. The study indicates that at a primary beam energy of 2 MeV, a dose of 10^{−3} cGy is required to reliably detect the presence of a bone section of area 10×10 mm and thickness 4 mm in 250 mm of soft tissue. For this example, it is also estimated that a digitization accuracy of 10 bits is required. The calculations indicate that for a Compton detector, the scatter‐to‐primary signal‐variance ratio drops from a value of around 30% at the exit surface of the object to 5% at a distance of 80 cm from the object, with a consequent small reduction in the dose required to form the image.

Analytic approximation of the log‐signal and log‐variance functions of x‐ray imaging systems, with application to dual‐energy imaging
In the analysis of x‐ray system performance, the log‐signal function, or negative logarithm of the relative detector signal, and the analogously defined log‐variance function are of central importance. These are smooth, monotonic functions of object thickness, which are nonlinear for nonmonoenergetic x‐ray source spectra. If we assume a dual‐energy decomposition of the object into two basis materials, then they can be written as analytic functions f(x,y) and f^{*}(x,y), respectively, of the component thicknesses (x,y) of the object. In this paper, we analytically develop the Taylor series of these functions, prove that they converge everywhere, and parametrize their coefficients via suitable central spectral moments of the basis‐material attenuation coefficients. We then show how the lower‐order moments can be used to construct, in closed form, smooth, monotonic, second‐order (conic) surface functions which closely approximate f(x,y) and f^{*}(x,y) over the entire feasible domain. A simplified construction, based on using appropriate asymptotic values of the basis‐material attenuation coefficients to match the asymptotic behavior of these functions, is also given. The inclusion of image components with K‐edge absorption spectra, such as iodine, is accommodated without difficulty. Extension of the results to the construction of similar (virtually exact) third‐order (cubic) surface approximations is straightforward. As an illustration of the broad applicability of this approach, we extend our analysis to the construction of similar approximations to the inverse (decomposition) functions for an arbitrary dual‐energy system, and investigate their numerical accuracy for a model dual‐kVp system. We conclude that this extended analysis provides an accurate description of the system behavior in terms of a small number of physically meaningful parameters.
This parametrization permits greater physical insight into the system behavior, while at the same time simplifying its mathematical description, and similarly facilitates the analysis of various measures of imaging performance via either analytic or numerical methods.
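As a minimal numerical sketch of the log‐signal function discussed above, the snippet below evaluates f(x,y) = −ln of the spectrally averaged transmission for a toy three‐line spectrum. The spectral weights and attenuation coefficients are hypothetical values chosen only to exhibit the nonlinearity (beam hardening) that the paper's Taylor‐series analysis parametrizes; it is not the authors' construction.

```python
import numpy as np

# Hypothetical polyenergetic spectrum: three energy bins with relative weights S(E).
weights = np.array([0.2, 0.5, 0.3])
mu1 = np.array([0.60, 0.40, 0.25])   # basis material 1 attenuation, 1/cm (assumed values)
mu2 = np.array([0.30, 0.22, 0.15])   # basis material 2 attenuation, 1/cm (assumed values)

def log_signal(x, y):
    """Negative log of the relative detector signal for component thicknesses (x, y)."""
    signal = np.sum(weights * np.exp(-mu1 * x - mu2 * y)) / np.sum(weights)
    return -np.log(signal)

# Nonlinearity check: for a polyenergetic spectrum, doubling the thickness
# less than doubles the log signal (beam hardening), i.e., f(2x,0) < 2 f(x,0).
f1 = log_signal(1.0, 0.0)
f2 = log_signal(2.0, 0.0)
print(f2, 2 * f1)
```

For a monoenergetic spectrum (a single weight), `log_signal` would be exactly linear in (x,y); the curvature seen here is what the paper's second‐ and third‐order surface approximations capture.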

A K_{α} dual energy x‐ray source for coronary angiography
The use of characteristic‐line radiation from rare‐earth targets bombarded by high‐energy (up to 1 MeV) electron beams has been evaluated as an x‐ray source for dual energy K‐edge subtraction imaging of the human coronary arteries. Two characteristic‐line x‐ray sources, one using the split K_{α1} and K_{α2} lines of lanthanum excited by a high‐energy electron beam and the other using the K_{α} lines of barium and cerium, were studied. A Monte Carlo electron–photon simulation was used to calculate x‐ray spectra and energy deposition profiles from targets of these elements bombarded by electrons in the energy range 140 keV to 1 MeV. A general dual‐energy imaging model was developed that used these calculated source spectra to numerically investigate the dependence of the subtraction image signal‐to‐noise ratio on such factors as the ratio of K‐line to x‐ray continuum yield, continuum spectral shape, x‐ray filtering, and detector response. A signal averaging technique for enhancing the signal‐to‐noise ratio was also evaluated. The results of these calculations were used to identify an optimum electron beam, target, filter, and detector configuration. A compact electron accelerator capable of providing the required electron beam parameters was designed. Calculations indicate that under ideal conditions the optimized system would be capable of imaging 2 mg/cm^{2} of iodine contrast agent in 20 g/cm^{2} of tissue with a signal‐to‐noise ratio of 5, a detector pixel size of 0.25 mm^{2}, and a total image acquisition time of 10 ms. These parameters are consistent with those needed to image the human coronary arteries after an intravenous injection of iodine contrast agent. These capabilities, along with the relatively modest hardware requirements of this system, make it attractive as an x‐ray source for dual energy transvenous coronary angiography.

Assessing fluoroscopic contrast resolution: A practical and quantitative test tool
Fluoroscopic contrast resolution is commonly determined at a specified kVp by imaging a test object composed of targets whose contrast decreases gradually and sequentially. Threshold contrast, or contrast resolution, is the contrast of the lowest‐contrast target that can be perceived. This approach suffers from two problems. First, test object contrast is specified at an x‐ray tube voltage that is not always obtainable in practice. Second, the small change in contrast between adjacent targets contributes to observer variability, making consistent and reproducible contrast threshold determinations difficult. Described is a contrast resolution test tool that eliminates or reduces these problems. The novel target arrangement allows one to quickly and easily specify the contrast resolution of a fluoroscopic imaging chain to a precision of ≈0.5%. Tables of target contrast versus x‐ray tube potential are developed that permit one to employ the test object for contrast resolution determination over the normal range of tube potentials encountered on clinical units.

A prototype high‐purity germanium detector system with fast photon‐counting circuitry for medical imaging
A data‐acquisition system designed for x‐ray medical imaging utilizes a segmented high‐purity germanium (HPGe) detector array with 2‐mm wide and 6‐mm thick elements. The detectors are contained within a liquid‐nitrogen cryostat designed to minimize heat losses. The 50‐ns pulse‐shaping time of the preamplifier electronics is selected as the shortest time constant compatible with the 50‐ns charge collection time of the detector. This provides the detection system with the fastest count‐rate capabilities and immunity from microphonics, with moderate energy resolution performance. A theoretical analysis of the preamplifier electronics shows that its noise performance is limited primarily by its input capacitance, and is independent of detector leakage current up to approximately 100 nA. The system experimentally demonstrates count rates exceeding 1 million counts per second per element with an energy resolution of 7 keV for the 60‐keV gamma‐ray photon from ^{241}Am. The results demonstrate the performance of a data‐acquisition system utilizing HPGe detectors that would be suitable for dual‐energy imaging as well as for systems offering simultaneous x‐ray transmission and radionuclide emission imaging. Key words: high‐purity germanium detectors, data acquisition electronics, x‐ray radiography, radionuclide imaging

Physical performance characteristics of spiral CT scanning
CT scanning in spiral geometry is achieved by continuously transporting the patient through the gantry in synchrony with continuous data acquisition over a multitude of 360‐deg scans. Data for reconstruction of images in planar geometry are estimated from the spiral data by interpolation. The influence of spiral scanning on image quality is investigated. Most of the standard physical performance parameters, e.g., spatial resolution, image uniformity, and contrast, are not affected; results differ for pixel noise and slice sensitivity profiles. For linear interpolation, pixel noise is expected to be reduced by a factor of 0.82; reduction factors of 0.81 to 0.83 were measured. Slice sensitivity profiles change as a function of the table feed d, measured in millimeters per 360‐deg scan; they are smoothed because the original profile is convolved with the object motion function. For linear interpolation, the motion function is derived to be a triangle with a baseline width of 2d and a maximal height of 1/d. Calculations of both the full width at half‐maximum and the shape of the profiles were in good agreement with experimental results. The effect of the widened profiles, in particular of their extended tails, on image quality is demonstrated in phantom measurements.
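The broadening described above can be sketched numerically: convolve an idealized rectangular slice sensitivity profile with the triangular motion function (base 2d, peak 1/d) derived for linear interpolation. The table feed and nominal slice width below are hypothetical values, not those of the study.

```python
import numpy as np

d = 10.0          # table feed, mm per 360-deg scan (assumed value)
s = 10.0          # nominal slice thickness, mm (assumed value)
dz = 0.01
z = np.arange(-3 * d, 3 * d, dz)

# Object motion function for linear interpolation: triangle of base 2d, height 1/d
# (unit area, so the convolution preserves the profile's integral).
motion = np.clip(1.0 / d - np.abs(z) / d**2, 0.0, None)

# Idealized planar-scan slice sensitivity profile: rectangle of width s.
profile = (np.abs(z) <= s / 2).astype(float)

# Spiral profile = planar profile convolved with the motion function.
spiral = np.convolve(profile, motion, mode="same") * dz

def fwhm(y):
    above = z[y >= y.max() / 2]
    return above[-1] - above[0]

print(fwhm(profile), fwhm(spiral))   # the spiral profile is broadened
```

The widened tails of `spiral` relative to `profile` are the effect the phantom measurements in the paper demonstrate.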

A feasibility study of in vivo 14‐MeV neutron activation analysis using the associated particle technique
The feasibility of using the time‐correlated associated particle technique for in vivo 14‐MeV neutron activation analysis has been investigated. Gamma rays following neutron inelastic scattering with nitrogen, carbon, and oxygen have been measured with a 12.5×10‐cm NaI(Tl) detector. The results have been scaled to a proposed facility comprising four such detectors past which the subject would be scanned. Based on counting statistics, the precision of estimation of these elements has been determined to be 2.1%, 1.0%, and 1.1%, respectively, for experimental measurements on a sample containing physiological concentrations of the major body elements. The average body dose would be restricted to 0.3 mSv.

Effects of voltage ripple and current mode on diagnostic x‐ray spectra and exposures
The dependence of x‐ray energy‐spectral values (energy fluence per unit interval of photon energy) and of exposures at 70 kV peak on voltage ripple was obtained theoretically using the semiempirical formula for emission spectra given by Birch and Marshall [Phys. Med. Biol. 24, 505 (1979)]. The calculations were performed with and without various thicknesses of aluminum. As the ripple increases, the energy‐spectral values decrease, as expected. When the ripple is large, however, the energy‐spectral values (per mAs) reach minimum values; therefore, the exposure (per mAs) also reaches a minimum value for the unsaturating current modes, contrary to expectation. The reasons for this phenomenon were clarified. Exposures clearly reach the minimum value in 2‐pulse units. This phenomenon was verified experimentally.

Calibration of Mg_{2}SiO_{4}(Tb) thermoluminescent dosimeters for use in determining diagnostic x‐ray doses to Adult Health Study participants
Characteristics of Mg_{2}SiO_{4}(Tb) thermoluminescent dosimeters (TLDs) were ascertained preparatory to measuring doses from diagnostic x‐ray examinations received by Adult Health Study participants. These detectors are small, relatively sensitive to low‐dose x rays, and appropriate for precise dosimetry. Extensive calibration is necessary for precisely determining doses from their thermoluminescent intensities. Their sensitivities were investigated as a function of dose and of x‐ray tube voltage, and as a function of exposure direction to obtain the directional dependence. Dosimeter sensitivity decreased because of fading and deterioration of the planchet. These adverse effects can, however, be avoided by storing the dosimeters for at least 1.5 h and by using fresh silver‐plated planchets. Thus the TLDs, whose sensitivities were determined in this study, will be useful in subsequent diagnostic x‐ray dosimetry.

Determination of x‐ray spectra and of the scattered component up to 300 kV
Several x‐ray spectra, including those of the ISO reference radiations, were measured with two high‐purity germanium (Ge) detectors. Measurements were carried out under different experimental conditions with regard to detector size, beam collimation, and SDD. A stripping procedure to improve the spectrum analysis was developed on the basis of a detailed evaluation (by means of a Monte Carlo method) of the detector’s spurious effects. These effects include K‐photon escape, Compton photon escape, electron escape, and the collimation effect. The stripping procedure also allows direct determination of the spectra of any scattered radiation reaching the detector in addition to the primary beam. When the primary beam is heavily filtered, the leakage radiation from the x‐ray tube housing scattered onto the detector may not be negligible, even when the x‐ray tube is provided with appreciable shielding. Possible practical consequences of these effects are discussed. The results obtained for the ISO x‐ray spectra are in agreement with previous determinations. The advantage of the present procedure is its more immediate applicability to Ge detectors of any size and under different beam collimation conditions.

A method for splitting digital value in radiological image compression
A new decomposition method using image splitting and gray‐level remapping has been proposed for image compression, particularly for images with high contrast resolution. The effects of this method are especially evident in this radiological image compression study. In these experiments, the impact of this decomposition method on image compression was tested by employing it with two coding techniques on a set of clinically used CT images and several laser‐film‐digitized chest radiographs. One of the compression techniques used was zonal full‐frame bit allocation in the discrete cosine transform (DCT) domain, an enhanced full‐frame DCT technique that has been proven effective for radiological image compression. The other compression technique used was vector quantization with pruned tree‐structured encoding, which recent research has also found to produce a low mean‐square error and a high compression ratio. The parameters used in this study were the mean‐square error and the bit rate required for the compressed file. In addition to these parameters, the differences between the original and reconstructed images are presented so that the specific artifacts generated by the two techniques can be discerned visually.
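A bare‐bones sketch of full‐frame DCT compression of the kind referred to above: transform the whole image, discard most coefficients, and invert. Retaining the largest‐magnitude coefficients is used here as a crude stand‐in for zonal bit allocation, and the 64×64 "image" is synthetic; neither is the authors' implementation.

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix (rows are cosine basis vectors).
    k = np.arange(n)[:, None]
    x = np.arange(n)[None, :]
    m = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * x + 1) * k / (2 * n))
    m[0] /= np.sqrt(2.0)
    return m

rng = np.random.default_rng(0)
xx, yy = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
image = np.exp(-4 * (xx**2 + yy**2)) + 0.01 * rng.standard_normal((64, 64))

C = dct_matrix(64)
coeffs = C @ image @ C.T                     # full-frame 2-D DCT

# Keep only the 5% largest-magnitude coefficients, zero the rest.
thresh = np.quantile(np.abs(coeffs), 0.95)
compressed = np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)

recon = C.T @ compressed @ C                 # inverse 2-D DCT
mse = np.mean((image - recon) ** 2)
print(mse)
```

Because the transform is orthonormal, the reconstruction error equals the energy of the discarded coefficients, which is small for a smooth image: exactly the property that makes transform coding attractive for high‐contrast‐resolution radiological images.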

Coronary blood flow measurement using an angiographic first pass distribution technique: A feasibility study
Due to the well‐documented problems associated with visual interpretation of coronary angiograms, more physiologic means of assessing coronary artery stenosis are being investigated. One physiologic parameter that has been suggested is coronary flow reserve (CFR). A digital subtraction angiographic technique based on first pass distribution analysis (FPA) is proposed as a means of measuring CFR and absolute coronary flow. The theory of the FPA method is first outlined, and the implementation of a preliminary version of the FPA algorithm is described. Experiments verifying the utility of this algorithm for measuring absolute flow through a flow phantom and through the canine circumflex artery are reported. It was determined that the preliminary FPA algorithm is capable of measuring canine coronary flow ratios (R) with accuracy and precision characteristics meeting or exceeding those reported for the parametric imaging technique (R _{FPA}=0.933⋅R _{true}, SEE=0.16, r=0.984). Accurate absolute flow (Q) measurements were obtained in all of the phantom experiments (Q _{FPA}=1.054⋅Q _{true}, r=0.993), and in one of the three dogs that were studied (Q _{FPA}=0.977⋅Q _{true}, r=0.935). The difficulty encountered in the other two dog experiments is attributed to the effects of system temporal lag, and would likely be corrected through the use of improved cameras. The feasibility of the general FPA method for measuring relative flow is established, and the potential for routine, absolute flow measurement is demonstrated.

Computerized detection of masses in digital mammograms: Analysis of bilateral subtraction images
A computerized scheme is being developed for the detection of masses in digital mammograms. Based on the deviation from the normal architectural symmetry of the right and left breasts, a bilateral subtraction technique is used to enhance the conspicuity of possible masses. The scheme employs two pairs of conventional screen‐film mammograms (the right and left mediolateral oblique views and craniocaudal views), which are digitized by a TV camera/Gould digitizer. The right and left breast images in each pair are aligned manually during digitization. A nonlinear bilateral subtraction technique that involves linking multiple subtracted images has been investigated and compared to a simple linear subtraction method. Various feature‐extraction techniques are used to reduce false‐positive detections resulting from the bilateral subtraction. The scheme has been evaluated using 46 pairs of clinical mammograms and was found to yield a 95% true‐positive rate at an average of three false‐positive detections per image. This preliminary study indicates that the scheme is potentially useful as an aid to radiologists in the interpretation of screening mammograms.
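The simple linear subtraction method mentioned above can be sketched on synthetic data: mirror one image so that symmetric anatomy cancels, subtract, and threshold the difference to flag asymmetries. The image sizes, noise levels, and "mass" below are all hypothetical, and this toy makes no attempt at the paper's nonlinear multi‐image linking or feature extraction.

```python
import numpy as np

rng = np.random.default_rng(1)
left = rng.normal(100.0, 5.0, (64, 64))                    # synthetic left-breast image
right = np.fliplr(left) + rng.normal(0.0, 5.0, (64, 64))   # mirrored background + noise
right[30:36, 30:36] += 40.0                                # simulated mass, right image only

# Linear bilateral subtraction: mirror the left image and subtract, so the
# symmetric background cancels and asymmetric structures remain.
diff = right - np.fliplr(left)
candidates = diff > 3.0 * diff.std()                       # crude candidate detection
print(int(candidates.sum()))
```

In this idealized setting the thresholded difference isolates the asymmetric patch; on real mammograms, imperfect alignment and anatomical variation produce the false positives that the paper's feature‐extraction stage is designed to suppress.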

Image feature analysis and computer‐aided diagnosis in digital radiography: Automated delineation of posterior ribs in chest images
In order to facilitate computerized quantitative analysis of digital chest radiographs, an automated method for accurate delineation of posterior ribs in frontal chest images is being developed. This method is based on an analysis of vertical profiles in the lung regions and a statistical analysis of edge gradients and their orientations in small selected regions of interest (ROIs). A shift‐variant function is fitted to vertical profiles to obtain initial estimates of the locations of rib edges. Rib edges are then determined more accurately by analyzing cumulative edge gradients and their orientations in small ROIs located adjacent to the initially estimated edges. The present computerized method achieves good agreement between the detected and actual rib structures for posterior ribs in 74% of the 50 cases examined. This suggests that automated detection of posterior ribs by a computerized method is feasible, and may be useful for computer‐aided diagnostic schemes in the chest.

Improving fluoroscopic image quality with continuously variable zoom magnification
Coning down is commonly used during fluoroscopy to increase image contrast by reducing scatter. However, the resulting image fills only part of a video display whose resolution is limited by line rate and bandwidth. Optical or electron‐optical zooming can be used to magnify the collimated image so that it fills a larger fraction of the viewable area of the video frame, making more effective use of the available video‐display capacity. Modulation‐transfer functions (MTFs) were measured for various zoom factors achieved using a zoom lens and the image‐intensifier (II) electronic magnification mode. Significant and continuing improvement in total system MTF was observed up to zoom magnifications of greater than 3.3. For larger zoom factors, the resolution limit becomes dominated by the intrinsic resolving power of the II and by geometric unsharpness rather than by the line rate of the video system. When the MTF at infinite zoom factor, obtained by extrapolation, was divided into the measured MTFs, the resultant MTF_{Z}’s were shown to scale predictably with zoom factor. Only a slight improvement in MTF was obtained using the II’s electronic magnification mode compared to the same magnification using a zoom lens. It is concluded that, if improved image quality is the motivation for the use of coning down in fluoroscopy, then zooming to make full use of the available video frame is warranted.

Bone mineral densitometry with x‐ray and radionuclide sources: A theoretical comparison
Two methods of dual‐photon absorptiometry (DPA) utilizing an x‐ray tube instead of a radionuclide source have recently been introduced. In one method kVp switching is employed and two transmitted intensities at each pixel are determined. In the other method, K‐edge filtration combined with a single kVp spectrum is used, but photons in two energy windows are counted. We present a theoretical analysis of the two methods, focusing on a figure of merit which is essentially the exposure efficiency (the precision for a given entrance exposure) and on tube loading. We also compare their exposure efficiencies to theoretical limits that no DPA system can exceed. Our study indicates that the K‐edge‐filtered method is more exposure efficient by about a factor of 2. The switched‐kVp method requires fewer heat units per scan by about a factor of 3. A hybrid K‐edge switched‐kVp method is suggested which achieves the same exposure efficiency as the K‐edge‐filtered method at lower tube loading. Our theoretical model is based on published x‐ray spectra and attenuation coefficients and is in good agreement with other simulation work. It is of interest that a point source of Gd‐153 would be even more exposure efficient, achieving about 90% of the theoretical limit. However, in practice, the Gd source is of finite size and limited strength, and consequently the radionuclide method cannot achieve as good a precision as either x‐ray method in similar scan times.

Coherent scattering and bone mineral measurement: The dependence of sensitivity on angle and energy
The sensitivity of a technique for the measurement of trabecular bone mineral concentration has been examined theoretically and experimentally. The technique is based on coherent gamma‐ray scattering, and corrections for attenuation are obtained from transmitted photons rather than Compton‐scattered photons. For an incident photon energy of 60 keV, the minimum detectable bone mineral difference is practically independent of scattering angle, while for an incident energy of 100 or 122 keV the scattering angle must be less than 70° to optimize the minimum detectable difference.

The use of importance sampling techniques to improve the efficiency of photon tracking in emission tomography simulations
Monte Carlo simulations are widely used to study the transmission and scattering of γ rays. Use of this method for simulations of emission tomographs suffers from geometric inefficiency resulting from the low solid angle of acceptance of most tomograph designs. We have applied several importance sampling techniques—stratification, forced detection, and weight control through Russian roulette and splitting—to increase the computational efficiency of the Monte Carlo method 10‐ to 300‐fold. A description of these techniques, their validation, and sample performance results are given. Application of importance sampling methods makes it practical to study photon scattering in heterogeneous attenuators on workstations and minicomputers.
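Of the weight‐control techniques named above, Russian roulette is the easiest to illustrate: low‐weight photon histories are terminated at random, and survivors have their weight boosted so the estimator stays unbiased. The threshold and survival probability below are hypothetical parameters, not values from the paper.

```python
import random

def russian_roulette(weight, threshold=0.1, survival=0.5, rng=random):
    """Terminate low-weight photon histories without bias: a survivor's
    weight is divided by the survival probability, so the expected
    weight carried forward equals the input weight."""
    if weight >= threshold:
        return weight                # heavy enough: no roulette
    if rng.random() < survival:
        return weight / survival     # survives with boosted weight
    return 0.0                       # history terminated

# Unbiasedness check: the average surviving weight equals the input weight.
rng = random.Random(42)
w_in = 0.05
mean_out = sum(russian_roulette(w_in, rng=rng) for _ in range(200_000)) / 200_000
print(mean_out)   # ≈ w_in
```

Splitting is the mirror image: a high‐weight history is split into several copies of proportionally reduced weight, concentrating computation where it matters while preserving the same expectation.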

Noise, resolution, and sensitivity considerations in the design of a single‐slice emission–transmission computed tomographic system
A prototype Emission–Transmission Computed Tomography (ETCT) system is being developed that will acquire single‐slice x‐ray transmission CT images simultaneously with single photon emission computed tomography (SPECT) images. This system will permit the correlation of anatomical information from x‐ray CT with functional information from SPECT images. The patient‐specific attenuation map derived from the x‐ray CT images can be used to perform attenuation correction of the SPECT images, so that accurate quantitative information can be obtained. The fan‐beam scanning geometry and the use of a segmented HPGe detector array impose special constraints on the design of the collimator for the system. Based on a signal detection model, an efficiency‐resolution figure of merit (ERFM) as a function of the collimator geometric efficiency, system resolution width, and object diameter is defined. The ERFM is proportional to the square of the detection signal‐to‐noise ratio. The collimator design parameters can then be optimized by optimizing the ERFM for an anticipated object diameter. The collimator point‐spread function, geometric efficiency, and resolution are calculated. The collimator optimized for the detection of a 1‐cm object will have a single‐slice point source efficiency of 1.2×10^{−4}, and a FWHM of 6.5 mm at the center of the reconstruction circle, at 12 cm from the collimator face. The minimum object contrast which will give a detection SNR of 5 is 74%, for a total accumulated count per slice of 2×10^{6}.

SPECT volume quantitation: Influence of spatial resolution, source size and shape, and voxel size
A number of factors influence the accuracy of estimation of source volume with single‐photon emission computed tomography (SPECT) imaging. This study investigated the role of a number of factors, including system spatial resolution (which includes the influence of low‐pass filters applied to suppress noise), source size and shape, and voxel size, in determining volume. A rectangular parallelepiped (bar), a right cylinder, and a sphere were mathematically modeled as being imaged with a SPECT system by calculating their three‐dimensional (3‐D) convolution with symmetric Gaussian functions of 20 different full widths at half‐maximum (FWHMs). The resulting activity profiles were analyzed to determine the location of the edges as a function of the source size relative to the FWHM of the system. The edge definition criteria studied were (1) the location of the 50% count threshold and (2) the maximum in the local gradient. In addition, the threshold which yielded the correct edge location was also determined. A nonstationary computer simulation of SPECT imaging, based on the serial model of the system transfer function, was used to test the predictions of the mathematical model and to investigate the influence of (1) voxel size and sampling with a discrete array of voxels; (2) attenuation; (3) scatter; (4) variable spatial resolution; (5) low‐pass filtering; and (6) noise. The mathematical model predicted that both the 50% threshold and the maximum in the local gradient methods of estimating edge location would show either an under‐ or overestimate of source volume, depending on both the ratio of source diameter to system FWHM and the source shape. The predictions were found to be in good agreement with the measured volumes of simulated spherical sources.
The use of finite‐size voxels was found to yield a discrete set of possible volumes, whose magnitudes depended on the voxel size and on the centering of the object within the array of voxels. Decreasing the voxel size improved the accuracy of volume quantitation in the absence of noise.
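The 50%‐of‐maximum edge criterion discussed above can be illustrated with a one‐dimensional analogue: blur a uniform bar with a Gaussian point‐spread function and estimate its width from the half‐maximum threshold. The widths and FWHM values below are hypothetical, and this 1‐D toy only mimics the size‐dependent bias the study quantifies in 3‐D.

```python
import numpy as np

def measured_width(true_width, fwhm, dz=0.01):
    """Blur a uniform 1-D bar with a Gaussian PSF of the given FWHM and
    return the width estimated by the 50%-of-maximum count threshold."""
    sigma = fwhm / (2 * np.sqrt(2 * np.log(2)))        # FWHM -> Gaussian sigma
    half_span = true_width / 2 + 5 * fwhm
    z = np.arange(-half_span, half_span, dz)
    bar = (np.abs(z) <= true_width / 2).astype(float)  # uniform source profile
    t = np.arange(-4 * sigma, 4 * sigma, dz)
    psf = np.exp(-t**2 / (2 * sigma**2))
    psf /= psf.sum()                                   # normalized Gaussian PSF
    blurred = np.convolve(bar, psf, mode="same")
    above = z[blurred >= blurred.max() / 2]            # 50%-of-max threshold
    return above[-1] - above[0]

print(measured_width(40.0, 10.0))   # wide source: threshold recovers the width
print(measured_width(5.0, 10.0))    # source smaller than FWHM: width overestimated
```

When the source is large relative to the system FWHM the half‐maximum crossings sit at the true edges, but for a source comparable to or smaller than the FWHM the peak is depressed and the threshold lands outside the true edges, which is the size‐dependent bias the mathematical model predicts.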