Volume 28, Issue 6, June 2001
 RADIATION TREATMENT PHYSICS

 Task Group Report

AAPM protocol for 40–300 kV x-ray beam dosimetry in radiotherapy and radiobiology
The American Association of Physicists in Medicine (AAPM) presents a new protocol, developed by the Radiation Therapy Committee Task Group 61, for reference dosimetry of low- and medium-energy x rays for radiotherapy and radiobiology. It is based on ionization chambers calibrated in air in terms of air kerma. If the point of interest is at or close to the surface, one unified approach over the entire energy range shall be used to determine absorbed dose to water at the surface of a water phantom based on an in-air measurement (the "in-air" method). If the point of interest is at depth, an in-water measurement at a depth of 2 cm shall be used for tube potentials ≥100 kV (the "in-phantom" method). The in-phantom method is not recommended for tube potentials <100 kV. Guidelines are provided to determine the dose at other points in water and the dose at the surface of other biological materials of interest. The protocol is based on an up-to-date data set of basic dosimetry parameters, which produces consistent dose values for the two recommended methods. Estimates of the uncertainties in the final dose values are also presented.

Leaf sequencing with secondary beam blocking under leaf positioning constraints for continuously modulated radiotherapy beams
The creation of arbitrary photon fluence patterns for intensity-modulated radiotherapy is addressed. The proposed method is intended for a class of multileaf collimators with a requirement for minimum leaf separation. Unlike the solution of Convery and Webb, in which discrete beam intensity modulation was assumed, the present method deals with continuous modulation, i.e., modulation consisting of infinitely small bixels. The method begins with the time-optimal solution of Spirou–Stein–Svensson, disregarding the minimum-gap requirement. Subsequently, the gaps are restored by mobilizing the secondary beam-blocking devices to prevent overexposure resulting from the leaf-separation process. The secondary beam blocking is provided by two orthogonal, computer-controlled backup diaphragms. The results indicate that the method can be used to accurately deliver the desired modulation while satisfying the leaf-positioning constraints. Furthermore, an example is presented that illustrates the efficacy of using the horizontal backup diaphragms (moving perpendicular to the leaves) in addition to the vertical backup diaphragms (moving parallel to the leaves) to generate zero-fluence regions.
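The time-optimal starting point the abstract refers to can be illustrated with a small sketch. This is a generic rendition of a Spirou–Stein–Svensson-style unidirectional sweep for a 1D fluence profile (unit dose rate, maximum leaf speed `vmax`), not the authors' implementation, and it deliberately omits the minimum-gap repair step that is the paper's actual contribution:

```python
# Hypothetical sketch: time-optimal leading/trailing leaf arrival times for a
# 1D fluence profile, ignoring the minimum leaf-separation constraint.
def leaf_trajectories(fluence, dx, vmax):
    """fluence[i] is the desired beam-on time at position i*dx (unit dose
    rate). Returns (t_lead, t_trail): the times at which the leading leaf
    uncovers and the trailing leaf covers each position."""
    step = dx / vmax  # minimum time to traverse one position
    t_lead = [0.0]
    for i in range(1, len(fluence)):
        dphi = fluence[i] - fluence[i - 1]
        # Where fluence rises, the leading leaf runs at vmax; where it
        # falls, the trailing leaf runs at vmax and the leading leaf must
        # lag by the size of the drop.
        t_lead.append(t_lead[-1] + step + max(0.0, -dphi))
    # Exposure time at each position equals the desired fluence.
    t_trail = [t + f for t, f in zip(t_lead, fluence)]
    return t_lead, t_trail
```

By construction the exposure `t_trail[i] - t_lead[i]` reproduces the profile exactly, and both leaves move monotonically at no more than `vmax`; the paper's method then widens any gaps that violate the minimum separation and blocks the resulting overexposure with the backup diaphragms.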

Acceleration of dose calculations for intensitymodulated radiotherapy
The requirements and trade-offs between accuracy and speed for radiotherapy dose computations have been discussed for decades. Inverse planning used for intensity-modulated radiotherapy (IMRT) optimization imposes additional demands on dose calculation, since it is an iterative process in which dose calculations may be repeated many (tens to thousands of) times. This work discusses the accuracy and speed issues as related to IMRT dose calculations. A hybrid dose calculation method that accelerates the optimization process is proposed and applied, in which a fast pencil beam (PB) model is used for the initial optimization iterations, followed by superposition/convolution (SC) calculations. Optimized dose results are compared for pure PB optimization, pure SC optimization, and PB optimization followed by SC optimization. Plans were evaluated in terms of isodose coverage, dose-volume histograms, and total dose calculation time for five head-and-neck cases with diverse locations, sizes, and shapes of tumors and critical structures. Patient plans were designed for nine equispaced beams. For one patient, an additional five-beam configuration was tested. We found that the gross features of the intensity distributions resulting from all schemes were similar; however, there were differences in the fine detail. Differences were small between composite dose distributions optimized with the PB and SC methods, yet differences in individual beam dose distributions were quite significant. When the SC method was used to compute dose following optimization with the PB method, dose differences were reduced significantly, both for composite plans and for individual beams. Substantial overall time savings were observed, allowing IMRT dose planning to become a more interactive activity.
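The hybrid strategy — cheap iterations with an approximate dose model, then refinement with an accurate one — can be sketched generically. Everything below (the 3×3 "dose matrices", the gradient-descent optimizer) is an illustrative stand-in, not the paper's PB or SC engines:

```python
# Toy sketch of hybrid optimization: a crude dose model for the bulk of the
# iterations, then a model "with scatter" for refinement.
def dose(matrix, w):
    # dose[i] = sum_j matrix[i][j] * w[j]  (beamlet weights -> voxel doses)
    return [sum(row[j] * w[j] for j in range(len(w))) for row in matrix]

def optimize(matrix, w, target, iters, lr=0.1):
    # gradient descent on 0.5 * sum((dose - target)^2), non-negative weights
    for _ in range(iters):
        d = dose(matrix, w)
        for j in range(len(w)):
            g = sum((d[i] - target[i]) * matrix[i][j] for i in range(len(d)))
            w[j] = max(0.0, w[j] - lr * g)
    return w

fast = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]      # crude model
accurate = [[1.0, 0.1, 0.0], [0.1, 1.0, 0.1], [0.0, 0.1, 1.0]]  # adds "scatter"
target = [1.0, 1.0, 1.0]

w = optimize(fast, [0.0, 0.0, 0.0], target, iters=50)   # cheap iterations
w = optimize(accurate, w, target, iters=200)            # accurate refinement
final = dose(accurate, w)
```

The point of the scheme is that the warm start from the fast model leaves only a small residual for the expensive model to remove, mirroring the paper's PB-then-SC workflow.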

Dosimetric investigation and portal dose image prediction using an amorphous silicon electronic portal imaging device
A two-step algorithm to predict portal dose images in arbitrary detector systems has been developed recently. The current work provides a validation of this algorithm on a clinically available amorphous silicon flat-panel imager. The high-atomic-number, indirect amorphous silicon detector incorporates a gadolinium oxysulfide phosphor scintillating screen to convert deposited radiation energy to optical photons, which form the portal image. A water-equivalent solid slab phantom and an anthropomorphic phantom were examined at beam energies of 6 and 18 MV and over a range of air gaps (∼20–50 cm). In the many examples presented here, portal dose images in the phosphor were predicted to within 5% in low-dose-gradient regions, and to within 5 mm (isodose line shift) in high-dose-gradient regions. Other basic dosimetric characteristics of the amorphous silicon detector were investigated, such as linearity with dose rate (±0.5%), repeatability (±2%), and response with variations in gantry rotation and source-to-detector distance. The latter investigation revealed a significant contribution to the image from optical photon spread in the phosphor layer of the detector. This phenomenon, generally known as "glare," has been characterized and modeled here as a radially symmetric blurring kernel. This kernel is applied to the calculated dose images as a convolution and is successfully demonstrated to account for the optical photon spread. This work demonstrates the flexibility and accuracy of the two-step algorithm for a high-atomic-number detector. The algorithm may be applied to improve the performance of dosimetric treatment-verification applications, such as direct image comparison, back-projected patient dose calculation, and scatter correction in megavoltage computed tomography. The algorithm allows for dosimetric applications of the new flat-panel portal imager technology in the indirect configuration, taking advantage of a greater than tenfold increase in detector sensitivity over a direct configuration.
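Modeling glare as a radially symmetric kernel applied by convolution can be sketched as follows. The Gaussian kernel shape and its width are illustrative assumptions; the paper characterizes its own measured kernel:

```python
import math

def glare_kernel(radius, sigma):
    """Radially symmetric kernel normalized to unit sum. The Gaussian form
    is an assumption for illustration only."""
    k = [[math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))
          for x in range(-radius, radius + 1)]
         for y in range(-radius, radius + 1)]
    s = sum(map(sum, k))
    return [[v / s for v in row] for row in k]

def convolve2d(img, ker):
    """Direct 2D convolution with zero padding at the image edges."""
    r = len(ker) // 2
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            acc = 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:
                        acc += img[ii][jj] * ker[di + r][dj + r]
            out[i][j] = acc
    return out
```

Because the kernel is normalized, a uniform dose region is left unchanged away from the edges, while sharp features are smeared radially, which is exactly the effect the glare correction has to model.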

Modeling the output ratio in air for megavoltage photon beams
The output ratio in air, OR, for a high-energy x-ray beam describes how the incident central-axis photon fluence varies with collimator setting. For all but the smallest field sizes, its variation is caused by the scatter of photons in structures in the accelerator head (primarily the flattening filter and the wedge, if one is used) and by the backscatter of radiation into the monitor ionization chamber. The objective of this study was to evaluate the use of an analytical function to parametrize OR as a function of square collimator setting. For open beams, the fitting parameters can be given explicit physical meanings within the systematic uncertainty of the model: one accounts for backscatter into the monitor, another is the maximum scatter-to-primary ratio for head-scattered photons, and λ represents the effective width of the "source" of head-scatter photons; a normalization constant fixes the value of OR at the reference field size. The formula also fits OR for wedged beams and a Co-60 unit, although the fitting parameters then lose their physical interpretations. To calculate the output ratio for a rectangular field, an equivalent square can be used, defined in terms of a constant k, the source-to-collimator distances of the two collimator pairs, and the source-to-detector distance f. The study included a number of different accelerators and a cobalt-60 unit. The fits for square fields agreed with measurements with a standard deviation (SD) of less than 0.5%; for rectangular fields, measurements and calculations agreed within an SD of 0.7%. Sufficient data for the three parameters are presented to suggest constraints that can be used for quality assurance of the measured output ratio in air.

Monte Carlo calculation of output factors for circular, rectangular, and square fields of electron accelerators (6–20 MeV)
Monte Carlo (MC) techniques can be used to build a simulation model of an electron accelerator to calculate output factors for electron fields. This can be useful during commissioning of electron beams from a linac and in clinical practice, where irregular fields are also encountered. The Monte Carlo code BEAM/EGS4 was used to model electron beams (6–20 MeV) from a Varian 2100C linear accelerator. After optimization of the Monte Carlo simulation model, agreement within 1% to 2% was obtained between calculated and measured (with a Si diode) lateral and depth dose distributions, or within 1 mm in the penumbral regions. Output factors for square, rectangular, and circular fields were measured using two different plane-parallel ion chambers (Markus and NACP) and compared to MC simulations. The agreement was usually within 1% to 2%. This study was not primarily concerned with minimizing the simulation time required to obtain output factors, but some considerations with respect to this are presented. It would be particularly useful if the MC model could also be used to calculate output factors for other, similar linacs. To see if this was possible, the primary electron energies in the MC model were retuned to model a recently commissioned similar linac. Good agreement between calculated and measured output factors was obtained for most field sizes for this second accelerator.

A measured data set for evaluating electronbeam dose algorithms
The purpose of this work was to develop an electron-beam dose algorithm verification data set of high precision and accuracy. Phantom geometries and treatment-beam configurations used in this study were similar to those in a subset of the verification data set produced by the Electron Collaborative Working Group (ECWG). Measurement techniques and quality-control measures were utilized in developing the data set to minimize systematic errors inherent in the ECWG data set. All measurements were made in water with p-type diode detectors using a Wellhöfer dosimetry system. The 9 and 20 MeV beams from a single linear accelerator composed the treatment beams. Measurements were made in water at 100 and 110 cm source-to-surface distances. Irregular-surface measurements included a "stepped surface" and a "nose-shaped surface." Internal-heterogeneity measurements were made for bone and air cavities in differing orientations. Confidence in the accuracy of the measured data set was reinforced by a comparison with Monte Carlo (MC) calculated dose distributions. The MC-calculated dose distributions were generated using the OMEGA/BEAM code to explicitly model the accelerator and phantom geometries of the measured data set. The precision of the measured data, estimated from multiple measurements, was better than 0.5% in regions of low dose gradients. In general, the agreement between the measured data and the MC-calculated data was within 2%. The quality of the data set is superior to that of the ECWG data set, and should allow for a more accurate evaluation of an electron-beam dose algorithm. The data set will be made publicly available from the Department of Radiation Physics at The University of Texas M. D. Anderson Cancer Center.

Backscatter dose from metallic materials due to obliquely incident highenergy photon beams
If metallic material is exposed to ionizing radiation of sufficiently high energy, an increase in dose due to backscattered radiation occurs in front of the material. Our purpose in this study was to quantify these doses at variable distances between the scattering materials and the detector, over a range of axial beam angles (zero angle corresponding to the beam's-eye view). Copper, silver, and lead sheets embedded in a Perspex phantom were exposed to 10 MV bremsstrahlung. The detector we developed is based on the fluorescence property of pyromellitic acid (1,2,4,5-benzenetetracarboxylic acid) after exposure to ionizing radiation. Our results show that the additional doses and the corresponding dose distribution in front of the scattering materials depend quantitatively and qualitatively on the beam angle. For copper and silver, the backscatter dose increases with beam angle up to a maximum. At the two steepest oblique angles studied, the integral backscatter doses over a tissue-equivalent depth of 2 mm are 11.2% and 21.6% for copper and 24% and 28% for silver, respectively. In contrast, in front of lead there are no obvious differences between the measured backscatter doses over this angular range. With a further increase of the beam angle beyond the maximum, the backscatter dose decreases steeply for all three materials. In front of copper, a markedly lower penetration depth of the backscattered electrons was found at the larger of the angles compared. This dependence on the beam angle was less pronounced in front of silver and not detectable in front of lead. In conclusion, the dependence of the backscatter dose on the angle between the axial beam and the scattering material must be considered, since higher scatter doses occur than previously expected. This may have a clinical impact, since the surface of metallic implants is usually curved.

Variation of sensitometric curves of radiographic films in high energy photon beams
Film dosimetry is an important tool for the verification of irradiation techniques. The shape of the sensitometric curve depends on the type of film as well as on the irradiation and processing conditions. Existing data concerning the influence of irradiation geometry on the sensitometric curve are conflicting. In particular, the variation of optical density, OD, with field size and depth in a phantom shows large differences in magnitude between various authors. This variation, as well as the effect of beam energy and film-plane orientation on OD, was therefore investigated for two types of film, Kodak X-Omat V and Agfa Structurix D2. Films were positioned in a solid phantom, either perpendicular or (almost) parallel to the beam axis, and irradiated to different dose levels using various photon beams (Co-60, 6 MV, 15 MV, 18 MV, 45 MV). It was found that the sensitometric curves of the Kodak film derived at different depths are almost identical for the four x-ray beams. For the Kodak film the differences in OD with depth are less than 2%, except for the Co-60 beam, where the difference is about 4% at 10 cm depth for a 15 cm × 15 cm field. The slope of the sensitometric curve of the Agfa film is somewhat more dependent on photon beam energy, depth, and field size. The sensitometric curves of both types of film are almost independent of the film-plane orientation, except at shallow depths. For Co-60 and the same dose, the Kodak and Agfa films gave at dose maximum an OD lower by 4% and 6%, respectively, for the parallel compared to the perpendicular geometry. Good dosimetric results can be obtained if films from the same batch are irradiated with small to moderate field sizes (up to about 15 cm × 15 cm), at moderate depths (up to about 15 cm), using a single calibration curve, e.g., for a 10 cm × 10 cm field.

Experimental determination and verification of the parameters used in a proton pencil beam algorithm
We present an experimental procedure for the determination, and the verification under practical conditions, of the physical and computational parameters used in our proton pencil beam algorithm. The calculation of the dose delivered by a single pencil beam relies on a measured spread-out Bragg peak, and the description of its radial spread at depth features simple specific parameters accounting individually for the influence of the beam line as a whole, the beam energy modulation, the compensator, and the patient medium. For determining the experimental values of the physical parameters related to proton scattering, we utilized a simple relation between Gaussian radial spreads and the width of lateral penumbras. The contribution from the beam line was extracted from lateral penumbra measurements in air: a linear variation with the collimator-to-point distance was observed. Analytically predicted radial spreads within the patient were in good agreement with experimental values in water under various reference conditions. The results indicated no significant influence of the beam energy modulation. Using measurements in the presence of Plexiglas slabs, a simple assumption on the effective source of scattering due to the compensator was formulated, leading to accurate radial-spread calculations. Dose measurements in the presence of complexly shaped compensators were used to assess the performance of the algorithm supplied with the adequate physical parameters. One of these compensators was also used, together with a reference configuration, to investigate a set of computational parameters that decrease the calculation time while maintaining a high level of accuracy. Faster dose computations were performed for algorithm evaluation in the presence of geometrical and patient compensators, and showed good agreement with the measured dose distributions.
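The "simple relation between Gaussian radial spreads and the width of lateral penumbras" is presumably the standard error-function edge result; the sketch below assumes that, together with the usual (assumed) quadrature combination of independent Gaussian spread components:

```python
from statistics import NormalDist

def penumbra_80_20(sigma):
    """80%-20% lateral penumbra width of a field edge blurred by a Gaussian
    of standard deviation sigma: the edge profile is the normal CDF, so the
    width is 2 * z_0.8 * sigma, with z_0.8 = Phi^-1(0.8) ~ 0.8416."""
    return 2.0 * NormalDist().inv_cdf(0.8) * sigma

def sigma_from_penumbra(width):
    # inverse relation: extract a radial spread from a measured penumbra
    return width / (2.0 * NormalDist().inv_cdf(0.8))

def total_sigma(components):
    # independent Gaussian contributions (beam line, modulation, compensator,
    # patient medium) combined in quadrature -- an independence assumption
    return sum(s * s for s in components) ** 0.5
```

Under this assumption a measured in-air penumbra directly yields the beam-line spread, and the remaining components can be added in quadrature at each depth.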

A theoretical model for event statistics in microdosimetry. I: Uniform distribution of heavy ion tracks
In this work we describe a novel approach to solving microdosimetry problems using conditional probabilities and geometric concepts. The intersection of a convex site with a field of randomly oriented straight track segments is formulated in terms of the relative overlap between the chord associated with the action line of the track and the track itself. This results in a general formulation that predicts the contribution of crossers, stoppers, starters, and insiders in terms of two separate functions: the chord-length distribution (characteristic of the site geometry and the type of randomness) and an independent set of conditional probabilities. A Monte Carlo code was written in order to validate the proposed approach. The code can represent the intersection between an isotropic field of charged-particle tracks and a general ellipsoid of unrestricted geometry. This code was used to calculate the event distribution for a sphere, as well as the expected mean value and variance of the track-length distribution, and to compare these against the deterministic calculations. The observed agreement was very good, within the precision of the Monte Carlo approach. The formulation is used to calculate the event frequency, lineal energy, and frequency-mean specific energy for several monoenergetic and isotropic proton fields in a spherical site, as a function of the site diameter, proton energy, and the event type.
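The style of Monte Carlo validation described here can be illustrated for the simplest case: for a uniform isotropic (mu-randomness) field of straight tracks crossing a sphere, Cauchy's theorem gives a mean chord length of 4V/S = 4r/3, which a sampling check reproduces. This toy check is illustrative, not the paper's code:

```python
import math
import random

def sample_chords(radius, n, seed=1):
    """Sample chord lengths for isotropic uniform random lines through a
    sphere. For such lines the hit point is uniform on the projected disk,
    so the squared impact parameter b^2 is uniform on [0, r^2]."""
    rng = random.Random(seed)
    chords = []
    for _ in range(n):
        b2 = rng.random() * radius * radius
        chords.append(2.0 * math.sqrt(radius * radius - b2))
    return chords

n = 200_000
mean_chord = sum(sample_chords(1.0, n)) / n  # expect 4/3 for unit radius
```

The same sampling machinery, extended to track endpoints, is what separates crossers, stoppers, starters, and insiders in the event-type bookkeeping.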

A theoretical model for event statistics in microdosimetry. II: Nonuniform distribution of heavy ion tracks
A microdosimetry model, described in Part I, applies to the case of a convex site immersed in a uniform distribution of heavy particle tracks, and assumes no restrictions on site geometry or the kind of randomness. In Part II, this model is extended to include nonuniform distributions of particle tracks. This situation is relevant to the study of microdosimetry, for example, in boron neutron capture, in irradiation experiments using heavy-ion particle beams where the sources of particle tracks are external to the cell, or in irradiation from internally incorporated particle-emitting radionuclides, such as environmental radon or occupational exposure to radioactive materials. The formalism developed permits the calculation of statistical properties, track-length distributions, and microdosimetric spectra for convex sites where the "inner" and "outer" concentrations of sources may differ, or for tracks originating on the surface of a convex site. Expressions applicable to the case of surface-distributed sources of tracks are presented; these may represent situations such as boron compounds bound to the membrane of a cellular nucleus in boron neutron capture. A series of Monte Carlo calculations and analytical solutions, illustrating the case of spherical site geometry, are presented and compared. Finally, microdosimetric spectra and specific-energy averages are calculated for the alpha and lithium particles originating from thermal neutron capture in boron, showing their dependence on source localization (extra-site, uniform, intra-site, or surface-distributed).

Attenuation and activation characteristics of steel and tungsten and the suitability of these materials for use in a fast neutron multileaf collimator
A computer-controlled multileaf collimator (MLC) is being designed to replace the multi-rod collimator (MRC) at present used to shape the d(48.5)+Be neutron beam from the Harper Hospital superconducting cyclotron. The computer-controlled MLC will improve efficiency and allow for the future development of intensity-modulated radiation therapy with neutrons. The existing MRC uses tungsten rods, while the new MLC will use steel as the leaf material. In the current study, the attenuation and activation characteristics of steel are compared with those of tungsten to ensure (a) that the attenuation achieved in the MLC is at least equivalent to that of the existing MRC, and (b) that the activation of the steel will not result in a significant change in the activation levels within the treatment room. The latter point is important since personnel exposure (particularly of the radiation therapy technologists) to induced radioactivity must be minimized. Measurement of the neutron-beam attenuation in a broad-beam geometry showed that a 30 cm thick steel leaf yielded 2.5% transmission. This compares favorably with the 4% transmission obtained with the existing MRC. Irradiation of steel and tungsten samples at different depths in a 30 cm steel block indicated that the activation of steel should be no worse than that of tungsten.

The water-equivalence of phantom materials for beta particles
Intravascular brachytherapy requires that the dose be specified within millimeters of the source. High dose gradients near brachytherapy sources require that the source-detector distance be accurately known for dosimetry purposes. Solid phantoms can be designed to accommodate these stringent requirements. This study reports dosimeter readings from beta-particle sources measured in water, A150, polystyrene, and an epoxy-based water-equivalent plastic. Measurements showed that while A150 and the epoxy-based plastic agreed well with water when the surface of the source contacted the detector housing, the relative response in these phantoms decreased with increasing depth in phantom, falling to ∼0.55 times that of water at a depth of 5 mm. Readings in polystyrene were within 4% of those in water between 1 and 2 mm depth. However, while polystyrene followed water more closely than the other two materials, at greater depths the response in polystyrene relative to water varied from 0.65 to 1.34. When the density of the materials is accounted for, the relative response in A150 is nearly constant with increasing areal density. Furthermore, the response in A150 shows the closest agreement with that in water of any of the solid materials at higher areal densities. For areal densities below 0.3 g/cm², polystyrene shows the closest agreement with water.

Calculation of mean central dose in interstitial brachytherapy using Delaunay triangulation
In 1997 the ICRU published Report 58, "Dose and Volume Specification for Reporting Interstitial Therapy," with the objective of addressing the problem of absorbed-dose specification for reporting contemporary interstitial therapy. One of the concepts proposed in that report is the "mean central dose." The fundamental goal of the mean central dose (MCD) calculation is to obtain a single, readily reportable and intercomparable value that is representative of the dose in regions of the implant "where the dose gradient approximates a plateau." Delaunay triangulation (DT) is a method used in computational geometry to partition the space enclosed by the convex hull of a set of distinct points P into a set of nonoverlapping cells. In the three-dimensional case, each point of P becomes a vertex of a tetrahedron and the result of the DT is a set of tetrahedra. All treatment planning for interstitial brachytherapy inherently requires that the locations of the radioactive sources, or dwell positions in the case of HDR, be known or digitized. These source locations may be regarded as a set of points representing the implanted volume. Delaunay triangulation of the source locations creates a set of tetrahedra without manual intervention. The geometric centers of these tetrahedra define a new set of points which lie "in between" the radioactive sources and which are distributed uniformly over the volume of the implant. The arithmetic mean of the dose at these centers is a three-dimensional analog of the two-dimensional triangulation and inspection methods proposed for calculating the MCD in ICRU 58. We demonstrate that DT can be successfully incorporated into a computerized treatment planning system and used to calculate the MCD.
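The centroid-and-average step can be sketched directly. In practice something like scipy.spatial.Delaunay(points).simplices would generate the tetrahedra from digitized source positions; here a unit cube of eight hypothetical "sources" is split into five tetrahedra by hand so the sketch stays dependency-free, and the inverse-square point-source dose model is purely illustrative:

```python
# Hypothetical MCD sketch: tetrahedra centroids between "sources", mean dose.
sources = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

# One valid decomposition of the unit cube into 5 tetrahedra
# (vertex indices into `sources`; four corner tets plus a central tet).
tets = [(0, 1, 2, 4), (1, 2, 3, 7), (1, 4, 5, 7), (2, 4, 6, 7), (1, 2, 4, 7)]

def centroid(tet):
    pts = [sources[i] for i in tet]
    return tuple(sum(c) / 4.0 for c in zip(*pts))

def dose_at(p):
    # toy inverse-square sum over point sources (illustration only)
    return sum(1.0 / ((p[0] - s[0]) ** 2 + (p[1] - s[1]) ** 2
                      + (p[2] - s[2]) ** 2)
               for s in sources)

centers = [centroid(t) for t in tets]
mcd = sum(dose_at(c) for c in centers) / len(centers)
```

The appeal of the DT approach is visible even in this toy: the centroids automatically sample the plateau region between sources, with no manual selection of calculation points.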
 RADIATION IMAGING PHYSICS


Validation of a two to threedimensional registration algorithm for aligning preoperative CT images and intraoperative fluoroscopy images
We present a validation of an intensity-based two- to three-dimensional image registration algorithm. The algorithm can register a CT volume to a single-plane fluoroscopy image. Four routinely acquired clinical data sets from patients who underwent endovascular treatment for an abdominal aortic aneurysm were used. Each data set comprised two intraoperative fluoroscopy images and a preoperative CT image. Regions of interest (ROI) were drawn around each vertebra in the CT and fluoroscopy images. Each CT-image ROI was individually registered to the corresponding ROI in the fluoroscopy images. A cross-validation approach was used to obtain a measure of registration consistency. Spinal movement between the preoperative and intraoperative scenes was accounted for by using two fluoroscopy images. The consistency and robustness of the algorithm when using two similarity measures, pattern intensity and gradient difference, were investigated. Both similarity measures produced similar results. The consistency values were rotational errors below 0.74° and in-plane translational errors below 0.90 mm. These errors approximately correspond to a two-dimensional projection error of 1.3 mm. The failure rate was less than 8.3% for three of the four data sets. However, for one of the data sets a much larger failure rate (28.5%) occurred.

Advanced singleslice rebinning for tilted spiral conebeam CT
Future medical CT scanners and today's micro-CT scanners demand cone-beam reconstruction algorithms that are capable of reconstructing data acquired from a tilted spiral trajectory, where the axis of rotation is not necessarily parallel to the vector of table increment. For the medical CT scanner, this case of nonparallel object motion arises for nonzero gantry tilt: the table moves in a direction that is not perpendicular to the plane of rotation. Since this is not a special application of medical CT but rather daily routine in head exams, there is a strong need for corresponding reconstruction algorithms. In contrast to medical CT, where the nonperpendicular motion is used on purpose, micro-CT scanners cannot avoid aberrations between the rotational axis and the table-increment vector due to alignment problems. Especially for those micro-CT scanners that have the lifting stage mounted on the rotation table (in contrast to setups where the lifting stage holds the rotation table), this kind of misalignment is equivalent to a gantry tilt. We therefore generalize the advanced single-slice rebinning algorithm (ASSR), which is considered a very promising approach for medical cone-beam reconstruction due to its high image quality and high reconstruction speed [Med. Phys. 27, 754–772 (2000)], to the case of tilted gantries. We evaluate this extended ASSR approach against the original ASSR algorithm using simulated phantom data for reconstruction. For the case of nonparallel object motion, the extended approach shows significant improvements over ASSR; however, its computational complexity is slightly increased due to the broken symmetry of the spiral trajectory.

A cone beam filtered backprojection (CBFBP) reconstruction algorithm for a circleplustwoarc orbit
The circle-plus-arc orbit possesses advantages over other "circle-plus" orbits for the application of x-ray cone beam (CB) volume CT in image-guided interventional procedures requiring intraoperative imaging, in which movement of the patient table is to be avoided. A CB circle-plus-two-arc orbit satisfying the data-sufficiency condition, and a filtered backprojection (FBP) algorithm to reconstruct longitudinally unbounded objects, are presented here. In the circle suborbit, the algorithm employs Feldkamp's formula and another FBP implementation. In the arc suborbits, an FBP solution originating from Grangeat's formula is obtained, and the reconstruction computation is significantly reduced by using a window function to exclude redundancy in the Radon domain. The performance of the algorithm has been thoroughly evaluated with computer-simulated phantoms and preliminarily evaluated with experimental data, revealing that the algorithm can regionally reconstruct longitudinally unbounded objects exactly and efficiently, is insensitive to variation of the angular sampling interval along the arc suborbits, and is robust against practical x-ray quantum noise. The algorithm's merits include: only 1D filtering is implemented, even in a 3D reconstruction; only separable 2D interpolation is required to accomplish the CB backprojection; and the algorithm structure is appropriate for parallel computation.

Computerized image analysis: Estimation of breast density on mammograms
An automated image analysis tool is being developed for the estimation of mammographic breast density. This tool may be useful for risk estimation or for monitoring breast density change in prevention or intervention programs. In this preliminary study, a data set of 4-view mammograms from 65 patients was used to evaluate our approach. Breast density analysis was performed on the digitized mammograms in three stages. First, the breast region was segmented from the surrounding background by an automated breast boundary-tracking algorithm. Second, an adaptive dynamic-range compression technique was applied to the breast image to reduce the range of the gray-level distribution in the low-frequency background and to enhance the differences in the characteristic features of the gray-level histogram for breasts of different densities. Third, rule-based classification was used to classify the breast images into four classes according to the characteristic features of their gray-level histogram. For each image, a gray-level threshold was automatically determined to segment the dense tissue from the breast region. The area of segmented dense tissue as a percentage of the breast area was then estimated. To evaluate the performance of the algorithm, the computer segmentation results were compared to manual segmentation with interactive thresholding by five radiologists. A "true" percent dense area for each mammogram was obtained by averaging the manually segmented areas of the radiologists. We found that the histograms of 6% (8 CC and 8 MLO views) of the breast regions were misclassified by the computer, resulting in poor segmentation of the dense region. For the images with correct classification, the correlation between the computer-estimated percent dense area and the "truth" was 0.94 and 0.91, respectively, for CC and MLO views, with a mean bias of less than 2%. The mean biases of the five radiologists' visual estimates for the same images ranged from 0.1% to 11%. The results demonstrate the feasibility of estimating mammographic breast density using computer-vision techniques and its potential to improve the accuracy and reproducibility of breast density estimation in comparison with subjective visual assessment by radiologists.
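The final reporting step — dense area as a percentage of the segmented breast region, given a gray-level threshold — reduces to a few lines. The fixed threshold and flat pixel list below are placeholders for illustration; the paper derives the threshold automatically from its histogram classification:

```python
# Toy sketch of the percent-dense-area calculation on a flattened image.
def percent_dense(pixels, breast_mask, threshold):
    """pixels: gray levels; breast_mask: 1 inside the segmented breast
    region, 0 outside; threshold: gray level separating dense tissue.
    Returns dense area as a percentage of the breast area."""
    breast = [p for p, m in zip(pixels, breast_mask) if m]
    if not breast:
        return 0.0
    dense = sum(1 for p in breast if p >= threshold)
    return 100.0 * dense / len(breast)
```

For example, with five breast pixels of which three exceed the threshold, the routine reports 60% dense area, the quantity compared against the radiologists' interactive segmentations.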
