Anniversary Paper: Evaluation of medical imaging systems
References
1. R. S. Weinstein et al., “An array microscope for ultrarapid virtual slide processing and telepathology. Design, fabrication, and validation study,” Hum. Pathol. 35, 1303–1314 (2004). http://dx.doi.org/10.1016/j.humpath.2004.09.002
2. E. Krupinski, M. Nypaver, R. Poropatich, D. Ellis, R. Safwat, and H. Sapci, “Clinical applications in telemedicine/telehealth,” Telemed. J. 8, 13–34 (2002).
3. R. F. Wagner, C. E. Metz, and G. Campbell, “Assessment of medical imaging systems and computer aids: A tutorial review,” Acad. Radiol. 14, 723–748 (2007).
4. D. J. Manning, A. Gale, and E. A. Krupinski, “Perception research in medical imaging,” Br. J. Radiol. 78, 683–685 (2005).
5. E. A. Krupinski, S. Dimmick, J. Grigsby, G. Mogel, D. Puskin, S. Speedie, B. Stamm, B. Wakefield, J. Whited, P. Whitten, and P. Yellowlees, “Research recommendations for the American Telemedicine Association,” Telemed. J. 12, 579–589 (2006).
6. H. H. Barrett and K. J. Myers, Foundations of Image Science (Wiley, Hoboken, NJ, 2004).
7. National Institute of Biomedical Imaging and Bioengineering (NIBIB), http://www.nibib.nih.gov/About/MissionHistory, last checked June 7, 2007.
8. R. L. Ehman, W. R. Hendee, M. J. Welch, N. R. Dunnick, L. B. Bresolin, R. L. Arenson, S. Baum, H. Hricak, and J. H. Thrall, “Blueprint for imaging in biomedical research,” Radiology 244, 12–27 (2007).
9. D. G. Fryback and J. R. Thornbury, “The efficacy of diagnostic imaging,” Med. Decis. Making 11, 88–94 (1991).
10. J. R. Thornbury, D. G. Fryback, R. A. Goepp, L. B. Lusted, K. I. Marton, B. J. McNeil, C. E. Metz, and M. C. Weinstein, NCRP Commentary No. 13—An Introduction to Efficacy in Diagnostic Radiology and Nuclear Medicine (National Council on Radiation Protection and Measurements, Bethesda, MD, 1995).
11. J. M. Boone, A. L. Kwan, K. Yang, G. W. Burkett, K. K. Lindfors, and T. R. Nelson, “Computed tomography for imaging the breast,” J. Mammary Gland Biol. Neoplasia 11, 103–111 (2006).
12. C. E. Metz, “Receiver operating characteristic analysis: A tool for the quantitative evaluation of observer performance and imaging systems,” J. Am. Coll. Radiol. 3, 413–422 (2006).
13. C. D. Lehman, J. D. Blume, D. Thickman, D. A. Bluemke, E. Pisano, C. Kuhl, T. B. Julian, N. Hylton, P. Weatherall, M. O’loughlin, S. J. Schnitt, C. Gatsonis, and M. D. Schnall, “Added cancer yield of MRI in screening the contralateral breast of women recently diagnosed with breast cancer: Results from the International Breast Magnetic Resonance Consortium (IBMC) trial,” J. Surg. Oncol. 92, 9–15 (2005).
14. New York Early Lung Cancer Action Project Investigators, “CT screening for lung cancer: Diagnoses resulting from the New York Early Lung Cancer Action Project,” Radiology 243, 239–249 (2007).
15. M. Freedman and T. Osicka, “Reader variability: What can we learn from computer-aided detection experiments,” J. Am. Coll. Radiol. 3, 446–455 (2006).
16. K. Doi, “Current status and future potential of computer-aided diagnosis in medical imaging,” Br. J. Radiol. 78, S3–S19 (2005). http://dx.doi.org/10.1259/bjr/82933343
17. K. Awai, K. Murao, A. Ozawa, Y. Nakayama, T. Nakaura, D. Liu, K. Kawanaka, Y. Funama, S. Mirishita, and Y. Yamashita, “Pulmonary nodules: Estimation of malignancy at thin-section helical CT—Effect of computer-aided diagnosis on performance of radiologists,” Radiology 239, 276–278 (2006).
18. Q. Li, F. Li, K. Suzuki, J. Shiraishi, H. Abe, R. Engelmann, Y. Nie, H. MacMahon, and K. Doi, “Computer-aided diagnosis in thoracic CT,” Semin. Ultrasound CT MR 26, 357–363 (2005).
19. K. Horsch, M. L. Giger, C. J. Vyborny, L. Lan, E. B. Mendelson, and R. E. Hendrick, “Classification of breast lesions with multimodality computer-aided diagnosis: Observer study results on an independent clinical data set,” Radiology 240, 357–368 (2006). http://dx.doi.org/10.1148/radiol.2401050208
20. H. S. Kim, A. D. Malhotra, P. C. Rowe, J. M. Lee, and A. C. Venbrux, “Embolotherapy for pelvic congestion syndrome: Long-term results,” J. Vasc. Interv. Radiol. 17, 289–297 (2006).
21. D. A. Mankoff, F. O’Sullivan, W. E. Barlow, and K. A. Krohn, “Molecular imaging research in the outcomes era: Measuring outcomes for individualized cancer therapy,” Acad. Radiol. 14, 398–405 (2007).
22. W. Hollingworth and D. E. Spackman, “Emerging methods in economic modeling of imaging costs and outcomes: A short report on discrete event simulation,” Acad. Radiol. 14, 406–410 (2007).
23. A. Z. Kielar, R. H. El-Maraghi, and R. C. Carlos, “Health-related quality of life and cost-effectiveness analysis in radiology,” Acad. Radiol. 14, 411–419 (2007).
24. B. J. Hillman, “Health services research of medical imaging: My impressions,” Acad. Radiol. 14, 381–384 (2007).
25. U.S. Department of Health and Human Services Centers for Medicare and Medicaid Services, http://www.cms.hhs.gov/PhysicianFeeSched/, last accessed June 15, 2007.
26. M. Perrone, “MRI, x-ray firms fight Medicare cuts,” Associated Press, June 6, 2007.
27. H. L. Kundel, “History of research in medical image perception,” J. Am. Coll. Radiol. 3, 402–408 (2006).
28. E. A. Krupinski, “The future of image perception in radiology: Synergy between humans and computers,” Acad. Radiol. 10, 1–3 (2003).
29. C. C. Birkelo, W. E. Chamberlain, and P. S. Phelps, “Tuberculosis case finding. A comparison of the effectiveness of various roentgenographic and photofluorographic methods,” JAMA, J. Am. Med. Assoc. 133, 359–366 (1947).
30. L. H. Garland, “On the scientific evaluation of diagnostic procedures,” Radiology 52, 309–328 (1949).
31. R. R. Newell, W. E. Chamberlain, and L. Rigler, “Descriptive classification of pulmonary shadows. Revelation of unreliability in roentgenographic diagnosis of tuberculosis,” Am. Rev. Tuberc. 69, 566–584 (1954).
32. A. Wald, Statistical Decision Functions (Wiley, New York, 1950).
33. W. W. Peterson, T. L. Birdsall, and W. C. Fox, “The theory of signal detectability,” IEEE Trans. Inf. Theory 4, 171–212 (1954). http://dx.doi.org/10.1109/TIT.1954.1057460
34. W. P. Tanner and J. A. Swets, “A decision-making theory of visual detection,” Psychol. Rev. 61, 401–409 (1954). http://dx.doi.org/10.1037/h0058700
35. D. M. Green and J. A. Swets, Signal Detection Theory and Psychophysics (Krieger, Huntington, NY, 1974).
36. J. P. Egan, Signal Detection Theory and ROC Analysis (Academic, New York, 1975).
37. L. B. Lusted, “Logical analysis in roentgen diagnosis,” Radiology 74, 178–193 (1960).
38. L. B. Lusted, Introduction to Medical Decision Making (Charles C. Thomas, Springfield, IL, 1968).
39. L. B. Lusted, “Perception of the Roentgen image: Applications of signal detection theory,” Radiol. Clin. North Am. 7, 435–459 (1969).
40. L. B. Lusted, “Signal detectability and medical decision making,” Science 171, 1217–1219 (1971). http://dx.doi.org/10.1126/science.171.3977.1217
41. B. J. McNeil and S. J. Adelstein, “Determining the value of diagnostic and screening tests,” J. Nucl. Med. 17, 439–448 (1976).
42. B. J. McNeil and J. A. Hanley, “Statistical approaches to the analysis of receiver operating characteristic (ROC) curves,” Med. Decis. Making 4, 137–150 (1984).
43. B. J. McNeil, E. Keeler, and S. J. Adelstein, “Primer on certain elements of medical decision making,” J. Nucl. Med. 17, 293 (1976).
44. J. A. Swets and R. M. Pickett, Evaluation of Diagnostic Systems: Methods from Signal Detection Theory (Academic, New York, 1982).
45. University of Chicago receiver operating characteristic program software downloads, http://xray.bsd.uchicago.edu/krl/KRL_ROC/software_index6.htm, last checked June 20, 2007.
46. University of Iowa receiver operating characteristic program software downloads, http://perception.radiology.uiowa.edu/, last checked June 20, 2007.
47. Free-response receiver operating characteristic software downloads, http://www.devchakraborty.com/downloads.html, last checked June 20, 2007.
48. H. E. Rockette, W. Li, M. L. Brown, C. A. Britton, J. T. Towers, and D. Gur, “Statistical test to assess rank-order imaging studies,” Acad. Radiol. 8, 24–30 (2001).
49. W. F. Good et al., “Observer sensitivity to small differences: A multipoint rank order experiment,” AJR Am. J. Roentgenol. 173, 275–278 (1999).
50. C. A. Britton et al., “Subjective quality assessment of computed radiography hand images,” J. Digit. Imaging 9, 21–24 (1996).
51. J. D. Towers, J. M. Holbert, C. A. Britton, P. Costello, R. Sciulli, and D. Gur, “Multipoint rank order study methodology: Observer issues,” Invest. Radiol. 35, 125–130 (2000). http://dx.doi.org/10.1097/00004424-200002000-00006
52. D. Gur, D. A. Rubin, B. H. Kart, A. M. Peterson, C. R. Fuhrman, H. E. Rockette, and J. L. King, “Forced choice and ordinal discrete rating assessment of image quality: A comparison,” J. Digit. Imaging 10, 103–107 (1997).
53. R. M. Slone, D. H. Foos, B. R. Whiting, E. Muka, D. A. Rubin, T. K. Pilgram, K. S. Kohm, S. S. Young, P. Ho, and D. D. Hendrickson, “Assessment of visually lossless irreversible image compression: Comparison of three methods by using an image-comparison workstation,” Radiology 240, 869–877 (2000).
54. K. H. Lee, Y. H. Kim, B. H. Kim, K. J. Kim, T. J. Kim, H. J. Kim, and S. Hahn, “Irreversible JPEG 2000 compression of abdominal CT for primary interpretation: Assessment of visually lossless threshold,” Eur. Radiol. 17, 1529–1534 (2007).
55. R. M. Slone, E. Muka, and T. K. Pilgram, “Irreversible JPEG compression of digital chest radiographs for primary interpretation: Assessment of visually lossless threshold,” Radiology 228, 425–429 (2003).
56. O. Kocsis, L. Costaridou, L. Varaki, E. Likaki, C. Kalogeropoulou, S. Skiadopoulos, and G. Panayiotakis, “Visually lossless threshold determination for microcalcification detection in wavelet compressed mammograms,” Eur. Radiol. 13, 2390–2396 (2003).
57. H. Ringl, R. E. Schernthaner, A. A. Bankier, M. Weber, M. Prokop, C. J. Herold, and C. Schaefer-Prokop, “JPEG2000 compression of thin-section CT images of the lung: Effect of compression ratio on image quality,” Radiology 240, 869–877 (2006).
58. H. S. Woo, K. J. Kim, T. J. Kim, S. Hahn, B. Kim, Y. H. Kim, and K. H. Lee, “JPEG 2000 compression of abdominal CT: Difference in tolerance between thin- and thick-section images,” AJR Am. J. Roentgenol. 189, 535–541 (2007).
59. C. E. Metz, “Some practical issues of experimental design and data analysis in radiological ROC studies,” Invest. Radiol. 24, 234–245 (1989). http://dx.doi.org/10.1097/00004424-198903000-00012
60. T. Kobayashi, X. W. Xu, H. MacMahon, C. E. Metz, and K. Doi, “Effect of a computer-aided diagnosis scheme on radiologists’ performance in detection of lung nodules on radiographs,” Radiology 199, 843–848 (1996).
61. C. E. Metz, in Handbook of Medical Imaging, edited by J. Beutel, H. L. Kundel, and R. L. Van Metter (SPIE, Bellingham, WA, 2000), Vol. 1, pp. 751–769.
62. S. V. Beiden et al., “Independent versus sequential reading in ROC studies of computer-assist modalities: Analysis of components of variance,” Acad. Radiol. 9, 1036–1043 (2002).
63. H. E. Rockette, W. L. Campbell, C. A. Britton, J. M. Holbert, J. L. King, and D. Gur, “Empiric assessment of parameters that affect the design of multireader receiver operating characteristic studies,” Acad. Radiol. 6, 723–729 (1999).
64. N. A. Obuchowski and R. C. Zepp, “Simple steps for improving multiple-reader studies in radiology,” AJR Am. J. Roentgenol. 166, 517–521 (1996).
65. J. L. King, C. A. Britton, D. Gur, H. E. Rockette, and P. L. Davis, “On the validity of the continuous and discrete confidence rating scales in receiver operating characteristic studies,” Invest. Radiol. 28, 962–963 (1993).
66. H. E. Rockette, D. Gur, and C. E. Metz, “The use of continuous and discrete confidence judgments in receiver operating characteristic studies of diagnostic imaging techniques,” Invest. Radiol. 27, 169–172 (1992).
67. K. S. Berbaum, D. D. Dorfman, E. A. Franken, Jr., and R. T. Caldwell, “An empirical comparison of discrete ratings and subjective probability ratings,” Acad. Radiol. 9, 756–763 (2002).
68. American College of Radiology (ACR), The Breast Imaging Reporting and Data System Atlas (American College of Radiology, Reston, VA, 2004).
69. W. E. Barlow et al., “Accuracy of screening mammography interpretation by characteristics of radiologists,” J. Natl. Cancer Inst. 96, 1840–1850 (2004).
70. J. J. Fenton et al., “Influence of computer-aided detection on performance of screening mammography,” N. Engl. J. Med. 356, 1399–1409 (2007). http://dx.doi.org/10.1056/NEJMoa066099
71. R. F. Wagner, S. V. Beiden, and C. E. Metz, “Continuous versus categorical data for ROC analysis: Some quantitative considerations,” Acad. Radiol. 8, 328–334 (2001).
72. Y. Jiang, R. M. Nishikawa, R. A. Schmidt, C. E. Metz, M. L. Giger, and K. Doi, “Improving breast cancer diagnosis with computer-aided diagnosis,” Acad. Radiol. 6, 22–33 (1999).
73. D. D. Dorfman, K. S. Berbaum, and C. E. Metz, “Receiver operating characteristic rating analysis. Generalization to the population of readers and patients with the jackknife method,” Invest. Radiol. 27, 723–731 (1992). http://dx.doi.org/10.1097/00004424-199209000-00015
74. N. A. Obuchowski, “Multireader, multimodality receiver operating characteristic curve studies: Hypothesis testing and sample size estimation using an analysis of variance approach with dependent observations,” Acad. Radiol. 2, S22–S29; discussion S57–S64, S70–S71 (1995).
75. A. Toledano and C. A. Gatsonis, “Regression analysis of correlated receiver operating characteristic data,” Acad. Radiol. 2, S30–S36; discussion S61–S64, S70–S71 (1995).
76. S. L. Hillis, N. A. Obuchowski, K. M. Schartz, and K. S. Berbaum, “A comparison of the Dorfman–Berbaum–Metz and Obuchowski–Rockette methods for receiver operating characteristic (ROC) data,” Stat. Med. 24, 1579–1607 (2005).
77. S. V. Beiden, R. F. Wagner, and G. Campbell, “Components-of-variance models and multiple-bootstrap experiments: An alternative method for random-effects, receiver operating characteristic analysis,” Acad. Radiol. 7, 341–349 (2000).
78. S. V. Beiden, R. F. Wagner, G. Campbell, C. E. Metz, and Y. Jiang, “Components-of-variance models for random-effects ROC analysis: The case of unequal variance structures across modalities,” Acad. Radiol. 8, 605–615 (2001).
79. S. V. Beiden, R. F. Wagner, G. Campbell, and H. P. Chan, “Analysis of uncertainties in estimates of components of variance in multivariate ROC analysis,” Acad. Radiol. 8, 616–622 (2001).
80. H. H. Barrett, M. A. Kupinski, and E. Clarkson, “Probabilistic foundations of the MRMC method,” Proc. SPIE 5749, 21–31 (2005).
81. B. D. Gallas, “One-shot estimate of MRMC variance: AUC,” Acad. Radiol. 13, 353–362 (2006).
82. F. Wang and C. A. Gatsonis, “Hierarchical models for ROC curve summary measures: Design and analysis of multi-reader, multi-modality studies of medical tests,” Stat. Med. 27, 243–256 (2008).
83. S. J. Starr, C. E. Metz, L. B. Lusted, and D. J. Goodenough, “Visual detection and localization of radiographic images,” Radiology 116, 533–538 (1975).
84. R. G. Swensson, “Unified measurement of observer performance in detecting and localizing target objects on images,” Med. Phys. 23, 1709–1725 (1996). http://dx.doi.org/10.1118/1.597758
85. P. C. Bunch, J. F. Hamilton, G. K. Sanderson, and A. H. Simmons, “A free-response approach to the measurement and characterization of radiographic-observer performance,” J. Appl. Photogr. Eng. 4, 166–171 (1978).
86. D. P. Chakraborty and K. S. Berbaum, “Observer studies involving detection and localization: Modeling, analysis, and validation,” Med. Phys. 31, 2313–2330 (2004). http://dx.doi.org/10.1118/1.1769352
87. D. C. Edwards, C. E. Metz, and M. A. Kupinski, “Ideal observers and optimal ROC hypersurfaces in N-class classification,” IEEE Trans. Med. Imaging 23, 891–895 (2004). http://dx.doi.org/10.1109/TMI.2004.828358
88. X. He, C. E. Metz, B. M. Tsui, J. M. Links, and E. C. Frey, “Three-class ROC analysis—A decision theoretic approach under the ideal observer framework,” IEEE Trans. Med. Imaging 25, 571–581 (2006). http://dx.doi.org/10.1109/TMI.2006.871416
89. D. P. Chakraborty, “Recent advances in observer performance methodology: Jackknife free-response ROC (JAFROC),” Radiat. Prot. Dosimetry 114, 26–31 (2005).
90. D. P. Chakraborty, “Analysis of location specific observer performance data: Validated extensions of the jackknife free-response (JAFROC) method,” Acad. Radiol. 13, 1187–1193 (2006).
91. B. Zheng, D. P. Chakraborty, H. E. Rockette, G. S. Maitz, and D. Gur, “A comparison of two data analyses from two observer performance studies using jackknife ROC and JAFROC,” Med. Phys. 32, 1031–1034 (2005). http://dx.doi.org/10.1118/1.1884766
92. J. Shiraishi, D. Appelbaum, Y. Pu, Q. Li, L. Pesce, and K. Doi, “Usefulness of temporal subtraction images for identification of interval changes in successive whole-body bone scans: JAFROC analysis of radiologists’ performance,” Acad. Radiol. 14, 959–966 (2007).
93. K. Ueda, S. Iwasaki, M. Nagasawa, S. Sueyoshi, J. Takahama, K. Ide, and K. Kichikawa, “Hard-copy versus soft-copy image reading for detection of ureteral stones on abdominal radiography,” Radiat. Med. 21, 210–213 (2003).
94. E. A. Berns, R. E. Hendrick, M. Solari, L. Barke, D. Reddy, J. Wolfman, L. Segal, P. DeLeon, S. Benjamin, and L. Willis, “Digital and screen-film mammography: Comparison of image acquisition and interpretation times,” AJR Am. J. Roentgenol. 187, 38–41 (2006).
95. H. M. Zafar, R. S. Lewis, and J. H. Sunshine, “Satisfaction of radiologists in the United States: A comparison between 2003 and 1995,” Radiology 244, 223–231 (2007).
96. A. Zuger, “Dissatisfaction with medical practice,” N. Engl. J. Med. 350, 69–75 (2004).
97. S. P. Prabhu, S. Gandhi, and P. R. Goddard, “Ergonomics of digital imaging,” Br. J. Radiol. 78, 582–586 (2005).
98. P. L. Spath, “Caring on empty: Fatigue in healthcare,” Radiol. Today, July, 20–24 (2006).
99. T. Vertinsky and B. Forster, “Prevalence of eye strain among radiologists: Influence of viewing variables on symptoms,” AJR Am. J. Roentgenol. 184, 681–686 (2005).
100. E. A. Krupinski and M. Kallergi, “Choosing a radiology workstation: Technical and clinical considerations,” Radiology 242, 671–682 (2007).
101. S. Halligan, D. G. Altman, S. Mallett, S. A. Taylor, D. Burling, M. Roddie, L. Honeyfield, J. McQuillan, H. Amin, and J. Dehmeshki, “Computed tomographic colonography: Assessment of radiologist performance with and without computer-aided detection,” Gastroenterology 131, 2006–2009 (2006).
102. S. Kakeda, K. Kamada, Y. Hatakeyama, T. Aoki, Y. Korogi, S. Katsuragawa, and K. Doi, “Effect of temporal subtraction technique on interpretation time and diagnostic accuracy of chest radiography,” AJR Am. J. Roentgenol. 187, 1253–1259 (2006).
103. S. H. Kim, J. M. Lee, Y. J. Kim, J. Y. Choi, G. H. Kim, H. Y. Lee, and B. I. Choi, “Detection of hepatocellular carcinoma on CT in liver transplant candidates: Comparison of PACS tile and multisynchronized stack modes,” AJR Am. J. Roentgenol. 188, 1337–1342 (2007).
104. C. Mariani, A. Tronchi, L. Oncini, O. Pirani, and R. Murri, “Analysis of the x-ray work flow in two diagnostic imaging departments with and without a RIS/PACS system,” J. Digit. Imaging 19, 18–28 (2006).
105. B. I. Reiner, E. L. Siegel, and K. M. Siddiqui, in Decision Support in the Digital Medical Enterprise, edited by B. I. Reiner, E. L. Siegel, and B. J. Erickson (Society for Computer Applications in Radiology, Great Falls, VA, 2005), pp. 121–133.
106. K. M. Schartz, K. S. Berbaum, R. T. Caldwell, and M. T. Madsen, “Workstation J: Workstation emulation software for medical image perception and technology evaluation research,” Proc. SPIE 6515, 1–11 (2007).
107. E. A. Krupinski, “Using the human observer to assess medical image display quality,” J. Soc. Inf. Disp. 14, 927–932 (2006). http://dx.doi.org/10.1889/1.2372427
108. W. J. Tuddenham and W. P. Calvert, “Visual search patterns in roentgen diagnosis,” Radiology 76, 255–256 (1961).
109. E. L. Thomas and E. L. Lansdown, “Visual search patterns of radiologists in training,” Radiology 81, 288–291 (1963).
110. H. L. Kundel, C. F. Nodine, and D. P. Carmody, “Visual scanning, pattern recognition and decision-making in pulmonary tumor detection,” Invest. Radiol. 13, 175–181 (1978).
111. E. A. Krupinski, “Visual scanning patterns of radiologists searching mammograms,” Acad. Radiol. 3, 137–144 (1996).
112. C. F. Nodine, C. Mello-Thoms, H. L. Kundel, and S. P. Weinstein, “Time course of perception and decision making during mammographic interpretation,” AJR Am. J. Roentgenol. 179, 917–923 (2002).
113. E. A. Krupinski, “Technology and perception in the 21st-century reading room,” J. Am. Coll. Radiol. 3, 433–439 (2006).
114. C. F. Nodine, H. L. Kundel, L. C. Toto, and E. A. Krupinski, “Recording and analyzing eye-position data using a microcomputer workstation,” Behav. Res. Methods Instrum. Comput. 24, 475–485 (1992).
115. E. Krupinski, H. Roehrig, and T. Furukawa, “Influence of film and monitor display luminance on observer performance and visual search,” Acad. Radiol. 6, 411–418 (1999).
116. E. A. Krupinski and H. Roehrig, “The influence of a perceptually linearized display on observer performance and visual search,” Acad. Radiol. 7, 8–13 (2000).
117. E. A. Krupinski, H. Roehrig, J. Fan, and T. Yoneda, “High luminance monochrome vs low luminance monochrome and color softcopy displays: Observer performance and visual search efficiency,” Proc. SPIE 6515, 65150R (2007).
118. E. A. Krupinski, K. Siddiqui, E. Siegel, R. Shrestha, E. Grant, H. Roehrig, and J. Fan, “Influence of 8-bit vs 11-bit displays on observer performance and visual search: A multi-center evaluation,” J. Soc. Inf. Disp. 15, 385–390 (2007). http://dx.doi.org/10.1889/1.2749324
119. E. A. Krupinski and P. J. Lund, “Differences in time to interpretation for evaluation of bone radiographs with monitor and film viewing,” Acad. Radiol. 4, 177–182 (1997).
120. P. R. G. Bak, “Will the use of irreversible compression become a standard of practice?” SIIM News 18, 1–10 (2006).
121. Y. Zhang, B. T. Pham, and M. P. Eckstein, “The effect of nonlinear human visual system components on performance of a channelized Hotelling observer model in structured backgrounds,” IEEE Trans. Med. Imaging 25, 1348–1362 (2006). http://dx.doi.org/10.1109/TMI.2006.880681
122. Y. Jiang, D. Huo, and D. L. Wilson, “Methods for quantitative image quality evaluation of MRI parallel reconstructions: Detection and perceptual difference model,” Magn. Reson. Imaging 25, 712–721 (2007).
123. W. B. Jackson, P. Beebee, D. A. Jared, D. K. Biegelsen, J. O. Larimer, J. Lubin, and J. L. Gille, “X-ray system design using a human visual model,” Proc. SPIE 2708, 29–40 (1996).
124. E. Krupinski, J. Johnson, H. Roehrig, and J. Lubin, “Using a human visual system model to optimize soft-copy mammography display: Influence of display phosphor,” Acad. Radiol. 10, 161–166 (2003).
125. J. P. Johnson, J. Nafziger, E. A. Krupinski, J. Lubin, and H. Roehrig, “Effects of grayscale window/level parameters on breast-lesion detectability,” Proc. SPIE 5034, 462–473 (2003). http://dx.doi.org/10.1117/12.480340
126. E. A. Krupinski, J. Lubin, H. Roehrig, J. Johnson, and J. Nafziger, “Using a human visual system model to optimize soft-copy mammography display: Influence of veiling glare,” Acad. Radiol. 13, 289–295 (2006).
127. N. A. Obuchowski, “Sample size tables for receiver operating characteristic studies,” AJR Am. J. Roentgenol. 175, 603–608 (2000).
128. J. A. Hanley and B. J. McNeil, “The meaning and use of the area under a receiver operating characteristic (ROC) curve,” Radiology 143, 29–36 (1982).
129. J. A. Hanley and B. J. McNeil, “A method of comparing the areas under receiver operating characteristic curves derived from the same cases,” Radiology 148, 839–843 (1983).
130. X. H. Zhou, N. A. Obuchowski, and D. K. McClish, Statistical Methods in Diagnostic Medicine (Wiley, New York, 2002).
131. H. P. Chan et al., “Improvement in radiologists’ detection of clustered microcalcifications on mammograms. The potential of computer-aided diagnosis,” Invest. Radiol. 25, 1102–1110 (1990). http://dx.doi.org/10.1097/00004424-199010000-00006
132. W. P. Kegelmeyer, J. M. Pruneda, P. D. Bourland, A. Hillis, M. W. Riggs, and M. L. Nipper, “Computer-aided mammographic screening for spiculated lesions,” Radiology 191, 331–337 (1994).
133. D. Gur, A. I. Bandos, C. R. Fuhrman, A. H. Klym, J. L. King, and H. E. Rockette, “The prevalence effect in a laboratory environment: Changing the confidence ratings,” Acad. Radiol. 14, 49–53 (2007).
134. K. S. Berbaum, G. Y. El-Khoury, E. A. Franken, D. M. Kuehn, D. M. Meis, D. D. Dorfman, N. G. Warnock, B. H. Thompson, S. C. S. Kao, and M. H. Kathol, “Missed fractures resulting from satisfaction of search effect,” Emerg. Radiol. 1, 242–249 (1994).
135. K. S. Berbaum, E. A. Franken, D. D. Dorfman, E. M. Miller, E. A. Krupinski, K. Kreinbring, R. T. Caldwell, and C. H. Lu, “The cause of satisfaction of search effects in contrast studies of the abdomen,” Acad. Radiol. 3, 815–826 (1996).
136. K. S. Berbaum, G. Y. El-Khoury, K. Ohashi, K. M. Schartz, R. T. Caldwell, M. T. Madsen, and E. A. Franken, “Satisfaction of search in multi-trauma patients: Severity of detected fractures,” Acad. Radiol. 14, 711–722 (2007).
137. C. T. Loy and L. Irwig, “Accuracy of diagnostic tests read with and without clinical information: A systematic review,” JAMA, J. Am. Med. Assoc. 292, 1602–1609 (2004).
138. L. Ruess, S. C. O’Connor, K. H. Cho, F. H. Hussain, W. J. Howard, R. C. Slaughter, and A. Hedge, “Carpal tunnel syndrome and cubital tunnel syndrome: Work-related musculoskeletal disorders in four symptomatic radiologists,” AJR Am. J. Roentgenol. 181, 37–42 (2003).
139. E. A. Krupinski, A. Johns, and K. S. Berbaum, “Measurement of visual strain in radiologists,” Proc. SPIE (in press).
140. J. M. Lewin et al., “Comparison of full-field digital mammography with screen-film mammography for cancer detection: Results of 4,945 paired examinations,” Radiology 218, 873–880 (2001).
141. T. W. Freer and M. J. Ulissey, “Screening mammography with computer-aided detection: Prospective study of 12,860 patients in a community breast center,” Radiology 220, 781–786 (2001). http://dx.doi.org/10.1148/radiol.2203001282
142. D. Gur et al., “Changes in breast cancer detection and mammography recall rates after the introduction of a computer-aided detection system,” J. Natl. Cancer Inst. 96, 185–190 (2004).
143. E. D. Pisano et al., “American College of Radiology Imaging Network digital mammographic imaging screening trial: Objectives and methodology,” Radiology 236, 404–412 (2005).
144. E. D. Pisano et al., “Diagnostic performance of digital versus film mammography for breast-cancer screening,” N. Engl. J. Med. 353, 1773–1783 (2005). http://dx.doi.org/10.1056/NEJMoa052911
145. J. Warwick, L. Tabar, B. Vitak, and S. W. Duffy, “Time-dependent effects on survival in breast carcinoma: Results of 20 years of follow-up from the Swedish Two-County Study,” Cancer 100, 1331–1336 (2004).
146. National Lung Screening Trial (NLST) National Cancer Institute web site, http://www.cancer.gov/nlst, last checked August 2007.
147. P. B. Bach, J. R. Jett, U. Pastorino, M. S. Tockman, S. J. Swensen, and C. B. Begg, “Computed tomography screening and lung cancer outcomes,” JAMA, J. Am. Med. Assoc. 297, 953–961 (2007).
148. P. M. Marcus, E. J. Bergstralh, M. H. Zweig, A. Harris, K. P. Offord, and R. S. Fontana, “Extended lung cancer incidence follow-up in the Mayo Lung Project and overdiagnosis,” J. Natl. Cancer Inst. 98, 748–756 (2006).
149. M. M. Oken, P. M. Marcus, P. Hu, T. M. Beck, W. Hocking, P. A. Kvale, J. Cordes, T. L. Riley, S. D. Winslow, S. Peace, D. L. Levin, P. C. Prorok, and J. K. Gohagan, “Baseline chest radiograph for lung cancer detection in the randomized Prostate, Lung, Colorectal and Ovarian Cancer Screening Trial,” J. Natl. Cancer Inst. 97, 1832–1839 (2005).
150. J. L. Weissfeld, R. E. Schoen, P. F. Pinsky, R. S. Bresalier, T. Church, S. Yurgalevitch, J. H. Austin, P. C. Prorok, and J. K. Gohagan, “Flexible sigmoidoscopy in the PLCO cancer screening trial: Results from the baseline screening examination of a randomized trial,” J. Natl. Cancer Inst. 97, 989–997 (2005).
151. G. L. Andriole, D. L. Levin, E. D. Crawford, E. P. Gelmann, P. F. Pinsky, D. Chia, B. S. Kramer, D. Reding, T. R. Church, R. L. Grubb, G. Izmirlian, L. R. Ragard, J. D. Clapp, P. C. Prorok, and J. K. Gohagan, “Prostate Cancer Screening in the Prostate, Lung, Colorectal and Ovarian (PLCO) Cancer Screening Trial: Findings from the initial screening round of a randomized trial,” J. Natl. Cancer Inst. 97, 433–438 (2005).
152. S. S. Buys, E. Partridge, M. H. Greene, P. C. Prorok, D. Reding, T. L. Riley, P. Hartge, R. M. Fagerstrom, L. R. Ragard, D. Chia, G. Izmirlian, M. Fouad, C. C. Johnson, and J. K. Gohagan, “Ovarian cancer screening in the Prostate, Lung, Colorectal and Ovarian (PLCO) cancer screening trial: Findings from the initial screen of a randomized trial,” Am. J. Obstet. Gynecol. 193, 1630–1639 (2005).
153. R. M. Nishikawa, in Digital Mammography, edited by S. M. Astley, M. Brady, C. Rose, and R. Zwiggelaar (Springer, London, 2006), pp. 46–53.
154. N. Breen et al., “Reported drop in mammography: Is this cause for concern?,” Cancer 109, 2405–2409 (2007).
155. L. Tabar, S. W. Duffy, B. Vitak, H. H. Chen, and T. C. Prevost, “The natural history of breast carcinoma: What have we learned from screening?,” Cancer 86, 449–462 (1999).
156. D. A. Berry, “Benefits and risks of screening mammography for women in their forties: A statistical appraisal,” J. Natl. Cancer Inst. 90, 1431–1439 (1998).
157. D. A. Berry et al., “Effect of screening and adjuvant therapy on mortality from breast cancer,” N. Engl. J. Med. 353, 1784–1792 (2005).
158. P. C. Gotzsche and O. Olsen, “Is screening for breast cancer with mammography justifiable?,” Lancet 355, 129–134 (2000). http://dx.doi.org/10.1016/S0140-6736(99)06065-1
159. O. Olsen and P. C. Gotzsche, “Cochrane review on screening for breast cancer with mammography,” Lancet 358, 1340–1342 (2001). http://dx.doi.org/10.1016/S0140-6736(01)06449-2
160. S. A. Feig, R. L. Birdwell, and M. N. Linver, “Computer-aided screening mammography,” N. Engl. J. Med. 357, 84; author reply, N. Engl. J. Med. 357, 85 (2007).
161. A. Jemal, R. Siegel, E. Ward, T. Murray, J. Xu, and M. J. Thun, “Cancer statistics, 2007,” CA Cancer J. Clin. 57, 43–66 (2007).
162. Y. Jiang, D. L. Miglioretti, C. E. Metz, and R. A. Schmidt, “Breast cancer detection rate: Designing imaging trials to demonstrate improvements,” Radiology 243, 360–367 (2007).
163. R. Ballard-Barbash et al., “Breast Cancer Surveillance Consortium: A national mammography screening and outcomes database,” AJR Am. J. Roentgenol. 169, 1001–1008 (1997).
164. J. G. Elmore, C. K. Wells, C. H. Lee, D. H. Howard, and A. R. Feinstein, “Variability in radiologists’ interpretations of mammograms,” N. Engl. J. Med. 331, 1493–1499 (1994).
165. C. A. Beam, P. M. Layde, and D. C. Sullivan, “Variability in the interpretation of screening mammograms by US radiologists. Findings from a national sample,” Arch. Intern. Med. 156, 209–213 (1996).
166. D. Gur, “Objectively measuring and comparing performance levels of diagnostic imaging systems and practices,” Acad. Radiol. 14, 641–642 (2007).
167. C. M. Rutter and S. Taplin, “Assessing mammographers’ accuracy. A comparison of clinical and test performance,” J. Clin. Epidemiol. 53, 443–450 (2000).
168. K. Moberg, N. Bjurstam, B. Wilczek, L. Rostgard, E. Egge, and C. Muren, “Computed assisted detection of interval breast cancers,” Eur. J. Radiol. 39, 104–110 (2001). http://dx.doi.org/10.1016/S0720-048X(01)00291-1
169. C. Marx et al., “Are unnecessary follow-up procedures induced by computer-aided diagnosis (CAD) in mammography? Comparison of mammographic diagnosis with and without use of CAD,” Eur. J. Radiol. 51, 66–72 (2004). http://dx.doi.org/10.1016/S0720-048X(03)00144-X
170. E. Alberdi et al., “Use of computer-aided detection (CAD) tools in screening mammography: A multidisciplinary investigation,” Br. J. Radiol. 78, S31–S40 (2005).
171. P. Taylor and R. M. Given-Wilson, “Evaluation of computer-aided detection (CAD) devices,” Br. J. Radiol. 78, S26–S30 (2005). http://dx.doi.org/10.1259/bjr/84545410
172. F. J. Gilbert et al., “Single reading with computer-aided detection and double reading of screening mammograms in the United Kingdom National Breast Screening Program,” Radiology 241, 47–53 (2006).
173. S. H. Taplin, C. M. Rutter, and C. D. Lehman, “Testing the effect of computer-assisted detection on interpretive performance in screening mammography,” AJR Am. J. Roentgenol. 187, 1475–1482 (2006).
174. G. M. te Brake, N. Karssemeijer, and J. H. Hendriks, “Automated detection of breast carcinomas not detected in a screening program,” Radiology 207, 465–471 (1998).
175. L. J. Warren Burhenne et al., “Potential contribution of computer-aided detection to the sensitivity of screening mammography,” Radiology 215, 554–562 (2000).
176. R. L. Birdwell, D. M. Ikeda, K. F. O’Shaughnessy, and E. A. Sickles, “Mammographic characteristics of 115 missed cancers later detected with screening mammography and the potential utility of computer-aided detection,” Radiology 219, 192–202 (2001).
177. B. Zheng, R. Shah, L. Wallace, C. Hakim, M. A. Ganott, and D. Gur, “Computer-aided detection in mammography: An assessment of performance on current and prior images,” Acad. Radiol. 9, 1245–1250 (2002).
178. R. F. Brem et al., “Improvement in sensitivity of screening mammography with computer-aided detection: A multiinstitutional trial,” AJR Am. J. Roentgenol. 181, 687–693 (2003).
179. N. Karssemeijer et al., “Computer-aided detection versus independent double reading of masses on mammograms,” Radiology 227, 192–200 (2003).
180. S. V. Destounis, P. DiNitto, W. Logan-Young, E. Bonaccio, M. L. Zuley, and K. M. Willison, “Can computer-aided detection with double reading of screening mammograms help decrease the false-negative rate? Initial experience,” Radiology 232, 578–584 (2004). http://dx.doi.org/10.1148/radiol.2322030034
181. D. M. Ikeda, R. L. Birdwell, K. F. O’Shaughnessy, E. A. Sickles, and R. J. Brenner, “Computer-aided detection output on 172 subtle findings on normal mammograms previously obtained in women with breast cancer detected at follow-up screening mammography,” Radiology 230, 811–819 (2004).
182. S. Ciatto et al., “Computer-aided detection (CAD) of cancers detected on double reading by one reader only,” Breast 15, 528–532 (2006).
183. P. Skaane, A. Kshirsagar, S. Stapleton, K. Young, and R. A. Castellino, “Effect of computer-aided detection on independent double reading of paired screen-film and full-field digital screening mammograms,” AJR Am. J. Roentgenol. 188, 377–384 (2007).
184. M. C. Difazio et al., “Digital chest radiography: Effect of temporal subtraction images on detection accuracy,” Radiology 202, 447–452 (1997).
185. L. Monnier-Cholley, H. MacMahon, S. Katsuragawa, J. Morishita, T. Ishida, and K. Doi, “Computer-aided diagnosis for detection of interstitial opacities on chest radiographs,” AJR Am. J. Roentgenol. 171, 1651–1656 (1998).
186. D. J. Getty, R. M. Pickett, C. J. D’Orsi, and J. A. Swets, “Enhanced interpretation of diagnostic images,” Invest. Radiol. 23, 240–252 (1988).
187. H. P. Chan et al., “Improvement of radiologists’ characterization of mammographic masses by using computer-aided diagnosis: An ROC study,” Radiology 212, 817–827 (1999).
188. K. Ashizawa et al., “Effect of an artificial neural network on radiologists’ performance in the differential diagnosis of interstitial lung disease using chest radiographs,” AJR Am. J. Roentgenol. 172, 1311–1315 (1999).
189. J. Shiraishi, H. Abe, R. Engelmann, M. Aoyama, H. MacMahon, and K. Doi, “Computer-aided diagnosis to distinguish benign from malignant solitary pulmonary nodules on radiographs: ROC analysis of radiologists’ performance–initial experience,” Radiology 227, 469–474 (2003).
190. M. A. Helvie et al., “Sensitivity of noncommercial computer-aided detection system for mammographic breast cancer detection: Pilot clinical trial,” Radiology 231, 208–214 (2004). http://dx.doi.org/10.1148/radiol.2311030429
191. R. L. Birdwell, P. Bandodkar, and D. M. Ikeda, “Computer-aided detection with screening mammography in a university hospital setting,” Radiology 236, 451–457 (2005). http://dx.doi.org/10.1148/radiol.2362040864
192. T. E. Cupples, J. E. Cunningham, and J. C. Reynolds, “Impact of computer-aided detection in a regional screening mammography program,” AJR Am. J. Roentgenol. 185, 944–950 (2005).
193. L. A. Khoo, P. Taylor, and R. M. Given-Wilson, “Computer-aided detection in the United Kingdom National Breast Screening Programme: Prospective study,” Radiology 237, 444–449 (2005). http://dx.doi.org/10.1148/radiol.2372041362
194. J. C. Dean and C. C. Ilvento, “Improved cancer detection using computer-aided detection with diagnostic and screening mammography: Prospective study of 104 cancers,” AJR Am. J. Roentgenol. 187, 20–28 (2006).
195. J. M. Ko, M. J. Nicholas, J. B. Mendel, and P. J. Slanetz, “Prospective assessment of computer-aided detection in interpretation of screening mammography,” AJR Am. J. Roentgenol. 187, 1483–1491 (2006).
196. M. J. Morton, D. H. Whaley, K. R. Brandt, and K. K. Amrami, “Screening mammograms: Interpretation with computer-aided detection–prospective evaluation,” Radiology 239, 375–383 (2006). http://dx.doi.org/10.1148/radiol.2392042121
197. S. A. Feig, E. A. Sickles, W. P. Evans, and M. N. Linver, “Re: Changes in breast cancer detection and mammography recall rates after the introduction of a computer-aided detection system,” J. Natl. Cancer Inst. 96, 1260–1261; author reply, J. Natl. Cancer Inst. 96, 1261 (2004).
198. D. Gur, “Computer-aided screening mammography,” N. Engl. J. Med. 357, 83–84; author reply, N. Engl. J. Med. 357, 85 (2007).
199. R. M. Nishikawa, R. A. Schmidt, and C. E. Metz, “Computer-aided screening mammography,” N. Engl. J. Med. 357, 84; author reply, N. Engl. J. Med. 357, 85 (2007).
200. American College of Radiology (ACR), The Breast Imaging Reporting and Data System Atlas (American College of Radiology, Reston, VA, 2004), p. 195.
201. C. A. Roe and C. E. Metz, “Variance-component modeling in the analysis of receiver operating characteristic index estimates,” Acad. Radiol. 4, 587–600 (1997).

Abstract

Medical imaging used to be primarily within the domain of radiology, but with the advent of virtual pathology slides and telemedicine, imaging technology is expanding in the healthcare enterprise. As new imaging technologies are developed, they must be evaluated to assess the impact and benefit on patient care. The authors review the hierarchical model of the efficacy of diagnostic imaging systems by Fryback and Thornbury [Med. Decis. Making 11, 88–94 (1991)] as a guiding principle for system evaluation. Evaluation of medical imaging systems encompasses everything from the hardware and software used to acquire, store, and transmit images to the presentation of images to the interpreting clinician. Evaluation of medical imaging systems can take many forms, from the purely technical (e.g., patient dose measurement) to the increasingly complex (e.g., determining whether a new imaging method saves lives and benefits society). Evaluation methodologies cover a broad range, from receiver operating characteristic (ROC) techniques that measure diagnostic accuracy to timing studies that measure image-interpretation workflow efficiency. The authors review briefly the history of the development of evaluation methodologies and review ROC methodology as well as other types of evaluation methods. They discuss unique challenges in system evaluation that face the imaging community today and opportunities for future advances.
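
Illustrative note (not part of the original article): in its simplest empirical form, the ROC-based accuracy measurement discussed above reduces to the area under the ROC curve (AUC), which equals the Wilcoxon/Mann–Whitney statistic described by Hanley and McNeil (Ref. 128). The short Python sketch below shows that computation for a single reader; the 5-point confidence ratings, the case counts, and the function name empirical_auc are hypothetical and chosen only for illustration.

    # Minimal sketch, assuming hypothetical 5-point reader confidence ratings.
    # The empirical (trapezoidal) ROC area equals the probability that a randomly
    # chosen diseased case is rated higher than a randomly chosen normal case,
    # with ties counted as one half (cf. Hanley and McNeil, Ref. 128).
    def empirical_auc(diseased_ratings, normal_ratings):
        pairs = len(diseased_ratings) * len(normal_ratings)
        wins = sum(1.0 if d > n else 0.5 if d == n else 0.0
                   for d in diseased_ratings for n in normal_ratings)
        return wins / pairs

    # Hypothetical ratings on a 1 (definitely normal) to 5 (definitely diseased) scale.
    diseased = [5, 4, 4, 3, 5, 2]
    normal = [1, 2, 3, 1, 2, 4]
    print("Empirical AUC = %.3f" % empirical_auc(diseased, normal))

Multireader, multicase generalizations of this quantity (e.g., the Dorfman–Berbaum–Metz and Obuchowski–Rockette approaches, Refs. 73–76) additionally account for reader and case variability rather than assuming a single fixed reader.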
