Volume 35, Issue 6, June 2008
Index of content:
- Imaging: Continuing Education Course: Room 351
CE‐Imaging: Breast Imaging I
35(2008); http://dx.doi.org/10.1118/1.2962324
Full Field Digital Mammography (FFDM) systems are fast becoming popular and are replacing conventional screen‐film mammography systems. FFDM systems use a digital detector instead of screen‐film to capture the image of the breast. Several FFDM systems based on different digital detector designs and materials are currently available. Some systems use direct conversion detectors, while others are based on an indirect conversion process. Digital detectors can be characterized by image quality metrics such as the detective quantum efficiency (DQE) and the modulation transfer function (MTF).
In this lecture, the physics and technology of the digital detectors used in FFDM will be discussed, and the advantages and applications arising from the availability of these systems will be presented. The most recent commercially available systems will also be discussed.
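As an illustration of one of these metrics, the presampled MTF can be computed as the normalized magnitude of the Fourier transform of a measured line spread function (LSF). A minimal sketch in Python; the Gaussian LSF here is synthetic and for illustration only, not data from any detector:

```python
import numpy as np

def mtf_from_lsf(lsf, pixel_pitch_mm):
    """Presampled MTF: magnitude of the Fourier transform of the LSF,
    normalized to unity at zero spatial frequency."""
    lsf = np.asarray(lsf, dtype=float)
    lsf = lsf / lsf.sum()                                # normalize LSF area to 1
    mtf = np.abs(np.fft.rfft(lsf))                       # one-sided spectrum
    freqs = np.fft.rfftfreq(lsf.size, d=pixel_pitch_mm)  # cycles/mm
    return freqs, mtf / mtf[0]

# Synthetic Gaussian LSF sampled at a 0.1 mm pixel pitch
x = np.arange(-32, 32) * 0.1          # mm
sigma = 0.2                           # mm (illustrative blur width)
lsf = np.exp(-x**2 / (2 * sigma**2))
freqs, mtf = mtf_from_lsf(lsf, pixel_pitch_mm=0.1)
```

For a Gaussian LSF the result can be checked against the analytic form MTF(f) = exp(-2·pi²·sigma²·f²).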
1. Understand the physics of digital detector technology.
2. Understand image quality metrics such as DQE and MTF.
3. Recognize that vendors use varying detector technologies in FFDM systems.
4. Appreciate the advantages and disadvantages of digital mammography systems.
CE‐Imaging: Physics and Technology of Computed Tomography I
35(2008); http://dx.doi.org/10.1118/1.2962332
Recent advances in CT scanner technologies have generated a great deal of interest in dual energy CT. The basic research on the fundamental principles, however, dates back to the 1970's, and that work in turn was based in part on even earlier work on x‐ray absorptiometry. The purpose of this presentation is to discuss the basic physics principles of multi‐energy x‐ray methods for material characterization, and the strengths and limitations of the techniques and the various implementations.
A single energy CT image cannot fully characterize materials. At best it is an image of the linear attenuation coefficient at one energy, and it is well known that this does not uniquely identify the tissue. The x‐ray attenuation of materials with different atomic numbers has different energy dependence, and the concept behind multi‐energy imaging is to measure the x‐ray attenuation properties at different energies to more fully characterize the material. At first glance one might assume that measurements at N energies would enable the separation of the object into N component materials. However, an important limitation on the ability of x‐ray attenuation measurements to fully characterize materials is that, in the diagnostic energy range, the attenuation of all materials is dominated by Compton scattering and photoelectric absorption, and, unless there is an absorption edge in the energy range used, these x‐ray interaction mechanisms have the same energy dependence for all materials. Under these conditions there are really only two “basis functions,” and multi‐energy CT essentially measures the effective atomic number and the electron density of the material. This leaves considerable residual ambiguity as to what the material is. Also, measurement noise limits the ability to measure small changes in atomic number.
The main requirement for dual energy CT is to measure projection data using two x‐ray spectra. There are several implementations available. These include systems that obtain measurements at multiple kVp and/or filtration. There are also methods in which the x‐ray energy discrimination is performed at the detector. These include layered detectors and photon‐counting systems with energy analysis. These various approaches to obtaining energy dependent measurements have different dose efficiency and different sensitivity to subject motion.
There are two main ways to process the multi‐energy data. The simplest method reconstructs CT images from the data with each spectrum and performs the multi‐energy analysis on the values in the reconstructed images. The energy dependent processing can alternatively be applied to the measured projections prior to reconstruction. This latter method is somewhat more complicated but provides additional benefits, such as an implicit beam hardening correction.
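As a sketch of the image‐domain approach, each pixel's pair of reconstructed attenuation values can be decomposed into two basis‐material fractions by solving a 2×2 linear system. The attenuation coefficients below are hypothetical placeholders, not measured values; real coefficients depend on the spectra used:

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) of two basis
# materials (e.g., water and iodine) at low and high effective energies.
MU = np.array([[0.25, 4.0],    # low energy:  [water, iodine]
               [0.20, 1.5]])   # high energy: [water, iodine]

def decompose(mu_low, mu_high):
    """Solve, per pixel, the 2x2 system
       [mu_low ]   [MU[0,0] MU[0,1]] [c_water ]
       [mu_high] = [MU[1,0] MU[1,1]] [c_iodine]
    for the basis-material fractions."""
    rhs = np.stack([np.ravel(mu_low), np.ravel(mu_high)])  # shape (2, N)
    coeffs = np.linalg.solve(MU, rhs)                      # solve all pixels at once
    shape = np.shape(mu_low)
    return coeffs[0].reshape(shape), coeffs[1].reshape(shape)

# A pixel of pure "water": its measured values equal water's coefficients.
c_w, c_i = decompose(np.array([[0.25]]), np.array([[0.20]]))
```

The same solve applied to projection data before reconstruction would correspond to the second, beam-hardening-corrected method described above.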
Research sponsored by GE Healthcare.
35(2008); http://dx.doi.org/10.1118/1.2962333
In CT imaging, materials having different chemical compositions can be represented by the same, or very similar, CT numbers, making the differentiation and classification of different types of tissues extremely challenging. In dual energy CT, an additional attenuation measurement is obtained at a second energy, allowing the differentiation of the two materials. Previously implemented in the 1980s, dual‐kV techniques are again available on clinical CT systems, accomplished with either slow or fast tube potential switching or dual‐source methods. The fundamental principles of dual‐kV techniques and their relative strengths and weaknesses will be reviewed. Clinical applications of dual‐kV CT imaging will be described, including: 1) automatic removal of bony anatomy, including calcified plaque, from a CT data set; 2) semi‐quantitative indication of the perfused blood volume in lung parenchyma or the myocardium; 3) removal of the iodine signal from contrast‐enhanced CT data, which may allow for the elimination of the non‐contrast scan phase in some exams; and 4) characterization of tissue by its chemical composition, as in the discrimination of uric acid from calcium‐containing renal stones.
1. The technical approaches to dual‐kV, dual‐energy CT currently implemented or under investigation on commercial CT systems.
2. The technical strengths and weaknesses of each approach.
3. What clinical applications are currently in use or under investigation.
CE‐Imaging: The Physics and Technology of Magnetic Resonance Imaging III
TU‐A‐351‐01: Small Animal Magnetic Resonance Imaging: Current Trends, Challenges and Perspectives for Pathological Imaging
35(2008); http://dx.doi.org/10.1118/1.2962417
The utilization of magnetic resonance imaging (MRI) in the study of animal models of human pathology began more than 30 years ago. The continual advancement of imaging technology along with the increasing demand for tools to non‐invasively and serially assess disease progression and/or regression in small animals has driven this field forward since its inception. Standard clinical MRI scans primarily focus on disease detection and are based on anatomical and morphological abnormalities, whereas small animal MRI studies center on the serial characterization of the morphological, functional and molecular properties of diseased tissue. Small animal MRI studies often employ novel contrast mechanisms and exogenous chemical reagents, higher magnetic field strengths, higher spatial resolution and multi‐modality imaging, but are complicated by issues such as low signal‐to‐noise ratios, enhanced sensitivity to respiratory and susceptibility artifacts, anesthesia‐induced alterations of disease physiology and demanding data processing requirements.
This lecture will provide an overview of the role of small animal MRI in studying human disease, the technical and experimental challenges of such studies and how it could be used in the future to impact clinical treatment planning.
1. Understand the role small animal MRI plays in understanding human disease and its impact on the development and validation of novel therapeutic strategies.
2. Understand the issues and challenges related to a small animal MRI experiment, including design, data acquisition and processing.
3. Understand the future directions and potential advancements currently being pursued in small animal MRI.
CE‐Imaging: Physics and Technology of Computed Tomography II
35(2008); http://dx.doi.org/10.1118/1.2962425
CT technology continues to develop at a rapid pace, offering imaging options and features that can dramatically improve image quality. Multichannel systems are now standard and the number of channels and image acquisition options continues to increase, allowing greater coverage per rotation, shorter scan times, and thinner images. Isotropic volumetric data acquisition permits retrospective reconstructions of many different image thicknesses, and reformats can be created through multiple planes. These and many other advances have escalated and expanded the utility of CT imaging as a core diagnostic tool.
However, coupled with the improved CT technology are the increased complexity of operating the scanners and an elevated potential for increased radiation dose. CT operators must choose from multiple options, many of which are interdependent, to control the multitude of available features. The impact of each of these options on image quality and radiation dose can range from subtle to substantial, and may not be obvious to the operator.
This lecture will focus on the clinical implications of CT scan parameters and provide guidance on achieving an optimal compromise between image quality and radiation dose when constructing CT scan protocols.
1. Understand the influence of primary CT scan parameters on image quality and radiation dose.
2. Learn how to use imaging task‐specific priorities with consideration for radiation dose when determining scanner settings.
CE‐Imaging: Breast IV
35(2008); http://dx.doi.org/10.1118/1.2962670
Stereotactic Breast Biopsy (SBB) often seems to be the poor stepchild of the breast health family. This is exemplified by the fact that the number of units in the voluntary American College of Radiology SBB Accreditation Program measures in the hundreds, compared to over 13,000 units in the Mammography Accreditation Program. However, SBB holds its place as a vital player in the diagnosis of breast cancer. As such, the image quality and performance demands of SBB systems are great. Any object of interest seen on a mammogram must be visible in the SBB image, or the biopsy cannot take place. The purpose of this presentation is to provide an update on Stereotactic Breast Biopsy systems and the clinical demands placed on them. Techniques for testing these systems will be presented. Features of various systems will be discussed. Information regarding the ACR SBBAP will also be presented.
1. To become familiar with the approaches to Stereotactic Breast Biopsy.
2. To gain an understanding of the clinical requirements for these systems.
3. To review quality control requirements and methods for clinical SBB systems.
CE‐Imaging: Physics and Technology of Computed Tomography III
35(2008); http://dx.doi.org/10.1118/1.2962679
The assessment of radiation dose from computed tomography has become an important issue due to the increased utilization of computed tomography in a large number of clinical applications, from CT urography to cardiac CT. The traditional metric used for CT dose assessment (CTDI‐100) has come under assault from a number of investigators because of its limitations in describing the radiation dose in realistic CT examinations. While there remains no real consensus in the field, a number of groups are working on the development of CT dose metrics which convey a better understanding of the radiation dose received by individual patients for specific CT examinations. In this presentation, the perspectives of a number of groups will be presented. Primarily, the work of a committee of the International Commission on Radiation Units and Measurements (ICRU) commissioned in 2005 will be discussed.
The ICRU committee has preliminarily defined a multi‐tier system for the assessment of radiation dose metrics in computed tomography. At the first level, machine‐dependent performance factors are described and measured; these include the traditional CTDI‐100 metric for the 16 cm and 32 cm diameter polymethylmethacrylate dosimetry phantoms, at the center and at the periphery. There is consensus within the ICRU committee that radiation dose to patients undergoing CT examinations is best established using the known geometry of the CT examination, coupled with measured output characteristics of the specific CT scanner (including bow‐tie characteristics), combined with Monte Carlo computations. In the rare instance in which the CT dose to a specific patient needs to be computed, image‐based methodologies will be presented which enable these computations with a high degree of accuracy. It is emphasized that the effective dose (measured in millisieverts) is not an appropriate measure for individual patient doses, as this metric incorporates population‐based radiation epidemiological data which may not be applicable to a specific individual. Therefore, the ICRU efforts toward radiation dosimetry of the individual patient focus on dosimetric quantities which are physical in nature and describe the individual organ doses and the overall average dose to the patient, depending upon the specific CT examination and the patient's physical characteristics.
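To make the first-tier machine metrics concrete, the conventional weighted and volume CTDI quantities are derived from the center and periphery CTDI100 phantom measurements. A minimal sketch; the numeric inputs are illustrative only, not from any specific scanner:

```python
def ctdi_w(ctdi100_center, ctdi100_periphery):
    """Weighted CTDI (mGy): 1/3 of the center measurement plus 2/3 of
    the periphery measurement in the PMMA dosimetry phantom."""
    return ctdi100_center / 3.0 + 2.0 * ctdi100_periphery / 3.0

def ctdi_vol(ctdi_weighted, pitch):
    """Volume CTDI (mGy): weighted CTDI corrected for helical pitch."""
    return ctdi_weighted / pitch

# Illustrative numbers only:
w = ctdi_w(ctdi100_center=10.0, ctdi100_periphery=13.0)  # 12.0 mGy
v = ctdi_vol(w, pitch=1.5)                               # 8.0 mGy
```

These phantom-based quantities characterize scanner output, which is exactly why, as noted above, patient dose estimation requires the additional geometry and Monte Carlo steps.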
1. Convey some of the issues in accurate dose assessment in CT.
2. Describe some of the ongoing efforts of the ICRU committee on CT towards dose assessment.
35(2008); http://dx.doi.org/10.1118/1.2962680
Radiation dose from computed tomography has been an important issue for medical physicists for some time. This issue has increased in significance in the past several years due to advances in multidetector CT, which have resulted in increased utilization in areas such as pediatric CT, cardiac CT and even screening applications. One of the key issues currently facing the medical physics community is assessing the dose to patients from these various exams, and a key building block of such assessments is the estimation of organ dose.
The purpose of this presentation is to describe an approach to estimating organ doses to patients using Monte Carlo‐based simulation methods. In this approach, both the scanner and patient are modeled in some detail and a CT exam may be simulated.
The detailed model of the CT scanner is created by including information such as the source spectrum and filtration, the scanner geometry, the beam collimation and the path that the source travels around the patient (such as the path during a helical CT exam). The development, testing and validation of these models will be discussed.
Patient models generally fall into two categories. The first consists of geometric descriptions of organs (based on cones, cylinders, etc.) such as the MIRD phantom. The second consists of voxelized descriptions of patient anatomy that are created based on actual patient scans. In these, radiosensitive organs are identified in the image data to create a voxelized model of the patient geometry. For both types of models, there are challenges to create models representing patients of different sizes, ages and genders.
Once both the scanner and patient are modeled, different scan protocols can be simulated using a Monte Carlo based software package (such as MCNP or EGS). This involves selecting a scanner model and a patient model, and then selecting a set of technical parameters, such as one would do for an actual scan, including the body region being examined. The Monte Carlo software then simulates the specified scan and tallies absorbed dose in each voxel or geometric unit of the patient model, which allows the calculation of either the mean organ dose or the distribution of dose within an organ.
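A toy version of such a simulation, radically simplified to a 1D row of voxels with full local absorption and no scatter (a sketch of the tallying idea, not any group's actual transport code), can be written as:

```python
import numpy as np

def simulate_dose(mu, voxel_cm, n_photons, seed=0):
    """Toy Monte Carlo photon transport in a 1D voxelized phantom.

    Each photon's interaction depth is sampled from the exponential
    attenuation law; the photon is fully absorbed there (no scatter),
    and the tally counts absorptions per voxel, a stand-in for the
    energy tally of a real code."""
    rng = np.random.default_rng(seed)
    # Cumulative optical depth at the exit face of each voxel
    tau = np.cumsum(np.asarray(mu, dtype=float) * voxel_cm)
    tally = np.zeros(len(mu))
    for _ in range(n_photons):
        path = rng.exponential()          # free path in optical-depth units
        k = np.searchsorted(tau, path)    # first voxel whose exit depth >= path
        if k < len(mu):                   # otherwise the photon exits the phantom
            tally[k] += 1
    return tally

# Uniform "water-like" phantom: 10 voxels of 1 cm, mu = 0.2 /cm
tally = simulate_dose(mu=[0.2] * 10, voxel_cm=1.0, n_photons=20000)
```

The expected fraction absorbed in the first voxel is 1 − exp(−0.2·1) ≈ 0.18, which the tally approaches as the photon count grows; a production code adds scatter physics, 3D geometry, spectra and the source trajectory described above.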
In this presentation, the results of this approach in several applications will be described, including: (a) dose to the fetus in pregnant women of early, middle and late gestational ages, and (b) dose to glandular breast tissue from thoracic CT scans both with and without tube current modulation.
1. Understand the Monte Carlo simulation based approach to estimating radiation dose to radiosensitive organs from CT scans.
2. Understand the current limitations of the Monte Carlo based approach.
3. Describe the results of some current applications of this approach to estimating fetal dose as well as breast dose reduction from tube current modulation.
CE‐Imaging: Breast V
35(2008); http://dx.doi.org/10.1118/1.2962812
Full field digital mammography (FFDM) represents an exciting new frontier in the evaluation of breast cancer. FFDM separates the components of image acquisition, image processing and image display, in contrast to traditional screen‐film mammography, allowing a variety of potential technical advantages. In the ACRIN study, FFDM had higher sensitivity for detecting breast cancer compared to screen‐film mammography (SFM) in premenopausal women, perimenopausal women, and women with dense breasts. Other reported advantages of digital mammography include higher contrast resolution, reduced noise, a reduced need for repeat exposures and thus a lower radiation dose to the patient, rapid soft‐copy image interpretation, linkage to PACS systems, and advanced applications such as CAD, telemammography and new‐modality imaging such as contrast‐enhanced digital mammography, dual‐energy subtraction mammography, stereo‐mammography, and tomosynthesis.
Digital mammography has not yet seen widespread clinical implementation because of several barriers. Significant costs associated with hardware purchase, additional equipment, professional retraining, and development and technical support are incurred by facilities that choose digital mammography. Ongoing operational issues include comparing digital and film‐screen images at radiologist and technologist workstations. Lastly, while storage may be rapid, reliable and convenient with digital mammography, sizable amounts of computer memory will ultimately be required to store the data needed to maintain high‐quality images.
1. To review the clinically relevant functional components of digital mammography.
2. To introduce some of the advanced applications of digital mammography.
3. To discuss some barriers to widespread clinical use of digital mammography.
4. To review data from clinical trials regarding digital mammography.
(1) Why digital?
(2) Functional components
(3) Advanced applications
(4) Barriers to clinical use
(5) Clinical trials
CE‐Imaging: The Physics and Technology of Ultrasound Imaging I
35(2008); http://dx.doi.org/10.1118/1.2962823
Ultrasound elasticity imaging, or elastography, is a real‐time imaging technique capable of determining the tissue elasticity of certain organs such as the breast and prostate. Elasticity imaging, applied during standard ultrasound examinations, is a patient‐friendly, reliable and cost‐efficient method to diagnose cancer or other diseases based on the changes in tissue elasticity that are usually related to abnormal, pathological processes.
The main objective of this course is to expose the audience to elasticity imaging, with emphasis on principles, approaches and applications. The course will provide both a broad overview and a comprehensive understanding of static and dynamic approaches in elasticity imaging. Starting with a brief historical introduction to elasticity imaging, we will examine its foundation and basic principles (the theory of elasticity, including both the equation of equilibrium and the wave equation, the mechanical properties of soft tissues, etc.). We will then discuss practical aspects of ultrasound elasticity imaging, including imaging hardware and signal and image processing algorithms. Motion tracking methods will be introduced and analyzed. In this part of the course, we will also analyze noise sources and primary artifacts. Finally, elasticity imaging techniques and their clinical applications will be presented. The course will conclude with an overview of several experimental and commercial systems capable of elasticity imaging.
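As an illustration of the motion tracking step, axial displacement between pre‐ and post‐compression RF signals can be estimated by normalized cross‐correlation block matching. A minimal 1D sketch on synthetic data; the window and search sizes are arbitrary illustrative choices:

```python
import numpy as np

def track_displacement(pre, post, window=64, search=8):
    """Estimate the axial shift between pre- and post-compression RF
    lines: a centered window of `pre` is compared against windows of
    `post` shifted by -search..+search samples, and the lag with the
    highest normalized cross-correlation (NCC) wins."""
    pre, post = np.asarray(pre, float), np.asarray(post, float)
    start = (len(pre) - window) // 2
    ref = pre[start:start + window]
    ref = ref - ref.mean()
    best_lag, best_ncc = 0, -np.inf
    for lag in range(-search, search + 1):
        seg = post[start + lag:start + lag + window]
        seg = seg - seg.mean()
        ncc = ref @ seg / (np.linalg.norm(ref) * np.linalg.norm(seg))
        if ncc > best_ncc:
            best_lag, best_ncc = lag, ncc
    return best_lag, best_ncc

# Synthetic RF line and a copy shifted by 3 samples ("tissue motion")
rng = np.random.default_rng(1)
rf = rng.standard_normal(200)
lag, ncc = track_displacement(rf, np.roll(rf, 3))
```

Repeating this estimate for overlapping windows along the beam yields a displacement profile whose gradient is the strain image; subsample interpolation of the correlation peak is what real systems add for finer resolution.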
1. Understand the underlying principles of both static and dynamic approaches in elasticity imaging.
2. Understand the practical aspects related to an elasticity imaging experiment, including design, data acquisition and signal/image processing.
3. Understand the issues related to clinical application of ultrasound elasticity imaging.
Stanislav Emelianov received the B.S. and M.S. degrees in physics and acoustics in 1986 and 1989, respectively, from Moscow State University, and the Ph.D. degree in physics in 1993 from Moscow State University and the Institute of Mathematical Problems of Biology of the Russian Academy of Sciences, Russia. In 1989, he joined the Institute of Mathematical Problems of Biology, where he was engaged in both mathematical modeling of soft tissue biomechanics and experimental studies of noninvasive visualization of tissue mechanical properties. Following his graduate work, he moved to the University of Michigan, Ann Arbor, as a post‐doctoral fellow in the Bioengineering Program and the Electrical Engineering and Computer Science Department. From 1996 to 2002, Dr. Emelianov was a Research Scientist at the Biomedical Ultrasonics Laboratory at the University of Michigan. During his tenure at Michigan, Dr. Emelianov was involved primarily in the theoretical and practical aspects of elasticity imaging. Dr. Emelianov is currently an Associate Professor of Biomedical Engineering at the University of Texas at Austin. His research interests are in the areas of medical imaging for therapeutic and diagnostic applications, elasticity imaging, ultrasound microscopy, photoacoustic imaging, cellular/molecular imaging, and functional imaging.