Volume 35, Issue 6, June 2008
Index of content:
- Imaging Moderated Poster Session: Exhibit Hall E
- Moderated Poster — Area 4 (Imaging): Nuclear and X‐ray Imaging
35 (2008); http://dx.doi.org/10.1118/1.2961368
Purpose: An inherent problem in four‐dimensional (4D) PET imaging is the poor statistics in each phase, because the total coincidence events are divided into several phase bins during image acquisition. We propose in this work a simple but efficient image post‐processing method to improve 4D PET image quality. Method and Materials: In this method, all acquired coincidence events are used for each individual phase to enhance its signal‐to‐noise ratio (SNR). A summed 3D PET image is first obtained from the noisy 4D PET images, representing the maximum achievable SNR. By deconvolving the 3D image with a deformable model derived from 4D CT, an improved 4D PET phase series can be obtained. For the best image quality, voice coaching was used to ensure a regular and consistent breathing pattern during the PET and CT scans. The method was quantitatively evaluated with numerical and physical phantom experiments. Three clinical studies of pancreatic, lung, and liver cancer patients were then carried out. Results: Numerical simulations showed that the model‐based 4D‐PET deconvolution method converged monotonically to the “ground truth” within a few iterations, and the SNR of the physical phantom images showed an increase of 83% over conventional 4D PET without sacrificing spatial resolution. Similar performance was observed in the patient studies. Conclusion: We have developed a new method for improved 4D‐PET imaging. A salient feature of the method is that the coincidence events acquired at different time points are considered simultaneously when reconstructing each phase‐resolved image, leading to substantially improved 4D‐PET images.
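The abstract does not give the update rule, so the sketch below is only one plausible reading of the method: the high‑SNR summed image S is modeled as the sum of one reference phase warped by known per‑phase motions, and the reference is refined with a Richardson‑Lucy‑style multiplicative update. Rigid shifts stand in for the full 4D‑CT deformable model.

```python
# Minimal sketch (not the authors' algorithm): recover a reference
# phase f from the summed image S, assuming S = sum_k T_k(f) with
# known per-phase motions T_k (rigid shifts here).
import numpy as np
from scipy.ndimage import shift

def recover_reference_phase(S, phase_shifts, n_iter=10):
    f = S.astype(float) / len(phase_shifts)          # initial guess
    for _ in range(n_iter):
        model = sum(shift(f, s, order=1) for s in phase_shifts)
        ratio = np.where(model > 0, S / model, 1.0)
        back = sum(shift(ratio, tuple(-c for c in s), order=1)
                   for s in phase_shifts) / len(phase_shifts)
        f *= back                          # multiplicative update
    return f   # phase k is then shift(f, phase_shifts[k])
```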
SU‐DD‐A4‐02: Assessment of PET Estimated Tumor Volume by Four‐Dimensional Computed Tomography Measurements. 35 (2008); http://dx.doi.org/10.1118/1.2961369
Purpose: PET imaging plays an increasingly key role in tumor detection, staging, and radiotherapy target definition for different cancer sites. However, the use of PET delineation in treatment planning of lung cancer is hindered by breathing‐motion blur artifacts, which complicate the evaluation of the accuracy of PET segmentation algorithms. In this work, we propose to assess PET‐estimated tumor volume by comparison with four‐dimensional computed tomography measurements under different motion conditions. Method and Materials: We analyzed datasets from six NSCLC patients who underwent pre‐treatment FDG‐PET/CT scanning and 4D‐CT simulations. The 4D‐CT data were acquired according to a ciné‐mode 4D protocol; 25 scans were collected at each couch position while the patient underwent simultaneous bellows and spirometry measurements. Volumetric datasets were rebinned to the following phases: end‐of‐exhalation, beginning‐of‐exhalation, mid‐exhalation, mid‐inhalation, and end‐of‐inhalation, in addition to computing average CT and MIP datasets. For each patient, the tumor volume was manually contoured on nine PET and CT datasets by two physicians to provide a gold standard for comparison. Different PET segmentation algorithms based on SUV thresholding and active contours were evaluated. Motion artifacts were mitigated using a deconvolution‐based deblurring approach. Results: Our preliminary analysis indicates that manual contouring on 4D‐CT datasets produced relatively consistent results across the different phases for the two observers. Motion deblurring had varying effects on the evaluated algorithms; it increased the volume estimated by 40%‐of‐maximum SUV thresholding and reduced the volume estimated with an SUV cutoff of 2.5. The active contour model produced robust results independent of the blurring effect. Conclusion: We have investigated a 4D‐CT approach for assessing the performance of PET segmentation methods in lung cancer. Our preliminary results indicate high sensitivity of thresholding methods and better robustness of active contour models. Further investigation is required to improve accuracy relative to the 4D‐CT gold standard.
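For concreteness, here is a minimal sketch of the two SUV‑thresholding schemes named in the abstract; the parameter values (40% of maximum, cutoff 2.5) come from the text, while the array handling is illustrative.

```python
import numpy as np

def segment_suv(suv, mode="pct_max", pct=0.40, cutoff=2.5):
    """Return a binary tumor mask from an SUV map."""
    if mode == "pct_max":
        return suv >= pct * suv.max()   # 40%-of-maximum SUV threshold
    return suv >= cutoff                 # fixed SUV cutoff (2.5)
```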
35 (2008); http://dx.doi.org/10.1118/1.2961370
Purpose: A method has been developed to measure low doses in radiology using the Landauer microStar OSL reader and microdot dosimeters. The depletion rate (the fraction of trapped electrons consumed in forming the signal during a reading) was established, and the noise behavior over consecutive readings was modeled. Method and Materials: While microdot dosimeters can measure dose levels as low as 10 μGy, caution is required, as repeated exposures of these dose integrators rapidly limit the accuracy of the readings. Around 4000 doses were measured with a set of 362 dosimeters, each dosimeter being reused after reading. Each dosimeter was read multiple times, and a bank of nearly 70,000 measurements was acquired. To obtain the exposure dose, a method taking the multiple readings of each dosimeter into account was devised to estimate the accumulated dose before and after exposure; the difference between these two values was the estimated exposure dose. It was found that low doses were measured more accurately when the dosimeter's accumulated dose was kept below 1500 μGy. This was achieved by exposing the microdot dosimeters to tungsten light for twelve hours, which reset the accumulated dose to 30 μGy without significantly affecting the dosimeter's operating characteristics. Results: We found that the relation between noise variance and accumulated dose is quadratic for doses between 50 μGy and 10 mGy for 90–140 kV exposures. The noise variance dictates that 200 readings be taken to estimate the dosimeter's depletion rate with sufficient accuracy. The depletion rate for the microdot dosimeter is −0.29% ± 0.03% (2 standard deviations). Conclusion: OSL allows 10 μGy doses to be measured with an error of ±3 μGy, but only with multiple readings after exposure and light‐induced resetting to zero.
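The abstract's exact estimator is not given; the sketch below shows one plausible form of depletion‑corrected averaging of repeated readings, assuming each readout consumes the reported −0.29% of the trapped signal.

```python
import numpy as np

def estimate_signal(readings, depletion=-0.0029):
    """Estimate the pre-readout signal from consecutive readings:
    reading i sees roughly S0 * (1 + depletion)**i."""
    r = np.asarray(readings, dtype=float)
    return float(np.mean(r / (1.0 + depletion) ** np.arange(r.size)))

# Exposure dose = estimate_signal(after) - estimate_signal(before)
```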
SU‐DD‐A4‐04: Micro Angiographic and Fluoroscopic Real‐Time Image Data Handling Using Parallel Coding Techniques in LabVIEW. 35 (2008); http://dx.doi.org/10.1118/1.2961371
Purpose: Multicore coding techniques have been designed to meet the high‐speed requirements of a High‐Sensitivity Micro‐Angiographic Fluoroscope (HSMAF) detector for 30 fps acquisition, image processing, display, and rapid frame transfer of high‐resolution, region‐of‐interest (ROI) images. Method and Materials: The HSMAF detector was built by our group using a CsI(Tl) phosphor, a light image intensifier, and a fiber‐optic taper coupled to a charge‐coupled device (CCD) camera, providing real‐time 12‐bit, 1k × 1k images with greater than 10 lp/mm resolution. A graphical user interface (GUI) was developed to control the system and enable real‐time acquisition, image processing, display, and rapid storage of high‐resolution images. To accommodate all of these processes in real time, parallel coding methods, such as instruction pipelining and task parallelism, were designed to take advantage of the available multicore processors (dual and quad core). Results: The parallel coding techniques of the GUI can handle radiographic procedures that require on‐the‐fly image processing at 30 fps, such as roadmapping, digital subtraction angiography (DSA), and rotational DSA. On a 2.4 GHz dual‐core processor, a high‐resolution image is acquired, processed, and stored in less than 30 ms. Recursive temporal filtering, as well as gain correction, can be added to the above procedures while still maintaining the required high frame rates when a quad‐core processor is used. Moreover, the overall system design offers virtually unlimited memory for acquisition and huge, expandable storage capacity. Conclusion: The high frame‐rate acquisition, image processing, and display capability of this unique high‐resolution detector, along with the user‐friendly interface, should provide angiographers and interventionalists with a new capability for visualizing details of small vessels and endovascular devices such as stents. Such capability should enable more accurate diagnoses and image‐guided interventions. (Support from NIH Grants R01NS43924, R01EB002873 and Toshiba Medical Systems Corporation).
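The LabVIEW pipeline itself is graphical; the Python sketch below only illustrates the task‑parallel structure described above (acquire, process, and store running as concurrent stages linked by bounded queues), with the stage contents as hypothetical placeholders.

```python
import queue
import threading

def acquire(frames, q_out):
    for f in frames:                      # stage 1: camera readout
        q_out.put(f)
    q_out.put(None)                       # sentinel ends the pipeline

def process(q_in, q_out):
    while (f := q_in.get()) is not None:  # stage 2: gain correction,
        q_out.put(f)                      # subtraction, filtering, ...
    q_out.put(None)

def store(q_in, archive):
    while (f := q_in.get()) is not None:  # stage 3: display / disk
        archive.append(f)

frames, archive = list(range(300)), []    # 10 s of frames at 30 fps
q1, q2 = queue.Queue(maxsize=4), queue.Queue(maxsize=4)
threads = [threading.Thread(target=acquire, args=(frames, q1)),
           threading.Thread(target=process, args=(q1, q2)),
           threading.Thread(target=store, args=(q2, archive))]
for t in threads: t.start()
for t in threads: t.join()
```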
SU‐DD‐A4‐05: An Analysis of Signal‐To‐Noise Ratio Differences Between the New High‐Sensitivity Microangiographic Fluoroscope (HSMAF) and a Standard Flat‐Panel Detector (FPD). 35 (2008); http://dx.doi.org/10.1118/1.2961372
Purpose: To explain the difference between the measured signal‐to‐noise ratio (SNR) of a new high‐resolution detector and that of a standard FPD. Method and Materials: We measured a ratio of 4.3 between the SNR² of an FPD (194 μm pixels) and that of the HSMAF (35 μm pixels). This ratio cannot be explained by the pixel areas alone, since the FPD pixel area is 30.7× larger than that of the HSMAF. To explain this disparity, we investigated the roles of instrumentation noise and of the x‐ray conversion phosphor (600 and 300 μm thick CsI:Tl for the FPD and HSMAF, respectively), considering differences in absorption efficiency, Swank factor, and blur. Point spread functions (PSFs) were derived from measured presampled line spread functions (assuming isotropic blur), and the effect was analyzed by convolving with a simulated Poisson‐distributed x‐ray image. The calculated SNR² ratio was compared to the measured SNR² ratio for a practical range of exposures (1–100 μR). Results: The difference between the SNR² of the detectors was largely accounted for by considering detector characteristics in addition to pixel size. The increase in SNR² from optical blur was about 8 times greater for the HSMAF than for the FPD, since the signal was spread across more of the smaller HSMAF pixels. With the effects of absorption efficiency and Swank factor on SNR included, the calculated SNR² ratio agreed well with the measured value (4.6 versus 4.3, respectively). Conclusion: The measured SNR depends not only on pixel area but also, to a large extent, on phosphor characteristics. Despite having a much lower number of incident x‐ray photons per pixel, the HSMAF achieved an SNR² much closer to that of the FPD because of the greater spread of light quanta over a larger number of pixels.
(Support: NIH Grants R01‐NS43924, R01‐EB002873, Toshiba Medical Systems Corporation).
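One way to organize the reported numbers is the factorization below. The pixel‑area ratio (30.7) and the blur gain (about 8) are stated in the abstract; the final factor, combining absorption efficiency η and Swank factor I, is inferred here only to reconcile the stated quantities and should be read as an assumption:

$$
\frac{\mathrm{SNR}^2_{\mathrm{FPD}}}{\mathrm{SNR}^2_{\mathrm{HSMAF}}}
\approx
\underbrace{\frac{A_{\mathrm{FPD}}}{A_{\mathrm{HSMAF}}}}_{30.7}
\times
\underbrace{\frac{1}{G_{\mathrm{blur}}}}_{\approx 1/8}
\times
\underbrace{\frac{\eta_{\mathrm{FPD}}\, I_{\mathrm{FPD}}}{\eta_{\mathrm{HSMAF}}\, I_{\mathrm{HSMAF}}}}_{\approx 1.2}
\approx 4.6
$$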
SU‐DD‐A4‐06: Instrumentation Noise Equivalent Exposure (INEE): An Investigation of Spatial Frequency Effects. 35 (2008); http://dx.doi.org/10.1118/1.2961373
Purpose: To expand on previous development of the instrumentation‐noise equivalent exposure (INEE) metric, which directly quantifies the quantum‐limited performance range of a detector in terms of entrance exposure, by investigating its spatial frequency dependence. Method and Materials: The INEE formalism assumes that the total noise of an x‐ray detector is proportional to the incident number of quanta, with an additive term representing contributions from instrumentation‐noise sources. The INEE then represents the threshold exposure below which the detector's performance is no longer driven by quantum statistics and becomes instrumentation‐noise‐limited; it is measured by plotting the output signal variance versus exposure and fitting to Variance = k(Exposure + INEE). Similarly, to provide a spatial‐frequency‐dependent INEE(u,v), the two‐dimensional NPS (i.e., the variance of image intensity divided among the various frequency components of the image) was measured, plotted as a function of exposure at each spatial frequency, and fit with NPS(u,v) = k(u,v)(Exposure + INEE(u,v)). Output signal variance and NPS were measured using 90 flat‐field images on a digital x‐ray detector according to IEC guidelines, using the RQA 5 spectrum. Results: The measured INEE(u,v) was radially symmetric, with the lowest INEE values at low spatial frequencies and increasing INEE at higher spatial frequencies. The frequency behavior was determined to be largely dependent on the blur and scattering of the phosphor, as described by the detector MTF. The frequency‐independent INEE was determined to be the mean of INEE(u,v) weighted by k(u,v) over all spatial frequencies. Conclusion: The INEE metric addresses the need for a direct, quantitative measure of the quantum‐noise‐limited exposure range of x‐ray detectors by providing the threshold exposure at which detector instrumentation noise exceeds quantum noise. Frequency dependence was investigated to provide a greater understanding of this promising new metric.
(Support: NIH Grants R01‐NS43924, R01‐EB002873, Toshiba Medical Systems Corporation).
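The fit above is a straight line in exposure, so INEE falls out of a linear least‑squares fit as the intercept‑to‑slope ratio; a minimal sketch, with the data arrays assumed to come from the flat‑field exposure series:

```python
import numpy as np

def fit_inee(exposure, variance):
    """Fit variance = k * (exposure + INEE); return INEE."""
    k, b = np.polyfit(exposure, variance, 1)
    return b / k

# INEE(u, v): apply fit_inee to the NPS measured at each (u, v) bin
# across the exposure series.
```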
- Moderated Poster — Area 4 (Imaging): Magnetic Resonance and Ultrasound Imaging
SU‐EE‐A4‐01: Stereoscopic Visualization of Diffusion Tensor Imaging Data: A Comparative Survey of Several DTI Visualization Techniques. 35 (2008); http://dx.doi.org/10.1118/1.2961392
Purpose: To compare several methods for displaying DTI data in MRI for clinical use. Method and Materials: A diffusion tensor imaging (DTI) visualization tool was developed at our institution that graphically displays the principal eigenvector as a headless arrow, using either regular or stereoscopic LCD monitors. This tool utilizes stereoscopic vision to represent the diffusion tensor's spatio‐directional information, while allowing color, the traditional tool for displaying directional information, to be used for other diffusion characteristics, such as fractional anisotropy (FA). In this tool, the principal eigenvector at each voxel, Vmax, is depicted as a headless arrow, while a color scale encodes the FA index. We compared: a) a grayscale FA map (GSFM), b) a color‐coded orientation map (CCOM), c) Vmax maps using a regular non‐stereoscopic display (VM), and d) Vmax maps using a stereoscopic display (VMS). A survey of clinical utility was performed by eight board‐certified neuroradiologists, using a paired‐comparison questionnaire format with forced and graded choices. Five representative cases were selected based on the typical brain tumor patient population at our institution. Results: The Vmax map was favored over traditional methods of display in most cases (80% vs. 10%, with 10% expressing no preference). When the stereoscopic (VMS) and non‐stereoscopic (VM) modes were compared, VMS was preferred in 45% of cases, VM in 35%, and 30% expressed no preference. The main reasons given for preferring the Vmax‐based visualization tools (VMS, VM) over the conventional DTI visualization methods (CCOM and GSFM) were better delineation of white matter tracts and an improved 3D anatomy effect. Conclusion: DTI data displayed by our Vmax‐based display methodology appears to be preferred over traditional display methods in tests of clinical utility.
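The per‑voxel quantities behind this display follow standard formulas; a minimal sketch of computing the headless arrow (principal eigenvector) and the FA value used for the color scale:

```python
import numpy as np

def principal_direction_and_fa(D):
    """D: 3x3 symmetric diffusion tensor for one voxel."""
    w, v = np.linalg.eigh(D)          # eigenvalues in ascending order
    vmax = v[:, -1]                    # principal diffusion direction
    md = w.mean()                      # mean diffusivity
    fa = np.sqrt(1.5 * np.sum((w - md) ** 2) / np.sum(w ** 2))
    return vmax, fa
```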
SU‐EE‐A4‐02: Modeling Contrast Agent Extravasation in Dynamic Susceptibility Contrast MRI of Very Leaky Brain Tumors. 35 (2008); http://dx.doi.org/10.1118/1.2961393
Purpose: In dynamic susceptibility contrast (DSC) MRI, when there is disruption of the BBB, as is frequently the case with brain tumors, contrast agent leaks out of the vasculature and causes additional T1 and T2 effects. Under slightly leaky conditions, previous studies successfully modeled the T1 effect and were able to correct it for better perfusion quantification. Under very leaky conditions, however, the T2 effect can be significant and must be taken into account. This study proposes a two‐compartment model that describes the combined T1 and T2 effects in the measured signals. Method and Materials: Our model considers different tracer residue functions for brain tissues and leaky tumors. These were then incorporated into both the T1 and T2 changes in the MR signal equation. Three unknown variables were introduced, K1, K2, and K3, with K2 directly related to permeability. We used the model to fit measured ΔR2* curves and corrected the contrast leakage in patient data with a heavy T2 effect. Results: The proposed model fit the leaky ΔR2* curves well and corrected them better than the previous model. The GM/WM CBV ratios were comparable before (1.26) and after (1.19) the correction. However, the tumor/WM CBV ratio decreased to 42.6% of its uncorrected value after the correction (2.25 vs. 5.27). The K2 map was able to delineate regions with significant contrast extravasation, whereas the previous model failed due to the additional T2 effect. Conclusion: The model proposed in this study corrects both the T1 and T2 effects of contrast extravasation in DSC MRI. The T2 component, which was not considered in the previous T1‐only model, significantly overestimates rCBV in very leaky brain tumors. In addition to better correcting the rCBV maps, our model successfully extracted the regional permeability changes of the tumors.
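The abstract does not spell out its K1/K2/K3 terms, so the sketch below is hedged: the integral regressor follows the common Boxerman‑style leakage model, and the third (squared‑integral) regressor is a hypothetical stand‑in for the additional T2* term.

```python
import numpy as np

def fit_leakage(dr2s_meas, dr2s_ref, dt):
    """Least-squares fit of a measured leaky dR2* curve against a
    non-leaky reference curve and derived regressors."""
    cum = np.cumsum(dr2s_ref) * dt                 # running integral
    A = np.column_stack([dr2s_ref, cum, cum ** 2])
    (k1, k2, k3), *_ = np.linalg.lstsq(A, dr2s_meas, rcond=None)
    return k1, k2, k3                              # K2 ~ permeability
```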
SU‐EE‐A4‐03: Evaluating and Understanding the Relative Phase Angle Between Fat and Water and Its Effect on Fat Quantification in the Dixon Methods. 35 (2008); http://dx.doi.org/10.1118/1.2961394
Purpose: To measure the phase angle α between water and fat as a function of TE, and to demonstrate that uncertainties in setting the exact TE, or in knowing the exact α, can lead to variations in fat quantification by the Dixon methods. Method and Materials: Two phantoms were constructed. One consisted of half pure water solution and half soybean oil with a clear interface, and was used to measure the relative phase angle between water and fat as a function of TE. The other consisted of 7 vials of homogeneously mixed vegetable oil and distilled water with different oil/water volume ratios (0/100, 10/90, 20/80, 30/70, 40/60, 50/50, and 100/0). We used a 2D FSPGR sequence to acquire the images with the following parameters: TR = 180 ms, flip angle = 80°. TE was varied between 2.0 and 5.5 ms in 0.1 ms steps. A recently developed 2‐point Dixon algorithm was used to generate separated water and fat images. Results: We obtained the relationship between α and TE in phantom and in vivo experiments; least‐squares linear fits yielded linear relationships between α and TE for the phantom and in vivo data, respectively. From these relationships, the true α can be easily derived. We also demonstrated that variations of TE within the clinically allowed range may lead to variations of up to 40% in the apparent fat quantification due to the deviation of α. Conclusion: It is desirable to know the true relationship between α and TE in in‐phase and out‐of‐phase imaging. Such a relationship can guide the selection of parameters to obtain the desired relative phase angles. Variations in TE may lead to variations in fat quantification. Systematic experimental studies of the TE dependence of α are expected to help improve the use of the Dixon methods and reduce fat quantification errors.
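For reference, the textbook 2‑point Dixon separation assumes ideal phase angles (α = 0 in‑phase, α = π opposed‑phase); the abstract's point is that deviations of α from these ideals bias the apparent fat fraction F/(W+F). A minimal sketch under the additional assumption of water‑dominant voxels and magnitude images:

```python
import numpy as np

def two_point_dixon(s_ip, s_op):
    """Separate water and fat from in-phase and opposed-phase images."""
    water = 0.5 * (np.abs(s_ip) + np.abs(s_op))
    fat = 0.5 * (np.abs(s_ip) - np.abs(s_op))
    return water, fat
```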
SU‐EE‐A4‐04: Accelerating Breast Dynamic Contrast Enhanced MRI with Efficient Multiple Acquisitions by SPEED Using Shared Information. 35 (2008); http://dx.doi.org/10.1118/1.2961395
Purpose: The efficient multiple‐acquisition method using Skipped Phase Encoding and Edge Deghosting (SPEED) has been successfully demonstrated in a dynamic contrast‐enhanced (CE) mouse tumor study; however, it has not been tested with human subjects. In this work, the technique is further developed to accelerate breast dynamic CE MRI. Method and Materials: In dynamic CE‐MRI, a series of images is acquired in different time frames, and these images contain highly similar spatial information. This strong structural similarity is used to accelerate imaging by SPEED with factors greater than those achievable with a single acquisition. The approach was tested in this work on an in vivo breast dynamic CE MRI study. The dynamic CE scan was performed on a GE 1.5T system using a T1‐weighted gradient echo sequence (matrix 256 × 256, FOV 20 cm × 20 cm, TR = 5.5 ms, TE = 1.5 ms, flip angle = 30°, slice thickness = 4 mm, number of frames = 7). Results: Reference images were first reconstructed from the full k‐space data. The corresponding deghosted images were then reconstructed from partial data, with undersampling factors of 3/5 for one frame and 2/5 for all other frames, resulting in an acceleration factor of 2; in other words, the total scan time is 3.5 times that of a single acquisition. The images reconstructed from partial data by the proposed method show quality comparable to the reference images. Conclusion: In this work, the technique of efficient multiple acquisitions by SPEED is further developed to accelerate breast dynamic CE MRI. By using shared spatial information, a breast dynamic CE MRI study was accelerated by SPEED with a factor of 2, greater than that achievable with a single acquisition. This saving can be used to double the image resolution or to increase the frame rate of the dynamic sequence.
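The sampling side of SPEED can be illustrated simply: keep only phase‑encode lines whose index falls on a sparse comb (here 2 of every 5, matching the 2/5 undersampling above). The edge‑deghosting reconstruction that removes the resulting aliasing is the heart of SPEED and is beyond this sketch; the offsets chosen are illustrative.

```python
import numpy as np

def skip_phase_encodes(kspace, skip=5, offsets=(0, 2)):
    """kspace: 2D array (phase-encode lines x readout samples)."""
    mask = np.zeros(kspace.shape[0], dtype=bool)
    for o in offsets:
        mask[o::skip] = True               # keep every skip-th line
    return np.where(mask[:, None], kspace, 0), mask
```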
35 (2008); http://dx.doi.org/10.1118/1.2961396
Purpose: Magnetic particle imaging (MPI) was introduced in 2005 and promises to be sufficiently sensitive to allow molecular imaging. We introduce and explore numerically an alternative method of encoding position in magnetic nanoparticle imaging. The original MPI method localized the nanoparticle signal at the third‐harmonic frequency, using a strong magnetic field to saturate the nanoparticles outside of the field‐free point (FFP). We present an alternative method of encoding signal position, in which the signal at the second harmonic is recorded so that signal is produced along a magnetic field gradient. Method and Materials: The signal from 20 nm magnetic nanoparticles was simulated with a Langevin model using iron oxide properties. Response matrices were calculated for a linear gradient across the sample, and the condition number of the matrices was used as the metric for stability of the reconstruction. Results: The second‐harmonic response matrix is significantly better conditioned than that for the third harmonic. The condition number of the response matrix used to reconstruct the spatial distribution of the signal is nearly one in ideal cases. The required field gradient increases with the number of encoded pixels, but need not be as large as that required to saturate the nanoparticles outside a single voxel. For a single nanoparticle size, the response function approximates a smoothed Haar wavelet at the smallest scale, and the best conditioning occurs when the scaling of the response functions approximates that of a dyadic wavelet basis. Conclusion: A linear gradient can be used to encode nanoparticle position using the second or third harmonic. The signal at the second harmonic is significantly larger than that at the third‐harmonic frequency and its conditioning is significantly better; the combination should allow an increase in sensitivity by a factor of 2 to 8.
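A toy version of this simulation, assuming Langevin particles on a linear gradient, a response matrix built from a chosen harmonic, and condition number as the stability metric. All parameter values (beta, drive amplitude, gradient) are illustrative stand‑ins, not the authors' settings.

```python
import numpy as np

def langevin(x):
    x = np.asarray(x, dtype=float)
    out = np.empty_like(x)
    small = np.abs(x) < 1e-6
    out[small] = x[small] / 3.0                 # small-argument limit
    xs = x[~small]
    out[~small] = 1.0 / np.tanh(xs) - 1.0 / xs
    return out

def response_matrix(n_pix, harmonic=2, beta=2e3, h_drive=5e-3, g=1e-3):
    """R[i, j]: chosen-harmonic signal of a particle at pixel j with
    the gradient zero stepped to pixel i."""
    t = np.linspace(0.0, 2.0 * np.pi, 1024, endpoint=False)
    x = np.arange(n_pix) - (n_pix - 1) / 2.0
    R = np.empty((n_pix, n_pix))
    for i, xi in enumerate(x):
        for j, xj in enumerate(x):
            m = langevin(beta * (g * (xj - xi) + h_drive * np.cos(t)))
            sig = np.gradient(m, t)             # coil voltage ~ dM/dt
            R[i, j] = np.trapz(sig * np.cos(harmonic * t), t)
    return R

# Compare stability of the two encodings:
# np.linalg.cond(response_matrix(16, harmonic=2))  vs  harmonic=3
```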
SU‐EE‐A4‐06: Estimation of the Optimal Maximum Beam Angle and Angular Increment for Normal and Shear Strain Estimation. 35 (2008); http://dx.doi.org/10.1118/1.2961397
Purpose: In current ultrasound elastography, only the axial component of the displacement vector is estimated and used to produce strain images. We previously proposed a method to estimate both the axial and lateral components of the displacement vector using radiofrequency echo signal data acquired along multiple angular insonification directions of the ultrasound beam. In this study, we present error propagation through the least‐squares fitting process to optimize the angular increment and maximum beam‐steering angle. Method and Materials: Ultrasound simulations were performed to corroborate theoretical predictions of the optimal values for the maximum beam angle and angular increment. Beam‐steering characteristics of the linear array were simulated by selecting appropriate time delays for each element, which determine the focal point and steering angle of the beam. A uniformly elastic phantom was simulated by modeling a random distribution of 50 μm polystyrene beads with an average concentration of 9.7/mm³ in a medium (to simulate Rayleigh scattering). After computing RF signals for each insonification angle, the phantom was deformed by a uniaxial compression (1% of the phantom height). The displacement of each scatterer in the phantom was calculated using the Finite Element Analysis (FEA) software ANSYS. The new scatterer positions were used to calculate the post‐compression echo signals at each insonification angle. Results: The theoretical predictions match the numerical simulations well. For typical system parameters, the optimal maximum beam angle is around 10 degrees for axial strain estimation and around 15 degrees for lateral strain estimation. The optimal angular increment is around 4–6 degrees, indicating that only 5–7 beam angles are required for this strain tensor estimation technique. Conclusion: The theory presented in this study is useful for choosing optimal parameters for the angular data acquisition process in strain tensor estimation.
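The least‑squares step at the core of the method can be stated compactly: the displacement measured along each steered beam is the projection of the true (axial, lateral) displacement onto the beam direction. A minimal sketch:

```python
import numpy as np

def displacement_vector(d_beam, angles_deg):
    """d_beam: displacements along each beam; angles_deg: steering
    angles (e.g. -10..10 deg in ~4-6 deg steps, per the optimum above).
    Solves d_theta = dz*cos(theta) + dx*sin(theta) for (dz, dx)."""
    t = np.deg2rad(np.asarray(angles_deg, dtype=float))
    A = np.column_stack([np.cos(t), np.sin(t)])
    (dz, dx), *_ = np.linalg.lstsq(A, np.asarray(d_beam, float),
                                   rcond=None)
    return dz, dx
```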
- Moderated Poster — Area 4 (Imaging): Computed Tomography
TU‐EE‐A4‐01: Bismuth Shields vs. mAs Reduction for Decreased Radiation Dose to Breasts in CT Examinations. 35 (2008); http://dx.doi.org/10.1118/1.2962623
Purpose: Bismuth breast shields have been promoted as a means of selectively reducing the radiation dose to the breast by about 30% in CT studies while maintaining image quality. A study was performed to compare image noise and CT number accuracy with the shields against an alternative dose‐reduction method of employing 30% less mAs. Method and Materials: A humanoid thorax phantom with simulated breasts was imaged on a GE VCT scanner using: 1) a standard lung cancer screening protocol, 2) the same protocol with a commercial bismuth breast shield, and 3) 30% less mAs without the shield. Regions of interest (ROIs) were placed in the images, and the mean CT numbers and standard deviations of the CT numbers were compared. Results: Relative to the mean CT numbers in images acquired with the standard technique, use of the breast shield resulted in increases of about 9 HU, 19 HU, 6 HU, and 57 HU in ROIs in the heart, anterior left lung, posterior left lung, and right breast, respectively. The corresponding changes for the 30% mAs reduction were 1 HU, −3 HU, −2 HU, and 0 HU. Ratios of the standard deviations of the CT numbers in the dose‐reduced images to those in the standard‐technique images for the above ROIs were 1.4, 1.2, 0.9, and 1.8 for the breast shield and 1.3, 1.0, 1.0, and 1.2 for the 30% mAs reduction. Conclusion: mAs reduction is preferred over bismuth breast shields because: 1) mAs reduction has much less effect on mean CT numbers, which is important for quantitative studies such as lung density and coronary calcification assessment, 2) noise in the mAs‐reduced images is lower, and 3) the images do not suffer from streak artifacts arising from the shields. Additional comparisons in images of human subjects undergoing IRB‐approved coronary calcification studies with the breast shield vs. 30% reduced mAs will be presented.
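For context, the quantum‑limited expectation for the mAs‑reduction arm is

$$\sigma \propto \frac{1}{\sqrt{\mathrm{mAs}}} \quad\Rightarrow\quad \frac{\sigma_{0.7\,\mathrm{mAs}}}{\sigma_{1.0\,\mathrm{mAs}}} = \frac{1}{\sqrt{0.7}} \approx 1.20,$$

which is consistent with the 1.0–1.3 noise ratios reported above for the 30% mAs reduction.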
35 (2008); http://dx.doi.org/10.1118/1.2962624
Purpose: To evaluate, in a phantom study, the dose to adult female breast tissue from current clinical body CT protocols on 64‐slice systems. Method and Materials: An anthropomorphic phantom with breast modules (Rando‐Alderson) was scanned on a variety of 64‐slice CT scanners (GE LightSpeed VCT; Toshiba Aquilion; Siemens Sensation 64; Philips Brilliance, in progress). Standard clinical protocols that either directly expose the breast or deliver scatter/edge‐of‐field dose were evaluated: (1) lung screening (smoker); (2) chest‐abdomen‐pelvis (CAP; oncology follow‐up); (3) cardiac calcium scoring (60 bpm); and (4) virtual colonoscopy (supine and prone). Protocols were similar, but not identical, between systems. Scan coverage was matched, and no breast shields were used. Absorbed dose to the breast tissue was measured by loading 10 TLDs into each breast module. The LiF TLDs were calibrated individually at 9 mm Al HVL (120 kVp) with an NIST‐traceable ion chamber, and a correction was applied for the CT HVL. Image noise was also measured. Results: Standard clinical protocols for an adult female, including adaptive mA methods for the CAP exam, were used on each scanner. Averages of the TLD dose to the breast ranged over: (1) 0.56–1.36 cGy for the lung exam; (2) 1.27–2.98 cGy for CAP; (3) 1.01–2.98 cGy for calcium scoring; and (4) 0.67–1.35 cGy for virtual colonoscopy. Conclusion: The expected broad range of breast tissue dose for the various CT exams was seen, but the results also indicate a possible reduction in dose compared to earlier reports for 4‐slice (Mahoney et al., RSNA 2005) and 16‐slice systems (Hurwitz et al., 2006; extrapolated estimate). Variation between manufacturers was observed, but note that the protocols tested were those currently in clinical use. Further optimization of protocols for a given system design may be possible, especially given the significant interest across the radiological community in improving awareness of dose issues and minimizing exposures.
35 (2008); http://dx.doi.org/10.1118/1.2962625
Purpose: To show that megavoltage cone‐beam CT (MVCBCT) images can provide accurate dose recalculation and be used to verify the daily dose distribution received by patients treated for head‐and‐neck (H&N) and prostate cancers. Method and Materials: Corrections for the cupping and missing‐data artifacts seen in MVCBCT images were developed for both H&N and pelvic imaging. MVCBCT images of six H&N and two prostate patients were acquired weekly during the course of their treatment. Several regions of interest were contoured, including the prostate and rectum for the prostate cases and the spinal cord and parotids for the H&N cases. Dose calculation was performed on the corrected MVCBCT images using the planned treatment beams, and deviations from the treatment plan's dosimetric endpoints were analyzed. Results: MVCBCT image correction and calibration for the H&N (pelvic) region yielded standard deviations in dose calculations between kVCT and MVCBCT images of 1.9% (0.6%). The mean dose to the right parotid of H&N patients showed an average increase of 18% during treatment, with increases of up to 52% observed. The maximum dose to 1% of the spinal cord went up by 2% on average, although increases of up to 10% were noted. For one prostate patient, an undetected setup error in one fraction caused the dose received by 95% of the prostate to drop by 3%. One patient had an average increase of 3.6% in the maximum dose received by 1% of the rectum. Conclusion: MVCBCT was used successfully to verify the daily dose distribution for H&N and prostate patients. A substantial increase in the mean dose to the parotid glands was observed during treatment. For prostate patients, the impact of setup errors on prostate dose coverage was observed, along with the dosimetric consequences of volume changes in normal tissues. Conflict of interest: Supported by Siemens.
35 (2008); http://dx.doi.org/10.1118/1.2962626
Purpose: To develop a novel method for registering different phases of 4D CT that accounts for both lung volume deformation and sliding motion. Method and Materials: Sliding of the lung against the chest wall during breathing presents a challenging problem in image registration: the motion range of the diaphragm during respiration is about 3 cm, and the displacement vectors of tissue on the two sides of the pleura are discontinuous. To register different phases of 4D CT, the lungs in each phase were first automatically segmented. A Scale Invariant Feature Transform (SIFT) descriptor was used to find feature points on the lung contours shared by the template phase and the target phases. A Fourier transform of the displacements of the feature points was then carried out. The low spatial frequency component of the displacement represents the sliding motion, whereas the high‐frequency component represents the contribution from deformation and can be modeled by a conventional deformable model. After shifting the lungs in the target phase according to the filtered sliding displacements, a thin‐plate‐spline (TPS) deformable registration was applied between the template phase and the shifted phases to obtain the displacement vector for each voxel. Results: Using patient data, we calculated the average diaphragm sliding distance between phase 1 and the other phases with and without inclusion of lung sliding. The accuracy of the proposed method is demonstrated to be three times better than that of the conventional TPS method. With inclusion of sliding motion, the overlap ratio of the tumor contour increased to 84.3%, compared with 78.0% for the conventional approach. Conclusion: A hybrid method combining deformable registration in the spatial domain with low‐pass filtering in the frequency domain appears to model lung breathing motion well. The combination provides a robust and computationally efficient method for registering 4D CT thoracic images.
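A sketch of the Fourier split described above, applied to one component of the feature‑point displacements sampled along the lung contour; the cutoff n_low is illustrative, since the abstract does not state the filter width.

```python
import numpy as np

def split_sliding_and_deformation(disp, n_low=3):
    F = np.fft.rfft(disp)
    low = np.zeros_like(F)
    low[:n_low] = F[:n_low]            # keep low spatial frequencies
    sliding = np.fft.irfft(low, n=len(disp))
    return sliding, disp - sliding     # (bulk sliding, deformation)
```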
35 (2008); http://dx.doi.org/10.1118/1.2962627
Purpose: One method of scatter correction in cone‐beam computed tomography (CBCT) is to compute the scatter with a Monte Carlo simulation. The accuracy of this approach may be influenced by the accuracy of the underlying photon scattering cross sections. The purpose of this study is to investigate the effect of the level of sophistication of the photon interaction models on the computed scatter in CBCT and its influence on the accuracy of image reconstruction. Method and Materials: The investigation is performed using egs_cbct, a new EGSnrc‐based code for CBCT imaging. The EGSnrc treatment of Rayleigh scattering is improved to include measured molecular coherent scattering form factors (MCSFF) in addition to the commonly used independent‐atom‐approximation form factors (IAAFF). A more accurate algorithm for sampling coherent scattering angles is also added. Three photon scatter models are investigated: Compton scattering according to the Klein‐Nishina equation with no Rayleigh scattering (simple); bound Compton scattering modeled in the relativistic impulse approximation (RIA) with IAAFF; and RIA with MCSFF. Scatter calculation and image reconstruction accuracy are tested for a 30 cm diameter water sphere, with and without inserts of varying density and material, for a scan with 180 projections. Results: The simple model is not sufficiently accurate for estimating photon scatter in CBCT. The influence of MCSFF on the computed scatter distributions is small and only noticeable at the edges of the phantom. No significant difference in the accuracy of the reconstructed images is observed between the MCSFF and IAAFF coherent scattering models. Conclusion: Rayleigh scattering must be included in the Monte Carlo simulation to estimate the scatter in CBCT imaging. The inclusion of molecular interference effects in coherent scattering has no significant effect on the image reconstruction process.
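The Compton term of the "simple" model is the standard Klein‑Nishina differential cross section for a free electron at rest (no binding effects, no Rayleigh scattering); a minimal sketch:

```python
import numpy as np

R_E = 2.8179403262e-15          # classical electron radius (m)

def klein_nishina(theta, e_mev):
    """dSigma/dOmega in m^2/sr at scattering angle theta (rad)."""
    k = e_mev / 0.51099895                        # E / (m_e c^2)
    r = 1.0 / (1.0 + k * (1.0 - np.cos(theta)))   # E' / E
    return 0.5 * R_E**2 * r**2 * (r + 1.0 / r - np.sin(theta)**2)
```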
35 (2008); http://dx.doi.org/10.1118/1.2962628
Purpose: To accelerate the synthesis of digitally reconstructed radiographs (DRRs) and the reconstruction of cone‐beam CT (CBCT) data with the help of commodity graphics processing units (GPUs). The massively parallel architecture of GPUs allows significant improvements in execution speed for algorithms that present various levels of symmetry. Method and Materials: We have implemented DRR synthesis and CBCT reconstruction algorithms on GPUs and compared their execution speed and accuracy with those of traditional CPU implementations. DRRs were obtained with an incremental version of Siddon's algorithm, an exact ray‐tracing routine, while CBCT reconstructions were based on the FDK algorithm. The benchmarking was conducted with an NVIDIA GeForce 8800 GTX graphics board hosted in a 2.4 GHz Intel quad‐core PC. The Cg shading language was used for GPU programming, and all calculations were performed in single precision. Results: We achieved execution speed improvement factors of 47× for DRR synthesis and 100× for CBCT reconstruction with the GPU implementation. These figures, obtained with relatively large, clinically relevant datasets (512 MB), could be improved further by using smaller datasets that fit entirely in video memory. The DRRs obtained with the GPU implementation were identical to their CPU versions, while the CBCT images presented slight differences (2% standard deviation), most likely due to discrepancies in CPU and GPU floating‐point rounding conventions. Conclusion: We have implemented on a streaming architecture two algorithms relevant to many branches of medical physics, achieving significant speed increases while preserving the accuracy of the results. The rapid development of GPU products with more memory, double‐precision support, and higher clock speeds promises even faster execution and more accurate results, opening the way to new, innovative applications in medical physics.
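A simplified DRR line integral by uniform sampling along each ray; this is a CPU/NumPy stand‑in for the exact incremental Siddon traversal named above, not the Cg GPU implementation.

```python
import numpy as np

def drr_ray(volume, src, dst, n_samples=512):
    """Approximate the attenuation line integral from src to dst
    (both given as 3-element arrays in voxel coordinates)."""
    t = np.linspace(0.0, 1.0, n_samples)[:, None]
    idx = np.round(src + t * (dst - src)).astype(int)
    keep = np.all((idx >= 0) & (idx < np.array(volume.shape)), axis=1)
    step = np.linalg.norm(dst - src) / (n_samples - 1)
    return volume[tuple(idx[keep].T)].sum() * step
```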
- Moderated Poster — Area 4 (Imaging): Image Display, Processing, Non‐Conventional Imaging
TU‐FF‐A4‐01: Virtual Simulator Design for Collision Prevention During External Radiotherapy Planning. 35 (2008); http://dx.doi.org/10.1118/1.2962656
Purpose: Avoiding collisions between treatment accelerator components, such as the gantry, table, collimators, jaws, and fixation devices, and the patient is one of the biggest concerns in external treatment planning. Most commercial treatment planning systems do not include a collision‐prevention simulation step, while fool‐proof collision maps, lookup tables, and simple analytical methods guard only against the most apparent collisions. A comprehensive virtual simulator design for collision avoidance is therefore very useful for external radiotherapy planning. Method and Materials: Accurate modeling of the treatment accelerator is possible from geometric data, and three‐dimensional patient modeling is possible from the patient's CT data. Since each component in the data bank is described as an independent mesh model based on its associated polygon types, relative position changes can be described easily for device dynamics simulation. The relative motions of the gantry and the treatment table are taken from the treatment plan, and the graphical user interface generates the events at the given time intervals. This visual system is incorporated into the treatment planning simulation system. Results: Quality verification of our virtual simulator for potential collisions was performed with two combinations of treatment table and gantry rotations in which a collision is imminent based on visual assessment. The planner can search for beam paths with minimal critical structure interference before the extensive optimization process. A database of CT and MR scans for all tumor sites is being built, which provides useful information to map all potential collision possibilities for all treatment isocenters. Conclusion: The important benefits of this virtual simulator are the replacement of the laborious procedures required by an expensive hardware simulator unit, increased efficiency, improved accuracy of the radiation treatment procedure, and reduced cost in terms of time and the patient's physical presence.
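A first‑pass collision filter of the kind such a simulator needs: axis‑aligned bounding‑box overlap between two posed component meshes. This sketch is illustrative, not the authors' implementation; a production system would follow up with exact triangle‑triangle tests.

```python
import numpy as np

def boxes_collide(verts_a, verts_b, margin=0.0):
    """verts_*: (N, 3) arrays of posed mesh vertices; margin adds a
    safety clearance around the first component."""
    lo_a, hi_a = verts_a.min(0) - margin, verts_a.max(0) + margin
    lo_b, hi_b = verts_b.min(0), verts_b.max(0)
    return bool(np.all(hi_a >= lo_b) and np.all(hi_b >= lo_a))
```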
35 (2008); http://dx.doi.org/10.1118/1.2962657
Purpose: To ensure that the performance of a fast optical computed tomography (OCT) scanning system is comparable with that of previous models. Method and Materials: MGS Research Inc. developed an OCT system based on the translate‐rotate method used by early‐generation x‐ray CT scanners. The performance of that system has been investigated, and it has been used in several published studies. Recently, MGS Research Inc. developed a new OCT system that reduces scan times by a factor of 10 or more. Several 3D dosimeters were irradiated using 6 MV photons and imaged on both versions of the scanner. Image noise, reproducibility, and spatial accuracy were determined and used to evaluate the performance of the new system. Results: The new version of the scanner reduced the scan time per plane from 7 minutes to 30 seconds. Preliminary results showed that noise levels in the images from both models were comparable, and the uncertainty in the determination of optical density values from images acquired with both models was ∼2%. The new model uses Fresnel lenses that may need adjustment prior to an imaging session, which can affect the reproducibility of the system. Conclusion: The new version of the OCT scanner shows promise as a replacement for the previous version. Continued improvements in the software and hardware are needed to make the system as robust as the previous version.
The investigation was supported by PHS grant CA 10953 awarded by the NCI, DHHS.