Volume 36, Issue 6, June 2009
Index of content:
- Imaging Moderated Poster Session: Exhibit Hall ‐ Area 4
- Moderated Poster ‐ SPECT and PET/CT
36 (2009); http://dx.doi.org/10.1118/1.3181087
Purpose: A methodology is proposed to estimate the standard deviation (STD) within lesion voxels for clinical PET images. Method and Materials: Three series of list-mode dynamic PET images (8×3-minute scans) of a seven-chamber phantom (seven one-liter bottles arranged hexagonally, 1120 ml) were acquired in 2D and 3D to estimate the STD of the image voxels. Filtered back-projection (FBP) reconstruction with random, scatter, and attenuation correction was used instead of iterative reconstruction because of its improved signal-to-noise ratio (SNR) in hot lesions and the low-count convergence issues associated with iterative reconstruction. In addition, the noise in FBP within each image is spatially invariant, which is not true for iterative reconstruction. The phantom fill ratios were 2×1:1, 2×2:1, 4:1, 8:1, and 16:1 with an initial activity concentration of 5.96 kBq/ml (1.26 mCi). The standard deviations were estimated by comparing the eight 3-minute acquisitions to one another and by examining an ROI drawn within each image. The 3-minute FBP images were compared to the 24-minute image to estimate the standard deviation of the 24-minute frame. Results: The following were verified: with FBP the STD of the voxels within a slice is spatially invariant, and FBP has superior SNR to iterative reconstruction for high-contrast lesions. In addition, post-reconstruction summed FBP images are statistically similar to sinogram-summed images, with maximum errors of the mean and STD of 0.2% and 2.1% in 2D, and 2.1% and 6.1% in 3D, respectively. Finally, the STD of the summed image is inversely proportional to the square root of the number of frames. Conclusion: In phantom studies a single STD for an entire slice is shown to be representative of the STD of the individual voxels within that slice. Furthermore, in FBP the STD scales with 1/√N, where N is the number of summed frames. This methodology should extend well to patient studies.
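The 1/√N behavior noted in the Results follows directly from averaging N independent frames through a linear reconstruction such as FBP, where frame noise adds in quadrature. A minimal Python sketch (synthetic Gaussian "frames" with hypothetical numbers, not the abstract's data) that checks the prediction:

```python
import numpy as np

def summed_frame_std(frame_std, n_frames):
    """Predicted voxel STD of an average of n_frames independent,
    identically distributed frames, each with voxel STD frame_std
    (valid for a linear reconstruction such as FBP)."""
    return frame_std / np.sqrt(n_frames)

# Monte-Carlo check with synthetic frames (all values hypothetical):
rng = np.random.default_rng(0)
frame_std = 10.0
frames = rng.normal(100.0, frame_std, size=(8, 64, 64))  # 8 x "3-minute" frames
summed = frames.mean(axis=0)                             # "24-minute" average
measured = summed.std()
predicted = summed_frame_std(frame_std, 8)
```

For the 8×3-minute protocol above, the averaged frame would be predicted to have 1/√8 ≈ 0.35 times the single-frame STD.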
SU-DD-A4-02: Potential Improvement of PET Imaging Quantitative Accuracy with an External Reference Source. 36 (2009); http://dx.doi.org/10.1118/1.3181088
Purpose: PET has been increasingly used to assess therapeutic response. Because PET suffers from many inaccuracies, it can be difficult to distinguish between actual treatment response and acquisition error. The purpose of this study was to investigate the accuracy improvement of PET imaging with an external reference source. Method and Materials: A reference source was scanned in two different PET/CT scanners: a GE Discovery LS (DLS) and a GE Discovery VCT (DVCT). The reference source was scanned at various distances from the center of the bore and at a fixed position near an activity-filled scatter phantom. In addition, the reference source was scanned next to patients over a period of two years. A series of scans was performed for each patient. All PET images were reconstructed using 2D-OSEM. A cylindrical region of interest on PET was automatically segmented. Results: The measured activity was found to be inversely proportional to the distance from the center of the bore. The activity decreased by 0.95 ± 0.09% and 0.78 ± 0.19% per cm for the DLS and DVCT, respectively. The measured activity was also found to be related to the activity concentration of the phantom. Within each series of patient scans, the position of the reference source varied by 1.1 ± 0.8 (SD) cm, leading to a 1.0% variation in activity. However, the observed inter-scan variation of activity was 7.0 ± 6.4%, which cannot be explained solely by the distance effect. Conclusion: The observed inter-scan variation for the same patients was greater than that due solely to displacement of the reference source, indicating a significant additional variation at different time points. Ongoing phantom studies will determine the quantitative accuracy of PET imaging, providing a standardized imaging procedure.
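Given the reported radial falloff (roughly 0.95%/cm on the DLS), a first-order positional correction for the reference-source reading can be sketched as below. This is an illustration only, not the authors' procedure, and it assumes the falloff remains linear over the offsets encountered:

```python
def distance_corrected_activity(measured_kbq, offset_cm, pct_loss_per_cm=0.95):
    """Undo the linear radial falloff of a reference-source reading.

    measured_kbq: reading at offset_cm from the bore center.
    pct_loss_per_cm: percent signal loss per cm (0.95 %/cm is the
    DLS value quoted in the abstract; hypothetical for other scanners).
    """
    return measured_kbq / (1.0 - pct_loss_per_cm / 100.0 * offset_cm)
```

For the ~1.1 cm positional variation reported, this correction would remove roughly the 1.0% activity variation attributed to distance.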
SU-DD-A4-03: Regional Atrophy in Alzheimer's Disease Measured by Voxel-Based Morphometry Compared with PET and MRI. 36 (2009); http://dx.doi.org/10.1118/1.3181089
Purpose: Voxel-based morphometry (VBM) has been increasingly applied to investigate differences in brain morphology between a group of patients and control subjects. VBM permits comparison of gray matter (GM) volume at the voxel level across the entire brain and is thus an efficient method for assessing regional differences. The purpose of this study was to assess the regional GM volume loss measured by VBM in Alzheimer's disease (AD) compared to controls, and to measure hippocampal volume using manually delineated volumetry and compare the results to the VBM findings. Method and Materials: Twenty-three AD patients (mean age 70 ± 8 y; m/f = 7/16; Mini-Mental State Exam [MMSE] = 22.2) and 20 cognitively normal elderly control subjects (mean age 69 ± 4 y; m/f = 10/10) were included in this study. The 20 sets of control images were first normalized and used to create the probability maps for segmentation. Hippocampal volume normalized to the intracranial volume was compared between the AD and control groups. Results: The AD group had a lower GM% and a higher CSF% compared to controls. The hippocampal volume of AD patients was significantly lower than that of controls (P < 0.001). The region of VBM-detected atrophy includes the parahippocampal gyrus, cingulate, insula, frontal lobe and middle temporal complex. Despite the high significance in the manual ROI analysis, the hippocampus was not revealed by VBM. Conclusions: We found that the hippocampal volume in AD was significantly smaller than in controls using ROI-based volumetry. However, although our VBM results demonstrated that AD patients had significant atrophy in the middle temporal lobe, parahippocampus and insula, the hippocampus was not revealed. While VBM can be applied to assess global atrophy efficiently, manual volumetry is needed to study irregularly shaped subcortical structures.
36 (2009); http://dx.doi.org/10.1118/1.3181090
Purpose: The role of PET/CT imaging in radiotherapy treatment planning and monitoring has been increasing at a rapid pace. There is accumulating evidence that characteristics of pre-treatment FDG-PET could be utilized as prognostic factors to predict radiotherapy outcomes in different cancer sites. Direct standardized uptake value (SUV) measurements have traditionally been used to assess risk. To improve our understanding of the information embedded in PET/CT, we are investigating an alternative image-feature-based approach for analyzing and predicting post-radiotherapy local control in non-small cell lung cancer (NSCLC) patients. Method and Materials: We analyzed pre-treatment PET/CT scans of thirty-one patients for the endpoint of local/loco-regional failure. The gross tumor volume (GTV) was taken as the region of interest (ROI). All patients underwent pre-radiotherapy diagnostic PET/CT, and the images were registered with their corresponding treatment-planning CT. We studied the effect of motion artifacts using deblurring methods based on deconvolution with a 4D-CT-derived motion kernel. Features based on intensity-volume histogram (IVH) metrics and texture characteristics were extracted from the GTV region of each CT image and of each PET image with and without motion correction. Results: About thirty candidate features were extracted for each case and analyzed for assessment of the patients' risk of failure. Our preliminary results indicate that IVH metrics and texture features could be potentially useful for failure prediction in NSCLC compared to volume and maximum SUV. PET seemed to be more informative than CT in general. Motion correction affected the feature values but not the general trend in these data. Conclusion: We have explored new methods for analyzing failure risk in NSCLC from PET/CT data. Our results suggest a role for functional imaging based on image features in predicting risk of failure.
However, further analysis of these variables and cross‐validation is still needed to determine which parameters are strong predictors of radiation treatment response.
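As an illustration of the IVH features mentioned above, the sketch below computes the intensity-volume histogram of an ROI and the common I_x metric (the minimum intensity received by the hottest x% of the volume). The function names are mine, not the authors', and the exact feature definitions used in the study may differ:

```python
import numpy as np

def ivh(roi_values):
    """Intensity-volume histogram: for each sorted intensity v[i],
    the fraction of the ROI volume with intensity >= v[i]
    (exact for distinct values; approximate with ties)."""
    v = np.sort(np.asarray(roi_values, float))
    frac = 1.0 - np.arange(v.size) / v.size
    return v, frac

def i_x(roi_values, x_percent):
    """I_x metric: minimum intensity covering the hottest x% of the volume."""
    v = np.sort(np.asarray(roi_values, float))
    k = int(np.ceil(x_percent / 100.0 * v.size))
    return v[-k]
```

For example, I_10 of a tumor ROI is the lowest uptake among its hottest 10% of voxels, a more robust summary than the single maximum SUV voxel.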
36 (2009); http://dx.doi.org/10.1118/1.3181091
Purpose: To evaluate the effect of pixel size and OSEM iterative reconstruction parameters on radial (RR) and tangential (TR) Tc-99m SPECT resolution versus distance from isocenter. Method and Materials: Ten high-concentration Tc-99m point sources of size <2 mm3 were positioned coplanar 0–9 cm from isocenter in a cylindrical phantom with low-concentration background. Emission scans were acquired on a SPECT/CT system (Symbia T6, Siemens Medical Solutions) with LEHR collimation in continuous (C) and step-and-shoot (SS) modes for 360 views over 360° at 0.9, 1.8 and 3.6 mm/pixel. Data were iteratively reconstructed with 3D-OSEM incorporating resolution, CT-based attenuation, and scatter modeling, for different combinations of iterations and subsets (IT_SUB): 1_18, 10_18, 20_18, 30_18, 30_36, 30_60, 30_90. SPECT resolution was estimated using a Gaussian fit of the radial and tangential profiles through each point source. Results: TR was consistently better than RR. Anisotropy was independent of pixel size and scan mode but decreased with IT×SUB (e.g., TR/RR = 0.78 and 0.62 for 1_18 and 30_90 with 0.9 mm/pixel in SS). Both TR and RR improved linearly with distance from isocenter. The center-to-periphery resolution differences decreased with IT×SUB (e.g., slopes of resolution versus radius were −0.74 and −0.45 for 20_18 and 30_36 with 0.9 mm/pixel in SS) and with smaller pixel sizes (e.g., slopes of resolution versus radius were −0.89, −0.82 and −0.74 for 3.6, 1.8 and 0.9 mm/pixel for 20_18 in SS). TR and RR improved as a power law with IT×SUB. The rate of improvement showed no obvious dependence on pixel size. TR and RR were similar between SS and C. Conclusion: The spatial resolution of iteratively reconstructed SPECT images exhibited a power-law dependence on IT×SUB, a linear dependence on radial position, and TR/RR anisotropy, the modeling of which is important for accurate SPECT quantification.
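Resolution here is read off a Gaussian fit to each point-source profile. One simple way to do such a fit, assuming low background, is a parabola fit to the log of the profile; the FWHM then follows from the fitted variance. The sketch below is illustrative, not the authors' implementation:

```python
import numpy as np

FWHM_PER_SIGMA = 2.0 * np.sqrt(2.0 * np.log(2.0))  # ~2.355

def gaussian_fwhm(x, profile):
    """Estimate the FWHM of a point-source profile by fitting a
    parabola to the log of the samples above a small threshold,
    i.e. a least-squares Gaussian fit on the peak region."""
    x = np.asarray(x, float)
    y = np.asarray(profile, float)
    keep = y > y.max() * 0.05            # drop background / far tails
    c2, _, _ = np.polyfit(x[keep], np.log(y[keep]), 2)
    sigma = np.sqrt(-1.0 / (2.0 * c2))   # log-Gaussian curvature -> sigma
    return FWHM_PER_SIGMA * sigma
```

Applying this to radial and tangential profiles through the same point source gives the RR and TR values whose ratio quantifies the anisotropy discussed above.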
SU-DD-A4-06: Tradeoff Between Noise and Resolution in CT Images — Comparison of Filtered Backprojection and the Penalized Alternating Minimization Algorithm. 36 (2009); http://dx.doi.org/10.1118/1.3181092
Purpose: Relative to linear reconstructions, statistical image reconstruction algorithms have been shown to produce images with less error, fewer artifacts and less noise at similar high-contrast resolution as measured with the MTF. This study compares the noise and resolution, using edge blur, of images reconstructed using filtered backprojection (FBP) and a new statistical algorithm, alternating minimization (AM) [O'Sullivan, IEEE TMI 26(3)]. Method and Materials: Monoenergetic projection data were simulated for two phantoms, each with a high- and a low-contrast insert. Two levels of simple Poisson noise were added to simulate low-dose and clinical-dose protocols. To study image noise and resolution, FBP was performed with a Gaussian-blurred ramp filter of varying FWHM. The penalized AM algorithm was run with a range of penalty strengths. The reconstructed pixel size was also varied. Image noise was quantified as the percent standard deviation in the phantom background. Resolution was evaluated by measuring edge blur at contrast boundaries. Blur was quantified as the FWHM of the Gaussian that, when convolved with the known truth image, gives the highest correlation between the reconstructed and convolved images for pixels surrounding the edge. Blur was calculated independently around each contrast boundary. AM and FBP image noise are compared as a function of edge blur. Results: For high-resolution images (small edge blur), the AM algorithm reconstructs images with up to 50% less noise than FBP at similar resolution. For all conditions (varying pixel size, projection noise, and contrast inserts), AM exhibits an advantage. Conclusion: Edge blur is used as a metric of CT image resolution for both high- and low-contrast inserts. The penalized AM algorithm is shown to reduce image noise with less edge blurring than FBP. Future work will extend the use of edge blur to compare AM performance on experimentally acquired CT data.
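The edge-blur metric described above can be sketched in 1-D: blur the known truth profile with Gaussians of varying FWHM and keep the FWHM giving the highest correlation with the reconstruction. Names and the 1-D simplification are mine; the study applies the idea to pixels around 2-D contrast boundaries:

```python
import numpy as np

def gaussian_blur_1d(signal, fwhm, dx=1.0):
    """Convolve a 1-D signal with a normalized Gaussian of given FWHM
    (zero-padded at the edges via 'same'-mode convolution)."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    half = int(np.ceil(4 * sigma / dx))
    t = np.arange(-half, half + 1) * dx
    k = np.exp(-t**2 / (2 * sigma**2))
    return np.convolve(signal, k / k.sum(), mode="same")

def edge_blur(truth, recon, fwhms):
    """Return the candidate FWHM whose Gaussian blur of the known truth
    profile correlates best with the reconstructed profile."""
    return max(fwhms, key=lambda f: np.corrcoef(gaussian_blur_1d(truth, f), recon)[0, 1])
```

A finer search grid (or an optimizer over FWHM) would be used in practice; the coarse candidate list keeps the sketch short.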
- Moderated Poster ‐ Computed Tomography
SU-EE-A4-01: Evaluation of Noise and SDNR Characteristics of Blended ASIR and FBP Images Obtained with the GE Discovery CT 750 HD Scanner. 36 (2009); http://dx.doi.org/10.1118/1.3181111
Purpose: The new GE 750 HD CT scanner utilizes adaptive statistical iterative reconstruction (ASIR), which can be blended in various proportions with the filtered back projection reconstructions. A study was performed to characterize the dependence of noise and signal‐difference‐to‐noise ratio (SDNR) on %ASIR for a range of doses.
Method and Materials: Scans of the low-contrast resolution module in the ACR CT accreditation phantom were acquired using full-dose, 2/3-dose, ½-dose, and ¼-dose techniques. Scans were obtained in normal resolution mode with the STD, edge, and soft kernels, and in high resolution mode using the STD, HD STD, and edge kernels. ROIs were placed in the 25 mm cylinder and an adjacent background region in this module. Images were reconstructed with %ASIR ranging from 0 to 100 in steps of 10. For each condition, means and standard deviations in the ROIs were obtained for 5 central slices in the module. Noise and SDNR were computed, and the %ASIR that yielded the same noise and SDNR as in the full-dose case was determined for the lower doses. Results: %ASIR and dose had no effect on the mean CT numbers in the cylinder and background. Noise was found to decrease linearly with %ASIR (R2 > 0.997). SDNR increased quadratically with %ASIR (R2 > 0.996). For the normal resolution, STD kernel, the noise and SDNR at full dose were achieved at 2/3 dose, ½ dose and ¼ dose using 30%, 50% and 84% ASIR, respectively. The corresponding %ASIR values for high resolution, HD STD were 23%, 42% and 68%. Values for other kernels will be presented. Conclusion: ASIR reduces noise and improves SDNR. By applying an appropriate %ASIR, one can achieve similar image quality at reduced dose. The frequency content of the noise is different, however, which can affect the %ASIR that is acceptable in the clinic.
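Since noise was found to vary linearly with %ASIR, the %ASIR that matches a full-dose noise level at a reduced dose can be found by fitting a line to the reduced-dose measurements and inverting it. A sketch with hypothetical numbers, not the study's measurements:

```python
import numpy as np

def asir_to_match_noise(asir_pct, noise_meas, target_noise):
    """Fit the reported linear noise-vs-%ASIR trend at a reduced dose
    and solve for the %ASIR that reproduces a full-dose noise level.

    asir_pct, noise_meas: paired measurements at the reduced dose.
    target_noise: noise measured at full dose (with the reference %ASIR).
    """
    slope, intercept = np.polyfit(asir_pct, noise_meas, 1)
    return (target_noise - intercept) / slope
```

The study reports exactly this kind of matching (e.g., ½ dose needing 50% ASIR for the normal-resolution STD kernel); the quadratic SDNR trend could be inverted analogously with a degree-2 fit.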
SU-EE-A4-02: An Iterative Method of Modeling a Multidetector CT (MDCT) Source From Measured CTDI Values: A Feasibility Study. 36 (2009); http://dx.doi.org/10.1118/1.3181112
Purpose: To propose a method of using the Monte Carlo (MC) technique to model MDCT scanners from CTDI values so that the assessment of organ doses can be performed more accurately and easily. Method and Materials: The MC code MCNPX was used to perform all simulations. Several parameters influencing CTDI values were analyzed and prioritized. The modeling method starts by employing the preliminary parameters necessary to simulate a single axial scan and obtain calculated CTDI values. Following the priority list, each parameter was then adjusted by comparing MC-calculated CTDI values with measured CTDI values. The iterative process is complete when the calculated CTDI values match the measured ones. The validated CT model was then integrated with patient phantoms to calculate the organ doses for a specific CT procedure. Results: It was found that, in the modeling procedure, only two main parameters exhibit the greatest influence on the final calculated CTDI values: the thickness of the bowtie filter (BTF) and the length of the BTF semimajor axis. These two parameters were thus given the highest priority. The modeling algorithm involves a total of six parameters: anode angle, thickness of the flat filter, width of the BTF, thickness of the BTF, length of the BTF semimajor axis, and source number. Based on this method an MDCT was modeled, and the calculated CTDI values were within about 5% of the measured CTDI values. Conclusion: The results demonstrate the feasibility of this iterative method based on the analysis and prioritization of the various parameters. To our knowledge, this work is among the first attempts to develop a modeling method using only CTDI values. CT scanners can therefore be modeled without the common problem of missing parameter information, thus facilitating accurate and easy MDCT dose assessment.
36 (2009); http://dx.doi.org/10.1118/1.3181113
Purpose: To investigate a three-pass reconstruction approach for metal artifact reduction in x-ray CT. Method and Materials: The algorithm consists of: 1) initial reconstruction from the original sinogram data; 2) simple thresholding to identify high-density regions (e.g. metal) that can cause artifacts; 3) delineation of the corresponding regions in the original sinogram, which are replaced using linear interpolation; 4) a second reconstruction after the interpolation; 5) replacement of all pixels in the second image that lie between −500 and +500 HU with the mean of those pixels; 6) re-estimation of the rays in the sinogram through the metal by forward projection of the segmented second image; 7) a third and final reconstruction. To avoid the need for forward projection across the entire native field-of-view (FOV) during step 6, a double-wedge filter is applied in the 2DFT space of the sinogram so that objects outside of the reconstruction FOV are filtered out of the original sinogram. If k and p are the view-angle and fan-angle frequency variables, respectively, the double-wedge filter consists of setting to zero all frequencies in the 2DFT of the sinogram for which |k/(k+p)| > R/L, where R is the reconstruction FOV and L is the source-to-isocenter distance. Results: The algorithm substantially reduces the streak and blooming artifacts present in the original reconstructions of three scans with dental fillings, and performs better than linear interpolation across missing regions in the sinogram. The double-wedge filter is effective in removing contributions to the sinogram from objects outside of the reconstruction FOV. Conclusion: The algorithm is effective at reducing metal artifacts as well as computationally practical. Conflicts of Interest: Funding was provided by GE Healthcare.
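The double-wedge filter used in step 6 can be sketched directly from the stated condition: zero all 2DFT components of the sinogram with |k/(k+p)| > R/L. The implementation below assumes a [view, fan-angle] sinogram ordering and unit sample spacing, and is an illustration of the stated condition, not the authors' code:

```python
import numpy as np

def double_wedge_filter(sinogram, R, L):
    """Apply the double-wedge filter in the 2-D Fourier domain of a
    fan-beam sinogram: zero components with |k/(k+p)| > R/L, where
    k and p are the view-angle and fan-angle frequency variables,
    R the reconstruction FOV radius and L the source-to-isocenter
    distance. Sinogram indexed [view, fan-angle]; unit spacing assumed."""
    nv, nf = sinogram.shape
    k = np.fft.fftfreq(nv)[:, None]   # view-angle frequency axis
    p = np.fft.fftfreq(nf)[None, :]   # fan-angle frequency axis
    F = np.fft.fft2(sinogram)
    denom = k + p
    with np.errstate(divide="ignore", invalid="ignore"):
        ratio = np.abs(np.where(denom != 0, k / denom, 0.0))
    F[ratio > R / L] = 0.0
    return np.fft.ifft2(F).real
```

In practice the frequency axes would be scaled by the actual view and fan-angle sampling intervals; a uniform sinogram passes through unchanged since only its DC component is non-zero.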
36 (2009); http://dx.doi.org/10.1118/1.3181114
Purpose: The purpose of this study is to assess the temporal and reconstruction accuracy of two different optical systems for respiratory motion detection in 4DCT. Materials and Methods: A clinical CT scanner, run in cine mode, was used with two optical devices, the Gate CT® (Vision RT, London, UK) and the RPM® (Varian, Palo Alto, CA), to detect respiratory motion. A radiation detector, GM-10 (Blackcat, Westminster, MD), triggers the x-ray on/off signal to the Gate CT system, while the RPM is directly synchronized with the CT scanner through an electronic connection. Two phantoms were imaged: the first phantom translated a rigid plate along the antero-posterior direction and was used to assess the temporal synchronization of each optical system with the CT scanner. The second phantom consisted of 4 spheres that translated 3 cm peak-to-peak in the superior-inferior direction and was used to assess the rebinned images created by Gate CT and RPM. Results: The calibration assessment showed nearly perfect synchronization with the scanner for both the RPM and Gate CT systems, demonstrating the good performance of the radiation detector. Results for the volume rebinning test showed variations in the 3D reconstructed volumes of up to 15% for Gate CT and up to 12% for RPM. The means of the standard deviations of the errors were 11% and 9%, respectively. The errors are mainly due to phase-detection inaccuracies and to the large motion of the phantom. Conclusions: This feasibility study assessed the consistency of our two optical systems in synchronizing the respiratory signal with the image acquisition. A new patient protocol based on both RPM and Gate CT will be started soon.
36 (2009); http://dx.doi.org/10.1118/1.3181115
Purpose: Patients undergoing CT-guided interventional procedures usually receive a high radiation dose because multiple scans are performed. The purpose of this study is to investigate the potential for dose reduction in CT-guided renal tumor cryoablation using a newly developed image reconstruction technique: Local HYPR Reconstruction (HYPR-LR). Method and Materials: Three patients, each with a renal tumor, underwent percutaneous cryoablation with CT monitoring. The original full-dose projection data sets were saved and exported to a personal computer. Noise was inserted into the projection data to simulate low-dose scans (50% of the original dose) using a novel noise-insertion tool developed by our lab. Low-dose images at different freezing times were then reconstructed from the simulated low-dose projections using a commercial reconstruction algorithm. HYPR-LR was conducted using the average of the low-dose images as the composite image. Image quality, focusing on target (ice ball) visibility and image noise, was compared among full-dose images (FD), low-dose images (LD) and low-dose HYPR-LR images. Results: Low-dose images reconstructed with HYPR-LR demonstrate image quality similar to the full-dose images and superior to the low-dose images reconstructed with commercial software. The image noise measured in the three sets of images was 51.3 (LD), 38.3 (FD), and 31.8 HU (HYPR-LR). The growing ice ball can be better visualized in the HYPR-LR image series than in the low-dose images due to the improved image quality. Conclusion: HYPR-LR has been demonstrated to be useful for dose reduction in renal tumor cryoablation with CT monitoring. Our study shows that at least a factor-of-2 dose reduction is achievable by reducing the tube mAs by 50%. Clinically, this translates into a factor-of-2 reduction in radiation risk (deterministic or stochastic) for the increasing numbers of patients undergoing CT-guided tumor cryoablation.
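HYPR-LR is described here only at a high level; in the HYPR literature the frame estimate is commonly formed as the composite image modulated by the ratio of the low-pass-filtered frame to the low-pass-filtered composite. The sketch below assumes that standard form with a simple boxcar low-pass filter; it is not the authors' implementation:

```python
import numpy as np

def box_filter(img, size=5):
    """Separable boxcar low-pass filter (zero-padded at the edges)."""
    k = np.ones(size) / size
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def hypr_lr(frame, composite, size=5, eps=1e-12):
    """HYPR-LR-style estimate of one time frame: the composite image
    modulated by the ratio of low-pass-filtered frame and composite
    (eps guards against division by zero in empty regions)."""
    return composite * box_filter(frame, size) / (box_filter(composite, size) + eps)
```

Intuitively, the low-noise composite (here the average of the low-dose series) supplies spatial detail, while the low-pass ratio restores each frame's temporal weighting, such as the growing ice ball.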
SU-EE-A4-06: Impact of kVp and Bowtie Compensator On Cone-Beam CT Image Quality: An Elekta Synergy XVI Experience. 36 (2009); http://dx.doi.org/10.1118/1.3181116
Purpose: To evaluate the impact of kVp and a bowtie compensator on cone-beam CT image quality using the Elekta Synergy XVI system. Materials and Methods: An XVI system was calibrated at 80, 100, 120, and 138 kVp for various collimation settings. For each kVp, the beam quality and output of the x-ray tube were measured using a 6-cm3 ion chamber over a series of tube currents (mA), with and without the bowtie filter. A Catphan 500 phantom was used for CBCT imaging. For the small FOV and 20-cm length collimation (S20), high-resolution CBCT images were reconstructed with a voxel size of 0.5×0.5×0.5 mm3. Spatial resolution, uniformity, CT-number accuracy and linearity were measured on the Catphan images. The contrast-to-noise ratio (CNR) was measured using the means (S) and standard deviations (s) in 6-mm diameter circular regions-of-interest (ROIs) within the polystyrene (PS) and LDPE inserts, using the formula CNR = (S_LDPE − S_PS) / √[(s_LDPE² + s_PS²)/2]. Results: The exposure rate with the bowtie filter is linearly proportional to the mAs, increases at a rate of (kVp)^2.85, and decreases to 73% of that without the bowtie at equivalent kVp and mAs settings. Without the bowtie, the spatial uniformity improves but the CNR deteriorates when increasing kVp at equivalent exposure rates. Application of the bowtie compensator improves the spatial uniformity by more than 11% and increases the accuracy of the CT number for soft-tissue materials at all tested kVps. The bowtie compensator also increases the CNR from 1.64 to 1.94 at 100 kVp, but reduces the CNR by ∼20% at 120 kVp and 138 kVp. Conclusion: Through this clinical investigation, we have found that the bowtie compensator improves the uniformity and CT-number accuracy for soft-tissue materials and may increase the CNR at low kVp but decrease it at high kVp. In image-guided radiotherapy, using the bowtie with low kVp for small-body imaging and no bowtie with high kVp for large-body imaging appears to be the optimal setting.
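The pooled-noise CNR formula above is straightforward to apply to ROI samples; a minimal sketch (function name mine):

```python
import numpy as np

def cnr(roi_a, roi_b):
    """Contrast-to-noise ratio between two ROIs using the pooled-noise
    form from the abstract:
        CNR = (S_A - S_B) / sqrt((s_A^2 + s_B^2) / 2)
    with S the ROI means and s the ROI standard deviations."""
    a = np.asarray(roi_a, float)
    b = np.asarray(roi_b, float)
    return (a.mean() - b.mean()) / np.sqrt((a.var(ddof=1) + b.var(ddof=1)) / 2.0)
```

Here roi_a and roi_b would be the pixel values inside the 6-mm LDPE and polystyrene ROIs, respectively.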
- Moderated Poster ‐ MRI and Image Processing
36 (2009); http://dx.doi.org/10.1118/1.3182265
Purpose: With the increased interest in using MR as a means of assessing therapy response, it is important to assess longitudinal systematic variations. In this study, T1 and contrast-to-noise ratio (CNR) variations during a dynamic contrast-enhanced (DCE) acquisition were assessed on three scanners at three time points. Method and Materials: CNR and T1 measures were calculated from images of a modified Eurospin TO-5 phantom (Diagnostic Sonar, Scotland) consisting of 19 compartments with T1 values ranging from 208 to 1630 ms. Three GE Excite HD scanners were evaluated. Multiple-TI (N=10) inversion recovery (IR), multiple-flip-angle (N=7) fast spoiled gradient echo (FSPGR), and FSPGR DCE data were acquired at three time points (baseline, 1 hr, 1 week). T1 measurements were obtained using both the IR and FSPGR data. CNR measurements were computed using the longest-T1 sample as a reference. Correlation and Bland-Altman repeatability (same scanner) and agreement (different scanners) measures were computed. Results: Correlations of the IR- and FSPGR-based T1 measures were significant for all three scanners (R2 > 0.996; slopes ranging from 0.84 to 1.11). Short-term (1 hr) and one-week FSPGR repeatability results ranged from 7.0 to 9.1 ms and 10.0 to 17.4 ms, respectively, with limits of agreement ranging from −15.5 to 15.1 ms and −21.5 to 31.7 ms, respectively. The FSPGR Bland-Altman analyses indicated a linear increase in T1 differences with increasing T1, and the maximum difference was 343 ms. The IR-based measurements did not demonstrate such a linear trend, and differences were less than 40 ms. Short-term (1 hr) IR/FSPGR repeatability and limits-of-agreement results ranged from 91.3 to 185.1 ms and −330.7 to 39.5 ms, respectively. Intra-DCE-scan CNR variations ranged from 0.3 to 0.6% across scanners and time points. Conclusion: The clinical scanners evaluated demonstrate good repeatability of T1 and CNR measurements on a given scanner, with larger variations seen between different scanners, even from the same vendor.
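The Bland-Altman repeatability and limits-of-agreement figures quoted above come from the usual bias ± 1.96·SD construction on paired differences. A minimal sketch (not tied to the study's data):

```python
import numpy as np

def bland_altman(x, y):
    """Bland-Altman bias and 95% limits of agreement between paired
    measurements x and y (e.g. baseline vs. 1-hr T1 values for the
    same phantom compartments)."""
    d = np.asarray(x, float) - np.asarray(y, float)
    bias = d.mean()
    sd = d.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd
```

Pairing measurements from the same scanner gives the repeatability figures; pairing across scanners gives the agreement figures.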
MO-EE-A4-02: Metabolic Changes in Malignant Brain Tumors During Mid-Course Radiation Therapy: Initial Findings of 3 Tesla Volumetric Magnetic Resonance Spectroscopic Imaging. 36 (2009); http://dx.doi.org/10.1118/1.3182266
Purpose: The metabolic activity of tumors, obtained non-invasively by magnetic resonance spectroscopic imaging (MRSI) during the mid-course of radiation therapy (RT), can be useful in evaluating tumor response as well as in designing effective boost RT plans. The purpose of this work is to assess the changes in metabolic status of malignant brain tumors using 3 Tesla volumetric proton MRSI at three weeks of radiation therapy. Method and Materials: MRSI data from 11 patients with malignant brain tumors, acquired pre-RT and at the 3rd week of RT, were analyzed retrospectively. The 3D MRSI was performed on a Siemens 3T TRIO TIM scanner using a PRESS (point-resolved spectroscopy) sequence with TE/TR = 135/1510 ms and a 16×16×8 matrix over a 16×16×8 FOV, resulting in an MRSI voxel size of 10×10×10 mm3. A linear combination (LC) model was used to analyze the acquired spectra. Tumor activity was determined from the values of the metabolite ratio choline to N-acetyl-aspartate (Cho/NAA). MRSI voxels segmented from metabolically active tumor regions were compared for further analysis. Results: The mean Cho/NAA values at the 3rd week of RT, compared to their pre-RT values, were found to decrease by 26% to 79% in 7 of the 11 patients and to increase by 73% in one case. Three patients had minimal changes (−6% to +17%). Voxels with higher tumor activity (i.e., larger Cho/NAA values) at pre-RT were observed to show a larger decrease at 3 weeks of RT. The spatial patterns of metabolic abnormality were considerably altered during the 3 weeks of RT compared to pre-RT. Conclusion: The MRSI-derived metabolic status of malignant brain tumors obtained during the mid-course of RT provides valuable functional tumor information that is not available from conventional anatomical imaging methods, and it could help to devise patient-specific, effective treatment interventions.
36 (2009); http://dx.doi.org/10.1118/1.3182267
Purpose: To introduce a novel two-point initialization and semiautomated objective tool for MR brain spectroscopy. This method improves efficiency and reduces user bias when evaluating in-vivo spectra in the presence of (simulated) artifacts arising from shimming, electronic noise, field inhomogeneity, coil sensitivities, and relaxation, which can cause variations in baseline drift or system noise. Method and Materials: Ten cases of C6-induced rodent glioma models/controls were analyzed from MRS data acquired on a Bruker Biospin 7T scanner. The new two-point method only requires the user/technologist to specify the 'start' and 'end' PPMs. A first-moment calculation is used to estimate the global standard deviation that initializes a Levenberg-Marquardt nonlinear optimization. Three models were assessed: I) a Gaussian-only fit, II) an equally weighted Gaussian and Lorentzian mixture, and III) a free ratio between the Gaussian and Lorentzian mixtures. We tested the algorithm in four metabolic regions (creatine, choline, NAA, and lipid/lactate). Results: Generalized linear mixed models were used to assess the method. There were significant differences (F = 1817; df = (3, 47e3); p < .0001) between the algorithm types (I-III) across disease model (C6 vs. control). When the pure Gaussian was applied, the results tended to overestimate the area (t = 2.73; df = 450.0; p < 0.001). The 50% Gaussian-Lorentzian mixture (t = −2.28; df = 428.1; p < 0.001) was found to underestimate the true area. No difference was found for the freely varying mixture (model III) with the two-point fit. Conclusion: The two-point method was shown to be equivalent or superior to the three-point method for the initial spectral bracketing step. This provides advantages since the two-point method requires less judgment (it is more objective) and is faster to perform. Additionally, we have demonstrated that simple fitting of a Gaussian function may not be sufficient, and Lorentzian terms may be required.
Such standards are important for efficient glioma MRS evaluation.
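The three fitting models (I-III) compared in this abstract are all special cases of a pseudo-Voigt line shape, with the Gaussian/Lorentzian mixing ratio fixed at 0 (model I), fixed at 0.5 (model II), or left free (model III). A sketch of that shape (parameterization mine, for illustration):

```python
import numpy as np

def pseudo_voigt(ppm, center, fwhm, amp, eta):
    """Pseudo-Voigt line shape: eta * Lorentzian + (1 - eta) * Gaussian,
    sharing one center, FWHM and peak amplitude. eta = 0 corresponds
    to the abstract's model I (pure Gaussian), eta = 0.5 to model II,
    and a freely fitted eta to model III."""
    x = np.asarray(ppm, float) - center
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))
    gauss = np.exp(-x**2 / (2.0 * sigma**2))
    lorentz = (fwhm / 2.0) ** 2 / (x**2 + (fwhm / 2.0) ** 2)
    return amp * (eta * lorentz + (1.0 - eta) * gauss)
```

In a fitting pipeline, center, fwhm, amp (and eta for model III) would be the free parameters passed to the Levenberg-Marquardt optimizer, evaluated over the user-bracketed PPM range.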
36 (2009); http://dx.doi.org/10.1118/1.3182268
Purpose: The main objective of this study is to evaluate the feasibility of using a recently developed MR technique, quantitative blood oxygenation level-dependent (qBOLD) imaging, to quantify the oxygen extraction fraction (OEF) of tumors metastatic to the brain. Materials & Methods: The qBOLD technique provides a regional OEF measurement based upon an MR signal model of the brain that incorporates prior knowledge about brain tissue composition. A 3D version of the gradient-echo sampling of spin echo sequence with RF spoiling is used to obtain the MRI signal (He, Zhu, and Yablonskiy, MRM 60:4, 882–888, 2008). To evaluate the feasibility of using qBOLD to quantify OEF in central nervous system tumors, six patients (47.9 y to 67.2 y, mean 55.4 y; 4 female, 2 male) with metastatic brain tumors were prospectively enrolled in a longitudinal imaging study. The primary malignancies were lung cancer (in five patients) and renal cell carcinoma (in one patient). The qBOLD procedure was performed as part of an integrated neuroimaging protocol, including conventional pre- and post-contrast images and dynamic susceptibility contrast perfusion. The patients were scanned at 1.5T (Siemens TIM Espree) while wearing a stereotactic frame prior to radiation therapy. Post-processing was performed offline using Matlab. Results: Supratentorial metastatic tumors (6 patients) and the surrounding vasogenic edema demonstrated marked visual conspicuity and quantifiably altered OEF with qBOLD MRI: e.g., OEF values for the area of vasogenic edema were 54.8 ± 12.3% compared to 36.6 ± 6.6% in contralateral normal white matter. Conclusions: Using a recently developed MR pulse sequence, qBOLD, we were able to quantify the OEF in humans with metastatic brain tumors. qBOLD offered excellent visual conspicuity for lesion detection for supratentorial, non-hemorrhagic lesions.
Interestingly, vasogenic edema surrounding both primary and metastatic tumors was also associated with elevated OEF, a finding which warrants further investigation.
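For readers unfamiliar with the underlying model, the static‐dephasing (Yablonskiy) relation behind qBOLD links the measured reversible relaxation rate R2′ to OEF through the deoxygenated blood volume fraction (DBV). The sketch below inverts that relation; the calibration constants and the example R2′/DBV values are nominal illustrations, not the authors' fitted parameters:

```python
import math

# Static-dephasing qBOLD relation: for tissue containing a deoxygenated
# blood volume fraction DBV, the reversible relaxation rate is
#   R2' = DBV * gamma * (4/3) * pi * d_chi0 * Hct * OEF * B0,
# so OEF can be recovered once R2' and DBV are estimated from the signal fit.
GAMMA = 2.675e8     # proton gyromagnetic ratio, rad/s/T
D_CHI0 = 0.264e-6   # susceptibility difference, fully deoxygenated vs. oxygenated RBC (nominal)
HCT = 0.34          # assumed microvascular hematocrit (nominal)

def oef_from_r2prime(r2p, dbv, b0=1.5):
    """Invert the qBOLD signal model for OEF (all inputs illustrative)."""
    return r2p / (dbv * GAMMA * (4.0 / 3.0) * math.pi * D_CHI0 * HCT * b0)
```

For example, a hypothetical voxel with R2′ = 1.8 s⁻¹ and DBV = 0.03 at 1.5 T yields an OEF near 0.40, in the physiological range reported above for white matter.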
MO‐EE‐A4‐05: Image Feature‐Based Tumor Shrinkage Modeling in Head and Neck Cancer for Adaptive Radiation Therapy 36(2009); http://dx.doi.org/10.1118/1.3182269
Purpose: A novel technique was developed to model tumor shrinkage/growth in response to radiotherapy in head‐and‐neck (HN) cancer, to better understand the therapeutic process and compute the cumulative dose for adaptive radiation therapy (ART). Methods and Materials: Five HN patients were enrolled in the study. For each case, 8–10 pre‐treatment cone beam computed tomography (CBCT) images were acquired using the Varian Trilogy on‐board imager. The planning CT and gross tumor volume (GTV) for each patient were used as a template. An image feature‐based model was employed to establish the correspondence between the planning CT and CBCT. Because the image contents are not conserved, a similarity‐based deformable model cannot be used alone. A two‐step procedure was therefore adopted: homologous tissue features shared by the planning CT and CBCT were detected and matched using the Scale‐Invariant Feature Transform (SIFT) method, followed by registration of the remaining points using basis‐spline (B‐spline) interpolation. A bi‐directional mapping was developed to increase the precision of the tissue feature correspondence. The proposed model was tested on a number of digital phantoms with artificially introduced volumetric changes. Results: Application of the bi‐directional feature‐based model to the digital phantoms showed that the artificial volumetric changes could be modeled accurately: the GTV boundary error between the model and the ground truth was less than 1 mm. For the clinical cases, the new algorithm worked reliably for reasonable volume changes (<35%), indicating that the time span between two consecutive imaging sessions should not be too long for the model to function properly. Conclusions: We developed an image feature‐based model to derive tumor change kinetics and better understand tumor response to radiotherapy. The new model should find widespread application in ART for HN cancer.
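The second step of the two‐step procedure, turning sparse matched‐feature displacements into a dense deformation field, can be sketched as below. The feature detection and matching (the SIFT step) is omitted here: we start from already‐matched landmark pairs, and SciPy's thin‐plate‐spline interpolator stands in for the authors' B‐spline; the function name and data layout are assumptions for illustration:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def dense_field_from_matches(pts_fixed, pts_moving, grid_shape):
    """Interpolate sparse displacements at matched feature points
    (rows of (row, col) coordinates) into a dense deformation field
    of shape grid_shape + (2,)."""
    disp = pts_moving - pts_fixed                    # (N, 2) displacement at each match
    interp = RBFInterpolator(pts_fixed, disp, kernel='thin_plate_spline')
    yy, xx = np.mgrid[0:grid_shape[0], 0:grid_shape[1]]
    grid = np.column_stack([yy.ravel(), xx.ravel()]).astype(float)
    return interp(grid).reshape(grid_shape + (2,))
```

The dense field produced this way honors each matched feature exactly while varying smoothly in between, which is what lets the model track non‐conserved content (shrinking GTVs) that defeats purely similarity‐driven registration.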
MO‐EE‐A4‐06: Using Total‐Variation Regularization for Deformable Registration of the Shear Movement of the Lungs 36(2009); http://dx.doi.org/10.1118/1.3182270
Purpose: To report a deformable image registration strategy using total‐variation regularization with explicit inclusion of the differential motions of thoracic structures. Methods and Materials: Accurate modeling of thoracic organ motion remains elusive because of the lack of an effective mechanism to deal with the discontinuous movements of the involved anatomic structures. In this work, we propose an efficient deformable registration algorithm to deal with lung motion. Instead of directly applying least‐square optimization, we include a total‐variation regularization term to account for the motion discontinuity close to the contact surface between the lungs and the chest wall. The total‐variation term sums the absolute values of the derivatives of the displacement field; this penalty drives the derivatives toward zero while still tolerating isolated jumps, forcing the optimized displacement field toward piecewise continuity. Results: The proposed approach was evaluated using a digital phantom case and two lung cancer patients. For the phantom case, a comparison with Levenberg‐Marquardt least‐square optimization showed that the registration accuracy was markedly improved: on average, the registration error of 15 representative points in the lung (against the known ground truth) was reduced from 6.0±4.1 mm to 1.6±0.7 mm with the new method. A similar level of improvement was achieved for the clinical cases. Conclusions: The deformable approach using total variation provides a natural and logical solution for modeling discontinuous organ motion and greatly improves the accuracy and robustness of deformable registration.
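Why total variation tolerates the sliding motion at the lung/chest‐wall interface, while a conventional quadratic smoothness penalty does not, can be seen with a one‐dimensional toy displacement profile (the 5 mm slip magnitude and profile lengths below are arbitrary illustrations):

```python
import numpy as np

def tv_penalty(u):
    """Total variation: sum of absolute first differences of the displacement."""
    return np.abs(np.diff(u)).sum()

def quad_penalty(u):
    """Conventional least-square smoothness: sum of squared first differences."""
    return (np.diff(u) ** 2).sum()

# Two displacement profiles across the lung/chest-wall interface:
# a sharp slip (step) versus a smeared version of the same total motion.
step = np.concatenate([np.zeros(50), np.full(50, 5.0)])  # sliding: 5 mm jump
ramp = np.linspace(0.0, 5.0, 100)                        # same rise, spread out

# TV charges both profiles identically (total rise = 5 mm in each case),
# so minimizing it does not smooth the jump away:
#   tv_penalty(step) == tv_penalty(ramp) == 5.0
# The quadratic penalty makes the jump ~99x more expensive
# (25.0 vs. ~0.25), which is what blurs sliding motion in
# plain least-square registration.
```

This is the sense in which the penalty "drives the derivatives toward zero" yet still permits the piecewise‐continuous solutions needed for shear.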
- Moderated Poster ‐ X‐Ray Imaging and PACS
MO‐FF‐A4‐01: Evaluation of Background Trend Correction Technique in Breast Tomosynthesis Quantitation 36(2009); http://dx.doi.org/10.1118/1.3182295
Purpose: The main shortcoming of breast tomosynthesis (tomo) imaging compared to CT is poor resolution in the depth direction and the associated difficulty in quantifying tissue density. This study assesses the quantitative potential of breast tomo using a clinical prototype, a relatively simple reconstruction, and the proposed background trend correction scheme. Tomo quantitation would allow improved characterization of lesions as well as proper image processing of tomo images (analogous to that of mammography). Method and Materials: Studies were based on a Siemens prototype breast tomo system with a 45° total angular span. First, Monte Carlo simulations were conducted using a geometry mimicking the aforementioned prototype and voxelized breast tissue‐equivalent phantoms embedded with eleven small cuboid lesions of varying density. The material surrounding the lesions was either fat‐ or glandular‐equivalent plastic. From the simulations, the effects of scatter, lesion depth, and background material density were studied. Empirical studies were then conducted with the prototype system and tissue‐equivalent phantoms similar to those of the simulations, allowing investigation of the effects of lesion depth and background material density. All image reconstruction was performed using filtered backprojection incorporating a filter designed to emulate some grayscale characteristics of iterative reconstruction. Results: The resulting images displayed a visible difference in lesion brightness for both empirical and simulated experiments. After applying our background trend correction technique, the lesion voxel values varied linearly with glandular fraction (all R2 ⩾ 0.90) under all simulated and empirical conditions. Significant differences were encountered only for different background materials (in all scatter‐included paradigms).
Conclusions: These high R2 values suggest that breast tomo image voxel values corrected by our outlined methods are highly positively correlated with true tissue density, implying that breast tomo imaging has definite quantitative potential.
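A schematic version of such a workflow, a low‐order polynomial background fit subtracted from the slice, followed by a straight‐line fit of corrected lesion values against glandular fraction with R² as the figure of merit, is sketched below. The polynomial order, the correction being a simple subtraction, and the function names are hypothetical stand‐ins for the authors' unspecified scheme:

```python
import numpy as np

def correct_background_trend(slice_img, order=2):
    """Fit a low-order 2D polynomial trend to the slice and subtract it,
    referencing lesion voxel values to a flattened background."""
    ny, nx = slice_img.shape
    y, x = np.mgrid[0:ny, 0:nx]
    # design matrix of polynomial terms x^i * y^j with i + j <= order
    cols = [x.ravel() ** i * y.ravel() ** j
            for i in range(order + 1) for j in range(order + 1 - i)]
    A = np.column_stack(cols).astype(float)
    coef, *_ = np.linalg.lstsq(A, slice_img.ravel(), rcond=None)
    trend = (A @ coef).reshape(ny, nx)
    return slice_img - trend

def r_squared(x, y):
    """Coefficient of determination of the best straight-line fit of y on x."""
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    return 1.0 - resid.var() / y.var()
```

On a synthetic slice with a smooth shading trend plus lesions whose added signal is linear in glandular fraction, the corrected lesion means recover that linearity with R² near 1, mirroring the R² ⩾ 0.90 criterion reported above.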
MO‐FF‐A4‐02: Effects of Added X‐Ray Beam Cu Filtration On Image Quality and Patient Dose in Digital Radiography 36(2009); http://dx.doi.org/10.1118/1.3182296
Purpose: To quantify the effects of added x‐ray beam Cu filtration on image quality, skin entrance exposure, and effective dose in digital radiography. Method and Materials: A GE Definium 8000 DR unit, operated in AEC mode, was used with a 20 cm thick acrylic phantom to simulate an average adult abdomen. Cu filtration of 0.0, 0.1, and 0.2 mm was added to the x‐ray beam at 75, 80, 85, 90, 95, 100, and 110 kV. Image quality was compared using a contrast‐detail phantom (CDRAD). For each beam quality, we exposed the CDRAD phantom on top of, below, and between two 10‐cm sections of the acrylic phantom, and at four different orientations relative to the anode‐cathode axis; the results were averaged. Image processing parameters were those used clinically for an AP abdomen exam. The images were analyzed using the CDRAD Analyzer software. Skin‐entrance exposures, based on the AEC resultant mAs read‐out, were measured without backscatter, and effective doses for an average‐size adult phantom were calculated using the PCXMC software. Results: At 75–80 kV, added Cu filtration reduced image quality slightly (∼1–5%), skin‐entrance exposure by 38–52%, and effective dose by 12–21% relative to no additional Cu filtration. At 90–110 kV, image quality varied little (∼0–4%) with added Cu filtration, skin‐entrance exposure was reduced by 28–47%, and effective dose was reduced by up to 17%. Only at 85 kV was image quality reduced significantly (∼13–15%) with added Cu filtration; there, skin‐entrance exposure was reduced by 35–48% and effective dose by 8–17%. Conclusion: Although DR is already a relatively low‐dose modality, it may benefit from additional x‐ray beam Cu filtration since, for most beam qualities, image quality changes little while patient doses are reduced appreciably compared to beams with no additional Cu filtration.