Index of content:
Volume 35, Issue 6, June 2008
- Imaging Moderated Poster Session: Exhibit Hall E
- Moderated Poster — Area 4 (Imaging): Computed Tomography
TU‐EE‐A4‐01: Bismuth Shields vs. mAs Reduction for Decreased Radiation Dose to Breasts in CT Examinations. Med. Phys. 35 (2008); http://dx.doi.org/10.1118/1.2962623
Purpose: Bismuth breast shields have been promoted as a means for selectively reducing the radiation dose to the breast by about 30% in CT studies, while maintaining image quality. A study was performed to compare image noise and CT number accuracy with the shields to an alternative dose reduction method of employing 30% less mAs. Method and Materials: A humanoid thorax phantom with simulated breasts was imaged on a GE VCT scanner using: 1) a standard lung cancer screening protocol, 2) the same protocol but with a commercial bismuth breast shield, and 3) 30% less mAs without the shield. Regions of interest (ROIs) were placed in the images and the mean CT numbers and standard deviations of the CT numbers were compared. Results: Relative to the mean CT numbers in images for the standard technique, use of the breast shield resulted in increases of about 9 HU, 19 HU, 6 HU, and 57 HU in ROIs in the heart, anterior left lung, posterior left lung, and right breast, respectively. Corresponding changes for 30% mAs reduction were 1 HU, −3 HU, −2 HU, and 0 HU. Ratios of the standard deviations of the CT numbers in the dose‐reduced images to those in the images using the standard technique for the above ROIs were 1.4, 1.2, 0.9, and 1.8 for the breast shield and 1.3, 1.0, 1.0, and 1.2 for 30% mAs reduction. Conclusion: mAs reduction is preferred over bismuth breast shields because: 1) mAs reduction has much less effect on mean CT numbers, which is important for quantitative studies such as lung density and coronary calcification assessment, 2) noise in the mAs‐reduced images is less, and 3) the images do not suffer from streak artifacts arising from the shields. Additional comparisons in images of human subjects undergoing IRB‐approved coronary calcification studies with the breast shield vs. 30% reduced mAs will be presented.
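The ROI comparison at the heart of this study can be sketched in a few lines. The following Python snippet (not the authors' code; image sizes, noise levels, and the 9 HU shift are synthetic stand‐ins echoing the reported numbers) computes the HU shift and noise ratio between a standard and a shielded acquisition:

```python
import numpy as np

def roi_stats(image, center, half_size):
    """Mean and standard deviation of CT numbers in a square ROI."""
    r, c = center
    roi = image[r - half_size:r + half_size, c - half_size:c + half_size]
    return roi.mean(), roi.std()

def compare_techniques(standard, reduced, center, half_size=16):
    """HU shift and noise ratio of a dose-reduced (or shielded) image relative
    to the standard technique, as in the ROI comparison above."""
    m0, s0 = roi_stats(standard, center, half_size)
    m1, s1 = roi_stats(reduced, center, half_size)
    return m1 - m0, s1 / s0

# Synthetic stand-in images: a uniform 50 HU region; the 'shielded' image is
# shifted by ~9 HU and 1.8x noisier, echoing the breast-ROI numbers above.
rng = np.random.default_rng(0)
standard = 50.0 + 10.0 * rng.standard_normal((64, 64))
shielded = 59.0 + 18.0 * rng.standard_normal((64, 64))
hu_shift, noise_ratio = compare_techniques(standard, shielded, (32, 32))
```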
Med. Phys. 35 (2008); http://dx.doi.org/10.1118/1.2962624
Purpose: To evaluate in a phantom study the dose to adult female breast tissue using current clinical body CT protocols on 64‐slice systems. Method and Materials: An anthropomorphic phantom with breast modules (Rando‐Alderson) was scanned on a variety of 64‐slice CT scanners (GE LightSpeed VCT; Toshiba Aquilion; Siemens Sensation 64; Philips Brilliance, in progress). Standard clinical protocols which either directly expose the breast or deliver scatter/edge‐of‐field dose were evaluated: (1) lung screening (smoker); (2) chest‐abdomen‐pelvis (CAP, oncology follow‐up); (3) cardiac calcium scoring (60 bpm); and (4) virtual colonoscopy (supine and prone). Protocols were similar, but not identical, between systems. Scan coverage was matched; no breast shields were used. Absorbed dose to the breast tissue was measured by loading 10 TLDs into each breast module. LiF TLDs were calibrated individually for 9 mm Al HVL (120 kVp) with an NIST‐traceable ion chamber, and a correction was applied for the CT HVL. Image noise was also measured. Results: Standard clinical protocols for an adult female, including adaptive mA methods for the CAP exam, were utilized on each scanner. Averages of the TLD dose to the breast ranged from: (1) 0.56–1.36 cGy for the lung exam; (2) 1.27–2.98 cGy for CAP; (3) 1.01–2.98 cGy for calcium scoring; and (4) 0.67–1.35 cGy for virtual colonoscopy. Conclusion: The expected broad range of breast tissue dose for the various CT exams was seen, but the results also indicate a possible reduction in dose compared to earlier reports from 4‐slice (Mahoney et al, RSNA 2005) and 16‐slice systems (Hurwitz et al, 2006, extrapolated estimate). Variation between manufacturers was observed, but note that the protocols tested were those currently in clinical use. Further optimization of protocols for a given system design may be possible, especially given the significant interest from the entire radiological community in improving awareness of dose issues and minimizing exposures.
Med. Phys. 35 (2008); http://dx.doi.org/10.1118/1.2962625
Purpose: The purpose of this work is to show that megavoltage cone‐beam CT (MVCBCT) images can provide accurate dose recalculation and be used to verify the daily dose distribution received by patients treated for head‐and‐neck (H&N) and prostate cancers. Method and Materials: Corrections for the cupping and missing‐data artifacts seen on MVCBCT images were developed for both H&N and pelvic imaging. MVCBCT images of six H&N and two prostate patients were acquired weekly during the course of their treatment. Several regions of interest were contoured, including the prostate and rectum and, for H&N cases, the spinal cord and parotids. Dose calculation was performed with the corrected MVCBCT images using the planned treatment beams, and variations from treatment‐plan dosimetric endpoints were analyzed. Results: MVCBCT image correction and calibration for the H&N (pelvic) region shows standard deviations in dose calculations between kVCT and MVCBCT images of 1.9% (0.6%). The mean dose to the right parotid of H&N patients had an average increase of 18% during treatment; increases of up to 52% were observed. The maximum dose to 1% of the spinal cord went up by 2% on average, although increases of up to 10% were noted. For one prostate patient, an undetected setup error on one fraction caused the dose received by 95% of the prostate to diminish by 3%. One patient had an average increase of 3.6% in the maximum dose received by 1% of the rectum. Conclusion: MVCBCT was used successfully to verify the daily dose distribution for H&N and prostate patients. A substantial increase in the mean dose to the parotid glands was observed during treatment. For prostate patients, the impact of setup errors on prostate dose coverage was observed, along with the dosimetric consequences of volume changes in normal tissues. Conflict of interest: Supported by Siemens.
Med. Phys. 35 (2008); http://dx.doi.org/10.1118/1.2962626
Purpose: To develop a novel method for registration of different phases of 4D CT that accounts for lung volume deformation and sliding motion. Method and Materials: Sliding motion of the lung against the chest wall during breathing presents a challenging problem in image registration. The motion range of the diaphragm during respiration is about 3 cm, and the displacement vectors of tissue on the two sides of the pleura are discontinuous. To register different phases of 4D CT, the lungs in these phases were first automatically segmented. A Scale‐Invariant Feature Transform (SIFT) descriptor was used to find feature points shared by the template phase and target phases on the lung contours. A Fourier transformation of the displacements of the feature points was carried out. The low‐spatial‐frequency component of the displacement represents the sliding motion, whereas the high‐frequency component represents the contribution from deformation and can be modeled by a conventional deformable model. After shifting the lungs in the target phase according to the filtered sliding displacements, a thin‐plate‐spline (TPS) deformable registration was applied between the template phase and the shifted phases to obtain the displacement vector for each voxel. Results: We calculated the average diaphragm sliding distance between phase 1 and the other phases with and without inclusion of lung sliding using patient data. It is demonstrated that the accuracy of the proposed method is three times better than that of the conventional TPS method. With inclusion of sliding motion, the overlap ratio of the tumor contour increased to 84.3%, compared to 78.0% using the conventional approach. Conclusion: A hybrid method of deformable registration in the spatial domain and low‐pass filtering in the frequency domain models lung breathing motion well. The combination provides a robust and computationally efficient method for registration of 4D CT thoracic images.
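The Fourier split of contour displacements into sliding and deformation components can be illustrated as follows; the cutoff of three harmonics and the test signal are illustrative assumptions, not the authors' parameters:

```python
import numpy as np

def split_sliding_deformation(displacements, n_low=3):
    """Split a displacement signal sampled along the lung contour into a
    low-frequency part (sliding motion) and a high-frequency residual
    (deformation), mirroring the Fourier decomposition described above.
    n_low: number of lowest harmonics (per side) attributed to sliding."""
    spectrum = np.fft.fft(displacements)
    low = np.zeros_like(spectrum)
    low[:n_low + 1] = spectrum[:n_low + 1]     # DC + lowest positive harmonics
    low[-n_low:] = spectrum[-n_low:]           # matching negative harmonics
    sliding = np.fft.ifft(low).real
    return sliding, displacements - sliding

# Illustrative signal: a smooth sliding trend plus fine deformation wiggles.
t = np.linspace(0.0, 2.0 * np.pi, 128, endpoint=False)
signal = 2.0 * np.sin(t) + 0.2 * np.sin(12.0 * t)
sliding, deformation = split_sliding_deformation(signal, n_low=3)
```

Because the test harmonics fall exactly on FFT bins, the split is exact here; real contour displacements would divide only approximately.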
Med. Phys. 35 (2008); http://dx.doi.org/10.1118/1.2962627
Purpose: One method for scatter correction in cone‐beam computed tomography (CBCT) is to compute the scatter with a Monte Carlo simulation. The accuracy of this approach may be influenced by the accuracy of the underlying photon scattering cross sections. The purpose of this study is to investigate the effect of the level of sophistication of photon interaction models on the computed scatter in CBCT and its influence on the accuracy of image reconstruction. Method and Materials: The investigation is performed using egs_cbct, a new EGSnrc‐based code for use in CBCT imaging. The EGSnrc treatment of Rayleigh scattering is improved to include measured molecular coherent scattering form factors (MCSFF) in addition to the commonly used independent‐atom‐approximation form factors (IAAFF). A more accurate algorithm for sampling coherent scattering angles is also added. Three photon scatter models are investigated: Compton scattering according to the Klein‐Nishina equation and no Rayleigh scattering (simple); bound Compton scattering modeled in the relativistic impulse approximation (RIA) with IAAFF; and RIA with MCSFF. Scatter calculation and image reconstruction accuracy are tested for a 30 cm diameter water sphere, with and without inserts of varying density and material, for a scan with 180 projections. Results: The simple model is not sufficiently accurate for estimating photon scatter in CBCT. The influence of MCSFF on the computed scatter distributions is small and only noticeable at the edges of the phantom. No significant difference in the accuracy of the reconstructed images is observed between the MCSFF and IAAFF coherent scattering models. Conclusion: Rayleigh scattering must be included in the Monte Carlo simulation to estimate the scatter in CBCT imaging. The inclusion of molecular interference effects in coherent scattering has no significant effect on the image reconstruction process.
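The Compton term of the "simple" model is the Klein‐Nishina cross section, which is easy to evaluate directly. A minimal sketch (free‐electron scattering only, no binding or form factors; the 60 keV energy is an illustrative choice):

```python
import numpy as np

R_E = 2.8179403262e-13   # classical electron radius (cm)

def klein_nishina(energy_mev, theta):
    """Klein-Nishina differential cross section dsigma/dOmega (cm^2/sr) for
    Compton scattering off a free electron at rest: the 'simple' model of the
    abstract (no binding effects, no Rayleigh scattering)."""
    k = energy_mev / 0.510998950            # photon energy / electron rest mass
    ratio = 1.0 / (1.0 + k * (1.0 - np.cos(theta)))   # scattered/incident energy
    return 0.5 * R_E ** 2 * ratio ** 2 * (ratio + 1.0 / ratio - np.sin(theta) ** 2)

theta = np.linspace(0.0, np.pi, 181)
dsdo = klein_nishina(0.060, theta)          # 60 keV, a typical CBCT energy
```

At CBCT energies the distribution is forward‐peaked, which is part of why scatter estimation is geometry‐sensitive.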
Med. Phys. 35 (2008); http://dx.doi.org/10.1118/1.2962628
Purpose: To accelerate the synthesis of digitally reconstructed radiographs (DRRs) and the reconstruction of cone‐beam CT (CBCT) data with the help of commodity graphics processing units (GPUs). The massively parallel architecture of GPUs allows significant improvements in execution speed for algorithms that present various levels of symmetry. Method and Materials: We have implemented DRR synthesis and CBCT reconstruction algorithms on GPUs and have compared their execution speed and accuracy with those of traditional CPU implementations. DRRs were obtained with an incremental version of Siddon's algorithm, an exact ray‐tracing routine, while CBCT reconstructions were based on the FDK algorithm. The benchmarking was conducted with an NVIDIA GeForce 8800 GTX graphics board hosted in a 2.4 GHz Intel Quad Core PC. The Cg shading language was used for GPU programming, and all calculations were performed in single precision. Results: We have achieved execution speed improvement factors of 47x for DRR synthesis and of 100x for CBCT reconstruction with the GPU implementation. These figures, obtained with relatively large, clinically relevant datasets (512 MB), could be further improved by using smaller datasets that fit entirely in the video memory. The DRRs obtained with the GPU implementation were identical to their CPU versions, while the CBCT images presented slight differences (2% standard deviation), most likely due to discrepancies in the CPU and GPU floating‐point rounding conventions. Conclusion: We have implemented on a streaming architecture two algorithms relevant to many branches of medical physics. We have achieved significant speed increase factors while preserving the accuracy of the results. The rapid development of GPU products with more memory, double‐precision support, and higher clock speeds promises even faster execution and more accurate results, thereby opening the way to new, innovative applications in medical physics.
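Siddon's algorithm computes exact radiological path lengths through the voxel grid; the sketch below substitutes a much simpler fixed‐step ray‐marching line integral to illustrate the DRR computation in serial form (geometry and attenuation values are arbitrary test inputs, and the per‐ray loop is exactly what a GPU would parallelize):

```python
import numpy as np

def drr_ray_march(volume, spacing, src, det_points, step=0.5):
    """Approximate the attenuation line integral from a point source to each
    detector point by fixed-step sampling with nearest-voxel lookup.
    A simplified stand-in for Siddon's exact ray tracing: it converges to the
    same integral as `step` shrinks but does not compute exact voxel
    intersection lengths. volume: 3D mu array; spacing, src, det in mm."""
    shape = np.array(volume.shape)
    out = np.empty(len(det_points))
    for i, det in enumerate(det_points):
        direction = det - src
        length = np.linalg.norm(direction)
        n = max(int(length / step), 1)
        ts = (np.arange(n) + 0.5) / n                # midpoints of n segments
        pts = src + ts[:, None] * direction
        idx = np.round(pts / spacing).astype(int)    # nearest voxel index
        inside = np.all((idx >= 0) & (idx < shape), axis=1)
        vals = volume[idx[inside, 0], idx[inside, 1], idx[inside, 2]]
        out[i] = vals.sum() * (length / n)           # Riemann sum of mu dl
    return out

# Uniform 20 mm cube with mu = 0.02/mm: a central ray should integrate
# to about 0.02 * 20 = 0.4.
vol = np.full((20, 20, 20), 0.02)
spacing = np.array([1.0, 1.0, 1.0])
src = np.array([10.0, 10.0, -100.0])
det = np.array([[10.0, 10.0, 120.0]])
line_integral = drr_ray_march(vol, spacing, src, det, step=0.25)[0]
```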
- Moderated Poster — Area 4 (Imaging): Image Display, Processing, Non‐Conventional Imaging
TU‐FF‐A4‐01: Virtual Simulator Design for Collision Prevention During External Radiotherapy Planning. Med. Phys. 35 (2008); http://dx.doi.org/10.1118/1.2962656
Purpose: Collision avoidance between treatment accelerator components, such as the gantry, table, collimators, jaws, and fixation devices, and the patient is one of the biggest concerns in external treatment planning. Most commercial treatment planning systems do not include a collision‐prevention simulation step. On the other hand, a fool‐proof collision map, lookup table, or simpler analytical method guards only against the most apparent collisions. Thus, a comprehensive virtual simulator design for collision avoidance is very useful for external radiotherapy planning. Method and Materials: Accurate modeling of the treatment accelerator is possible with geometric data. Three‐dimensional patient modeling is also possible from the patient's CT data. Since each component in the data bank is described as an independent mesh model based on the type of associated polygons, relative position changes can be described easily for the device dynamics simulation. The relative motions of the gantry and the treatment table are collected from the treatment plan, and the graphical user interface generates the events at the given time intervals. This visual system is incorporated with the treatment planning simulation system. Results: The quality verification of our virtual simulator for potential collisions has been performed with two combinations of treatment table and gantry rotations where a collision is imminent based on visual assessment. The planner can search for beam paths with minimal critical structure interference before the extensive optimization process. A database of CT and MR scans for all tumor sites is being built, which will provide useful information to map all potential collision possibilities for all treatment isocenters.
Conclusion: The important benefits of this virtual simulator are the replacement of the conventional laborious procedures required for the expensive hardware simulator unit, increased efficiency, improved accuracy of the radiation treatment procedure, and cost reduction in terms of time and the patient's physical presence.
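The core proximity test of such a simulator can be reduced to checking distances between transformed component geometries. A brute‐force point‐cloud sketch (toy single‐point geometry and clearance values; a real simulator would use full meshes and broad‐phase culling):

```python
import numpy as np

def rotate_z(points, angle_deg):
    """Rotate a point cloud about the z (gantry rotation) axis."""
    a = np.radians(angle_deg)
    rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                    [np.sin(a),  np.cos(a), 0.0],
                    [0.0,        0.0,       1.0]])
    return points @ rot.T

def collision(gantry_pts, patient_pts, gantry_angle_deg, clearance=2.0):
    """Flag a potential collision when any gantry vertex comes within
    `clearance` (cm) of any patient/table vertex: a brute-force stand-in
    for the mesh-based proximity test the simulator would perform."""
    g = rotate_z(gantry_pts, gantry_angle_deg)
    d2 = ((g[:, None, :] - patient_pts[None, :, :]) ** 2).sum(axis=2)
    return bool(np.sqrt(d2.min()) < clearance)

# Toy geometry (cm): gantry head 40 cm above isocenter; an obstacle point sits
# 41 cm lateral, so rotating the gantry to 90 degrees brings the head
# within 1 cm of it.
gantry = np.array([[0.0, 40.0, 0.0]])
patient = np.array([[-41.0, 0.0, 0.0]])
clear_at_0 = collision(gantry, patient, 0.0)
clash_at_90 = collision(gantry, patient, 90.0)
```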
Med. Phys. 35 (2008); http://dx.doi.org/10.1118/1.2962657
Purpose: To ensure that the performance of a fast optical computed tomography (OCT) scanning system is comparable with that of previous models. Method and Materials: MGS Research Inc. developed an OCT system based on the translate‐rotate method used by early‐generation x‐ray CT scanners. The performance of the system has been investigated, and the system has been used in several published studies. Recently, a new OCT system was developed by MGS Research Inc. that reduces scan times by a factor of 10 or more. Several 3D dosimeters were irradiated using 6 MV photons and imaged on both versions of the scanner. The image noise, reproducibility, and spatial accuracy were determined and used to evaluate the performance of the system. Results: The new version of the scanner reduced the scan time per plane from 7 minutes to 30 seconds. Preliminary results showed that noise levels in the images from both models were comparable. The uncertainty in the determination of the optical density values from images acquired with both models was ∼2%. The new model uses Fresnel lenses that may need adjustment prior to an imaging session, which can affect the reproducibility of the system. Conclusion: The new version of the OCT scanner shows promise as a replacement for the previous version. Continued improvements in the software and hardware are needed to make the system as robust as the previous version.
The investigation was supported by PHS grant CA 10953 awarded by the NCI, DHHS.
TU‐FF‐A4‐03: Improving the Accuracy of Optical‐Emission‐CT Imaging Through Application of a Non‐Uniform Attenuation Correction. Med. Phys. 35 (2008); http://dx.doi.org/10.1118/1.2962658
Purpose: Optical computed tomography (optical‐CT) and optical emission computed tomography (optical‐ECT) are new techniques with demonstrated potential for imaging structure and function (including gene expression) in unsectioned tissue samples. This work presents the first attempts to improve the accuracy of optical‐ECT by incorporating an attenuation correction analogous to that applied in SPECT. Method and Materials: Optical‐ECT can be described as a linear system Ax=b, where x is the fluorescent source distribution, b is the expected value of the measured projections, and A describes the mapping from x to b. An in‐house code (Spect‐Map), originally developed for SPECT reconstruction, was adapted for application to optical‐ECT. Verification of the method was performed by imaging a phantom containing a known distribution of fluorescing wires. Optical‐CT/ECT projections were taken consecutively to ensure accurate co‐registration. Attenuation‐uncorrected and ‐corrected optical‐ECT images were reconstructed by calculating A assuming zero attenuation and the optical‐CT‐measured non‐uniform attenuation, respectively. Successful preliminary verification led to the application of attenuation correction to optical‐ECT images of unsectioned human breast xenograft tumors which had transcribed fluorescing proteins labeling viable tumor burden (RFP) and HIF‐1 distribution (GFP). Results: Significant attenuation artifacts were observed in the uncorrected optical‐ECT image of the phantom. The middle wire appeared artificially less intense due to greater attenuation from the surrounding ink‐doped gel. This artifact was successfully removed in the attenuation‐corrected image, demonstrating basic performance of the method. Fluorescence intensities of the wires varied by as much as 29% in the uncorrected image versus 3% in the corrected image. Application of the attenuation correction to xenograft tumor images shows significant changes in the apparent expression of fluorescing proteins.
Interpretation and results will be presented. Conclusion: These results suggest that Spect‐Map has been successfully adapted to perform attenuation correction for optical‐ECT imaging. Preliminary xenograft tumor reconstructions indicate that attenuation correction is vital for accurate optical‐ECT imaging.
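The idea of folding the measured attenuation into the system matrix A of Ax=b can be demonstrated on a toy 1D problem; the two‐voxel, two‐detector geometry below is a hypothetical stand‐in for the actual projection geometry:

```python
import numpy as np

def attenuated_system(mu):
    """System matrix A of a toy 1D attenuated emission measurement, an
    analogue of the optical-ECT linear model Ax = b described above.
    Row 0: detector to the right of the voxel strip; row 1: detector to
    the left. A[v, j] = exp(-optical depth between voxel j and detector v)."""
    n = len(mu)
    A = np.zeros((2, n))
    for j in range(n):
        A[0, j] = np.exp(-mu[j + 1:].sum())  # light crosses voxels j+1..n-1
        A[1, j] = np.exp(-mu[:j].sum())      # light crosses voxels 0..j-1
    return A

# Known fluorescent source distribution x and attenuation map mu
# (the latter playing the role of the optical-CT measurement).
x_true = np.array([3.0, 1.0])
mu = np.array([0.2, 0.5])
A = attenuated_system(mu)
b = A @ x_true                     # simulated (noise-free) projections

# Attenuation-corrected reconstruction: invert the attenuated model.
x_corr = np.linalg.solve(A, b)
```

With A built assuming zero attenuation instead, the recovered intensities would be biased, which is the artifact the correction removes.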
TU‐FF‐A4‐04: Comparison of Optical Diffusion Approximation and Delta P1 Approximation Models for Laser Fluence in Cancer Treatment. Med. Phys. 35 (2008); http://dx.doi.org/10.1118/1.2962659
Purpose: Evaluation of competing optical models, the optical diffusion approximation (ODA) and the delta‐P1 approximation, for use in ablative laser cancer treatment of nanoparticle‐impregnated tumors. Method and Materials: Gold‐coated, silica‐core nanoshells (core diameter 180 nm) were placed in 1.5 wt% agar gel phantoms at nanoshell concentrations of 1.19×10⁹ nanoshells/mL and 2.53×10⁹ nanoshells/mL. Phantoms were cylindrical, 23 mm wide by 69 mm high, and were heated using laser powers of 0.64, 0.8, and 1.2 W with a 0.5 cm spot size. Thermal images of the heating were obtained using MRTI on a clinical 1.5 T MRI scanner (Excite HD, GE Healthcare Technologies, Waukesha, WI). MRTI uses a 2D fast spoiled gradient‐echo sequence with field of view = 12 cm, slice thickness = 3 mm, TR/TE = 74.5 ms/15 ms, and FA = 30°. MRTI utilized the complex phase difference method to calculate temperature images, one image every 5 seconds for 300 seconds. Modeling of the thermal response was performed with a finite element solution of the nonhomogeneous heat equation using commercial software (Comsol Multiphysics®, Comsol Inc., Burlington, MA, U.S.). Results: For the 1.19×10⁹ nanoshells/mL phantom, both ODA and delta‐P1 give similar results, with delta‐P1 being better overall; the root‐mean‐square (RMS) difference between experiment and model was 3 times greater for ODA than for delta‐P1. For the higher concentration gel (2.53×10⁹ nanoshells/mL), the RMS difference between experiment and model was 4 times greater for ODA than for delta‐P1, with ODA failing to describe the experimental data in any adequate way. The greater accuracy of delta‐P1 is attributed to its treatment of the scattered and unscattered components of the light as separate entities. Conclusion: ODA works for lower nanoshell concentrations but breaks down at higher concentrations. Delta‐P1 works well for all concentrations tested.
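For reference, the ODA fluence of an isotropic point source in an infinite homogeneous medium has a standard closed form; a quick evaluation (the optical properties below are generic tissue‐like values, not the phantom's):

```python
import numpy as np

def oda_fluence(r, mu_a, mu_s_prime):
    """Fluence of an isotropic point source in an infinite medium under the
    optical diffusion approximation (ODA):
        phi(r) = exp(-mu_eff * r) / (4 * pi * D * r),
    with diffusion coefficient D = 1/(3*(mu_a + mu_s')) and effective
    attenuation mu_eff = sqrt(mu_a / D). mu in 1/cm, r in cm."""
    D = 1.0 / (3.0 * (mu_a + mu_s_prime))
    mu_eff = np.sqrt(mu_a / D)
    return np.exp(-mu_eff * r) / (4.0 * np.pi * D * r)

# Tissue-like illustrative optics: mu_a = 0.1/cm, mu_s' = 10/cm.
r = np.linspace(0.1, 3.0, 50)
phi = oda_fluence(r, 0.1, 10.0)
```

The delta‐P1 model adds a collimated (unscattered) source term to this diffuse solution, which is the separation of components credited above for its better accuracy.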
Med. Phys. 35 (2008); http://dx.doi.org/10.1118/1.2962660
Purpose: Radiation‐induced skin toxicity is a common and potentially serious treatment complication for patients receiving radiotherapy. A noninvasive method for predicting skin reaction, particularly moist desquamation, could allow for treatment modifications that may improve treatment outcome and quality of life. Method and Materials: 3D thermal tomography (3DTT) is a new imaging modality that displays the 3D distribution of thermal effusivity under the imaged surface. The 3DTT system utilizes one or more photographic flash lamps and an infrared camera containing a focal‐plane array of infrared sensors. The flash lamps provide a thermal impulse (a few ms in duration) on the sample surface, and the infrared camera monitors the immediate rise and gradual decay of surface temperature due to conduction of surface heat into the interior of the sample. Because heat transfer from the surface to the interior depends on a material's thermal properties, pulsed thermal imaging data can be used to determine the thermal property distribution below the surface. As a feasibility study, we used 3DTT to obtain effusivity‐based cross‐sectional images of a ceramic composite plate with embedded holes ranging from 1–7.5 mm in diameter at various depths and of a pig's knee joint specimen. We also measured the maximum temperature rise at the back of a hand to assure the safety of the procedure. Results: Thermal effusivity tomography images from 0.3–5 mm depth were successfully obtained from both the ceramic and pig joint phantoms. The images showed excellent detail of the phantoms, with better resolution toward the surface. The temperature rise at the back surface of a hand was no more than 5°C, confirming the safety of the 3DTT procedure. Conclusion: It is feasible to produce 3DTT images under the surface of phantoms. These results warrant further exploration of 3DTT as a patient‐specific biomarker to predict the development of radiation‐induced skin injuries.
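The underlying physics is the standard pulsed‐thermography model: after an instantaneous flash depositing energy density q on a semi‐infinite body, the surface temperature rise decays as dT(t) = q / (e·sqrt(pi·t)), where e is the thermal effusivity. A sketch of recovering e from a cooling curve (all values synthetic and noiseless):

```python
import numpy as np

def surface_cooling(t, effusivity, q_abs):
    """Surface temperature rise of a semi-infinite body after an instantaneous
    flash of absorbed energy density q_abs (J/m^2):
        dT(t) = q_abs / (e * sqrt(pi * t)),
    the standard pulsed-thermography model underlying effusivity imaging."""
    return q_abs / (effusivity * np.sqrt(np.pi * t))

def estimate_effusivity(t, dT, q_abs):
    """Recover effusivity from a cooling curve by a least-squares fit of
    dT against the basis function 1/sqrt(pi*t)."""
    basis = 1.0 / np.sqrt(np.pi * t)
    coeff = (basis @ dT) / (basis @ basis)   # slope of dT = coeff * basis
    return q_abs / coeff

# Synthetic curve for a tissue-like effusivity (~1600 W s^0.5 m^-2 K^-1).
t = np.linspace(0.01, 2.0, 200)
dT = surface_cooling(t, 1600.0, 8000.0)
e_hat = estimate_effusivity(t, dT, 8000.0)
```

Depth resolution arises because early‐time data probe shallow layers and later times probe deeper ones, consistent with the better near‐surface resolution reported above.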
TU‐FF‐A4‐06: Image‐Based Modeling of Tumor Shrinkage Or Growth: Towards Adaptive Radiation Therapy of Head‐And‐Neck Cancer. Med. Phys. 35 (2008); http://dx.doi.org/10.1118/1.2962661
Purpose: Understanding the kinetics of tumor growth/shrinkage represents a critical step in quantitative assessment of therapeutics and realization of adaptive radiation therapy (ART). We establish a novel framework for image‐based modeling of tumor change and demonstrate its performance. Method and Materials: Due to the non‐conservation of tissue, similarity‐based deformable models are not suitable for describing the tumor growth/shrinkage process. Under the hypothesis that the tissue features in the tumor volume or the boundary region are partially preserved, we model the tumor kinetics by a two‐step procedure: (1) auto‐detection of homologous tissue features shared by the planning CT and subsequent on‐treatment CBCT images using the Scale‐Invariant Feature Transform (SIFT) method; (2) establishment of voxel‐to‐voxel correspondence between the two input images for the remaining spatial points by basis‐spline (BSpline) interpolation. The correctness of the tissue feature correspondence is doubly assured by a bi‐directional association procedure, in which the SIFT features are mapped from planning CT to CBCT and in reverse. Only the associations common to both mappings are used in the BSpline interpolation. A number of synthetic digital phantom experiments and five clinical HN cases are used to assess the performance of the proposed technique. Results: Image contents of the digital phantoms were modified in various ways. It is found that the proposed technique can identify any of the changes faithfully. The subsequent feature‐guided BSpline interpolation reproduces the “ground truth” with an accuracy better than 1.3 mm. For the clinical cases, the new algorithm works reliably for a volume change of less than 30%, suggesting that the time span between two consecutive imaging sessions should not be unreasonably long in order for the model to function properly. Conclusion: An image‐based tumor kinetic model has been developed to better understand the tumor response to radiation therapy.
The technique provides a solid foundation for future head‐and‐neck ART.
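The bi‐directional association step, keeping only feature matches common to both mapping directions, amounts to a mutual‐nearest‐neighbor filter. A small sketch with toy 2D descriptors (not actual SIFT output):

```python
import numpy as np

def mutual_matches(desc_a, desc_b):
    """Mutual-nearest-neighbor filter: association i <-> j survives only if
    j is i's nearest neighbor in B AND i is j's nearest neighbor in A,
    i.e. only matches common to both mapping directions are kept (the
    bi-directional association idea described above)."""
    d2 = ((desc_a[:, None, :] - desc_b[None, :, :]) ** 2).sum(axis=2)
    a_to_b = d2.argmin(axis=1)        # best match in B for each A feature
    b_to_a = d2.argmin(axis=0)        # best match in A for each B feature
    return [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]

# Toy 2D descriptors: a0<->b1 and a1<->b0 are mutual; a2 prefers b2, but b2
# prefers a3, so a2 is rejected while a3<->b2 is kept.
A = np.array([[0.0, 0.0], [10.0, 0.0], [5.3, 5.0], [5.05, 5.0]])
B = np.array([[10.1, 0.0], [0.1, 0.0], [5.1, 5.0], [4.9, 5.0]])
pairs = mutual_matches(A, B)
```

The surviving pairs would then serve as control points for the BSpline interpolation of the remaining voxels.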
- Moderated Poster — Area 4 (Imaging): Magnetic Resonance and Ultrasound Imaging
SU‐EE‐A4‐01: Stereoscopic Visualization Of Diffusion Tensor Imaging Data: A Comparative Survey Of Several DTI Visualization Techniques. Med. Phys. 35 (2008); http://dx.doi.org/10.1118/1.2961392
Purpose: To compare several methods for displaying DTI data in MRI for clinical use. Method and Materials: A diffusion tensor imaging (DTI) visualization tool was developed at our institution that graphically displays the principal eigenvector as a headless arrow, using either regular or stereoscopic LCD monitors. This tool utilizes stereoscopic vision to represent the diffusion tensor's spatio‐directional information, while allowing color, the traditional tool for displaying directional information, to be used for other diffusion characteristics, such as fractional anisotropy (FA). In this tool, the principal eigenvector at each voxel, Vmax, is depicted as a headless arrow, while a color scale is used to encode the FA index. We compared: a) a grayscale FA map (GSFM), b) a color‐coded orientation map (CCOM), c) Vmax maps using a regular non‐stereoscopic display (VM), and d) Vmax maps using a stereoscopic display (VMS). A survey of clinical utility was performed by eight board‐certified neuroradiologists, using a paired‐comparison questionnaire format with forced and graded choices. Five representative cases were selected based on the typical brain tumor patient population at our institution. Results: The Vmax map was favored over traditional methods of display in most of the cases (80% vs. 10%, with 10% no preference). However, when the stereoscopic (VMS) and non‐stereoscopic (VM) modes were compared, VMS was preferred in 45% of the cases, VM in 35%, and 30% had no preference. The main reasons given for the preference of the Vmax‐based DTI visualization methods (VMS, VM) over the conventional DTI visualization methods (CCOM and GSFM) were better delineation of white matter tracts and an improved 3D anatomy effect. Conclusion: DTI data displayed by our Vmax‐based display methodology seems to be preferred over traditional display methods in tests of clinical utility.
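The two quantities the tool displays, FA (color) and the principal eigenvector Vmax (headless arrow), both come from an eigen‐decomposition of the per‐voxel diffusion tensor; a minimal sketch on a synthetic tensor:

```python
import numpy as np

def fa_and_vmax(tensor):
    """Fractional anisotropy (FA) and principal eigenvector Vmax of a 3x3
    symmetric diffusion tensor: the quantities encoded as color and as a
    headless arrow, respectively, in the display tool described above."""
    evals, evecs = np.linalg.eigh(tensor)
    md = evals.mean()                                      # mean diffusivity
    fa = np.sqrt(1.5 * ((evals - md) ** 2).sum() / (evals ** 2).sum())
    vmax = evecs[:, np.argmax(evals)]                      # principal direction
    return fa, vmax

# Strongly anisotropic synthetic tensor (units mm^2/s), long axis along x.
D = np.diag([1.7e-3, 0.3e-3, 0.3e-3])
fa, vmax = fa_and_vmax(D)
```

Because Vmax is defined only up to sign, rendering it as a headless (double‐ended) arrow, as the tool does, avoids an arbitrary orientation choice.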
SU‐EE‐A4‐02: Modeling Contrast Agent Extravasation in Dynamic Susceptibility Contrast MRI of Very Leaky Brain Tumors. Med. Phys. 35 (2008); http://dx.doi.org/10.1118/1.2961393
Purpose: In dynamic susceptibility contrast (DSC) MRI, when there is disruption of the BBB, as is frequently the case with brain tumors, contrast agent leaks out of the vasculature and causes additional T1 and T2 effects. In slightly leaky conditions, previous studies successfully modeled the T1 effect and were able to correct for it for better perfusion quantification. However, in very leaky conditions, the T2 effect can be significant and needs to be taken into account. This study proposes a two‐compartment model that is able to describe the combined T1 and T2 effects in the measured signals. Method and Materials: Our model considered different tracer residue functions for brain tissues and leaky tumors. They were then incorporated into both the T1 and T2 changes in the MR signal equation. Three unknown variables were introduced, K1, K2, and K3, with K2 directly related to the permeability. We used the model to fit measured ΔR2* curves and corrected the contrast leakage in patient data with a heavy T2 effect. Results: The proposed model was able to fit the leaky ΔR2* curves well and correct them better than the previous model. The GM/WM CBV ratios were comparable before (1.26) and after (1.19) the correction. However, the tumor/WM CBV ratio was reduced to 42.6% of its uncorrected value after the correction (2.25 vs. 5.27). The K2 map was able to delineate regions with significant contrast extravasation, whereas the previous model failed due to the additional T2 effect. Conclusion: The model proposed in this study was able to correct both the T1 and T2 effects of contrast extravasation in DSC MRI. The T2 component significantly overestimated rCBV in very leaky brain tumors, which was not considered in the previous model with the T1 effect only. In addition to better correcting the rCBV maps, our model successfully extracted the regional permeability changes of the tumors.
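The abstract's K1/K2/K3 model is not fully specified here; as a simpler illustration of leakage correction by linear fitting, the sketch below uses the widely known two‐parameter Boxerman‐Weisskoff form, a relative of (not identical to) the proposed model, on synthetic data:

```python
import numpy as np

def leakage_fit(r2s_meas, r2s_ref, dt):
    """Fit the two-parameter Boxerman-Weisskoff leakage model
        r2s_meas(t) ~= K1 * r2s_ref(t) - K2 * cumulative_integral(r2s_ref),
    where r2s_ref is an (assumed leakage-free) reference dR2* curve, and
    return the leakage-corrected curve. A simpler relative of the K1/K2/K3
    model proposed in the abstract, shown only to illustrate the fitting."""
    integral = np.cumsum(r2s_ref) * dt
    X = np.column_stack([r2s_ref, -integral])
    (k1, k2), *_ = np.linalg.lstsq(X, r2s_meas, rcond=None)
    corrected = r2s_meas + k2 * integral     # add back the leakage term
    return k1, k2, corrected

# Synthetic data: gamma-variate bolus shape plus a known leakage term.
t = np.arange(0.0, 60.0, 1.0)
ref = (t / 10.0) ** 2 * np.exp(-t / 10.0)
meas = 0.9 * ref - 0.02 * np.cumsum(ref) * 1.0
k1, k2, corrected = leakage_fit(meas, ref, dt=1.0)
```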
SU‐EE‐A4‐03: Evaluating and Understanding Relative Phase Angle Between Fat and Water and Its Effect On Fat Quantification in the Dixon Methods. Med. Phys. 35 (2008); http://dx.doi.org/10.1118/1.2961394
Purpose: The purpose of this study is to measure the phase angle α between water and fat as a function of TE and to demonstrate that uncertainties in setting the exact TE, or in knowing the exact α, can lead to variations in fat quantification by the Dixon methods. Method and Materials: Two phantoms were constructed. One consisted of half pure water solution and half soybean oil with a clear interface, which was used to measure the relative phase angle between water and fat as a function of TE. The other consisted of 7 vials of homogeneously mixed vegetable oil and distilled water with different oil/water volume ratios (0/100, 10/90, 20/80, 30/70, 40/60, 50/50, and 100/0). We used a 2D FSPGR sequence to acquire the images with the following parameters: TR = 180 ms, flip angle = 80°. TE was varied between 2.0 and 5.5 ms in 0.1 ms steps. A recently developed 2‐point Dixon algorithm was used to generate separated water and fat images. Results: We obtained the relationship between α and TE in phantom and in vivo by experiment. The least‐squares linear fits of the experimental results yield linear α–TE relationships for the phantom and in vivo, respectively. From these relationships, the real α can be easily derived. We also demonstrated that the variations of TE allowed in the clinical range may lead to variations of up to 40% in the apparent fat quantification due to the deviation of α. Conclusion: It is desirable to know the true relationship of α and TE in in‐phase and out‐of‐phase imaging. Such a relationship can guide us to select parameters that obtain the desired relative phase angles. The variations in TE may lead to variations in fat quantification. The systematic experimental studies of the TE dependence of α are expected to help improve the use of the Dixon methods and reduce fat quantification errors.
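The effect of α deviating from π (an opposed‐phase TE error) on 2‐point Dixon fat quantification can be shown in a few lines; the signal amplitudes below are arbitrary illustrative values:

```python
import numpy as np

def dixon_two_point(s_ip, s_op):
    """Classic 2-point Dixon separation: water = (IP + OP)/2,
    fat = (IP - OP)/2, which assumes the opposed-phase image was acquired
    at exactly alpha = pi."""
    return 0.5 * (s_ip + s_op), 0.5 * (s_ip - s_op)

def opposed_signal(water, fat, alpha):
    """Real part of the acquired signal W + F*exp(i*alpha): when TE is off,
    alpha drifts away from pi and the opposed-phase condition is violated."""
    return water + fat * np.cos(alpha)

w_true, f_true = 70.0, 30.0      # arbitrary water/fat amplitudes
s_ip = w_true + f_true           # ideal in-phase signal (alpha = 0)
fat_fraction = {}
for alpha in (np.pi, 0.9 * np.pi):
    w, f = dixon_two_point(s_ip, opposed_signal(w_true, f_true, alpha))
    fat_fraction[alpha] = f / (w + f)    # apparent fat fraction
```

At α = π the true fat fraction (0.30 here) is recovered; at α = 0.9π the estimate is biased low, illustrating how TE uncertainty propagates into the quantification.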
SU‐EE‐A4‐04: Accelerating Breast Dynamic Contrast Enhanced MRI with Efficient Multiple Acquisitions by SPEED Using Shared Information. Med. Phys. 35 (2008); http://dx.doi.org/10.1118/1.2961395
Purpose: The efficient multiple‐acquisition method using Skipped Phase Encoding and Edge Deghosting (SPEED) has been successfully demonstrated in a dynamic contrast‐enhanced (CE) mouse tumor study; however, it has not been tested with any human subjects. In this work, the technique is further developed to accelerate breast dynamic CE MRI. Method and Materials: In dynamic CE‐MRI, a series of images is acquired in different time frames, which contain highly similar spatial information. This strong structural similarity is used to accelerate imaging by SPEED with factors greater than those achievable with a single acquisition. It was tested in this work with an in vivo breast dynamic CE MRI study. The dynamic CE scan was performed on a GE 1.5 T system using a T1‐weighted gradient echo sequence (matrix 256×256, FOV 20 cm × 20 cm, TR = 5.5 ms, TE = 1.5 ms, flip angle = 30°, slice thickness = 4 mm, number of frames = 7). Results: Reference images are first reconstructed from full k‐space data. The corresponding deghosted images are then reconstructed from partial data, with undersampling factors of 3/5 for one frame and 2/5 for all other frames, resulting in an acceleration factor of 2. In other words, the total scan time is 3.5 times that of a single acquisition. The images reconstructed from partial data by the proposed method show quality comparable to that of the reference images. Conclusion: In this work, the technique of efficient multiple acquisitions by SPEED is further developed to accelerate breast dynamic CE MRI. By using shared spatial information, a breast dynamic CE MRI study is accelerated by SPEED with a factor of 2, which is greater than that achievable with a single acquisition. This saving can be used to double the image resolution or to increase the frame rate of the dynamic sequence.
35(2008); http://dx.doi.org/10.1118/1.2961396
Purpose: Magnetic particle imaging (MPI) was introduced in 2005 and promises to be sufficiently sensitive to allow molecular imaging. We introduce and explore numerically an alternative method of encoding position in magnetic nanoparticle imaging. The original MPI method localized the nanoparticle signal at the third‐harmonic frequency, using a strong magnetic field to saturate the nanoparticles outside of the field‐free point (FFP). We present an alternative method of encoding signal position in which the signal at the second harmonic is recorded, so that signal is produced along a magnetic field gradient. Method and Materials: The signal from 20 nm magnetic nanoparticles was simulated with a Langevin model using iron oxide properties. Response matrices were calculated for a linear gradient across the sample, and the condition number of the matrices was used as the metric for stability of the reconstruction. Results: The second‐harmonic response matrix is significantly better conditioned than the third‐harmonic one. The condition number of the response matrix used to reconstruct the spatial distribution of the signal is nearly one for ideal cases. The required field gradient increases with the number of encoded pixels but need not be as large as that required to saturate the nanoparticles outside a single voxel. For a single nanoparticle size, the response function approximates a smoothed Haar wavelet function at the smallest scale, and the best conditioning occurs when the scaling of the response functions approximates that of a dyadic wavelet basis. Conclusion: A linear gradient can be used to encode nanoparticle position using the second or third harmonic. The signal at the second harmonic is significantly larger than at the third‐harmonic frequency and the conditioning is significantly better; the combination should allow an increase in sensitivity by a factor of 2 to 8.
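The simulation pipeline described above can be sketched in a few lines: a Langevin magnetization curve, a harmonic response matrix over a 1-D pixel grid, and its condition number. Everything here is an idealized toy in arbitrary units; the pixel count, field amplitudes, and the use of a per-measurement DC bias offset are assumptions for illustration, not the authors' actual encoding scheme or particle parameters.

```python
import numpy as np

def langevin(h):
    """Langevin function L(h) = coth(h) - 1/h, with the h -> 0 limit."""
    h = np.asarray(h, dtype=float)
    small = np.abs(h) < 1e-8
    safe = np.where(small, 1.0, h)
    return np.where(small, h / 3.0, 1.0 / np.tanh(safe) - 1.0 / safe)

def response_matrix(harmonic, n_pix=8, gradient=2.0, drive=1.0, n_t=512):
    """A[i, j]: amplitude of the chosen drive harmonic for a point source
    at pixel j under measurement bias i, with a static linear gradient.
    (Bias-stepped measurements are an assumed stand-in encoding.)"""
    t = np.arange(n_t) * 2.0 * np.pi / n_t
    x = np.linspace(-1.0, 1.0, n_pix)      # pixel positions
    bias = np.linspace(-1.0, 1.0, n_pix)   # per-measurement field offsets
    A = np.empty((n_pix, n_pix))
    for i, b in enumerate(bias):
        for j, xj in enumerate(x):
            m = langevin(gradient * xj + b + drive * np.cos(t))
            A[i, j] = np.abs(np.fft.rfft(m))[harmonic] / n_t
    return A

# Compare reconstruction stability at the 2nd vs 3rd harmonic.
cond2 = np.linalg.cond(response_matrix(harmonic=2))
cond3 = np.linalg.cond(response_matrix(harmonic=3))
```

A smaller condition number means the linear inversion from measured harmonic amplitudes back to the pixel-wise nanoparticle distribution amplifies noise less, which is the stability metric the abstract uses.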
SU‐EE‐A4‐06: Estimation of the Optimal Maximum Beam Angle and Angular Increment for Normal and Shear Strain Estimation. 35(2008); http://dx.doi.org/10.1118/1.2961397
Purpose: In current ultrasound elastography, only the axial component of the displacement vector is estimated and used to produce strain images. Previously, we proposed a method to estimate both the axial and lateral components of a displacement vector using radiofrequency echo signal data acquired along multiple angular insonification directions of the ultrasound beam. In this study, we present error propagation through the least‐squares fitting process to optimize the angular increment and maximum beam‐steering angle. Method and Materials: Ultrasound simulations were performed to corroborate theoretical predictions of the optimal values for the maximum beam angle and angular increment. Beam‐steering characteristics of the linear array were simulated by selecting appropriate time delays for each element, which determine the focal point and steering angle of the beam. A uniformly elastic phantom was simulated by modeling a random distribution of 50 μm polystyrene beads with an average concentration of 9.7/mm3 in a medium (to simulate Rayleigh scattering). After computing RF signals for each insonification angle, the phantom was deformed by uniaxial compression (1% of the phantom height). The displacement of each scatterer in the phantom was calculated using the Finite Element Analysis (FEA) software ANSYS. The new scatterer positions were used to calculate the post‐compression echo signals at each insonification angle. Results: The theoretical prediction matches the numerical simulations well. For typical system parameters, the optimal maximum beam angle is around 10 deg for axial strain estimation and around 15 deg for lateral strain estimation. The optimal angular increment is around 4–6 deg, indicating that only 5–7 beam angles are required for this strain tensor estimation technique. Conclusion: The theory presented in this study is useful for choosing optimal parameters for the angular data acquisition process in strain tensor estimation.
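The core of the multi-angle approach is a least-squares fit: each steered beam measures the displacement component projected along its own direction, and the axial and lateral components are recovered from several such projections. A minimal sketch, assuming the simple projection model d(θ) = dx·sin θ + dz·cos θ (the abstract does not state its exact fitting model):

```python
import numpy as np

def displacement_from_angles(angles_deg, along_beam):
    """Least-squares recovery of (lateral, axial) displacement from
    along-beam displacement estimates at several steering angles.
    Assumed model: d(theta) = dx*sin(theta) + dz*cos(theta)."""
    th = np.deg2rad(np.asarray(angles_deg, dtype=float))
    A = np.column_stack([np.sin(th), np.cos(th)])   # design matrix
    sol, *_ = np.linalg.lstsq(A, np.asarray(along_beam, dtype=float),
                              rcond=None)
    return sol  # (dx_lateral, dz_axial)

# Synthetic check: 7 beams at 5-deg increments up to +/-15 deg, matching
# the abstract's "5-7 beam angles" regime.
angles = [-15, -10, -5, 0, 5, 10, 15]
dx_true, dz_true = 0.1, 0.5
meas = [dx_true * np.sin(np.deg2rad(a)) + dz_true * np.cos(np.deg2rad(a))
        for a in angles]
dx, dz = displacement_from_angles(angles, meas)
```

With noise-free synthetic projections the fit recovers (dx, dz) exactly; the abstract's error-propagation analysis concerns how measurement noise in `along_beam` maps into the fitted components as the angle range and increment vary.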
- Moderated Poster — Area 4 (Imaging): Nuclear and X‐ray Imaging
35(2008); http://dx.doi.org/10.1118/1.2961368
Purpose: An inherent problem in four‐dimensional (4D) PET imaging is the poor statistics in each phase, because the total coincidence events are divided into several phase bins during image acquisition. We propose in this work a simple but efficient image post‐processing method to improve 4D PET image quality. Method and Materials: In this method, all acquired coincidence events are used for each individual phase to enhance its signal‐to‐noise ratio (SNR). A summed 3D PET image is first obtained from the noisy 4D PET images; this represents the maximum achievable SNR. By deconvolving the 3D image with a deformable model derived from 4D CT, an improved 4D PET phase series can be obtained. For the best image quality, voice coaching was used to ensure a regular and consistent breathing pattern during the PET and CT scans. The method was quantitatively evaluated with numerical and physical phantom experiments. Three clinical studies of pancreatic, lung, and liver cancer patients were then carried out. Results: Numerical simulations showed that the model‐based 4D PET deconvolution method converged monotonically to the “ground truth” within a few iterations, and the SNR of the physical phantom images increased by 83% over conventional 4D PET without sacrificing spatial resolution. Similar performance was observed in the patient studies. Conclusion: We have developed a new method for improved 4D PET imaging. A salient feature of the method is that the coincidence events acquired at different time points are considered simultaneously when reconstructing each phase‐resolved image, leading to substantially improved 4D PET images.
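The idea of recovering a sharp image from a motion-summed one can be illustrated with a 1-D Richardson-Lucy deconvolution toy. This is only a stand-in: the abstract's method deconvolves with a spatially varying deformable model derived from 4D CT, whereas the sketch below assumes a shift-invariant motion kernel (three breathing positions collapsed into one blurred profile).

```python
import numpy as np

def richardson_lucy(summed, psf, n_iter=100):
    """Richardson-Lucy deconvolution of a motion-summed 1-D profile.
    `psf` is an assumed shift-invariant motion kernel (sums to 1)."""
    f = np.full_like(summed, summed.mean())   # flat positive start
    psf_flip = psf[::-1]
    for _ in range(n_iter):
        est = np.convolve(f, psf, mode="same")
        ratio = summed / np.maximum(est, 1e-12)
        f = f * np.convolve(ratio, psf_flip, mode="same")
    return f

# Toy experiment: a Gaussian "tumor" profile blurred by motion over three
# positions, then deblurred.
x = np.arange(64.0)
f_true = np.exp(-0.5 * ((x - 32.0) / 3.0) ** 2)
psf = np.array([1.0, 0.0, 1.0, 0.0, 1.0]) / 3.0   # 3 motion positions
summed = np.convolve(f_true, psf, mode="same")
recovered = richardson_lucy(summed, psf)
```

In the noiseless toy, the recovered profile lies closer to the true one than the motion-summed input does; the clinical method additionally redistributes the recovered activity into each respiratory phase via the 4D-CT deformation fields.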
SU‐DD‐A4‐02: Assessment of PET Estimated Tumor Volume by Four‐Dimensional Computed Tomography Measurements. 35(2008); http://dx.doi.org/10.1118/1.2961369
Purpose: PET imaging is increasingly playing a key role in tumor detection, staging, and radiotherapy target definition for different cancer sites. However, the use of PET delineation in treatment planning for lung cancer is hindered by breathing‐motion blur artifacts. These artifacts complicate the evaluation of the accuracy of PET segmentation algorithms. In this work, we assess PET‐estimated tumor volume by comparison with four‐dimensional computed tomography (4D‐CT) measurements under different motion conditions. Method and Materials: We analyzed datasets from six NSCLC patients who underwent pre‐treatment FDG‐PET/CT scanning and 4D‐CT simulation. The 4D‐CT data were acquired according to a ciné‐mode 4D protocol; 25 scans were collected at each couch position while the patient underwent simultaneous bellows and spirometry measurements. Volumetric datasets were rebinned to the following phases: end‐of‐exhalation, beginning‐of‐exhalation, mid‐exhalation, mid‐inhalation, and end‐of‐inhalation, in addition to computing average‐CT and MIP datasets. The tumor volume was manually contoured on nine PET and CT datasets by two physicians for each patient to provide a gold standard for comparison. Different PET segmentation algorithms based on SUV thresholding and active contours were evaluated. Motion artifacts were mitigated using a deconvolution‐based deblurring approach. Results: Our preliminary analysis indicates that manual contouring on 4D‐CT datasets produced relatively consistent results across the different phases for the two observers. Motion deblurring had varying effects on the evaluated algorithms: it increased the volume estimated by 40%‐of‐maximum SUV thresholding and reduced the volume estimated with an SUV cutoff of 2.5. The active contour model produced robust results independent of the blurring effect. Conclusion: We have investigated a 4D‐CT approach for assessing the performance of PET segmentation methods in lung cancer. Our preliminary results indicate high sensitivity for thresholding methods and better robustness for active contour models. Further investigation is required to improve accuracy relative to the 4D‐CT gold standard.
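The two SUV-thresholding rules the abstract evaluates are easy to state precisely: a relative threshold at 40% of the maximum SUV in the region, and a fixed absolute cutoff at SUV 2.5. A minimal sketch (function name and interface are my own; real pipelines operate on calibrated 3-D SUV volumes, not toy arrays):

```python
import numpy as np

def segment_suv(suv, mode="relative", rel_fraction=0.40, abs_cutoff=2.5):
    """Binary tumor mask by SUV thresholding.
    mode="relative": voxels >= rel_fraction * max(SUV) (40% rule).
    mode="absolute": voxels >= abs_cutoff (SUV 2.5 rule)."""
    suv = np.asarray(suv, dtype=float)
    thr = rel_fraction * suv.max() if mode == "relative" else abs_cutoff
    return suv >= thr

# Toy 1-D profile: background, mild uptake, two hot voxels.
profile = np.array([0.5, 1.0, 3.0, 8.0])
mask_rel = segment_suv(profile, mode="relative")   # threshold 0.4*8 = 3.2
mask_abs = segment_suv(profile, mode="absolute")   # threshold 2.5
```

The toy already shows why motion blur affects the two rules differently: blurring lowers the maximum SUV, which moves the relative threshold, while the absolute cutoff stays fixed and instead captures more of the smeared-out uptake.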