Index of content:
Volume 33, Issue 6, June 2006
- Joint Imaging/Therapy Scientific Session: Valencia A
- The John S. Laughlin Science Council Research Symposium: Multi‐Modality Image Fusion and Deformable Registration
33(2006); http://dx.doi.org/10.1118/1.2241506
Purpose: To develop an image fusion application utilizing genetic algorithms, segment the fused images using treatment‐planning contours, perform a slice‐by‐slice sub‐fusion of the segmented images in order to measure deformation, and then utilize the measured deformation for adaptive therapy on a helical tomotherapy treatment delivery.
Methods and Materials: A reference CT image of a density and spatial resolution phantom was obtained using MVCT imaging. A secondary MVCT fusion image was obtained with the phantom offset by a known amount and with plugs removed or rotated. An image fusion algorithm was created using genetic programming to perform image registration of the MVCT images. Contours of the plugs were used to extract sub‐images that were separately registered and deformed using the genetic algorithm. Adaptive therapy was achieved through a treatment delivery sinogram deformation algorithm. The sinogram deformation algorithm was tested using a geometric test case consisting of a dose triangle with a 5.2‐cm base located inside a dose circle with a 3.14‐cm radius. The test dose pattern was moved by known amounts by deforming the treatment delivery sinogram.
Results: The initial genetic fusion of the reference and secondary MVCT images was achieved in approximately 15 generations. The time required to perform the genetic fusion was typically 10 to 15 seconds. The images were fused to within 0.7 mm of the correct position. At the end of the initial fusion, the genetic algorithm correctly identified one of the plugs as missing in the secondary MVCT dataset. The genetic algorithm correctly segmented the second resolution plug in a sub‐image and deformed it to within 1 degree and 0.7 mm of the correct position. The delivery deformation tests moved the dose to within 5 mm of the desired position.
Conclusions: A genetic algorithm has been developed for performing image fusion and simple deformation of defined regions of interest.
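The abstract does not describe the genetic operators used. As a purely illustrative sketch of the general technique (not the authors' implementation), the following Python fragment evolves rigid registration parameters (translation and rotation) against a mean‐squared‐error fitness; all names, operators, and parameter values here are assumptions.

```python
import numpy as np
from scipy import ndimage

def fitness(params, ref, moving):
    """Negative MSE between the reference and the transformed moving image."""
    dy, dx, angle = params
    warped = ndimage.rotate(moving, angle, reshape=False, order=1)
    warped = ndimage.shift(warped, (dy, dx), order=1)
    return -np.mean((ref - warped) ** 2)

def genetic_register(ref, moving, pop_size=30, generations=15,
                     bounds=(-10.0, 10.0), sigma=1.0, seed=0):
    """Evolve (dy, dx, angle) individuals toward the best rigid alignment.

    Illustrative sketch only; operators and parameters are assumptions.
    """
    rng = np.random.default_rng(seed)
    pop = rng.uniform(bounds[0], bounds[1], size=(pop_size, 3))
    for _ in range(generations):
        scores = np.array([fitness(p, ref, moving) for p in pop])
        order = np.argsort(scores)[::-1]           # best individuals first
        parents = pop[order[: pop_size // 2]]      # truncation selection
        # Crossover: average two random parents, then mutate with noise.
        pairs = rng.integers(0, len(parents), size=(pop_size, 2))
        children = parents[pairs].mean(axis=1)
        children += rng.normal(0.0, sigma, size=children.shape)
        children[0] = pop[order[0]]                # elitism: keep the best
        pop = children
    scores = np.array([fitness(p, ref, moving) for p in pop])
    return pop[np.argmax(scores)]                  # best (dy, dx, angle)
```

Convergence in roughly 15 generations, as reported above, is plausible for such a low‐dimensional search space.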
33(2006); http://dx.doi.org/10.1118/1.2241507
Purpose: Inversion of a deformation field is applied frequently to map dose and regions of interest to a reference frame. A prevailing naïve approach that takes the opposite displacement of the forward deformation as the displacement of the inverse is mathematically wrong and can cause large errors for large or accumulated deformations. Inversion by “scattered data interpolation” has O(N²) complexity and is difficult to implement. We propose a simple iterative approach with O(N) complexity.
Method: Instead of calculating the inverse directly, we calculate the displacement of the inverse. The displacement of the inverse is iteratively refined through the displacement of the forward map. We prove that such an iterative scheme converges exponentially to the true solution when the deformation field satisfies a condition of the Lipschitz type. The Lipschitz‐type condition essentially states that the difference between the deformations of two points cannot be too large. This is a mild restriction on the deformation field and is usually a valid assumption for any deformable registration method with regularization.
Results: We tested the proposed method on both simulated 2D data and real 4D CT data of a lung patient. The simulations showed that the proposed method converges exponentially to the true inverse. For the real 4D CT data, the forward deformation field constructed by deformable registration mapped the test phase to the reference phase, and the inverse of that deformation field accurately mapped the reference phase back to the test phase.
Conclusions: A simple, accurate and fast method for inverting a deformation field is presented. Both the mathematical proof and the simulations showed its exponential convergence. Simulations and real data tests demonstrated its efficacy in medical image analysis and radiotherapy applications. Typically fewer than 10 iterations are needed to obtain an inverse deformation field with clinically relevant accuracy.
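For concreteness, here is a minimal Python sketch of the fixed‐point iteration described above: the inverse displacement v satisfies v(x) = −u(x + v(x)), so v is refined by resampling the forward displacement u at the currently estimated inverse‐mapped positions. The 2D array layout and the bilinear resampler are assumptions, not details from the abstract.

```python
import numpy as np
from scipy import ndimage

def invert_displacement(u, iterations=10):
    """Iteratively estimate the inverse of a 2D displacement field.

    u: forward displacement in voxels, shape (2, H, W).
    Fixed point of v(x) = -u(x + v(x)); each sweep is O(N) in voxels.
    """
    H, W = u.shape[1:]
    grid = np.mgrid[0:H, 0:W].astype(float)        # identity coordinates
    v = np.zeros_like(u)                           # initial guess v = 0
    for _ in range(iterations):
        coords = grid + v                          # positions x + v(x)
        # Sample the forward displacement at the displaced positions.
        u_at = np.stack([
            ndimage.map_coordinates(u[c], coords, order=1, mode='nearest')
            for c in range(2)
        ])
        v = -u_at                                  # v <- -u(x + v(x))
    return v
```

Each sweep visits every voxel once (O(N)), and under the Lipschitz‐type condition the error contracts geometrically, consistent with the fewer‐than‐10 iterations quoted above.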
33(2006); http://dx.doi.org/10.1118/1.2241508
Purpose: Generally, in MRI, PET, and CT image datasets, more information is available for defining the target volume (or normal structures) than is used during target segmentation. We introduce a method to take advantage of all the imaging information available for target segmentation, including multi‐modality images or multiple image sets from the same modality.
Method and Materials: We generalized the multi‐valued level set deformable model (Chan et al., JVCI (2000) 11:130–141) for simultaneous 2D/3D segmentation/registration of multi‐modality images consisting of a combination of PET, CT, or MR datasets. Information from the multi‐modality image sets is combined to define the final target volume. The method was evaluated on three patient cases: a non‐small cell lung cancer case with PET/CT, a cervix cancer case with PET/CT, and a prostate case with CT and MR.
Results: In the lung tumor case the level set algorithm took 120 iterations to converge, while in the cervix tumor case it converged after 30 iterations because the tumor has a deformed circular shape. In the prostate case it took 50 iterations to converge, and the results were more sensitive to the shape prior information because MR provides lower gradient strength than PET. The computational time was on the order of a few seconds in all cases.
Conclusion: We developed a new target segmentation algorithm which uses information simultaneously from multiple modalities. Our initial results indicate that the algorithm is promising and could provide physicians with a reliable contouring tool for lung, cervical, and prostate cancer.
This research was partially supported by NIH grant R01 CA85181 and a grant from TomoTherapy, Inc.
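The multi‐valued level set model cited above combines one region‐competition data term per modality into a single contour evolution. A heavily simplified 2D Python sketch follows (Gaussian smoothing of the level set stands in for the curvature term, and the shape prior is omitted; this is not the authors' implementation):

```python
import numpy as np
from scipy import ndimage

def multichannel_chan_vese(images, phi0, iters=100, dt=0.5, smooth=1.0):
    """Minimal multi-channel Chan-Vese sketch.

    images: co-registered 2D arrays (e.g. PET and CT) scaled to [0, 1].
    phi0: signed initial level set containing both signs; the zero
          level set is the contour.
    """
    phi = phi0.astype(float).copy()
    for _ in range(iters):
        inside = phi > 0
        force = np.zeros_like(phi)
        for img in images:
            c_in = img[inside].mean()              # mean intensity inside
            c_out = img[~inside].mean()            # mean intensity outside
            # Region competition: push phi up where the pixel matches
            # the inside mean better than the outside mean.
            force += (img - c_out) ** 2 - (img - c_in) ** 2
        phi += dt * force / len(images)
        # Gaussian smoothing as a crude surrogate for the curvature term.
        phi = ndimage.gaussian_filter(phi, smooth)
    return phi > 0                                  # segmentation mask
```

Averaging the data terms over modalities is one simple way to let a strong PET gradient compensate for a weak MR or CT gradient, in the spirit of the combined definition of the target volume described above.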
33(2006); http://dx.doi.org/10.1118/1.2241509
Purpose: Existing methods for deformable image registration typically use homogeneous regularization to encourage global smoothness. Less work has been done to incorporate voxel‐level tissue‐specific elasticity information. Ignoring differences in elasticity can, however, result in non‐physiological registrations, such as bone warping. We propose an approach to incorporate tissue rigidity information using spatially variant regularization.
Method and Materials: Regularized image registration algorithms estimate the deformation by minimizing a cost function consisting of a dissimilarity metric and a regularization term. To account for tissue‐type‐dependent rigidity information, we incorporate into the cost function a non‐rigidity penalty: an integral of a stiffness index for local deformation, weighted by a spatially variant regularization factor depending on tissue type. For CT data, a simple monotonically increasing function of the CT number is used as a rigidity index for the local tissue type. A necessary and sufficient condition for stiff local deformation is derived, and local non‐stiffness is measured by the deviation of the local Jacobian from orthonormality using the Frobenius norm. Tensor B‐splines are used to parameterize the deformation field. A multi‐resolution scheme and a gradient‐based approach are applied for optimization. Performance was assessed by registering 3D thorax CT images obtained from different breathing phases.
Results: Experiments with clinical data demonstrate higher accuracy for inhale‐exhale thorax CT registration with the proposed approach. We observe intensity matching comparable to unregularized approaches, but more physiologically reasonable results with respect to different tissue types; in particular, bone warping is generally eliminated.
Conclusion: This work provides a way to incorporate tissue‐type‐dependent information into a deformable registration framework through regularization design. Inferring tissue type from image intensity avoids explicit segmentation and is robust to partial volume effects. Our formulation based on the local Jacobian and the Frobenius norm provides analytical expressions for the regularization and its derivative. More physiological results are achieved at minor computational expense.
Supported by NIH P01‐CA59827.
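The penalty described above can be made concrete: form the Jacobian J of the mapping x → x + u(x), measure non‐stiffness as the squared Frobenius norm of JᵀJ − I (zero exactly when J is orthonormal, i.e. a local rotation), and weight it by a monotonic function of the CT number. The following 2D Python sketch uses an assumed linear HU ramp in place of the authors' unspecified rigidity index:

```python
import numpy as np

def rigidity_penalty(u, ct, hu_soft=100.0, hu_bone=300.0):
    """Spatially weighted non-rigidity penalty for a 2D displacement field.

    u: displacement field, shape (2, H, W).
    ct: CT numbers (HU), shape (H, W); the weight rises monotonically
        from soft tissue to bone (assumed ramp, not the authors' function).
    Returns sum over voxels of w(x) * ||J(x)^T J(x) - I||_F^2.
    """
    H, W = ct.shape
    # Jacobian of the deformation: identity plus displacement gradients.
    duy_dy, duy_dx = np.gradient(u[0])
    dux_dy, dux_dx = np.gradient(u[1])
    J = np.empty((H, W, 2, 2))
    J[..., 0, 0] = 1.0 + duy_dy
    J[..., 0, 1] = duy_dx
    J[..., 1, 0] = dux_dy
    J[..., 1, 1] = 1.0 + dux_dx
    # Deviation of J^T J from the identity (zero iff J is orthonormal).
    JtJ = np.einsum('...ki,...kj->...ij', J, J)
    dev = JtJ - np.eye(2)
    nonrigid = (dev ** 2).sum(axis=(-2, -1))       # squared Frobenius norm
    # Monotonic rigidity weight: 0 for soft tissue, 1 for bone.
    w = np.clip((ct - hu_soft) / (hu_bone - hu_soft), 0.0, 1.0)
    return float((w * nonrigid).sum())
```

Because the penalty is a smooth function of the displacement gradients, its derivative with respect to B‐spline coefficients is available in closed form, which is what makes the gradient‐based optimization mentioned above practical.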
TU‐C‐ValA‐05: Assessment of a Model‐Based Deformable Image Registration Approach for Radiotherapy Planning
33(2006); http://dx.doi.org/10.1118/1.2241510
Purpose: To assess the accuracy of a surface‐based deformable image registration strategy as a function of the elasticity model for the integration of multi‐modality imaging, image‐guided radiation therapy, and quantification of geometrical change during and following therapy.
Method and Materials: A surface‐model‐based deformable image registration system has been developed that enables quantitative description of geometrical change in multi‐modal images. Based on the deformation of organ surfaces represented by triangular surface meshes, a volumetric deformation field is derived using different volumetric elasticity models (Thin‐Plate Splines, Wendland functions, Elastic Body Splines) as alternatives to finite‐element modeling.
Results: The system was demonstrated on five liver cancer patients, ten prostate cancer patients, the thorax in five healthy volunteers, and the abdomen in five healthy volunteers. The accuracy of the system was assessed by tracking visible fiducials (bronchial bifurcations in the lung, vessel bifurcations in the liver, implanted gold markers in the prostate). The maximum displacements for lung, liver and prostate were 5.3 cm, 3.2 cm, and 1.8 cm respectively. The largest registration errors (direction, mean ± standard deviation) for lung, liver and prostate were (inferior‐superior, −0.21 ± 0.38 cm), (anterior‐posterior, −0.09 ± 0.34 cm), and (left‐right, 0.04 ± 0.38 cm) respectively, which was within the image resolution regardless of the deformation model. The computation time (2.7 GHz Intel Xeon) was on the order of seconds (e.g. 10 seconds for two prostate data sets), and image deformation results could be viewed at interactive speed (less than 1 second for 512×512 voxels).
Conclusion: Surface‐based deformable image registration enables the quantification of geometrical change in normal tissue and tumor with acceptable accuracy and speed.
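To illustrate the surface‐to‐volume step with one of the listed elasticity models, a thin‐plate spline can extend displacements known at mesh vertices to arbitrary voxel positions. A minimal Python sketch using SciPy's RBF interpolator as an assumed stand‐in for the authors' implementation:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def volumetric_field_from_surfaces(src_pts, dst_pts, query_pts):
    """Extend mesh-vertex displacements to a volumetric field with a
    thin-plate spline (one interpolant covering all three components).

    src_pts, dst_pts: (N, 3) corresponding surface-mesh vertices before
                      and after deformation.
    query_pts: (M, 3) voxel centers where the field is evaluated.
    """
    disp = dst_pts - src_pts                  # known surface displacements
    tps = RBFInterpolator(src_pts, disp, kernel='thin_plate_spline')
    return tps(query_pts)                     # (M, 3) interpolated field
```

Because the interpolant is fit once to the mesh vertices and then merely evaluated at voxel centers, per‐voxel evaluation is cheap, which is consistent with the interactive display speeds reported above.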
TU‐C‐ValA‐06: Quantifying the Properties and Accuracy of a Deformable Image Registration Algorithm for 4D Treatment Planning
33(2006); http://dx.doi.org/10.1118/1.2241511
Purpose: Deformable image registration (DIR) is a necessary tool for automated four‐dimensional and adaptive radiotherapy planning. The purpose of the current study was to quantify the accuracy of a DIR algorithm by comparing automatically transferred and manually segmented structures on 4DCT images.
Method and Materials: 780 structures were manually segmented on thirteen patient 4DCT image sets, each consisting of 10 respiratory phases. A large‐deformation diffeomorphic DIR algorithm, integrated with a commercial treatment planning system, was used to map the inspiration respiratory phase CT image set to the other respiratory phase image sets. The calculated displacement vector fields were used to deform and transfer structures defined on the inspiration CT to the other respiratory phase CT image sets. The manually and automatically segmented structures were compared using volumetric, displacement, and surface congruence metrics.
Results: Deformation with respiration was observed for the lung tumor and normal tissues. This deformation was verified by examining the mapping of high‐contrast objects, such as the lungs and cord, between image sets. The auto and manual methods showed similar trends, with a smaller difference observed between the GTVs than between other structures. The auto‐contoured structures were more consistent, both in terms of centroid displacement and volume as a function of respiratory phase, than the manual contours. In 1.6% of cases, deficiencies in the manual contours were detected using auto‐contouring. Image artifacts play a crucial role in auto‐contouring.
Conclusion: An automated system was established to propagate contours from one 4DCT respiratory phase to the other phases. The auto‐contoured structures generally agree with the manually drawn structures. However, the auto‐contoured structures are more consistent in trajectory and volume, and they also highlighted some large errors in the manually drawn contours. Careful assessment is needed in the presence of 4DCT artifacts.
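The abstract does not name the exact metrics, but volumetric and displacement comparisons of this kind are commonly the Dice overlap and the centroid shift between auto‐ and manually contoured masks; the following Python sketch is illustrative only, not the study's evaluation code:

```python
import numpy as np

def dice(a, b):
    """Dice overlap between two binary masks (1.0 = identical)."""
    a, b = a.astype(bool), b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def centroid_shift(a, b, spacing=(1.0, 1.0, 1.0)):
    """Distance (in mm, given voxel spacing) between mask centroids."""
    ca = np.array(np.nonzero(a)).mean(axis=1) * np.asarray(spacing)
    cb = np.array(np.nonzero(b)).mean(axis=1) * np.asarray(spacing)
    return float(np.linalg.norm(ca - cb))
```

Tracking these two numbers as a function of respiratory phase is one simple way to quantify the consistency of contour trajectory and volume discussed above.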
33(2006); http://dx.doi.org/10.1118/1.2241512
Purpose: 4D imaging techniques such as 4D‐CT/MRI/PET reveal spatial and temporal details of the patient's anatomy. Here we develop a 4D‐4D registration method to utilize 4D data acquired under different conditions or using different modalities.
Method: A 4D input (model or reference) consists of a number of 3D image sets, each representing the patient's anatomy at a phase point. When the patient's breathing pattern is repeatable, the task of 4D‐4D matching is to find the appropriate 3D dataset in the model input for each phase in the reference. Instead of exhaustively searching for the best match for each phase, a search algorithm was implemented that simultaneously finds the matches for all phases with consideration of the temporal relationship between the 3D image sets in the inputs. An interpolation scheme capable of deriving an image set from two temporally adjacent 3D datasets was implemented to deal with the situation where the discrete temporal points of the two inputs do not coincide. Digital phantom and patient studies were performed to illustrate the inter‐/intra‐modality 4D‐4D registration technique.
Results: In the phantom study, where the optimal match is known, the proposed technique was able to reproduce the “ground truth” with high spatial fidelity (<1.5 mm). In addition, because of the temporal interpolation, the technique regenerated all deliberately introduced “missing” 3D images at different phase points in one of the inputs. In a registration of gated MRI and 4DCT, the technique enabled us to optimally select the corresponding CT phase. The technique was also found useful for the registration of two sets of 4DCTs acquired at different time points. In this situation, a spatial accuracy of better than 2.5 mm was achieved in all three cases.
Conclusions: Automated 4D‐4D registration can find the best possible spatio‐temporal match between two 4D datasets and may have significant implications for IGRT.
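As an illustration of the search described above (the similarity measure and search strategy are assumptions, not details from the abstract), each reference phase can be matched to a fractional model phase, with linear intensity interpolation between temporally adjacent 3D sets and a monotonicity constraint that preserves temporal order:

```python
import numpy as np

def interp_phase(vol_a, vol_b, t):
    """Linear intensity interpolation between adjacent phases
    (a stand-in for the interpolation scheme described above)."""
    return (1.0 - t) * vol_a + t * vol_b

def match_phases(model, reference, steps=10):
    """Match each reference phase to a fractional model phase.

    model, reference: lists of co-registered 3D numpy arrays.
    Minimizes mean squared intensity difference while enforcing that
    the matched phase index never decreases (temporal consistency).
    Returns a list of fractional model-phase indices.
    """
    matches, last = [], 0.0
    # Candidate fractional positions along the model phase axis.
    candidates = np.linspace(0, len(model) - 1, (len(model) - 1) * steps + 1)
    for ref in reference:
        best, best_err = last, np.inf
        for s in candidates[candidates >= last]:
            i, t = int(np.floor(s)), s - np.floor(s)
            vol = model[i] if t == 0 else interp_phase(model[i], model[i + 1], t)
            err = np.mean((ref - vol) ** 2)
            if err < best_err:
                best, best_err = s, err
        matches.append(best)
        last = best
    return matches
```

Evaluating fractional phases is what allows a "missing" 3D image in one input to be regenerated from its temporal neighbors, as reported in the phantom study above.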