In external beam radiation therapy, digitally reconstructed radiographs (DRRs) and portal images are used to verify patient setup, based either on visual comparison or, less frequently, on automated registration algorithms. A registration algorithm can be trapped in local optima by irregular patient anatomy, image noise and artifacts, and/or out-of-plane shifts, yielding an incorrect solution. Thus, human observation, which is subjective, is still required to check the registration result. We propose a novel image registration quality evaluator (RQE) to automatically identify misregistrations as part of an algorithm-based decision-making process for verification of patient positioning. An RQE, based on an adaptive pattern classifier, is generated from a pair of reference and target images to determine the acceptability of a registration solution given an optimization process. Here we applied our RQE to patient positioning for cranial radiation therapy. We constructed two RQEs: one for the evaluation of intramodal registrations (i.e., portal-portal) and the other for intermodal registrations (i.e., portal-DRR). Mutual information, because of its high discriminatory ability compared with other measures (i.e., correlation coefficient and partitioned intensity uniformity), was chosen as the test function for both RQEs. We adopted translation and 1° rotation as the maximal acceptable registration errors, reflecting desirable clinical setup tolerances for cranial radiation therapy. Receiver operating characteristic analysis, including computations of sensitivity and specificity, was used to evaluate RQE performance. The RQEs performed very well for both intramodal and intermodal registrations using simulated and phantom data: with phantom data, the sensitivity and specificity were 0.973 and 0.936, respectively, for the intramodal RQE, and 0.961 and 0.758, respectively, for the intermodal RQE. Phantom experiments also indicated that our RQEs detected out-of-plane deviations exceeding and 2.5°. A preliminary retrospective clinical study of the RQE on cranial portal imaging also yielded good sensitivity and specificity. Clinical implementation of an RQE could potentially reduce the involvement of the human observer in routine patient-positioning verification, while increasing setup accuracy and reducing setup verification time.
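As a minimal illustration of the test function named in the abstract, the sketch below estimates mutual information between two grayscale images from their joint intensity histogram. This is a generic histogram-based MI estimator, not the authors' implementation; the function name, bin count, and the synthetic test image are assumptions for illustration only.

```python
import numpy as np

def mutual_information(a, b, bins=64):
    """Estimate mutual information (in nats) between two equally sized
    grayscale images from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                # joint probability p(x, y)
    px = pxy.sum(axis=1, keepdims=True)      # marginal p(x), shape (bins, 1)
    py = pxy.sum(axis=0, keepdims=True)      # marginal p(y), shape (1, bins)
    nz = pxy > 0                             # skip empty bins to avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

# A perfectly registered pair (identical images) maximizes MI, while a
# translational misregistration (here, a 5-pixel circular shift on a
# synthetic random image) lowers it -- the behavior an RQE exploits.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(128, 128)).astype(float)
mi_aligned = mutual_information(img, img)
mi_shifted = mutual_information(img, np.roll(img, 5, axis=1))
```

In practice the registration cost function evaluates MI over candidate transformations; the RQE described here instead uses the measure's behavior around a solution to classify it as acceptable or not.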
The authors would like to thank David Sobczak and Xiaofei Ying of St. Jude Children’s Research Hospital for their invaluable assistance in EPID and CT data collection, and DRR generation, respectively. This work was supported by a grant from the National Cancer Institute (No. R29 CA76061).
II.A. Similarity measures
II.A.1. Correlation coefficient (CC)
II.A.2. Mutual information (MI)
II.A.3. Normalized mutual information (NMI)
II.A.4. Partitioned intensity uniformity (PIU)
II.B. Registration quality evaluator (RQE) construction
II.B.1. Training data generation
II.B.2. Supervised classification of the training dataset
II.B.3. Decision boundary calculation
III. EXPERIMENT DESIGN
III.A. Phantom experiment on intramodal RQE
III.B. Phantom experiment on intermodal RQE
III.C. Phantom experiment with simulated out-of-plane deviations
III.D. Retrospective intermodal RQE study using clinical data
IV.A. Choice of similarity measures
IV.B. RQE construction
IV.C. Performance of RQE with in-plane deviations in phantom experiments
IV.D. Performance of RQE in presence of out-of-plane deviations
IV.E. Performance of RQEs in preliminary retrospective clinical study
V.A. Systematic error
V.C. Attraction of similarity measures in neighborhood of global maximum
V.D. RQE performance for phantom imaging versus computer simulations
V.E. Out-of-plane deviations
V.F. Preliminary retrospective clinical study