Nonlocal atlas-guided multi-channel forest learning for human brain labeling
Labeling meaningful anatomical regions in MR brain images is important for many quantitative brain studies. However, due to the high complexity of brain structures and the ambiguous boundaries between different anatomical regions, anatomical labeling of MR brain images remains a challenging task. Many existing label fusion methods rely heavily on appearance information. However, since local anatomy in the human brain is often complex, appearance information alone is limited in characterizing each image point, especially for identifying the same anatomical structure across different subjects. Recent progress in computer vision suggests that context features can be very useful in identifying an object in a complex scene. In light of this, the authors propose a novel learning-based label fusion method that uses both low-level appearance features (computed from the target image) and high-level context features (computed from warped atlases or tentative labeling maps of the target image).
In particular, the authors employ a multi-channel random forest to learn the nonlinear relationship between these hybrid features and the target labels (i.e., corresponding to certain anatomical structures). Specifically, at each iteration, the random forest outputs tentative labeling maps of the target image, from which the authors compute spatial label context features and then use them in combination with the original appearance features of the target image to refine the labeling. Moreover, to accommodate high inter-subject variability, the authors further extend their learning-based label fusion to a multi-atlas scenario, i.e., they train a random forest for each atlas and then obtain the final labeling result from the consensus of results across all atlases.
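The iterative refinement described above follows the auto-context idea: at each round, the classifier's tentative label probabilities are appended to the appearance features and the classifier is refit. The sketch below illustrates this loop on synthetic per-voxel data; the `NearestMeanForestStub` classifier and all array shapes are hypothetical stand-ins for the authors' multi-channel random forest, not their actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for per-voxel appearance features (e.g., patch intensities)
# and ground-truth labels; real inputs would come from the target image
# and the warped atlases (shapes are illustrative only).
n_voxels, n_appearance, n_labels = 200, 4, 3
X_app = rng.normal(size=(n_voxels, n_appearance))
y = rng.integers(0, n_labels, size=n_voxels)
X_app[:, 0] += y  # make appearance weakly informative about the label

class NearestMeanForestStub:
    """Minimal stand-in for the multi-channel random forest:
    classifies each voxel by distance to per-class mean feature vectors."""
    def fit(self, X, y):
        self.means_ = np.stack([X[y == k].mean(axis=0) for k in range(n_labels)])
        return self
    def predict_proba(self, X):
        d = ((X[:, None, :] - self.means_[None]) ** 2).sum(axis=-1)
        p = np.exp(-d)                        # soft scores from distances
        return p / p.sum(axis=1, keepdims=True)

# Auto-context-style iterations: each round appends the tentative label
# probabilities as context features and refits the classifier.
X = X_app
for _ in range(3):
    clf = NearestMeanForestStub().fit(X, y)
    proba = clf.predict_proba(X)              # tentative labeling map
    X = np.hstack([X_app, proba])             # appearance + label-context

labels = proba.argmax(axis=1)
print("training accuracy:", (labels == y).mean())
```

In the multi-atlas extension, one such classifier would be trained per atlas and the final label taken from the consensus (e.g., averaged probabilities) across atlases.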
The authors have comprehensively evaluated their method on the public LONI LPBA40 and IXI datasets. To quantify labeling accuracy, they use the Dice similarity coefficient to measure the degree of overlap. Their method achieves average overlaps of 82.56% on 54 regions of interest (ROIs) and 79.78% on 80 ROIs, respectively, significantly outperforming the baseline method (random forests), which achieves average overlaps of 72.48% on 54 ROIs and 72.09% on 80 ROIs, respectively.
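The Dice similarity coefficient used for evaluation is 2|A ∩ B| / (|A| + |B|) for a given label, computed per ROI. A minimal sketch (the toy label arrays are illustrative, not from the paper's data):

```python
import numpy as np

def dice_coefficient(seg: np.ndarray, truth: np.ndarray, label: int) -> float:
    """Dice similarity coefficient for one label: 2|A∩B| / (|A| + |B|)."""
    a = seg == label
    b = truth == label
    denom = a.sum() + b.sum()
    # Convention: if the label is absent from both maps, count it as perfect.
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

# Toy 1-D "labelings": label 1 overlaps in 2 voxels, |A| = 3, |B| = 2.
seg = np.array([1, 1, 1, 0])
truth = np.array([1, 1, 0, 0])
print(dice_coefficient(seg, truth, 1))  # 2*2 / (3+2) = 0.8
```

In practice this would be computed per ROI on the 3-D label volumes and averaged across the 54 or 80 ROIs.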
The proposed method achieves the highest labeling accuracy compared with several state-of-the-art methods in the literature.