Training models of anatomic shape variability


FIG. 1.

Purely intensity-driven registration can generate noncredible shapes when pulling daily images back to the atlas for IGRT/ART. Here the boundary of the prostate segmentation has drifted into the rectum during an intensity-driven registration. Such segmentations are not allowed by our shape-based training (see Fig. 16).

FIG. 2.

The process of training and using probability distributions on object shapes in clinical applications. The step outlined in bold, fitting training images with parametric models as input to statistical analysis, is the main subject of this article.

FIG. 3.

Fitting a statistical deformable model to a target training image. (Top) 3D surface views and (bottom) single sagittal slice views of bladder template geometry (left) coarsely aligned to a target training image, (middle) deformably fit to that image, and (right) in the context of the actual grayscale data.

FIG. 4.

A detailed view of our method for the training step shown in Fig. 2. Shapes are fit to the training images iteratively according to a binary image segmentation algorithm, with purely geometric terms relaxed in favor of converging group statistics. See the Appendix for a summary of the discrete medial representation and principal geodesic analysis.
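The iterative loop sketched in Fig. 4 alternates between fitting each model to its training image and re-estimating group statistics, with the geometric regularization relaxed as the statistics converge. A minimal flat-space sketch of that alternation, with all function and parameter names hypothetical (the actual method fits m-reps with a binary segmentation objective):

```python
import numpy as np

def train_shape_models(targets, template, n_rounds=3):
    """Toy sketch of iterative shape training (Fig. 4; names hypothetical).

    Each round: (1) fit every model to its training image, (2) re-estimate
    group statistics (here just the mean shape), (3) pull each fit toward
    the statistics, with the pull fading as the rounds proceed -- a stand-in
    for relaxing geometric terms in favor of converging group statistics.
    """
    models = [np.asarray(template, dtype=float).copy() for _ in targets]
    for r in range(n_rounds):
        # (1) image-fit step: move each model toward its target shape
        models = [m + 0.8 * (t - m) for m, t in zip(models, targets)]
        # (2) group statistics: the mean of the current fits
        mean = np.mean(models, axis=0)
        # (3) statistical prior, with a weight that decays over rounds
        w = 0.2 * (n_rounds - 1 - r) / max(1, n_rounds - 1)
        models = [m + w * (mean - m) for m in models]
    return models, np.mean(models, axis=0)
```

The decaying weight is one plausible schedule; the article's method instead replaces geometric penalties with probability under the trained modes once those modes stabilize.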

FIG. 5.

An image landmark identified at the tip of a segmentation with (left) large tolerance and (right) a tighter tolerance.

FIG. 6.

A medial mesh (thick lines) and implied surface (thin lines) with a (left) high nonuniformity penalty and (right) low nonuniformity penalty. Meshes with high irregularity may imply surfaces similar to those of more regular meshes, but can produce qualitatively inferior results and break our volumetric correspondence assumptions.

FIG. 7.

A slice from a distance map and a suboptimally fit surface, illustrating disagreement between the two direction fields: the light gray lines show the distance-map gradient direction; dark gray lines show the surface normal direction.
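A natural penalty for the mismatch in Fig. 7 scores how far each surface normal deviates from the local distance-map gradient. This is a minimal sketch of one plausible form of such a penalty (the specific functional form is an assumption, not taken from the article):

```python
import numpy as np

def normal_alignment_penalty(grad_dirs, normals):
    """Mean misalignment between distance-map gradient directions and
    surface normals (cf. Fig. 7). Both inputs are (N, 3) arrays of unit
    vectors; returns mean of (1 - cos angle), 0 when perfectly aligned.
    """
    cos = np.sum(grad_dirs * normals, axis=1)  # per-point dot products
    return float(np.mean(1.0 - cos))
```

A well-fit surface drives this toward 0; a surface whose normals oppose the gradient scores close to 2.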

FIG. 8.

Two candidate model meshes compared to the tiled surface of a segmentation of the thin masseter muscle in the neck. The mesh on the right has been fit naïvely; the better-fitting mesh on the left has been fit to a dilated image and then contracted.

FIG. 9.

A mean bladder (darker upper object) and its first two principal modes of deformation relative to the mean prostate (lighter lower object). These two modes of deformation together cover over 65% of the shape variability across the 18 images of this patient.
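The principal modes in Fig. 9 come from principal geodesic analysis on the medial representations (see the Appendix). For shapes in a flat feature space, PGA reduces to ordinary PCA, which can be sketched as follows (a stand-in, not the article's actual PGA computation):

```python
import numpy as np

def principal_modes(shapes, n_modes=2):
    """PCA on vectorized training shapes: a flat-space approximation to
    the principal geodesic analysis used on m-reps. Returns the mean,
    the first n_modes modes, and the fraction of variability each covers.
    """
    X = np.asarray(shapes, dtype=float)            # (n_shapes, n_features)
    mean = X.mean(axis=0)
    _, s, vt = np.linalg.svd(X - mean, full_matrices=False)
    var = s**2 / (len(X) - 1)                      # variance per mode
    frac = var / var.sum()                         # fraction of variability
    return mean, vt[:n_modes], frac[:n_modes]
```

Summing `frac` over the first two modes gives the kind of coverage figure quoted for Fig. 9 (over 65% across the patient's 18 images).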

FIG. 10.

Tiled surfaces from two procedurally generated warped ellipsoid test objects showing bending and tapering.

FIG. 11.

Relationship between the boundary voxels of an ellipsoid binary training image (gray) and the fitted model’s surface (black) through a transaxial slice.

FIG. 12.

Histogram of average and max distances for warped ellipsoid models over 20 training cases. The two outliers were excluded from the first-round statistics, but were successfully fit in the next round using the recovered statistical modes of deformation.
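The average and max distances histogrammed in Figs. 12 and 14 measure how far the fitted model surface lies from the training-image boundary. A brute-force point-set sketch (assuming the surface and boundary are given as point samples; a real implementation would use a distance map or k-d tree):

```python
import numpy as np

def surface_distances(model_pts, boundary_pts):
    """Average and max distance from fitted-surface sample points to the
    nearest training-image boundary voxel (as in Figs. 12 and 14).
    Both inputs are (N, 3) arrays of point coordinates.
    """
    # all pairwise distances, then the nearest boundary point per sample
    d = np.linalg.norm(model_pts[:, None, :] - boundary_pts[None, :, :],
                       axis=2)
    nearest = d.min(axis=1)
    return float(nearest.mean()), float(nearest.max())
```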

FIG. 13.

As we would expect, the first two principal modes of deformation trained from 20 bent, twisted, and tapered ellipsoids reflect bending and tapering.

FIG. 14.

Histogram of average and max distances for fit bladder, prostate, and rectum models over 328 training cases pooled from 25 sets of same-patient interfractional images.

FIG. 15.

Multi-object shape models. (Left) A 15-object complex of structures from the head and neck. (Right) Deep brain structures from an autism study: left and right hippocampus, amygdala, putamen, caudate, and globus pallidus.

FIG. 16.

Manual (white) and computed (black) segmentations of the prostate in a treatment image; transaxial slice on the left, sagittal slice on the right. The segmentation computed using the shape training described in this article agrees with the manual segmentation for this day with an 89% int/ave volume overlap.
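The int/ave overlap quoted for Fig. 16 is the intersection volume divided by the average of the two segmentation volumes, which is algebraically the same as the Dice coefficient. A minimal sketch on binary voxel masks:

```python
import numpy as np

def int_ave_overlap(seg_a, seg_b):
    """Intersection-over-average volume overlap between two binary
    segmentations: |A & B| / ((|A| + |B|) / 2), i.e. the Dice coefficient.
    """
    a = np.asarray(seg_a, dtype=bool)
    b = np.asarray(seg_b, dtype=bool)
    inter = np.logical_and(a, b).sum()
    avg = 0.5 * (a.sum() + b.sum())
    return float(inter / avg)
```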

FIG. 17.

Discrete medial representations. (Left) A medial sample with two equal length spokes that touch opposing surface patches. (Mid Left) A sampled skeletal sheet for a kidney with neighbor relations marked. (Middle) Spokes at each medial sample describe the orientation of the implied surface at that hub. (Mid Right) A densely sampled surface can be interpolated from the medial samples. (Right) A prostate model with subfigures defined for the left and right seminal vesicles.



Algorithm 1: Iteratively Training Models of Shape Variability

