Interactive initialization of 2D/3D rigid registration
DOI: 10.1118/1.4830428

Figures

FIG. 1.

Setup of the gesture-based approach. (Left) Kinect device and computer screen displaying the graphical user interface and interaction panel. (Right) User performing a grip gesture.

FIG. 2.

(Left) Graphical user interface for gesture-based 2D/3D initialization. (Right) Interaction panel indicating the control mode, and cursors indicating the current gesture classification (unknown, release, grip). The four primary panes control orientation and translation. The bottom left pane and left sidebar indicate the operator's hand location in the Kinect's workspace. The top left pane toggles between the x-ray images.

FIG. 3.

Calculation of the pose update to the 3D data for a user interaction. The frames shown are the world coordinate frame, the coordinate frames of the reference x-ray at its current and new positions, and the coordinate frames of the 3D data at its current and new positions. The dashed line marks the unknown transformation.
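
As a rough illustration of the composition this figure implies (a sketch under an assumed convention; the frame symbols are placeholders, not the paper's notation): if the reference x-ray frame moves from its current to its new position during the interaction and the 3D data is to keep its pose relative to that x-ray, then with $T^{W}_{X}$ and $T^{W}_{X'}$ the x-ray poses in the world frame and $T^{W}_{V}$ the current pose of the 3D data, the unknown new pose of the 3D data would be

    T^{W}_{V'} \;=\; T^{W}_{X'}\,\bigl(T^{W}_{X}\bigr)^{-1}\,T^{W}_{V}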

FIG. 4.

Data acquisition steps shown from left to right: original depth map, where lighter colors are closer and darker colors are farther away, with the joint positions visualized as a skeleton overlaid onto the depth map; ROI surrounding the right hand; and downsampled and normalized depth map within the ROI.
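
A minimal sketch of the kind of preprocessing this figure depicts, with assumed names and sizes (the depth map array, the tracked hand position, and the ROI/output resolutions below are illustrative choices, not the authors' values):

    import numpy as np

    def hand_patch(depth, hand_xy, roi=128, out=32):
        """Crop a square ROI around the tracked hand joint of a depth map,
        then downsample and normalize it for a gesture classifier."""
        h, w = depth.shape
        x, y = hand_xy                                # hand joint, pixel coordinates
        half = roi // 2
        x0, x1 = max(x - half, 0), min(x + half, w)
        y0, y1 = max(y - half, 0), min(y + half, h)
        patch = depth[y0:y1, x0:x1].astype(np.float32)
        # Downsample by index sampling onto an out-by-out grid.
        rows = np.linspace(0, patch.shape[0] - 1, out).astype(int)
        cols = np.linspace(0, patch.shape[1] - 1, out).astype(int)
        small = patch[np.ix_(rows, cols)]
        # Normalize valid (nonzero) depths to [0, 1] within the ROI.
        valid = small[small > 0]
        if valid.size == 0:
            return np.zeros((out, out), dtype=np.float32)
        lo, hi = float(valid.min()), float(valid.max())
        return np.clip((small - lo) / max(hi - lo, 1e-6), 0.0, 1.0)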

FIG. 5.

Variability of hand sizes used for training the classifier.

FIG. 6.

(a) The user attempts to mimic, in the physical world, the volume-tool pose defined in the virtual world; that is, the tool is positioned in the physical world so that the patient-tool pose matches the planned volume-tool pose. When this happens, the volume and patient coordinate frames coincide, and the unknown volume-to-patient transformation can be computed by composing the known volume-tool transformation with the tracked patient-tool transformation. (b) Coordinate systems involved in the initialization; solid lines denote known transformations, dashed lines denote transformations to be determined.
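
A rough sketch of the composition described in (a), written with placeholder frame names rather than the paper's notation: once the mimicked pose is reached,

    T_{\mathrm{patient}\leftarrow\mathrm{volume}} \;=\; T_{\mathrm{patient}\leftarrow\mathrm{tool}} \, T_{\mathrm{tool}\leftarrow\mathrm{volume}}

where the first factor comes from the tracker and the second from the tool pose planned relative to the volume in the virtual world.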

FIG. 7.

Graphical user interfaces for (a) pose planning in virtual world and (b) pose mimicking in physical world.

FIG. 8.

Reference data sets used in this study. The x-ray images for the ISI and Vienna data sets were provided in the anterior-posterior and lateral views. The Ljubljana data set provided 18 evenly spaced x-ray images around the spinal cord, and we selected two images (000 and 006) for our study. The original x-ray images for this data set had low contrast, so they were thresholded and enhanced for better visualization. The right two columns show the corresponding 2D renderings of the CT and MR images at their ground truth positions, illustrating the differences between the x-ray images and renderings of the 3D data. The physical dimensions of the x-ray, CT, and MR images are shown underneath the images for each data set.

FIG. 9.

(a) Transformations involved in the validation of the AR-based initialization approach. The tracker's coordinate system is the common/world coordinate system used by the reference data set. (b) Definition of the reference transformation. The transformation between the patient (bottom) and patient DRF (top) coordinate systems was used as the reference for our experiments.

FIG. 10.

Experimental results for gesture-based initialization. (Left column) Distribution of mTRE per operator, and (right column) distribution of interaction time per operator.
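
For reference, a minimal sketch of how mean target registration error (mTRE) is commonly computed; the function name, arguments, and choice of target points are illustrative, not the study's implementation:

    import numpy as np

    def mtre(T_est, T_ref, targets):
        """Mean distance between target points mapped by the estimated and the
        reference (ground-truth) rigid transforms.
        T_est, T_ref: 4x4 homogeneous matrices; targets: (N, 3) array of points."""
        pts = np.hstack([targets, np.ones((len(targets), 1))])   # to homogeneous
        diff = (pts @ T_est.T)[:, :3] - (pts @ T_ref.T)[:, :3]
        return float(np.linalg.norm(diff, axis=1).mean())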

FIG. 11.

Initial versus final mTRE for the gesture-based method; the diagonal marks the no-change line. Note that the initial mTREs for the Vienna data set (circle marker symbol) are larger due to the “lever” effect associated with rotating a larger object.

FIG. 12.

Experimental results for interactive AR-based initialization. (Left column) Distribution of mTRE per operator, and (right column) distribution of interaction time per operator.

FIG. 13.

Relationship between instant and interactive initialization: instant mTRE versus interactive mTRE (top), and instant mTRE versus interaction time (bottom).

FIG. 14.

Clinical setup for ACL reconstruction surgery, and a lateral x-ray (not from the same procedure). The x-ray shows the knee flexed at about 120°. The preoperative diagnostic MR is acquired at full extension, with 0° between the femur and tibia.

Tables

TABLE I.

Experimental results for instant coarse initialization (for each reference data set, the results were summarized from all participants and both x-ray/CT and x-ray/MR experiments).

TABLE II.

Consolidated results for interactive AR-based initialization (results were summarized according to different categories).
