ROC analysis in patient specific quality assurance

Figures

FIG. 1.

Illustration of tests whose binary outcome leads to good or poor detectability. Tests in which the normal and abnormal results have very similar distributions (left panel) are difficult to discriminate on the basis of measurements below or above a threshold value. Tests whose normal and abnormal results have dissimilar distributions, such as in the middle panel, are easier to differentiate using a threshold value. More ideal tests lead to better detectability, where the false positive fraction approaches 0 and the true positive fraction approaches 1 (right panel).
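
The effect of distribution overlap can be made concrete with a small numerical sketch. The distributions and threshold below are hypothetical, not values from the study: for a fixed decision threshold, the true positive and false positive fractions follow directly from the two cumulative distributions.

```python
# A minimal sketch, with assumed Gaussian distributions, of the idea in Fig. 1:
# the overlap of the "normal" and "abnormal" result distributions at a chosen
# threshold determines the true positive and false positive fractions.
from scipy.stats import norm

threshold = 95.0                            # flag a field if its measurement falls below this
normal_dist = norm(loc=98.0, scale=1.5)     # hypothetical error-free results
abnormal_dist = norm(loc=92.0, scale=4.0)   # hypothetical results with an error present

tpf = abnormal_dist.cdf(threshold)   # abnormal cases correctly flagged (sensitivity)
fpf = normal_dist.cdf(threshold)     # normal cases incorrectly flagged (1 - specificity)
# Widely separated distributions push TPF toward 1 and FPF toward 0 (right panel).
```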

FIG. 2.

(a)–(d) Plots of the fraction of fields with a passing rate greater than a user-defined threshold (between 0% and 100%). The unmodified MLC group is shown with dashed lines; the group with MLC errors is shown with solid lines. Separation between the pass-rate distributions of the unmodified and modified groups increases as the size of the MLC errors increases and as the γ-AD criterion is decreased.
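
A minimal sketch of how curves of this kind can be formed, using simulated pass rates rather than the study's measurements: for each candidate threshold, compute the fraction of fields in each group whose γ pass rate exceeds that threshold.

```python
# Hypothetical gamma pass rates (%) for unmodified fields and fields with MLC errors.
import numpy as np

rng = np.random.default_rng(0)
pass_unmodified = rng.normal(98.0, 1.5, size=50).clip(0.0, 100.0)  # no MLC error (dashed)
pass_with_error = rng.normal(92.0, 4.0, size=50).clip(0.0, 100.0)  # MLC error introduced (solid)

thresholds = np.linspace(0.0, 100.0, 101)  # user-defined pass-rate thresholds (%)
frac_unmodified = np.array([(pass_unmodified > t).mean() for t in thresholds])
frac_with_error = np.array([(pass_with_error > t).mean() for t in thresholds])
# Plotting frac_unmodified and frac_with_error against thresholds gives curves in the
# style of Fig. 2; larger MLC errors widen the gap between the two curves.
```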

FIG. 3.

ROC plots of sensitivity (TPF) vs 1-specificity (FPF) for 4 of the 20 curves generated. Curves with the highest area have the optimal sensitivity and specificity. Curves along the diagonal, with an AUC of 0.5, represent tests whose outcome is not significantly different from a random guess.
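
A minimal sketch of the ROC construction behind such plots, again with simulated pass rates (assumed values, not the study's data): a field is flagged when its pass rate falls at or below the threshold, and sweeping the threshold traces out (FPF, TPF) pairs whose area under the curve is the AUC.

```python
import numpy as np

rng = np.random.default_rng(1)
pass_unmodified = rng.normal(98.0, 1.5, size=50).clip(0.0, 100.0)  # true negatives
pass_with_error = rng.normal(92.0, 4.0, size=50).clip(0.0, 100.0)  # true positives

thresholds = np.linspace(0.0, 100.0, 201)
fpf = np.array([(pass_unmodified <= t).mean() for t in thresholds])  # 1 - specificity
tpf = np.array([(pass_with_error <= t).mean() for t in thresholds])  # sensitivity

# Trapezoidal integration of TPF over FPF gives the area under the ROC curve.
auc = np.sum(np.diff(fpf) * (tpf[1:] + tpf[:-1]) / 2.0)
# AUC near 1 indicates good detectability; AUC of 0.5 is no better than a random guess.
```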

FIG. 4.

Measurement of AUC as a function of γ-criterion and size of MLC error. For MLC errors greater than about 2 mm, the detector employed exhibits very good sensitivity and specificity, and hence very good detectability. For smaller MLC errors, sensitivity and specificity decrease, approaching random performance at very small MLC errors (0.5 mm).

FIG. 5.

The ideal threshold value, determined as the point on the ROC curve closest to the point where sensitivity and specificity both equal 1.
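
A minimal sketch (assumed inputs, not the paper's data) of locating such an ideal threshold: the ROC operating point with the smallest Euclidean distance to the ideal corner, where sensitivity equals 1 and FPF equals 0.

```python
import numpy as np

rng = np.random.default_rng(2)
pass_unmodified = rng.normal(98.0, 1.5, size=50).clip(0.0, 100.0)  # no MLC error
pass_with_error = rng.normal(92.0, 4.0, size=50).clip(0.0, 100.0)  # simulated MLC error

thresholds = np.linspace(0.0, 100.0, 201)
fpf = np.array([(pass_unmodified <= t).mean() for t in thresholds])
tpf = np.array([(pass_with_error <= t).mean() for t in thresholds])

distance = np.hypot(fpf - 0.0, tpf - 1.0)          # distance to the (FPF = 0, TPF = 1) corner
ideal_threshold = thresholds[np.argmin(distance)]  # pass-rate threshold at that point
print(f"Ideal pass-rate threshold: {ideal_threshold:.1f}%")
```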

Tables

TABLE I.

Ideal threshold parameters as determined from Fig. 5.

TABLE II.

Effect of applying the ideal threshold pass rates to an independent set of measurements. Using the AP field from a 7-field prostate plan for 20 randomly chosen patients, we introduced random errors of 1, 2, 3, 4, and 5 mm for each field. The number of fields that would be rejected based on the ideal threshold points from Table I was then determined.
