
## ROC Curves

ROC stands for Receiver Operating Characteristic. The ROC curve identifies how many false positives one must tolerate to be guaranteed a certain percentage of true positives. An ideal ROC curve is a step function that rises immediately to 1.0, meaning every true positive can be captured without accepting any false positives.
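The idea can be made concrete with a small, self-contained sketch that builds an ROC curve from scored points and computes the area under it. The scores and labels here are illustrative; this is not the roc module's implementation, just the standard threshold-sweep construction.

```python
def roc_points(scores, labels):
    """Return (fpr, tpr) pairs swept over every score threshold."""
    # Visit points from highest score to lowest; each step lowers the
    # threshold past one more point.
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    pos = sum(labels)
    neg = len(labels) - pos
    tp = fp = 0
    points = [(0.0, 0.0)]
    for i in order:
        if labels[i]:
            tp += 1
        else:
            fp += 1
        points.append((fp / float(neg), tp / float(pos)))
    return points

def roc_area(points):
    """Trapezoidal area under the (fpr, tpr) curve."""
    area = 0.0
    for (x0, y0), (x1, y1) in zip(points, points[1:]):
        area += (x1 - x0) * (y0 + y1) / 2.0
    return area

# Illustrative data: six points, three in the class (label 1).
scores = [0.9, 0.8, 0.7, 0.6, 0.4, 0.3]
labels = [1,   1,   0,   1,   0,   0]
points = roc_points(scores, labels)
print(round(roc_area(points), 3))   # → 0.889
```

A perfect classifier ranks every in-class point above every out-of-class point, which makes the curve the ideal step function and the area exactly 1.0.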

The basic function in the ROC module is computeRocForLabel(). It returns a dictionary with four items:

- **area**: The area under the ROC curve. The closer this value is to 1.0, the better the ROC curve.
- **curve**: The raw data for the ROC curve.
- **insideHistogram**: A Histogram object of points inside the class versus distance. This is an instance of the Scientific.Statistics.Histogram class.
- **outsideHistogram**: A Histogram object of points outside the class versus distance.

Using the results from the FullEM run:

```
>>> import bisect
>>> roc_stats = computeRocForLabel('1', fullem_labeling, ds)
>>> #
>>> # What percentage of true positives do we get if we
>>> # tolerate a 0.0 percent false positive rate?  (minus 1
>>> # because bisect reports where to insert the point)
...
>>> (bisect.bisect(roc_stats['curve'][0], 0.0) - 1) / 750.0
0.16933333333333334
>>> #
>>> # 10 percent
...
>>> (bisect.bisect(roc_stats['curve'][0], 0.1) - 1) / 750.0
0.26933333333333331
>>> #
>>> # 50 percent
...
>>> (bisect.bisect(roc_stats['curve'][0], 0.5) - 1) / 750.0
0.59333333333333338
>>> #
>>> # 90 percent
...
>>> (bisect.bisect(roc_stats['curve'][0], 0.9) - 1) / 750.0
0.91733333333333333
```

So, it looks like we can be confident in only about 17 percent of the points classified into class '1'. Let's look at a summary of the statistics for all the classes.
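The "minus 1" in the bisect calls above is worth seeing in isolation. Here roc_stats['curve'][0] is assumed to hold the curve's false-positive rates in sorted order; the list below is an illustrative stand-in.

```python
import bisect

# bisect.bisect returns the insertion point (after any equal entries),
# so subtracting one gives the index of the last entry <= the tolerated
# false-positive rate.
fpr = [0.0, 0.0, 0.0, 0.1, 0.3, 0.7, 1.0]   # illustrative sorted rates

print(bisect.bisect(fpr, 0.0) - 1)   # → 2, the last exact 0.0
print(bisect.bisect(fpr, 0.1) - 1)   # → 3
print(bisect.bisect(fpr, 0.2) - 1)   # → 3, still the 0.1 entry
```

Dividing that index by the dataset size (750 in the runs above) converts it into the fraction of points acceptable at that false-positive rate.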

```
>>> from __future__ import nested_scopes
>>> roc_stats = computeRocFromDataset(fullem_labeling, ds)
>>> def tolerance(x, percent):
...     return (bisect.bisect(x['curve'][0],
...                           percent) - 1) / 750.0
...
>>> def avg_roc(stats, percent):
...     return reduce(lambda x, y: x + y,
...                   map(lambda x: tolerance(x, percent),
...                       stats.values())) / len(stats.keys())
...
>>> #
>>> # On average, what percentage is acceptable at a zero false-positive rate?
...
>>> avg_roc(roc_stats, 0.0)
0.13066666666666665
>>> avg_roc(roc_stats, 0.5)
0.59813333333333329
>>> avg_roc(roc_stats, 0.9)
0.91813333333333325
```

In fact, class '1' was one of the better classes. On average, only 13 percent of the points are confidently correct. Of course, in real life we are willing to tolerate some error: at a 5 percent false-positive rate (95 percent confidence), 21.2 percent of the data points are acceptable.
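The reduce/map combination above can also be written more directly with sum(). A self-contained sketch, with an illustrative stats dictionary standing in for the one computeRocFromDataset builds, and the dataset size passed explicitly rather than hard-coded to 750:

```python
import bisect

def tolerance(stats, percent, n):
    # Fraction of the n dataset points acceptable at the given
    # false-positive rate (last curve entry <= percent, see above).
    return (bisect.bisect(stats['curve'][0], percent) - 1) / float(n)

def avg_roc(all_stats, percent, n):
    # sum()/len() replaces the reduce/map/lambda combination.
    return sum(tolerance(s, percent, n) for s in all_stats.values()) / len(all_stats)

# Two tiny illustrative 'curve' entries over a 4-point dataset; the
# (fpr, tpr) parallel-list layout is an assumption.
stats = {
    '1': {'curve': ([0.0, 0.0, 0.5, 1.0], [0.25, 0.5, 0.75, 1.0])},
    '2': {'curve': ([0.0, 0.5, 0.5, 1.0], [0.25, 0.5, 0.75, 1.0])},
}
print(avg_roc(stats, 0.05, 4))   # → 0.125
```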

The last feature of the roc module is the ability to output the ROC curves graphically. Let's look at an interesting pair: class '2' vs. class '1'.

```
>>> plot1 = makeRocPlots(roc_stats['1'])
>>> plot2 = makeRocPlots(roc_stats['2'])
```

After running these commands, you should see two windows whose contents appear as in Figure 1. The plot with the very sharp rise represents class '1' and the other, class '2'. As you can see, the curve for class '1' is clearly superior to that of class '2'.
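If makeRocPlots is not available, the raw 'curve' data can be plotted directly with matplotlib. The curve layout assumed here, parallel lists of false-positive and true-positive rates, is an assumption, and the two curves below are illustrative stand-ins for classes '1' and '2'.

```python
import matplotlib
matplotlib.use('Agg')            # render off-screen, no window needed
import matplotlib.pyplot as plt

# Illustrative stand-ins: class '1' rises sharply, class '2' does not.
curve1 = ([0.0, 0.0, 0.1, 1.0], [0.0, 0.8, 0.9, 1.0])
curve2 = ([0.0, 0.3, 0.6, 1.0], [0.0, 0.4, 0.7, 1.0])

fig, ax = plt.subplots()
ax.plot(curve1[0], curve1[1], label="class '1'")
ax.plot(curve2[0], curve2[1], label="class '2'")
ax.set_xlabel('false positive rate')
ax.set_ylabel('true positive rate')
ax.legend()
fig.savefig('roc.png')
```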

Lucas Scharenbroich 2003-08-27