Comments:"Bayes' rule in Haskell, or why drug tests don't work"
URL:http://www.randomhacks.net/articles/2007/02/22/bayes-rule-and-drug-tests
Bayes' rule in Haskell, or why drug tests don't work
Posted by Eric Kidd, Thu, 22 Feb 2007 18:11:00 GMT
Part 3 of Refactoring Probability Distributions.
(Part 1: PerhapsT,
Part 2: Sampling functions)
“A very senior Microsoft developer who moved to Google told me that Google works and thinks at a higher level of abstraction than Microsoft. ‘Google uses Bayesian filtering the way Microsoft uses the if statement,’ he said.” (Joel Spolsky)

I really love this quote, because it’s insanely provocative to any language designer. What would a programming language look like if Bayes’ rule were as simple as an if statement?
Let’s start with a toy problem, and refactor it until Bayes’ rule is baked right into our programming language.
Imagine, for a moment, that we’re in charge of administering drug tests for a small business. We’ll represent each employee’s test results (and drug use) as follows:
data Test = Pos | Neg
  deriving (Show, Eq)

data HeroinStatus = User | Clean
  deriving (Show, Eq)
Assuming that 0.1% of our employees have used heroin recently, and that our test is 99% accurate, we can model the testing process as follows:
drugTest1 :: Dist d => d (HeroinStatus, Test)
drugTest1 = do
  heroinStatus <- percentUser 0.1
  testResult <-
    if heroinStatus == User
      then percentPos 99
      else percentPos 1
  return (heroinStatus, testResult)

percentUser p = percent p User Clean
percentPos p = percent p Pos Neg

percent p x1 x2 =
  weighted [(x1, p), (x2, 100-p)]
This code is based on our FDist monad, which is in turn based on PFP. Don’t worry if it seems slightly mysterious; you can think of the “<-” operator as choosing an element from a probability distribution.
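If you want to run the examples without digging back into part 1, here’s a minimal, self-contained sketch of that machinery. The names (Perhaps, PerhapsT, FDist, Dist, weighted, exact) match the ones used below, but the definitions are my reconstruction, not necessarily the exact code from part 1:

{-# LANGUAGE FlexibleInstances #-}

import Control.Monad (liftM, ap)

-- A value tagged with the probability of reaching it.
data Perhaps a = Perhaps a Double deriving (Show)

-- A monad transformer that attaches probabilities to another monad.
newtype PerhapsT m a = PerhapsT { runPerhapsT :: m (Perhaps a) }

-- A finite distribution: every possible outcome, with its probability.
type FDist = PerhapsT []

instance Monad m => Functor (PerhapsT m) where
  fmap = liftM

instance Monad m => Applicative (PerhapsT m) where
  pure x = PerhapsT (return (Perhaps x 1))
  (<*>)  = ap

instance Monad m => Monad (PerhapsT m) where
  m >>= f = PerhapsT $ do
    Perhaps x p1 <- runPerhapsT m
    Perhaps y p2 <- runPerhapsT (f x)
    return (Perhaps y (p1 * p2))   -- multiply probabilities along each path

-- Distributions can be built from weighted alternatives.
class Monad d => Dist d where
  weighted :: [(a, Double)] -> d a

instance Dist FDist where
  weighted xws = PerhapsT (map scale xws)
    where total = sum (map snd xws)
          scale (x, w) = Perhaps x (w / total)

-- Enumerate every outcome of a finite distribution.
exact :: FDist a -> [Perhaps a]
exact = runPerhapsT

(The transcripts below pretty-print probabilities as percentages; the derived Show instance here will print raw fractions instead.)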
Running our drug test shows every possible combination of the two variables:
> exact drugTest1
[Perhaps (User,Pos) 0.1%,
 Perhaps (User,Neg) 0.0%,
 Perhaps (Clean,Pos) 1.0%,
 Perhaps (Clean,Neg) 98.9%]
If you look carefully, we have a problem. Most of the employees who test positive are actually clean! Let’s tweak our code a bit, and try to zoom in on the positive test results.
Ignoring negative test results
We don’t care about employees who test negative for heroin use. We can throw away those results using Haskell’s Maybe type:
drugTest2 :: Dist d => d (Maybe HeroinStatus)
drugTest2 = do
  (heroinStatus, testResult) <- drugTest1
  return (if testResult == Pos
            then Just heroinStatus
            else Nothing)
This shows us just the variables we’re interested in, but the percentages are still a mess:
> exact drugTest2
[Perhaps (Just User) 0.1%,
 Perhaps Nothing 0.0%,
 Perhaps (Just Clean) 1.0%,
 Perhaps Nothing 98.9%]
Ideally, we want to reach into that distribution, discard all the Nothing values, and then normalize the remaining percentages so that they add up to 100%. We can do that with a bit of Haskell code:
value (Perhaps x _) = x
prob (Perhaps _ p) = p

catMaybes' :: [Perhaps (Maybe a)] -> [Perhaps a]
catMaybes' [] = []
catMaybes' (Perhaps Nothing _ : xs) =
  catMaybes' xs
catMaybes' (Perhaps (Just x) p : xs) =
  Perhaps x p : catMaybes' xs

onlyJust :: FDist (Maybe a) -> FDist a
onlyJust dist
    | total > 0 = PerhapsT (map adjust filtered)
    | otherwise = PerhapsT []
  where filtered = catMaybes' (runPerhapsT dist)
        total = sum (map prob filtered)
        adjust (Perhaps x p) = Perhaps x (p / total)
And sure enough, that lets us zoom right in on the interesting values:
> exact (onlyJust drugTest2)
[Perhaps User 9.0%,
 Perhaps Clean 91.0%]
OK, that’s definitely not good news. Even though our test is 99% accurate, 91% of the people we accuse will be innocent!
(If this seems counter-intuitive, imagine what happens if we have no employees who use heroin. Out of 1000 employees, 10 will have a positive test result, and 100% of them will be innocent.)
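We can also double-check this against Bayes’ rule directly. With a 0.1% prior and a 1% false-positive rate:

P(User|Pos) = P(Pos|User) P(User) / P(Pos)
            = (0.99 × 0.001) / (0.99 × 0.001 + 0.01 × 0.999)
            ≈ 9.0%

which is exactly the figure onlyJust computed for us.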
Baking Maybe into our monad
The above code gets the right answer, but it’s still pretty awkward. We have Just and Nothing all over the place, stinking up our application code. Why don’t we hide them inside our monad?
Fortunately, we can do just that, using the MaybeT monad transformer. Don’t worry if you don’t understand the details:
type FDist' = MaybeT FDist

instance Functor FDist' where
  fmap = liftM

instance Dist FDist' where
  weighted xws = lift (weighted xws)
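In case MaybeT is unfamiliar: it wraps m (Maybe a), and its bind short-circuits on Nothing, which is exactly how impossible worlds will get pruned. The sketch below is roughly the standard definition (in practice you’d import it from a library rather than write it yourself):

import Control.Monad (liftM, ap)
import Control.Monad.Trans.Class (MonadTrans(..))

newtype MaybeT m a = MaybeT { runMaybeT :: m (Maybe a) }

instance Monad m => Functor (MaybeT m) where
  fmap = liftM

instance Monad m => Applicative (MaybeT m) where
  pure  = MaybeT . return . Just
  (<*>) = ap

instance Monad m => Monad (MaybeT m) where
  m >>= f = MaybeT $ do
    maybeX <- runMaybeT m
    case maybeX of
      Just x  -> runMaybeT (f x)  -- this world is still alive; keep going
      Nothing -> return Nothing   -- short-circuit: a dead world stays dead

instance MonadTrans MaybeT where
  lift = MaybeT . liftM Just      -- an underlying action always "succeeds"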
As Russell and Norvig point out (chapter 13), cancelling out the impossible worlds and normalizing the remaining probabilities is equivalent to Bayes’ rule. So in homage, we can write:
bayes :: FDist' a -> [Perhaps a]
bayes = exact . onlyJust . runMaybeT
We’re missing just one piece, a statement to prune out impossible worlds:
condition :: Bool -> FDist' ()
condition = MaybeT . return . toMaybe
  where toMaybe True  = Just ()
        toMaybe False = Nothing
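A failed condition injects a Nothing, MaybeT’s bind short-circuits the rest of that world, and onlyJust later renormalizes over the survivors. In GHCi, that should look something like this (output formatting assumed from part 1’s Show instance):

> bayes (condition False >> return "impossible")
[]
> bayes (condition True >> return "certain")
[Perhaps "certain" 100.0%]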
And now, here’s our final drug test.
drugTest3 :: FDist' HeroinStatus -> FDist' HeroinStatus
drugTest3 prior = do
  heroinStatus <- prior
  testResult <-
    if heroinStatus == User
      then percentPos 99
      else percentPos 1
  condition (testResult == Pos)
  return heroinStatus
This gives us the same results as before:
> bayes (drugTest3 (percentUser 0.1))
[Perhaps User 9.0%,
 Perhaps Clean 91.0%]
So testing all of our employees is still hopeless. But what if we only tested employees with clear signs of heroin abuse? In that case, there’s probably a 50/50 chance of drug use.
And that gives us remarkably better results. Out of the people who test positive, 99% will be using drugs:
> bayes (drugTest3 (percentUser 50))
[Perhaps User 99.0%,
 Perhaps Clean 1.0%]
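Because the prior is now an ordinary argument, we can sweep it to see exactly how much probable cause matters. As a rough sketch (the list comprehension is mine, not from the original series):

posteriors = [ (p, bayes (drugTest3 (percentUser p)))
             | p <- [0.1, 1, 10, 50] ]

With a 1% prior the posterior is only 50/50, and even a 10% prior yields only about 92% confidence in a positive test.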
The moral of this story: No matter how accurate our drug test, we shouldn’t bother to run it unless we have probable cause.
Similar constraints apply to any population-wide surveillance: If you’re searching for something sufficiently rare (criminals, terrorists, strange diseases), it doesn’t matter how good your tests are. If you test everyone, you’ll drown under thousands of false positives.
Extreme Haskell geeking
If we go back and look at part 1, this gives us:
type FDist' = MaybeT (PerhapsT [])
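Unwrapping the stack, an FDist' a is concretely a [Perhaps (Maybe a)]: a weighted list of outcomes in which Nothing marks an impossible world.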
This has some interesting consequences:
- If we collapse MaybeT into PerhapsT, we can work with probability distributions that don’t sum to 1, where the “missing” probability represents an impossible world.
- We can add condition to Rand (part 2) using MaybeT Rand (see the sketch below).
- Bayes’ rule is basically the combination of MaybeT and a suitable catMaybes function, applied to any probability distribution monad.

Also worth noting: Popular theories of natural language semantics are based on the λ-calculus. Chung-chieh Shan has a fascinating paper showing how to incorporate monads and monad transformers into this model. If we replaced Chung-chieh Shan’s Set monad with one of our Bayesian monads, what would we get? (Currently, I have no idea.)
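To make the second point concrete: conditioning a sampling monad amounts to rejection sampling. The code below is my illustration, not code from this series; I stand in IO for part 2’s Rand monad, and sampleUntilJust is a hypothetical driver:

import Control.Monad.Trans.Maybe (MaybeT(..))

type Rand = IO            -- stand-in for part 2's sampling monad
type Rand' = MaybeT Rand

-- The sampling version of condition: a failed test kills this sample run.
condition :: Bool -> Rand' ()
condition True  = return ()
condition False = MaybeT (return Nothing)

-- Rejection sampling: resample until a run survives all conditions.
sampleUntilJust :: Rand' a -> Rand a
sampleUntilJust d = do
  r <- runMaybeT d
  maybe (sampleUntilJust d) return r

This is the sampling-world analogue of onlyJust: instead of renormalizing weights, we simply throw rejected runs away, and the surviving samples are distributed according to the posterior.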
Part 4: The naive Bayes classifier
Part 5: What happens if we replace MaybeT with PerhapsT?