‘Deadly Dilemmas’ by Larry Laudan (2008)

Every so often there is an academic conference that looks different from the rest. It opens with a parade of ordinary-looking citizens while the professors, researchers, and industry experts look on from the audience. An announcement then booms over the microphone: “These are the innocent men who were sentenced to death, who by good fortune and the grace of God were not executed by the state.” There’s some hushed applause from the onlookers, a few wry smiles, and plenty of self-righteousness. “How can you look them in the eye and still defend capital punishment?”

These conferences are tragically small affairs, made up of the few people who are professionally committed to ridding the criminal justice system of its errors. They are powerful spectacles nonetheless. The cause is noble and difficult: connecting epistemology to the real world in a much-neglected field. But for Larry Laudan these events are also horribly naïve, overpowered by emotional reasoning and a lack of philosophical rigour. The advice he offers the attendees: try instead, if only for a moment, to be “fair, impartial, honest, and thorough”.

The conviction of an innocent person is perhaps the worst type of error that any institution can make. The devastation it inflicts upon the individual and his family is bad enough, but the radiating damage of lost confidence in the criminal justice system pushes things much further. It represents the most egregious violation of our social contract, in which we implicitly sign over some of our rights to the state, with the state agreeing to defend those rights on our behalf: “life, liberty, and the pursuit of happiness.”

But when the state executes an innocent person, it is, as it were, killing an innocent person! And that is where the analysis should hinge: not on the emotion, on the unease, or on the nightmares of police frame-jobs. What we ought to care about here is the unit of loss, and its size and impact.

The expectation of such losses is often factored into the institutions around us, with the occurrence of these errors balanced against other potential errors of different kinds, in different directions. Like wily insurance salesmen, institutions draw up risk assessments and decide upon an acceptable number – and degree – of these mistakes. But when it comes to criminal justice, the mistakes that matter are poorly understood, and those risk assessments unbelievably crude.

Almost all of the analysis tends to focus on false convictions, and just how many of these should be considered too many. William Blackstone set the benchmark in 1765 when he proposed that the error ratio ought to be 10 to 1 – better that ten criminals go unpunished than that one innocent person be unfairly convicted. Now you may prefer a different number, and prominent philosophers have stepped into these waters over the years with ratios from 1000 to 1 all the way down to 2 to 1. Or you may doubt that choosing a number is in any way relevant, believing that announcing such a thing has no actual impact on the real-world error rate. Either way, you are likely thinking much too much about one side of the ratio and not enough about the other.

An innocent person going to prison is an appalling thought, and in the American system today just such errors occur in between 3.3% and 5% of trial cases. The best source we have for this is the discovery or testing of DNA evidence post-conviction. It is from those exonerations that we get our first look at the actual false conviction rate: 3.3% to 5%, or, at the high end, one in every twenty trial convictions. But this is only half the story: most criminal charges are resolved by plea deals, and the error rate at trial cannot simply be extrapolated out.
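Those percentages are easier to feel as odds. A quick sketch – using nothing beyond the figures quoted above – converts the 3.3% to 5% range into “1 in N” terms:

```python
# Convert the trial-stage false-conviction rates quoted in the review
# (3.3% to 5%, from post-conviction DNA exonerations) into "1 in N" odds.

def one_in_n(rate: float) -> int:
    """Express a probability as the rounded N in '1 in N'."""
    return round(1.0 / rate)

low, high = 0.033, 0.05
print(f"3.3% is roughly 1 in {one_in_n(low)} trial convictions")  # 1 in 30
print(f"5.0% is 1 in {one_in_n(high)} trial convictions")         # 1 in 20
```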

This is also where things become counterintuitive. Those early DNA exonerations were understandably focussed on trial convictions – cases where the convicted person had insisted upon their innocence all along. Yet when this changed, and cases resolved by plea deals were also scrutinised against DNA evidence, something strange happened: the false conviction rates were considerably lower. Many people had expected plea deals to be a corrupting factor, pushing innocent people to admit guilt through fear (one year in prison now against the risk of losing at trial and getting fifteen); the opposite was happening.

The reasons for this matter. Contrary to many people’s worst expectations, prosecutors were declining to pursue cases where the evidence strongly favoured the accused person’s innocence, whether at trial or in plea bargaining. And when the evidence strongly favoured guilt, it was in both parties’ interest to bargain: the guilty person for a lighter sentence, and the prosecutor to avoid the time and risks of a trial. There was also an important psychological factor at play: for serious crimes (such as the ones being examined), falsely accused people are less likely to plead guilty, regardless of the looming sentence and risk. They are innocent, and all the fear and bullying in the world won’t make them lie and admit otherwise.

Whereas the error rate at trial was 5% at the high end, the error rate from plea deals is a mere 0.045%. Combine those two error rates, weighted by their prevalence in the American system, and you have an overall error rate of 0.84%. “It is hard to imagine,” writes Laudan, “conducting a criminal justice system that makes substantially fewer errors.”
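The review does not say what share of cases is resolved at trial versus by plea, but the three quoted rates pin it down. A small sketch, solving the weighted average for the implied trial share (the split itself is my inference from the quoted numbers, not a figure stated by Laudan):

```python
# Weighted-average sketch of the overall false-conviction rate:
# 5% at trial, 0.045% in plea deals, 0.84% overall (figures as quoted).
trial_rate = 0.05     # high-end false-conviction rate at trial
plea_rate = 0.00045   # false-conviction rate in plea deals
overall = 0.0084      # combined figure quoted from Laudan

# overall = t * trial_rate + (1 - t) * plea_rate  =>  solve for t
t = (overall - plea_rate) / (trial_rate - plea_rate)
print(f"Implied share of cases resolved at trial: {t:.1%}")

# Sanity check: recombining the rates recovers the quoted overall figure
recombined = t * trial_rate + (1 - t) * plea_rate
print(f"Recombined overall error rate: {recombined:.2%}")
```

With these inputs, the figures cohere only if roughly one case in six reaches trial, which is why the overall rate sits so much closer to the plea-deal rate than to the trial rate.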

So things are working as they should? The ways in which we have designed the burden of proof are appropriate? No! The ratio that people tend to instinctively worry about is that of convictions to false convictions, but this isn’t what those distant philosophers were concerned with. Reducing the number of false convictions isn’t really that hard: just keep ratcheting the burden of proof upward to the point where the only people being convicted are those for whom the evidence is completely overwhelming. But every guilty person who then walks free is an error of a different kind – still an error within the system, and one that tends to go strangely under-noticed.

But the failure to prosecute guilty people is the same as allowing crimes to go unpunished, and the social costs of this can be enormous. It would be better if we simply acknowledged from the outset that many of our efforts to decrease the risk of false convictions mean that there will also be more felons on the streets: unpunished, unrestrained, undeterred. We are actively increasing the likelihood that we will become victims of crime.

As it stands in America, you are 300 times less likely to be falsely convicted across your lifetime than you are to be the victim of a serious crime. And for many people working hard to reduce false convictions by “all means possible”, this is a completely acceptable figure. After all, isn’t it (to approach the actual dilemma here, the real trade-off) much worse to be falsely convicted than to be a victim? It seems emotionally true until, that is, you scratch ever so slightly at the issue. “In what sense can it be worse,” writes Laudan, “to be wrongfully convicted of murder than to be murdered?”

The point is a simple one. Step back from the headlines and the academic parades of exonerated citizens, and ask yourself the real questions: “(1) What is an acceptable number of false convictions? (2) What is an acceptable number of crimes?” Then acknowledge that “Neither question can be answered without reference to the other”, and that however you dial this thing up, people will suffer and die on one or both sides of the equation.