Despite being 250 years old, Blackstone’s Ratio continues to be an active source of debate in jurisprudence.
One question is whether it has any influence on the law today and, if so, what form that influence takes, how strong it is and how it varies between jurisdictions. This dovetails directly into arguments for or against specific and potentially influential policies concerning the strength of evidence required to secure a conviction. That makes pressing the question of whether any such influence is justified, which in turn leads in two directions: first, to questions about the meaning of the dictum, which may or may not be answered by divining what Blackstone originally intended; and second, to discussions of the procedure through which the ratio of 10, or other numbers, may be given effect. For example, those who rely on Kaplan’s equation (below) use the ratio to generate a percentage, which is then conceived as a standard of proof. But this leads in turn to a further discussion concerning whether the dictum can or should be interpreted in quantitative terms at all.
Running in parallel to this scholarly work, high courts in many US states continue to make decisions about the number they prefer, with choices reported by Volokh ranging from 1 (Florida) to 100 (Oklahoma). The significance and consequences of these decisions are not clear, however.
On this page I am developing an (opinionated) bibliography of reasonably current work related to Blackstone’s ratio.
A good starting point is Volokh’s survey of what he calls “n law”, which takes in the ratio’s history and current usage (in 1997). It’s presented as a humorous article, but I think it reflects an underlying perplexity. From the abstract:
Values of n are compared and contrasted on the federal and state levels. States and federal circuits with high values of n are recommended as possible residences for potential criminals.[i]
Pi and colleagues have just updated the state-by-state figures, which continue to evolve.[ii]
In 2015 Epps based an article in the Harvard Law Review on Blackstone’s ratio. He says it remains strongly influential:
“[B]etter that ten guilty persons escape, than that one innocent suffer” is perhaps the most revered adage in the criminal law, exalted by judges and scholars alike as “a cardinal principle of Anglo-American jurisprudence.”[iii]
He says the current interpretation of the ratio, which he calls “the Blackstone principle”, is as follows:
Blackstone’s ten-to-one ratio and its variations can’t be taken literally. There’s no way to measure the exact ratio between the false convictions and false acquittals our system creates, and no one seriously advocates that it is critical to strive for exactly ten false acquittals for every false conviction. Instead, the ratio serves as shorthand for a less precise — but still important — moral principle about the distribution of errors: we are obliged to design the rules of the criminal justice system to reduce the risk of false convictions — even at the expense of creating more false acquittals and thus more errors overall.
Epps, however, thinks the dictum’s influence is a bad thing and argues for reform. Appleman argued against Epps’s conclusions, but shares some of Volokh’s perplexity about the dictum itself:
Epps seeks to tie many, if not all, of the problems of America’s criminal system onto that hoary old Blackstonian koan.[iv]
A parallel conflict pits Allen and Laudan, who argue for a lighter burden of proof in order to convict more of the guilty, against Risinger.[v][vi]
A helpful list of Laudan’s many publications on the topic is provided by Epps.
Kaplan introduced a way of deriving a percentage representing a standard of proof from the ratio in 1968:
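The equation itself has not survived in this copy of the page, so what follows is my reconstruction from the definitions in the next sentence and the standard decision-theoretic derivation (convict when the expected disutility of convicting falls below that of acquitting); the symbols P (probability of guilt) and P* (threshold standard of proof) are labels I have supplied:

```latex
% Expected disutility of convicting:  (1 - P)\,D_g  (risk of a false finding of guilt)
% Expected disutility of acquitting:  P\,D_i        (risk of a false finding of innocence)
% Convict when (1 - P)\,D_g < P\,D_i, i.e. when P exceeds the threshold
P^{*} \;=\; \frac{1}{1 + D_i / D_g}
```

On this reading, Blackstone’s 10 gives $D_g / D_i = 10$ and hence $P^{*} = 10/11 \approx 0.91$, a standard of proof of roughly 91 per cent.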
Here Dg is the subjective expected disutility of a false finding of guilt and Di the same for a false finding of innocence, so that the ratio of the two of them in the denominator is a ratio similar in character to Blackstone’s.
This line of work still has a considerable following, for which Hamer provides a partial list.[vii] The list is now somewhat out of date, however, and a notable subsequent example is Walen.[viii]
This approach has, however, been criticised by DeKay:
“To the extent that jurors’, judges’ and legal scholars’ notions of correct standards of proof are based on desires to bring about particular error ratios, such notions are founded on presumptions that are fundamentally invalid.”[ix]
And by me.[x]
Lippke, Laudan and Kaplow have published proposals for alternative kinds of ratios that could be used to steer the policies of the criminal justice system.[xi][xii][xiii]
I think they are all flawed and have put forward my own.[xiv]
Risinger has tried to resolve the perplexity by divining Blackstone’s original intendment:
As for the “Blackstone ratio,” that is, the label now generally given to Blackstone’s version of the moral assertion that “it is better that ___ guilty go free than that one innocent be convicted,” I believe that all we can say about the intendment of this expression was that it was meant as a general declaration that, for any given crime, an error that convicts an innocent person is much worse morally than an error that acquits a guilty person. The number Blackstone chose to make this point was ten.[xv]
Findley has concluded it is a mistake to interpret the dictum in a quantitative way at all:
In the end, the requirement of proof beyond a reasonable doubt is based on something more fundamental, but less quantifiable, than Professor Laudan’s algorithm assumes. Even if we could somehow assign a mathematical weight to the harms that individuals suffer from the effects of wrongful conviction and the effects of being victimized by a recidivist who was mistakenly acquitted of a prior offense, that still would not justify modeling a burden of proof based on those weights. That is because the burden of proof—the requirement that the state prove guilt beyond a reasonable doubt—and indeed, Blackstone’s ratio (the well-known maxim that it is “better that ten guilty persons escape than that one innocent suffer”)—are premised not on some illusion of mathematical precision, or on some notion that the harms caused to victims of all sorts can be meaningfully assessed and weighed. They are instead based, at least in significant part, on the notion that, whatever suffering the victimization may cause, it is far worse as a structural matter in a free society for the government to actively and deliberately deprive its citizens of life, liberty, or the ability to lead the life they choose, than it is for a private individual, through criminal misdeeds, to harm another and to escape punishment.[xvi]
References
[i] Alexander Volokh, ‘n Guilty Men’, University of Pennsylvania Law Review 146 (1997).
[ii] Daniel Pi, Francesco Parisi and Barbara Luppi, Quantifying Reasonable Doubt (Rochester, NY: Social Science Research Network, 5 August 2018) <https://papers.ssrn.com/abstract=3226479> [accessed 18 October 2018].
[iii] Daniel Epps, ‘The Consequences of Error in Criminal Justice’, Harvard Law Review 128: 87.
[iv] Laura I Appleman, ‘A Tragedy of Errors: Blackstone, Procedural Asymmetry, and Criminal Justice’, Harvard Law Review Forum 128: 91.
[v] Ronald J. Allen and Larry Laudan, Deadly Dilemmas (Rochester, NY: Social Science Research Network, 24 June 2008) <https://papers.ssrn.com/abstract=1150931> [accessed 18 October 2018].
[vi] D Michael Risinger, ‘Tragic Consequences of Deadly Dilemmas: A Response to Allen and Laudan’, Seton Hall Law Review 40 (2008): 31.
[vii] David Hamer, ‘Probabilistic Standards of Proof, Their Complements and the Errors that are Expected to Flow from Them’, University of New England Law Journal 1, 1 (2004): 71, at p. 83 <http://classic.austlii.edu.au/au/journals/UNELawJl/2004/3.html> [accessed 11 October 2018].
[viii] Alec Walen, ‘Proof beyond a Reasonable Doubt: A Balanced Retributive Account’, Louisiana Law Review 76 (2015): 355.
[ix] Michael L. DeKay, ‘The Difference between Blackstone-Like Error Ratios and Probabilistic Standards of Proof’, Law & Social Inquiry 21, 1 (1996): 95–132.
[x] William Cullerne Bown, ‘Killing Kaplanism: Flawed methodologies, the standard of proof and modernity’, The International Journal of Evidence & Proof, 2018: 1365712718798387.
[xi] Richard Lippke, ‘Punishing the Guilty, Not Punishing the Innocent’, Journal of Moral Philosophy 7, 4 (2010): 462–88, at p. 464.
[xii] Larry Laudan, Truth, Error, and Criminal Law: An Essay in Legal Epistemology (Cambridge University Press, 2006), p. 74.
[xiii] Louis Kaplow, ‘Burden of Proof’ <https://www.yalelawjournal.org/article/burden-of-proof> [accessed 11 October 2018].
[xiv] William Cullerne Bown, ‘The criminal justice system as a problem in binary classification’, The International Journal of Evidence & Proof 22, 4 (2018): 363–91.
[xv] D Michael Risinger, ‘What Standards of Proof Imply We Want from Jurors, and What We Should Say to Them to Get It’, p. 21.
[xvi] Keith A. Findley, Reducing Error in the Criminal Justice System (Rochester, NY: Social Science Research Network, 8 May 2018) <https://papers.ssrn.com/abstract=3175448> [accessed 11 October 2018].