This summer, while researching for a paper on the Canadian law of causation in the age of torts committed in cyberspace, I re-read the Science Manual for Canadian Judges (Manual). A 2013 project of the Canadian National Judicial Institute, the Manual was intended to fill a glaring lacuna in our legal system. Most lawyers are awful scientists. Perhaps for that reason, the publication received little fanfare and I don’t know many who have read it.
Judges are appointed from a pool of senior lawyers, so it stands to reason that most judges also possess a poor grasp of scientific principles. Consider the demographic fact that the last time most judges sat in a high school physics or chemistry classroom was around the time of the Apollo moon landing, and you can see why there is reason to be worried. So far, there appear to be only two Canadian decisions that have even made glancing reference to the Manual (R. v. Maple Lodge Farms, at para. 42 and R. v. McLaughlin at para. 87). That fact, of course, does not bode well.
Add to this the fact that most judges are men, and the prevalence of ‘Male Answer Syndrome‘ (MAS: the propensity to offer an answer to a question without knowing whether it is correct, or without the ability to analyse the facts, or to assemble Ikea furniture without reading the instructions) raises a real possibility that many cases involving scientific evidence are decided, not on a full understanding of the facts, but on the ability of expert witnesses to sound plausible when testifying in court.
This post is not intended to provide anything more than a superficial critique of this issue. But what an issue it is. My view is that every lawyer who deals with scientific evidence must read the Manual, and that failure to do so would be a breach of the standard of care in the same sense as failing to grasp the basic principles of contract or property law. I know that includes pretty much everyone in our profession, but the problem is easily remedied. ‘Basic’ does not mean ‘easy to understand.’ In first year law, we had to try hard to learn what now comes to us easily. We have to do the same with science as a basic element of the practice of law.
One of the essential points one gleans from reading the Manual and its associated references is that much of modern science, in dealing with cause and effect, is counter-intuitive. That is to say, as a male lawyer handicapped by MAS, I am apt to look at a set of factual data and reach a conclusion about the interaction between the parties to a legal dispute and the material world that is more wrong than it is right. Beyond what is factually obvious, the capacity for events to have occurred in a way that defies one’s immediate opinion or prediction is a fact of life we either welcome (‘I love surprises’) or resist (‘I hate surprises’). The degree to which one is prepared to believe a counter-intuitive answer to a scientific problem is a measure of an open mind: the vital tool of any jurist.
Take an ‘obvious’ case such as a rear-end traffic collision. With only two cars, one stopped and the other plowing into it, anyone, including a jurist, may say the facts speak for themselves. Add the fact of a third driver swerving into the lane, forcing the rear-ending driver to swerve into a lane where cars have stopped, and the relationship between tortious conduct and the result becomes less obvious. In cyber torts, such as unfair commercial practices diverting web traffic from one e-commerce site to another, this type of real-time causation is just the first stage of analysis. The ‘information superhighway’ is not merely a metaphor: it describes the interaction of many data sources, some automated and therefore neutral, and some not.
For lawyers and judges, ever the stalwart acolytes of the cult of the ‘but for’ theory of causation, it is time to open our eyes to the work of Thomas Bayes, an 18th-century English philosopher and Presbyterian minister whose theological perspective led him to believe there was more to how things worked than was obvious or visible to the eye.
At pages 65-67 of the Manual, the authors engage in a rather sophisticated analysis of the role of the common law judge in making sense of scientific evidence. It is here that we encounter Bayes, and the notion that what appears to make sense as ‘probable’ from observed facts is not necessarily correct, or even probable. The reason is that facts may appear static when presented in court, but in real life, when the event happened, the facts were dynamic. The rear-ending driver may have been inattentive before the third driver swerved into his lane and forced him to hit the plaintiff, but would proper attention have saved him from the collision? The same type of reasoning applies when considering whether a web site would have attracted visitors – anonymous entities – to view its pay-per-click advertising, were it not for the infringement by another site.
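Bayes’s idea can be put in a single formula: the probability of a hypothesis in light of the evidence depends both on how well the evidence fits the hypothesis and on how probable the hypothesis was to begin with. The sketch below is my own illustration, with purely hypothetical numbers; it is not drawn from the Manual:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior belief in hypothesis H after seeing evidence E.

    P(H|E) = P(E|H) * P(H) / [P(E|H) * P(H) + P(E|not H) * P(not H)]
    """
    p_e = p_e_given_h * prior + p_e_given_not_h * (1 - prior)
    return p_e_given_h * prior / p_e

# Hypothetical numbers: a claim we initially believe only 10% likely,
# supported by evidence that is three times likelier if the claim is true.
print(round(bayes_update(0.10, 0.60, 0.20), 2))  # → 0.25
```

Even fairly strong evidence leaves this hypothesis at only one in four, because it started out improbable: the prior degree of belief matters as much as the evidence itself.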
To illustrate the counter-intuition involved in the Bayesian contribution to our understanding of how things happen – and how things occurred in the past – the New York Times introduced the public to a logical puzzle called the “Monty Hall Problem.” The name refers to the host of the popular television game show Let’s Make a Deal, in which participants make decisions about prizes hidden behind sliding doors.
Told in advance that a car sits behind one of three doors and a goat behind each of the other two, the contestant starts with a one-in-three chance of winning the car. The contestant begins by choosing a door. Is it advantageous to keep playing, or to settle for a fixed amount of cash offered as the deal? As the game proceeds, Mr. Hall opens one of the other doors, revealing a goat. Most of us, if only for lack of desire to devote much thought to it, will say the odds have not changed: one in three for the chosen door and for the other closed door. A pragmatic tort lawyer might say it is now a coin-toss: one in two. The Bayesian would point out that neither approach is accurate: it is advantageous for the contestant to switch to the other closed door, because it has a two-in-three chance of concealing the car.
What confounds the obvious approach is that, to make the game good television, Mr. Hall knows what lies behind each door and must open one of the other two doors hiding a goat. He cannot open the door hiding the car, or else that would eliminate the opportunity to win it. The likelihood that the two non-chosen doors both hid goats is the same as the original probability that the contestant picked the car: one in three. That means the likelihood that the car is behind one of the two unchosen doors is two in three, and the revelation of a goat behind one of them concentrates that entire likelihood on the remaining unchosen door. So the wise choice is to switch to the remaining closed door. In other words, the host’s revelation of a goat raises the likelihood that the remaining unchosen door conceals the car by 100%, from 1/3 to 2/3.
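For the sceptical reader, the arithmetic can be checked empirically. The short simulation below is my own sketch, not something from the Manual or the Times; it plays the game many thousands of times under both strategies:

```python
import random

def monty_hall(trials=100_000, switch=True, seed=1):
    """Play the Monty Hall game repeatedly; return the share of wins."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        car = rng.randrange(3)    # door hiding the car
        pick = rng.randrange(3)   # contestant's first choice
        # Host opens a door that hides a goat and was not picked.
        opened = rng.choice([d for d in range(3) if d != pick and d != car])
        if switch:
            # Move to the one remaining closed door.
            pick = [d for d in range(3) if d != pick and d != opened][0]
        wins += (pick == car)
    return wins / trials

print(monty_hall(switch=True))   # close to 2/3
print(monty_hall(switch=False))  # close to 1/3
```

Running both strategies confirms the Bayesian answer: switching wins about two times in three, staying only about one time in three.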
While this example may lead a jurist to wonder what application it has to solving legal questions in the justice system, one only has to consider how scientific evidence is presented by parties through expert witnesses. When one recalls that the burden of proof of causation in most tort cases is a matter not of certainty but of probability, one need not wonder for long. Medical malpractice, especially cases involving delayed diagnosis of cancer, very much requires a dynamic and probabilistic concept of scientific causation to reach a just conclusion. Despite efforts to standardize a high level of scepticism, such as the development of the Daubert rules, parties and their expert witnesses are uniquely positioned to know the facts on which the experts rely in coming to conclusions about cause and effect, and the facts that have been ignored, whether intentionally or through inadvertence. As presenters of cases at trial, the experts and the lawyers leading their evidence are the Monty Halls of the courtroom. The usual burden of proof is a probability of just over 50% that one thing led to another in accordance with the ‘but for’ analysis. In that context, the difference between 1/3 and 2/3 clearly has the potential to decide a case one way instead of the other.
A recent Ontario trial decision, affirmed by the Ontario Court of Appeal, illustrates the justice of employing a Bayesian or belief-based analysis over a more traditional frequentist or data-driven approach. In Goodman v. Viljoen, at para. 128, Walters J. said this about causation in the context of deciding whether a failure to administer a risk-reducing treatment caused an adverse result according to the ‘but for’ test:
In order to determine the probability that the risk of [Cerebral Palsy] is reduced, one must use the Bayesian method which uses a different definition of probability. It is an expression of the degree of belief about the unknown.
So it’s back to school, lawyers and judges. Put your ‘common sense‘ in the bottom drawer and start reading the Manual.