One of the arguments against the validity of the Bayesian approach is that, in some cases, it’s hard to pin a precise number on a probability. When we can find good, relevant statistics to back up a likelihood assessment, this isn’t much of an issue. But in the real world, we sometimes don’t have good numbers – or we have numbers for something similar to what we’re looking at, but not quite the same thing.

So what happens in those cases when we have to estimate the inputs?

Every Number Must Be Based on Reliable Sources and Solid Reasoning.

First of all, even though some of the likelihood inputs in Rootclaim analyses are estimates, each likelihood assessment still has to be broken down, with the reasoning behind it explained using hard evidence, statistics, and reliable sources – these numbers don’t just appear out of thin air.

Because Rootclaim is an open platform, the inputs can also be continually fine-tuned by the crowd. As soon as someone brings more solid sources, more precise data, or more rigorous reasoning for a specific likelihood assessment, that number gets updated and the outcome recalculated. The more the crowd contributes, the better the analysis becomes.

Rootclaim inputs are not based on a survey or polling of what the crowd thinks, but rather on the validity of the arguments and the reliability of the sources.

Using Numbers Means We Can Talk about Likelihoods in a Concrete Way.

Think of it this way: in a written opinion piece, every time we say something “probably happened,” we’re effectively claiming that the event has some likelihood greater than 50% – but we’re leaving the reader to guess how much greater. By pinning a number on an event – even one we have to estimate – we can actually discuss likelihoods in concrete, meaningful terms.

Opinion pieces stay vague; a Rootclaim analysis expresses exactly how its results came about, in a way that lets individual pieces of the puzzle be combined into a meaningful bigger picture.
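To make that combination concrete, here is a minimal sketch of how a prior and a set of likelihood assessments produce a conclusion, using the odds form of Bayes’ theorem. Every number below is invented for the example – it is not taken from any actual Rootclaim analysis.

```python
# Minimal sketch of Bayesian evidence combination in odds form.
# All numbers are invented for illustration; none come from an
# actual Rootclaim analysis.

def posterior_probability(prior: float, likelihood_ratios: list[float]) -> float:
    """Update a prior probability with a series of likelihood ratios.

    Each ratio is P(evidence | hypothesis) / P(evidence | alternative).
    Working in odds form lets independent pieces of evidence simply
    multiply together.
    """
    odds = prior / (1.0 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1.0 + odds)

prior = 0.5                       # start undecided between the hypotheses
evidence = [3.0, 0.8, 5.0, 2.0]   # hypothetical likelihood ratios, one per piece of evidence

print(f"Posterior: {posterior_probability(prior, evidence):.0%}")  # prints "Posterior: 96%"
```

Notice how the vague “probably” of an opinion piece becomes an explicit 96% – a number that anyone can inspect, challenge, and refine, one likelihood ratio at a time.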

Breaking It Down into Smaller Pieces Leaves Less Room for Error.

Another important reason the results of a Bayesian analysis remain valid even when some of the inputs are not based on exact statistics is that the whole is greater (or in this case, more accurate) than the sum of its parts.

Breaking an analysis down into smaller and smaller pieces removes more and more of the potential for error. Instead of generalizing about high-level concepts, we can get specific. This makes it harder for subconscious biases to leak into our estimates and taint the result, and it makes it more likely that people with opposing opinions on the “big question” will be able to agree on likelihood assessments for smaller, narrower questions.

So as a whole, the analysis ends up with far more accurate results than any of the individual pieces that went into it. Once we combine many individual estimates of the small pieces of the puzzle, we can typically paint a much clearer and more accurate picture of the larger story.

And since there are typically many pieces of evidence, slight inaccuracies in the individual estimates are merely noise that doesn’t distort the music of the big picture.
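A quick simulation illustrates this noise argument. It rests on two assumptions (assumptions, not something proven here): the evidence items are independent, and the estimation errors are unbiased – off in both directions rather than systematically skewed. Under those conditions, errors in individual log-likelihood-ratio estimates partially cancel when combined:

```python
import random

# Sketch of the "noise averages out" claim, under two simplifying
# assumptions: evidence items are independent, and estimation errors
# are unbiased (random in both directions, not systematically skewed).

random.seed(0)
N_EVIDENCE = 20    # pieces of evidence in the analysis
N_TRIALS = 10_000  # simulated analyses
NOISE = 0.5        # each log-likelihood-ratio estimate is off by up to +/-0.5

# Hypothetical "true" per-evidence log-likelihood-ratios.
true_llrs = [random.uniform(-1.0, 2.0) for _ in range(N_EVIDENCE)]
true_total = sum(true_llrs)

errors = []
for _ in range(N_TRIALS):
    # Each estimate is the true value plus unbiased noise.
    estimated_total = sum(llr + random.uniform(-NOISE, NOISE) for llr in true_llrs)
    errors.append(estimated_total - true_total)

avg_abs_error = sum(abs(e) for e in errors) / N_TRIALS
print(f"True combined log-likelihood-ratio: {true_total:.2f}")
print(f"Average error in the combined total: {avg_abs_error:.2f}")
# The combined error grows roughly with sqrt(N_EVIDENCE), while the
# combined signal grows with N_EVIDENCE, so relative to the total,
# per-estimate noise washes out as evidence accumulates.
```

The caveat is the unbiased-error assumption: a systematic bias in the estimates would not cancel out. That is exactly why each estimate must still be grounded in evidence and sources, and why keeping the inputs open to crowd correction matters.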