Who Cares About Evidence?


Why bother with evidence? Because it improves the odds that what we believe is actually true. But not always.

A close contact of mine who studies the sociology of science recently commented that, when the data do not support the researcher’s hypothesis, all too often it is the data that are rejected. In the political realm, logic and evidence are routinely subordinated to belief and ideology or, in the more elegant words of Mary Wollstonecraft: “But what a weak barrier is truth when it stands in the way of an hypothesis!”

Science, by the usual definitions, does not appear to come naturally to human beings, and most of us struggle with it in school. We've put men on the moon, but it took us tens of thousands of years to get there. Moreover, we want the world to be… well, the way we want it to be. Damn the evidence! This is understandable, especially when science is shrouded in mumbo-jumbo and math, which most of us also hate.

Many of you reading this are marketing researchers and are perhaps thinking, of course, we all know this. But when we recommend that a client cut prices by ten percent, advise them that their new product would probably flop if launched, or tell them which consumers they should target, we don't say this off the cuff. We offer evidence to support our recommendations.

However, not all evidence is equal. Evidence differs in both quantity and quality. Recommending that a client make a decision potentially involving millions of dollars should normally require more than a single number or the opinion of a moderator based on a couple of focus groups. So how can clients tell how much, or what kind of, evidence is enough? As far as I know, there is no simple answer.


Let’s go back to the drawing board and start from the beginning. There are situations in which we know with certainty, or near certainty, that something will or will not happen. We peek outside our window, see clear blue skies all around, and know it won’t rain in the next five minutes. There are immutable physical laws governing this. There are also circumstances in which we don’t know with certainty what will happen but can estimate the outcome pretty accurately. Say the baseball team we’re rooting for is up 5-0 in the 9th inning. We can look up historical data and, if we’re clever and quick enough, estimate the probability that our team will win the game. We won’t be right all the time, but we can beat a coin toss most of the time.
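To make that second kind of situation concrete, here is a minimal sketch in Python of the back-of-the-envelope estimate just described. The data file, column names and function are hypothetical; the point is simply that an empirical win rate computed from historical games can stand in for a probability when no physical law settles the question.

```python
import csv

def empirical_win_prob(path, lead=5):
    """Share of historical games the leading team went on to win, given this
    lead entering the 9th inning. Assumes a CSV with columns
    'lead_entering_9th' and 'leader_won' (1 or 0) -- both hypothetical."""
    wins, games = 0, 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if int(row["lead_entering_9th"]) == lead:
                games += 1
                wins += int(row["leader_won"])
    return wins / games if games else None

# Hypothetical usage:
# print(empirical_win_prob("historical_games.csv", lead=5))
```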

Finally, there are situations in which we have no pertinent data and there are no physical laws driving what will or will not happen. For instance, say a New Product Development person has a rough idea for a radically new kind of product, and we must decide whether to pursue it or not. In this case we are ignorant, in the parlance of decision theorists, because we have no applicable data or benchmarks at this juncture. It’s in these sorts of situations where using our gut instinct – our unconscious intelligence in the words of psychologist Gerd Gigerenzer – is our only real option. We may decide to commission marketing research or we may shelve the idea.

The first type of decision is very straightforward, and decisions that are essentially clerical in nature are increasingly being automated. If the decision rules are clear and unambiguous, humans aren’t required except perhaps as auditors and to ensure bugs haven’t crept into the software. For the second type of decision, where things are not as clear cut but probable outcomes can be estimated, analytics of various kinds, including Artificial Intelligence (AI), can be very helpful. Over time, they may reduce the need for human decision makers but this is hotly debated.
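As a toy illustration of that first, rules-based case, a clerical decision with clear and unambiguous rules reduces to a few lines of code, with humans needed only to audit the rule and the software. Everything in this sketch is hypothetical: the policy, the thresholds and the function name.

```python
def approve_refund_automatically(amount: float, days_since_purchase: int) -> bool:
    """Auto-approve small, recent refunds; route everything else to a human.
    The thresholds are illustrative policy choices, not recommendations."""
    AUTO_APPROVAL_LIMIT = 100.00   # hypothetical dollar threshold
    RETURN_WINDOW_DAYS = 30        # hypothetical return window
    return amount <= AUTO_APPROVAL_LIMIT and days_since_purchase <= RETURN_WINDOW_DAYS

# Hypothetical usage:
# approve_refund_automatically(49.99, 12)   # True: auto-approved
# approve_refund_automatically(250.00, 5)   # False: routed to a person
```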

Let’s now turn to the third decision type. This last kind of decision is what we’re naturally best equipped to deal with by virtue of our genetics. There are no definitive rules and we lack the data to estimate probable outcomes with satisfactory accuracy. Here, gut instinct – the product of millions of years of evolution – cannot be replaced. AI cannot find and analyze data that do not exist. This is the kind of decision we most often encounter in real life, the world outside of blogs, sales pitches, academic papers and textbooks.

Different sorts of evidence come into play in these very dissimilar situations. In the first instance, we only need the data required by the decision rule. The second is less clear – what we need depends on our model, which we have previously judged adequate for our purposes, given the data we have or can obtain. However, developing that model normally requires many decisions, and there are typically competing models we could have used instead. Furthermore, what is “adequate”? Decisions such as these cannot be automated and, as statisticians know, a considerable amount of judgement is required in data analysis.

This leads us back to the third type of decision process. We cannot calculate or estimate the outcome but somehow sense the appropriate course of action, which may be not to make a decision at all. Veteran marketing researchers know that major corporate decisions often fall into this category (though there may be sleight of hand performed with numbers and analytics). It is also the natural habitat of politicians. These gut-feel decisions aren’t just about which pizza toppings to order or what to watch on TV tonight. Although I’m a marketing science person, I concede that there are situations in which analytics just get in the way. There is no point in trying to quantify illusions. We have to make a decision, and all we really have to go by is our personal or collective gut. Analysis paralysis wastes time, and time, after all, is money.

As you’ll have already surmised, many decisions are multifaceted and combine two or more of these decision processes. Though I am not an attorney, my impression is that legal decisions are mostly of the first and third types. There are also circumstances in which the outcome itself is a matter of dispute. Is this what we really want? Would that outcome be right? Opinions may clash as to what the best decision is, and ethical considerations might weigh heavily. There may be no right or wrong answer, in other words, and AI (or consultants like me) cannot come to the rescue.

The first and third types are the ones we’re generally most comfortable with. Engineers trust their equations and salespeople trust their gut. The second kind of decision process is hazy and a no man’s land for many of us. It is the natural habitat of statisticians, who must learn to cope with ambiguity and turn it to their advantage in order to thrive.

Hence, evidence matters some of the time but not all of the time. Moreover, humans frequently misconstrue conjecture as evidence. We also readily reject evidence that contradicts our opinions, and cherry-pick data and analytics to support decisions we’ve already made. More data and more sophisticated analytics will not change the world overnight.

This brief article has been an enormous simplification of a complicated topic that straddles philosophy, statistics and countless other disciplines. I haven’t even quoted Donald Rumsfeld, but hope you’ve found it fun and useful!

