Why experiment?

Marketing research is fundamentally about observing, quantifying, understanding and predicting consumer behaviour. Though it may seem counterintuitive, understanding why consumers behave as they do is not essential for predicting what they will do.

Many algorithms successfully employed for years in predictive analytics are difficult or impossible to interpret; they do not display coefficients analogous to regression weights, for instance.

Understanding why people do what they do, however, can often help us make more accurate predictions and make better use of our predictions. Understanding why certain types of people are heavy (or light) users of a product category can also be very helpful in anticipating the sorts of new products they may be interested in, as well as learning how to communicate with them. There are innumerable reasons why knowing the why is useful.

“No causation without manipulation” is a statistical adage usually taken to imply that only through randomized experiments can we prove a causal effect. Its original meaning appears subtler, though. My own hard hat view is that it’s very difficult to prove causation, but that the odds favour randomized experiments over observational studies.

Experiments, however, can be highly artificial and fail to generalize to real-world conditions. Reality is a different sort of laboratory, and for some studies, field experiments are better suited. The subjects may be “WEIRD” too – Western, educated, industrialized, rich and democratic – small samples of American undergraduate psychology majors, for instance.

In marketing research, randomized experiments have been used extensively in new product development and advertising research. The central idea is that randomization minimizes the risk that the control and treatment groups were different in important ways prior to administration of the treatment, which in a marketing research context could mean asking respondents to taste test either Pepsi or Coke. (Which is the control and which is the treatment would likely depend in this example on which is the client.) Here is a short summary of the pros and cons of experimental versus non-experimental research.
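
As a purely illustrative sketch, here is what simple random assignment to the two taste-test groups might look like in Python; the respondent IDs and group sizes are made up for the example.

```python
import random

# Hypothetical respondent IDs; in practice these come from the sample frame.
respondents = [f"R{i:03d}" for i in range(1, 201)]

random.seed(42)              # fixed seed so the assignment can be reproduced
random.shuffle(respondents)

half = len(respondents) // 2
pepsi_group = respondents[:half]   # call one arm the "treatment"...
coke_group = respondents[half:]    # ...and the other the "control"

print(len(pepsi_group), len(coke_group))   # 100 100
```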

Experimental Design: Procedures for the Behavioral Sciences (Kirk) and Design and Analysis of Experiments (Montgomery) are two standard reference books on experimental designs. Experimentation is much more than tossing a coin and putting a subject in one group or another, as the table of contents of another excellent book, Design and Analysis of Experiments with R (Lawson) demonstrates.

Randomization does not guarantee that the treatment and control groups are adequately balanced on important variables. For example, we might discover that the Pepsi group had more younger males and that younger males, on average, preferred Pepsi. In this case, we can statistically adjust for this imbalance so that it does not bias our conclusions. When designing the experiment, had we reason to believe age and sex were consequential variables, we could have used a randomized block design to account for this beforehand.
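
A rough Python sketch of that idea, with an invented respondent file: randomizing to the two products separately within each age-by-sex block is the essence of a randomized block design.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(1)

# Hypothetical respondent file with the variables we believe matter beforehand.
df = pd.DataFrame({
    "id": range(200),
    "sex": rng.choice(["F", "M"], 200),
    "age_band": rng.choice(["18-34", "35-54", "55+"], 200),
})

def assign_within_block(block):
    """Randomize to Pepsi/Coke separately within one age-by-sex block."""
    arms = np.array(["Pepsi", "Coke"] * (len(block) // 2 + 1))[: len(block)]
    block = block.copy()
    block["arm"] = rng.permutation(arms)
    return block

# Blocking keeps the two arms balanced on sex and age band by design.
df = df.groupby(["sex", "age_band"], group_keys=False).apply(assign_within_block)

# Check balance: counts of each arm within each block.
print(pd.crosstab([df["sex"], df["age_band"]], df["arm"]))
```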

When there are many variables thought to be important, matching subjects (e.g., respondents) on these characteristics prior to assigning them to control or treatment is an alternative. Respondents could be matched according to how similar they are with respect to age, sex and several other variables, and one member of each matched pair randomly assigned to the treatment and the other to the control. Propensity score analysis (PSA) is an elaboration of this idea and common in non-experimental research, though also used in randomized experiments. Here is a brief overview of PSA.
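
Here is a deliberately simplified Python sketch of the propensity score idea on simulated observational data: a logistic regression estimates each case’s probability of “treatment” given the covariates, and treated cases are then greedily matched to the untreated cases with the closest scores. Real PSA involves balance diagnostics, calipers and other refinements not shown here.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Simulated observational data: 'bought' is the non-randomized "treatment";
# age and sex are the covariates we would like to balance on.
n = 500
df = pd.DataFrame({
    "age": rng.integers(18, 70, n),
    "male": rng.integers(0, 2, n),
})
p_treat = 1 / (1 + np.exp(-(0.03 * (df["age"] - 40) + 0.4 * df["male"])))
df["bought"] = rng.binomial(1, p_treat)

# Step 1: estimate propensity scores, P(treatment | covariates).
ps_model = LogisticRegression().fit(df[["age", "male"]], df["bought"])
df["pscore"] = ps_model.predict_proba(df[["age", "male"]])[:, 1]

# Step 2: greedy nearest-neighbour matching on the propensity score,
# pairing each treated case with the closest untreated case (no replacement).
treated = df[df["bought"] == 1]
controls = df[df["bought"] == 0].copy()
pairs = []
for i, row in treated.iterrows():
    if controls.empty:
        break
    j = (controls["pscore"] - row["pscore"]).abs().idxmin()
    pairs.append((i, j))
    controls = controls.drop(j)

print(f"{len(pairs)} matched pairs formed")
```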

Subjects can also be followed over time after being randomly assigned to two or more experimental groups. Latent growth curve analysis, which frequently utilizes Structural Equation Modeling, is one type of longitudinal analysis that can incorporate experimentation.
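
The sketch below is not a latent growth curve model in the SEM sense; it substitutes a simpler random-intercept mixed model (via statsmodels) on simulated panel data, purely to illustrate comparing trajectories across randomly assigned groups over time.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)

# Simulated long-format panel: each respondent is randomized to one arm at
# baseline and then rates the product at three follow-up waves.
n, waves = 100, 3
subject = np.repeat(np.arange(n), waves)
time = np.tile(np.arange(waves), n)
arm = np.repeat(rng.integers(0, 2, n), waves)      # 0 = control, 1 = treatment
rating = 5 + 0.2 * time + 0.5 * arm * time + rng.normal(0, 1, n * waves)

panel = pd.DataFrame({"subject": subject, "time": time,
                      "arm": arm, "rating": rating})

# Random-intercept growth model: the time-by-arm interaction asks whether the
# randomized groups' trajectories diverge over time.
model = smf.mixedlm("rating ~ time * arm", panel, groups=panel["subject"]).fit()
print(model.summary())
```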

You may have also heard of quasi-experiments and natural experiments, which fall in between randomized experiments and purely observational studies. Conjoint analysis is yet another wrinkle on this complex subject. In conjoint, specialized experimental designs are used to determine the combinations of product features shown to respondents. Respondents are not normally assigned to experimental groups in conjoint (though this is possible).
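
As a small illustration of the design side of conjoint, the Python sketch below enumerates the full factorial of some invented soft-drink attributes; an actual study would typically show each respondent only a fractional (e.g., orthogonal) subset of these profiles.

```python
from itertools import product

import pandas as pd

# Invented soft-drink attributes and levels for a conjoint exercise.
attributes = {
    "brand": ["Pepsi", "Coke", "Store brand"],
    "price": ["$0.99", "$1.29", "$1.59"],
    "size": ["330ml", "500ml"],
}

# Full factorial: every combination of levels (3 x 3 x 2 = 18 profiles).
full_factorial = pd.DataFrame(list(product(*attributes.values())),
                              columns=list(attributes))
print(full_factorial.head())
print(f"{len(full_factorial)} profiles in the full factorial")

# A real study would show each respondent only a fraction of these profiles,
# chosen by a specialized experimental design.
```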

Meta-analysis is an important topic related to analysis of causation, and I have written a brief overview of it here.

Causal analysis is a tough topic and experts are not in unison on all fundamental issues. Except when there is some clear deterministic mechanism involved – with a machine which has malfunctioned, for instance – the best we can do is approach causation probabilistically. Statistics is a systematic means of dealing with uncertainty through the laws of probability, thus its prominent role in science. It is the grammar of science, according to no less an authority than Karl Pearson.

The word “model” is often used in connection with causal analysis. Statisticians use this term in at least three ways. The first is as a synonym for theory. It is also used in an operational sense, such as a path diagram depicting a causal theory. Lastly, it can refer to a mathematical equation or system of equations thought to approximately represent the process which generated the data. This hypothetical process may be based on theory or, less ideally, an attempt at post hoc explanation. Statistical modelling is often used in conjunction with randomized experiments.

Statistical models are probabilistic, not deterministic. Binary logistic regression analysis, for example, estimates the probability of a customer belonging to one group or another (e.g., liked the product or not). Part of the beauty of this is that we can set the cutoff wherever it is most useful – to focus on high-probability customers, for example.
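
A minimal sketch of that idea, using scikit-learn on simulated data; the predictors and the 0.5 and 0.7 cutoffs are arbitrary choices for illustration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(3)

# Simulated data: age and prior category spend predicting whether a
# respondent liked the product (1) or not (0).
X = np.column_stack([rng.integers(18, 70, 300), rng.gamma(2.0, 50.0, 300)])
p_like = 1 / (1 + np.exp(-0.04 * (X[:, 0] - 40)))
y = (rng.random(300) < p_like).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
probs = model.predict_proba(X)[:, 1]      # estimated P(liked the product)

# The model returns probabilities, not verdicts; we choose the cutoff.
# A higher cutoff focuses follow-up on the most likely "likers".
for cutoff in (0.5, 0.7):
    print(cutoff, int((probs >= cutoff).sum()), "respondents above cutoff")
```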

Some might wonder what a simple taste test has to do with causal analysis. There are at least two connections. Most fundamentally, if we conclude Pepsi scored better than Coke, we are saying that it is the treatment (Pepsi or Coke) that caused the differences in the ratings. In medical research, it might be the effectiveness of pain relievers which is being assessed, but the logic and basic methodology are the same.
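
Assuming simple random assignment, the core analysis can be as plain as a two-sample comparison of the ratings; the sketch below uses simulated ratings and SciPy’s t-test purely to show the logic.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)

# Simulated ratings from the two randomly assigned taste-test groups.
pepsi_ratings = rng.normal(7.2, 1.5, 100)
coke_ratings = rng.normal(6.8, 1.5, 100)

# With random assignment, a reliable difference in means is attributed to
# the treatment itself rather than to pre-existing group differences.
t_stat, p_value = stats.ttest_ind(pepsi_ratings, coke_ratings)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```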

At a more sophisticated level, we may wish to dig deeper and try to understand why Pepsi was preferred overall. There are many statistical methods which can be used to answer this question. A generic term for this is key driver analysis and we can make use of product features, respondent ratings of product features, respondent characteristics, or any combination of the three.
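
One very simple form of key driver analysis is to regress overall liking on standardized attribute ratings and read the coefficients as rough importances. The sketch below does exactly that on simulated data; in practice, correlated drivers and relative-importance methods complicate the picture.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(5)

# Simulated attribute ratings and an overall liking score per respondent.
n = 300
ratings = pd.DataFrame({
    "sweetness": rng.normal(7, 1, n),
    "fizziness": rng.normal(6, 1, n),
    "aftertaste": rng.normal(5, 1, n),
})
overall = 0.6 * ratings["sweetness"] + 0.3 * ratings["fizziness"] + rng.normal(0, 1, n)

# Standardize everything, regress overall liking on the attribute ratings,
# and read the coefficients as rough driver importances.
X = (ratings - ratings.mean()) / ratings.std()
y = (overall - overall.mean()) / overall.std()
drivers = pd.Series(LinearRegression().fit(X, y).coef_, index=ratings.columns)
print(drivers.sort_values(ascending=False))
```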

In our daily lives, we all frequently make informal causal statements such as “I like this supermarket because it’s on the way home from work.” This is part of being human and we tend to make causal statements casually. Researchers need to be on guard for this when they head to work.

If you’d like to learn more about causal analysis, the gentlest introductions I know are The Halo Effect (Rosenzweig) and Bit by Bit: Social Research in the Digital Age (Salganik). Causation: The Why Beneath The What, a short interview with Harvard epidemiologist Tyler VanderWeele, and Causation in a Nutshell may also be helpful. Causation Matters reproduces a classic paper by Sir Austin Bradford Hill. It is surprisingly non-technical and very much worth reading.  

Three other books I can recommend are Experimental and Quasi-Experimental Designs (Shadish et al.), Observation and Experiment (Rosenbaum), and Mastering ‘Metrics (Angrist and Pischke). There are many advanced books, and I have listed several here under Research Design and Causal Analysis and a few other places on this page.

Like most anything, new technology has upsides and downsides. One questionable practice is to design computer simulations based on a theory and then use the results of these simulations to “prove” the theory. I’m unaware of a substantive defence of this practice, which is a clear example of circular reasoning.

The notion that big data has removed the need for theory and experimentation made a splash in the popular media about a decade ago and is another example of sloppy reasoning. Anyone who has worked with data for very long knows that more data by itself does not mean more useful information. Many large data files are heavily imputed and error-ridden. Furthermore, even with high-quality data, it’s quite easy to find something that isn’t really there, as explained in Stuff Happens.
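
A quick simulation makes the point: screen enough pure-noise variables against an unrelated outcome and some will look “significant” at conventional thresholds.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Pure noise: 1,000 "respondents", 50 candidate drivers, no real relationships.
data = rng.normal(size=(1000, 50))
outcome = rng.normal(size=1000)

# Screen every candidate against the outcome at the conventional 5% level.
p_values = [stats.pearsonr(data[:, j], outcome)[1] for j in range(50)]
false_hits = sum(p < 0.05 for p in p_values)
print(f"{false_hits} 'significant' drivers found where none exist")
```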

More than once I’ve heard someone say they hadn’t seen a particular method used in marketing research, with the implication being this method doesn’t work or isn’t useful in marketing research. Some of these discussions have involved experimental designs and other aspects of causal analysis.

If we are to call ourselves researchers, though, we have to look outside marketing research for new ideas. After all, it wasn’t that long ago when experimentation wasn’t used in any field. It does work and helps us be more effective marketing researchers. If the quality doesn’t matter in marketing research, then marketing research doesn’t matter.
