I have never considered marketing or marketing research to be as rigorous as aerospace engineering. Even randomized controlled trials, generally considered the gold standard of medical research, cannot consistently achieve that level of scientific rigor.
Few scientific fields or scholarly disciplines can measure up to rocket science, and there are many reasons why marketing seldom does. Though there are some law-like patterns to consumer behavior – for example, price increases usually reduce demand – they cannot be expressed as formulas in the manner of scientific laws.
Causal relationships in general are much tougher to establish in the behavioral and social sciences than in engineering. One reason is that experiments involving humans are usually more difficult to conduct and the results harder to generalize beyond laboratory conditions. Human behavior can also be highly complex and involve many variables. Neuroscience and genomic research are in their infancy, and environmental variables such as those related to upbringing, which influence consumer behavior, aren't in our databases.
Data are usually incomplete and tell us only part of the story. Omitted variables are the rule rather than the exception. Thankfully, we don’t yet inhabit the surveillance dystopias predicted by some! Integrating data from disparate sources typically involves a considerable amount of data imputation, and even “official” data sources contain errors. See Data Matching (Christen) for an in-depth look into this subject.
Measurement error is often a significant problem. Consumer surveys, for example, can only provide an approximate picture and are best interpreted directionally. Obtaining a truly representative sample of a broad population of consumers is also highly problematic. Qualitative research should be treated with even more caution, though I have seen important decisions made on the basis of a few focus groups.
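To see why survey results are best read directionally, consider sampling error alone, before we even account for non-response, question wording, or coverage problems. The sketch below (the sample size and proportion are hypothetical, for illustration) computes the familiar normal-approximation margin of error for a survey percentage:

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of error for a survey proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A hypothetical 400-person survey in which 55% say they prefer brand A:
moe = margin_of_error(0.55, 400)
print(round(moe, 3))  # prints 0.049, i.e. roughly +/- 5 points
```

So "55% prefer brand A" really means "somewhere around 50% to 60%, if the sample is representative" — and that last clause is a big if, since this formula captures only random sampling error, not the systematic errors discussed above.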
There are relatively few statisticians and others with strong backgrounds in "hard" science working in marketing and marketing research. A consequence is that cognitive habits and technical skills from these disciplines have not yet diffused deeply into marketing and marketing research. That said, people from those fields sometimes struggle in the behavioral and social sciences precisely because human behavior cannot be calculated. However we achieve it, we need a better balance of people skills and analytical skills.
Setting these caveats aside, marketers in many organizations now have substantial amounts of data they can access or acquire at little or no cost. Computing power has grown enormously in the past two decades, and a vast array of analytic tools is now at our disposal. Using these data and analytic techniques wisely, however, is another matter, and no small amount of marketing research is pretty shoddy in my opinion. We can do better.
However, it’s important to bear in mind that even “hard” science involves considerable amounts of human judgement, and the length and quality of experience of scientific researchers are also relevant. This is why two scientists in any field can look at the same data or statistical model and draw very different conclusions.
So what separates sound science from sloppy science? There are no indisputable rules, though the NIH guidelines I linked earlier list some quite strict procedures for health research. Causation Matters references a classic paper by eminent statistician Sir Austin Bradford Hill that I highly recommend reading.
As I mentioned, I believe marketers and marketing researchers can do better. We will never be rocket scientists but, again, this is a very high bar and we don’t have to reach that lofty standard to advance our professions. Some dubious practice should be relatively easy to fix. Many bad habits, though, have persisted for years, such as setting KPIs that have no empirical relationship to the bottom line! Bad practice endures, in part, because of outdated incentive structures and other management problems that have no direct connection to science.
Another key to making marketing and marketing research more scientific is statistical thinking which, among other things, involves asking ourselves a series of questions such as these:
- Have I become too ego-involved with the subject I’m researching and compromised my objectivity?
- Is my hypothesis internally consistent?
- Do I have empirical evidence to support my hypothesis? Have I looked at all the relevant empirical evidence?
- Are there rival explanations I haven’t considered?
- Am I confusing the possible with the plausible or the plausible with fact?
- Are patterns I’ve observed in the data likely to be real, or merely due to chance? What might have caused these patterns, if real?
- Are there unobserved variables or other confounders I haven’t accounted for that may have caused these patterns?
- Am I confusing cause with effect, or correlation with causation?
- Am I drawing conclusions about fruit based only on apples?
- Are my data of sufficient quality to justify the inferences I’ve made from them? Have I properly accounted for sample design, non-response, missing data, measurement error, statistical assumptions and other potential effects?
- Given A and B, if I do C and D, what are the likely outcomes? What are the likely outcomes of those outcomes?
- Am I asking the right questions?
- Are there other questions I should be asking?
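One of the questions above — are the patterns I've observed likely to be real, or merely due to chance? — can be made concrete with a simple permutation test. The sketch below is an illustration only; the promotion names and sales figures are invented. It asks: if group labels were assigned at random, how often would we see a difference at least as large as the one observed?

```python
import random

def permutation_test(a, b, n_iter=10_000, seed=0):
    """Return the share of random relabelings that produce a mean difference
    at least as large as the observed one (an approximate p-value)."""
    rng = random.Random(seed)
    observed = abs(sum(a) / len(a) - sum(b) / len(b))
    pooled = a + b
    count = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        perm_a, perm_b = pooled[:len(a)], pooled[len(a):]
        diff = abs(sum(perm_a) / len(perm_a) - sum(perm_b) / len(perm_b))
        if diff >= observed:
            count += 1
    return count / n_iter

# Hypothetical weekly sales under two promotions:
promo_a = [12, 15, 14, 13, 16, 15]
promo_b = [11, 13, 12, 14, 12, 13]
p = permutation_test(promo_a, promo_b)
print(p)
```

A small p-value suggests the pattern is unlikely to be pure chance — but, as the other questions on the list remind us, "not chance" is not the same as "caused by the promotion": confounders, reversed causation, and data-quality problems remain on the table.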
Some more thoughts on how we can do this can be found in these brief articles:
- Research Thinking: Some tips on how to do better marketing research more easily
- Putting It All Together
- Missing Links
- Who Cares About Evidence?
- How to Save Marketing
- How Brands Don’t Grow
- Causal Analysis: The Next Frontier in Analytics?
- Data, Analytics and Decisions: How thinking like a scientist can help you make better decisions
- What Makes a Good Analyst?
- What Does Marketing Science Bring to the Table?
- Anyone Can Do Marketing Research
Other short posts related to this topic can be found here.
In closing, I should mention that Philosophy of Social Science (Rosenberg) is a delight to read and a penetrating look at how the social and behavioral sciences differ from other fields. Theory Construction and Model-Building (Jaccard and Jacoby), Experimental and Quasi-Experimental Designs (Shadish et al.) and Methods of Meta-Analysis (Schmidt and Hunter) may also be of interest to marketing scientists.