Random thoughts on questionable thinking, habits and practices in marketing research. I'm tongue-in-cheek in places, but this is meant to be taken seriously.
- Thinking that Statistical Science is mainly math and programming or, even worse, little more than point-and-click software like Word.
- Sweatshop statistics…cranking out zillions of models in mechanical fashion with only cursory checking and little effort to see if they make sense. Caveat Emptor.
- Performing sophisticated modeling without first clarifying the objectives of the research. This is potentially very costly and “no time” is no excuse.
- Diving into advanced analytics…and then realizing that we don’t have the data we need because of sloppy research design. Sophisticated analytics usually works best when designed into the research.
- Asking consumers questions regarding their purchase behavior that are so detailed that no human could be expected to answer them accurately…and then claiming that survey research “doesn’t work.”
- Asking respondents to rate long lists of values, attitudes and lifestyle statements (“psychographics”) that make little sense to most people and are unrelated to past, current or future consumer behavior…and then claiming that survey research “doesn’t work.”
- Reporting that adding a slew of independent variables to a regression model "increased the R square by 25%" (for example). For one thing, adding even a single predictor will nearly always increase the model R square; the adjusted R square, which penalizes model complexity, is usually more meaningful. For another, R square is a proportion, so was this 25% meant to imply an increase from .25 to .50, for example, or a less spectacular improvement from, say, .25 to .31? A small illustration follows this list.
- Not considering how response patterns in survey research differ by national culture. A 50% top 2 box score might be pretty good in some countries but pretty lousy in others.
- Mixing apples and oranges, e.g. “Age has more impact on purchase frequency than gender.” Does this mean that including age in a model improves model fit more than adding gender does? If so, then we should say so.
- Reporting standardized regression coefficients for dummy variables. It's hard to think of a case where interpreting a 0/1 indicator in standardized units is meaningful. More to the point, we aren't truly standardizing, since the variance of a proportion shrinks as it moves away from .5 towards 1 or 0, so what counts as "one standard deviation" depends on how the sample happens to split. See the second sketch after this list.
- Interviewing a small slice of consumers based on preconceptions – not evidence – that they are “the target.” This runs up costs and can produce results that seriously mislead decision makers.
- Not creating standard questions and questionnaire templates for studies that will be repeated frequently in the future. Constantly re-inventing the wheel is inefficient and leads to inconsistent quality.
- Saying “correlation is not causation” without knowing what this actually means.
- Ignoring results we don’t like.
- Saying “We already knew this!” when we like the results.
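On the R square point above, here is a minimal sketch using made-up data (the simulated predictor and the ten noise columns are purely hypothetical, not from any real study). It shows plain R square creeping upward when irrelevant predictors are added, while adjusted R square penalizes the extra complexity.

```python
# Hypothetical illustration: plain R-square rises when pure-noise predictors
# are added, while adjusted R-square accounts for the added complexity.
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.normal(size=n)
y = 0.5 * x + rng.normal(size=n)            # one genuine predictor plus noise

def r_squares(X, y):
    """Return (R^2, adjusted R^2) for an OLS fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    ss_res = resid @ resid
    ss_tot = ((y - y.mean()) ** 2).sum()
    r2 = 1 - ss_res / ss_tot
    k = X.shape[1] - 1                       # number of predictors
    adj = 1 - (1 - r2) * (len(y) - 1) / (len(y) - k - 1)
    return r2, adj

print("1 real predictor:      R2=%.3f  adj R2=%.3f" % r_squares(x, y))
noise = rng.normal(size=(n, 10))             # ten irrelevant predictors
print("plus 10 noise columns: R2=%.3f  adj R2=%.3f"
      % r_squares(np.column_stack([x, noise]), y))
```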
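And on the dummy-variable point, a second small sketch. The standard deviation of a 0/1 variable is the square root of p(1 - p), so a "one standard deviation" change shrinks as the split moves away from 50/50, which is why standardized coefficients for dummies mix the effect of the variable with how the sample happens to split.

```python
# The SD of a 0/1 dummy is sqrt(p * (1 - p)): largest at p = .5 and shrinking
# toward 0 or 1, so "one standard deviation of the dummy" is not a fixed,
# interpretable quantity.
import numpy as np

for p in (0.5, 0.7, 0.9, 0.99):
    sd = np.sqrt(p * (1 - p))
    print(f"share coded 1 = {p:.2f}  ->  SD = {sd:.3f}  "
          f"(a '1 SD' change covers only {sd:.3f} of the 0-to-1 jump)")
```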
We’re all guilty of at least some of the above some of the time…
Here are the links to Part 2 and Part 3 of the series.
I’m not a university professor – just a lunch pail guy – but I hope you’ve found this useful. I’m sometimes honored and flattered to be asked for career advice. The best I can do is quote one of my own mentors, David McCallum: “Be a jack of all trades and a master of at least two.”
Kevin Gray is President of Cannon Gray, a marketing science and analytics company.