Hard Hat Stats: Some bad habits we’ve gotten ourselves into (Part 3)

Random thoughts on questionable thinking, habits and practices in marketing research. I’m tongue-in-cheek in places, but this is meant to be serious.

  1. Trying to do a bicycle kick when we don’t know how to trap the ball properly…The best professional athletes usually excel at the basics, too!
  2. Along the same lines, the very human tendency to conclude we don’t need to know things we don’t like or find difficult…I used to think hierarchical (aka multi-level) models were invented just to keep academics busy, but after I learned more about them and started using them I realized I was wrong (a small sketch follows this list). At a more fundamental level, probability and sampling are two subjects that are yawners for most MR people, but without at least a basic knowledge of them, costly errors will be made…and career doors will be closed.
  3. Tossing around technical jargon to impress clients or superiors without knowing what it means. “You can fool some of the people, some of the time…” It also wastes a lot of time and can lead to serious misunderstandings.
  4. Not recognizing that all variables in a principal components (“factor”) analysis have at least some weight in the computation of factor scores. As an illustration, say only two variables out of 30 load heavily on a factor and we name that factor based on these two variables. In reality, they may account for only a small percentage of the variance of that factor’s score, so the label we gave the factor is misleading and the conclusions we draw from the analysis may be way off the mark (see the sketch after this list). This is a very common mistake in MR.
  5. Not being aware that factor analysis is not the same as principal components analysis, and that varimax isn’t the only useful kind of rotation (a brief comparison follows this list).
  6. Thinking that brand mapping is just correspondence analysis with our software’s default settings – essentially a clerical task. Another common mistake. Changing the settings may have a substantive impact on the map, and there are many good ways to do mapping besides correspondence analysis.
  7. Claiming that “traditional” stats can’t handle non-linearities and interactions and that you need “AI” or machine learning in these circumstances. While it is true that in many regression models the right-hand side of the equation does not include terms to account for non-linearities and interactions, this is the modeler’s choice and not an inherent limitation of these methods (see the example after this list).
  8. Conversely, statisticians pooh-poohing machine learning methods developed in computer science and other fields without having really studied them or tested them on actual data (a quick benchmark sketch follows this list). Statisticians have prejudices too.
  9. Using statistical methods designed for cross-sectional data on time-series data. Not necessarily the end of the world, but not best practice; autocorrelated errors, for instance, can make naive standard errors badly misleading (a short illustration follows this list). If you’re new to time-series analysis, here’s a quick primer.
  10. Over-interpreting model results, e.g., “If we increase consumer ratings of Attributes B, D and H by X percent, we’ll improve our profitability by Y percent.” This is Science Fiction, not Statistical Science.
  11. Not realizing that how a product is marketed is usually at least as important as the product itself! A volumetric forecasting method, for example, that does not have a sophisticated market model component and does not include detailed marketing scenarios in its simulations will only be able to provide very rough forecasts. There are now quite a few people in marketing research (or related positions) who lack a basic understanding of marketing and consumer behavior, and I think we should be worried about this.  
  12. Similarly, thinking that marketers have ever perceived consumers as perfectly rational decision makers. Once upon a time, some academic economists might have believed this, but we weren’t listening to them. When was the last time you saw an ad that merely listed functional benefits and price?
  13. Failure to appreciate how huge a discipline Statistical Science is, or how rapidly it is developing. See the Journal of the American Statistical Association for examples. Automating stats (and machine learning) is getting harder, not easier, AI notwithstanding.
  14. Assuming bigger data means better data. A biased sample stays biased no matter how large it gets (a tiny simulation follows this list).
  15. Forgetting that Facebook, Twitter, Google and online retailers are middlemen.
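
A few quick sketches in Python to back up some of the points above. The data are simulated and the variable names are made up, so treat these as illustrations rather than recipes.

For point 2, a minimal random-intercept multilevel model using statsmodels, with hypothetical respondents nested in markets:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n, n_markets = 200, 20
market = rng.integers(0, n_markets, n)           # which market each respondent is in
market_effect = rng.normal(0, 1.0, n_markets)    # unobserved market-level differences
ad_spend = rng.normal(0, 1, n)
rating = 3 + 0.5 * ad_spend + market_effect[market] + rng.normal(0, 1, n)
df = pd.DataFrame({"rating": rating, "ad_spend": ad_spend, "market": market})

# random intercept for each market; respondents within a market share it
fit = smf.mixedlm("rating ~ ad_spend", df, groups=df["market"]).fit()
print(fit.summary())
```

Ignoring the grouping and running plain OLS here can misstate the uncertainty in the estimates, which is exactly the kind of costly error point 2 is about.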
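
For point 4, a sketch showing that a component score draws weight from all the variables, not just the two we might name it after. Here one latent dimension loads strongly on 2 of 30 items and weakly on the rest:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(1)
n, p = 1000, 30
f = rng.normal(size=(n, 1))            # one latent dimension
loadings = np.full(p, 0.3)
loadings[:2] = 0.8                     # 2 "star" items, 28 weak ones
X = f * loadings + rng.normal(scale=0.7, size=(n, p))

pca = PCA(n_components=1).fit(X)
w = pca.components_[0]                 # the weights used to build the score
share_top2 = np.sum(w[:2] ** 2) / np.sum(w ** 2)
print(f"squared-weight share of the two 'naming' items: {share_top2:.0%}")
```

In this setup roughly two-thirds of the squared weight comes from the 28 “weak” items, so a label built on the two star items alone would misdescribe what the score actually measures.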
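
For point 5, principal components and factor analysis side by side on data generated from two latent factors. PCA decomposes total variance; FA models shared variance and estimates a unique variance for each item:

```python
import numpy as np
from sklearn.decomposition import PCA, FactorAnalysis

rng = np.random.default_rng(2)
F = rng.normal(size=(1000, 2))                       # two latent factors
L = rng.uniform(0.4, 0.9, size=(8, 2))               # loadings of 8 observed items
X = F @ L.T + rng.normal(scale=0.6, size=(1000, 8))  # plus unique noise

pca = PCA(n_components=2).fit(X)
fa = FactorAnalysis(n_components=2, rotation="varimax").fit(X)

print("PCA weights:\n", pca.components_.T.round(2))
print("FA loadings:\n", fa.components_.T.round(2))
print("FA unique variances:", fa.noise_variance_.round(2))
# scikit-learn offers only orthogonal rotations ("varimax", "quartimax");
# oblique rotations such as promax or oblimin need a dedicated package
# (e.g., factor_analyzer in Python or psych in R).
```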
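
For point 7, ordinary regression handling a curve and an interaction just fine, provided the modeler writes them into the equation:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)
n = 400
price = rng.uniform(1, 5, n)
promo = rng.integers(0, 2, n)
# the "true" process is non-linear in price and has a price-by-promo interaction
sales = (50 - 6 * price + 0.5 * price**2 + 8 * promo
         - 2 * price * promo + rng.normal(0, 2, n))
df = pd.DataFrame({"sales": sales, "price": price, "promo": promo})

# price * promo expands to both main effects plus the interaction;
# I(price**2) adds the quadratic term
fit = smf.ols("sales ~ price * promo + I(price**2)", data=df).fit()
print(fit.params.round(2))
```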
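
And for point 8, the remedy is the same in both directions: test the methods on data. A five-fold cross-validated comparison on a standard benchmark with built-in non-linearities and an interaction:

```python
from sklearn.datasets import make_friedman1
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

X, y = make_friedman1(n_samples=500, noise=1.0, random_state=4)

for name, model in [("linear regression", LinearRegression()),
                    ("gradient boosting", GradientBoostingRegressor(random_state=4))]:
    scores = cross_val_score(model, X, y, cv=5, scoring="r2")
    print(f"{name}: mean CV R^2 = {scores.mean():.2f}")
```

On a different data set the ranking could easily flip, which is the point: let the data, not prejudice, decide.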
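
For point 9, a look at what goes wrong when a cross-sectional habit meets autocorrelated errors, and one common repair (HAC standard errors); modeling the error process directly with ARIMA-type methods is another route:

```python
import numpy as np
import statsmodels.api as sm
from statsmodels.stats.stattools import durbin_watson

rng = np.random.default_rng(5)
n = 200
e = np.zeros(n)
for t in range(1, n):                  # AR(1) errors: each period's shock carries over
    e[t] = 0.8 * e[t - 1] + rng.normal()
x = np.arange(n) / n
y = 1.0 + 2.0 * x + e

ols = sm.OLS(y, sm.add_constant(x)).fit()
print(f"Durbin-Watson: {durbin_watson(ols.resid):.2f}")  # ~2 would mean no autocorrelation

# Newey-West (HAC) standard errors account for the serial correlation
hac = sm.OLS(y, sm.add_constant(x)).fit(cov_type="HAC", cov_kwds={"maxlags": 5})
print(f"slope std. error, naive vs HAC: {ols.bse[1]:.2f} vs {hac.bse[1]:.2f}")
```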
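
Finally, for point 14, a tiny simulation: when the data-collection mechanism is biased, making the sample a thousand times bigger just gives you a more precise estimate of the wrong number:

```python
import numpy as np

rng = np.random.default_rng(6)
for n in (1_000, 1_000_000):
    pop = rng.normal(0, 1, n)                        # true mean is 0
    keep = rng.random(n) < (0.3 + 0.4 * (pop > 0))   # high scorers over-sampled
    sample = pop[keep]
    print(f"n kept = {sample.size:>7,}: sample mean = {sample.mean():+.3f} (truth: 0.000)")
```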

We’re all guilty of at least some of the above some of the time… 

Here are the links to Part 1, Part 2, Part 4 and Part 5 of the series.

I’m not a scholar – just a lunch pail guy – but I hope you’ve found this useful. I’m sometimes honored and flattered to be asked for career advice. The best I can do is quote one of my own mentors, David McCallum: “Be a jack of all trades and a master of at least two.”

Kevin Gray is President of Cannon Gray, a marketing science and analytics company.
