Disrupting B.S.

Phony innovation holds back true innovation by distracting us, wasting our time and squandering budget.  Some claims are also so ludicrous that they can be used as an excuse for resisting change.

As a marketing science and analytics person, I am continuously exposed to sales pitches and chatter about “disruptive innovation” of one kind or another.  Some of these innovations are truly impressive but many, frankly, have little substance.  I’d like to share a few thoughts about how to separate the wheat from the baloney.  I think they will apply generally, not just to my areas of specialization. 

My apologies in advance if I’m a bit hard-hitting in places, but a lot of the claims being made are complete nonsense and threaten the credibility of our profession. These opinions are not mine alone; in preparing this post I consulted other marketing scientists and academics. The usual caveat applies, of course.

Established methods, processes and technologies can be hard to dislodge.  As I think most of us will admit, we humans tend to be creatures of habit and to resist change.  However, many “old things” have stood the test of time and replacing them would involve costs and risks which may be substantial.  This status quo bias of ours should not be arbitrarily condemned as irrational – it is a preference that has evolved over millions of years and we might not be here today without it.

We also need to keep in mind that something positioned as innovative might actually be an existing idea that has been repackaged and recycled. This is quite common. However, some innovations don’t diffuse very far simply because they aren’t very good ideas. Others fail not because they’re bad ideas but because they are difficult for potential users to comprehend or put into practice; they are too far ahead of their time. Still other good ideas fail because they have been poorly marketed.

Even real innovation can have downsides. Automating bad practice, for example, does not make it good practice, and it seems there are now more ways than ever to do things cost-effectively that should not be done at all.

In the world of bogus innovation, straw men dot the landscape. Some who sell “disruptive” methods are really tech salespeople who know little about marketing or marketing research. They sometimes tip their hand by criticizing things marketing research professionals do not actually do. A related tactic is to misrepresent poor practice as traditional MR, or run-of-the-mill MR as best-in-class (which is usually a matter of opinion). We need to look closely at the benchmarks against which the innovation is being compared.

Be especially on the lookout for the word “disruptive.” It’s hackneyed and this alone should raise eyebrows. “We are the only ones who…” should also get your guard up. One might wonder if no one else does it because it doesn’t actually work! Ostensible benefits of a new technology may really be camouflaged claims about its hypothetical potential, not what it has actually been proven to deliver. Potential isn’t reality.

Hence, “validated” is another word to be wary of. It wouldn’t be difficult for an unscrupulous vendor to cherry-pick data after the fact and present them as evidence that their methodology can predict future sales, for instance. The details of how a proprietary methodology was developed are normally under lock and key, for obvious reasons. However, it is certainly ethical for a potential buyer to ask for whatever details the seller customarily releases, and we should scrutinize claims that a methodology has been validated unless the price tag and potential risks of using it are small. Validation often involves causal analysis, which is highly complex – see Causal Analysis: The Next Frontier in Analytics? for a brief summary of this topic.
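To see just how easy after-the-fact “validation” is, here is a minimal simulation sketch in Python. Everything in it is hypothetical and of my own construction, not any vendor’s actual method: fifty candidate “metrics” that are pure noise, one of which will almost always look strongly predictive in the data used to select it and then fall apart on fresh data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_periods, n_candidates = 24, 50

# Hypothetical setup: monthly sales plus 50 candidate "metrics,"
# all pure noise with no real link to sales by construction.
sales = rng.normal(100, 10, size=2 * n_periods)
metrics = rng.normal(size=(n_candidates, 2 * n_periods))

train, test = slice(0, n_periods), slice(n_periods, 2 * n_periods)

# "Validate" by picking the metric most correlated with sales in-sample.
in_sample_r = [np.corrcoef(m[train], sales[train])[0, 1] for m in metrics]
best = int(np.argmax(np.abs(in_sample_r)))

print(f"Best metric, in-sample r:     {in_sample_r[best]: .2f}")  # looks impressive
out_r = np.corrcoef(metrics[best][test], sales[test])[0, 1]
print(f"Same metric, out-of-sample r: {out_r: .2f}")              # near zero
```

The exact figures depend on the seed, but the pattern is reliable: picking a winner after peeking at the outcome manufactures impressive-looking validation statistics out of pure noise.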

It sometimes transpires that the methodology is a house of cards, built on assumptions that have not been truly validated by anyone, including academics. For example, it may help you move KPIs that have no actual relationship with business performance. Some companies pay a great deal of attention to KPIs that have never been empirically validated; while it may seem logical that these KPIs drive business performance, in fact they may not.

Looked at from another angle, an elaborate sales presentation that essentially says “Our proprietary methodology is better than our competitors’” is questionable because the details necessary to make such a claim are seldom made public, as noted. Having spent many years in an R&D role myself, I can confess that much of what marketing scientists “know” about competitors’ products is actually guesswork. Furthermore, reasonable people, statisticians included, can disagree about many things, and it may not be unambiguously clear which method, on balance, is best even when all relevant details are in the public domain.

Some claims flunk Marketing 101.  An ad pre-testing method, for instance, may claim to predict future sales.  In reality, assessing the impact of past ads, independent of other elements of the marketing mix, is very challenging and requires special expertise.  Moreover, ads can have different objectives, long-term image-building versus introducing a new product being two examples.  An ad that performs exceptionally well in a pre-test thus may not be “the man for the job.” 

Some pitches attempt to bewilder us with complexity, perhaps in the hope we won’t look too closely or ask too many questions for fear of embarrassing ourselves. Those adopting this sales strategy tend to lean heavily on jargon and dodge specifics.  Don’t allow yourself to be intimidated – try to pinpoint concretely what this new product, service or process is supposed to be able to do and whether there is genuine evidence it can deliver on these promises.  Stay calm and ask for specifics.  Don’t be afraid to ask “dumb” questions, either, as responses to them can be quite revealing.

Dubious claims sometimes conceal themselves behind academic or scientific authority. To paraphrase one comment I spotted on LinkedIn, this is a good way to weaponize B.S. While endorsements from experts are, on the surface, impressive, don’t be too taken in by paper credentials – ethics are not correlated with mathematical prowess or programming skills. Independent authorities with no commercial stake in the game may have very different opinions regarding the viability of what has been claimed or proposed. Reaching out to them may involve a consulting fee but not always – many academics are extremely generous with their time, especially if they are concerned that the integrity of their discipline is threatened.

Unlike conventional blood tests, biometric tools used in marketing research often suffer from substantial measurement error.  Setting that aside, I occasionally hear it claimed that some of these new methods only require very small sample sizes (e.g., 20).  This is disingenuous.  If health officials needed to estimate the distribution of blood types in the national population they would use a sample larger than 20, and reactions to an ad in a pre-test (for instance) are far more variable.  In addition, in an ad pre-test we usually analyze the results within respondent subgroups, which requires a total sample of considerably more than 20 respondents.  Counter to some claims I’ve heard, increasing the number of measurements on a respondent may (or may not) improve measurement precision for that respondent, but it does not increase the number of respondents in the sample.  Science fiction is not reality, but baloney is baloney…even from a university professor.
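For readers who would like to see this concretely, here is a minimal simulation sketch in Python. The variance figures are illustrative numbers I have chosen for the example, not taken from any real biometric study: each respondent has a true reaction that varies from person to person, and each measurement adds noise on top of it.

```python
import numpy as np

rng = np.random.default_rng(0)

def se_of_mean(n_respondents, n_reps, sims=2000,
               between_sd=10.0, within_sd=5.0):
    """Simulated standard error of the sample mean when true reactions
    vary across respondents (between_sd) and each measurement adds
    noise (within_sd). Illustrative numbers only."""
    true = rng.normal(50, between_sd, size=(sims, n_respondents))
    noise = rng.normal(0, within_sd, size=(sims, n_respondents, n_reps))
    # Average the repeated measurements within each respondent first.
    respondent_means = true + noise.mean(axis=2)
    return respondent_means.mean(axis=1).std()

print(se_of_mean(20, 1))    # ~2.5
print(se_of_mean(20, 100))  # ~2.2 -- a hundred reps per person barely help
print(se_of_mean(400, 1))   # ~0.6 -- more respondents are what help
```

Averaging a hundred measurements per respondent barely moves the standard error, because the person-to-person variation is still being estimated from only 20 people; going from 20 to 400 respondents is what actually shrinks it.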

Popular business media are an excellent source of nonsense. “Companies that do XYZ are more profitable than companies that don’t do XYZ” is not evidence that XYZ works. It is merely a sentence written in English. A few obvious questions come to mind. How is more profitable defined? How did the two groups of companies differ before XYZ was adopted? What about performance over time? Average profitability for companies doing XYZ might actually have decreased since they adopted it! On occasion, I’ve asked for econometric evidence and received blank stares instead…
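Here is a minimal sketch, with made-up data, of why that sentence proves nothing. A hypothetical confounder (company size) drives both the adoption of XYZ and profitability, so adopters look more profitable even though XYZ does nothing by construction.

```python
import numpy as np

rng = np.random.default_rng(7)
n = 2000

# Hypothetical confounder: bigger companies are both more profitable
# and more likely to have adopted XYZ. XYZ itself does nothing here.
size = rng.normal(0, 1, n)                   # standardized company size
does_xyz = (size + rng.normal(0, 1, n)) > 0  # adoption driven by size
profit = 5 + 2 * size + rng.normal(0, 1, n)  # profit driven by size only

gap = profit[does_xyz].mean() - profit[~does_xyz].mean()
print(f"Profit gap, XYZ vs non-XYZ: {gap:.2f}")  # clearly positive
# ...even though XYZ has zero causal effect on profit by construction.
```

The naive comparison of group means shows a healthy profit gap, generated entirely by the confounder. This is the sort of thing econometric evidence is supposed to rule out.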

Some claims are self-repudiating almost to the point of farce, for instance eloquently-written articles asserting that humans cannot express themselves well verbally. In these pieces, we may be told in-depth interviews don’t really work but, inexplicably, text-mining Twitter with the authors’ software does. It seems we’ve been deceiving ourselves all these years… Some pitches for biometrics and implicit methods make similar sorts of contentions, neglecting that the development of these methods may have required exhaustive interviews with test subjects.

That humans are not perfectly rational is not news, nor was it when Sigmund Freud was a lad. Nonetheless, every so often we are informed that new research has discovered that humans do not always shop very scientifically and that conventional marketing is all wrong. I suspect, however, that tail fins were not installed on automobiles in the 1950s for purposes of aerodynamics. When was the last time you saw an ad or package that merely listed functional benefits and price? Blackboard economics should not be confused with marketing as it has actually been conducted over the years. This is not to say that behavioral economics is baloney, only that some of what has been written about it is. The importance of emotion in marketing is old news, as are the struggles most of us have had with math.

Clever use of visuals is another way to pull the wool over our eyes.  More and more I find my attention grabbed by captivating graphics on social media and websites…captivating until I take a closer look and realize the sexy graph really doesn’t tell me anything very insightful or useful.

Data imputation is something else to watch out for. Some companies now claim to have extensive, detailed information on each and every one of us that can be used for targeting purposes. While I’m concerned that a lot of personal information is being collected and assembled into mammoth databases without our informed consent, much of this “information” turns out not to be real information – it’s imputed data and may be very inaccurate.

Moreover, while Natural Language Processing and Computational Linguistics have made great strides, there is still much work to be done in the area of text analytics.  Professor Bing Liu reminds us in his book Sentiment Analysis that the silver bullet remains elusive.  Let’s also not forget that we humans frequently miscommunicate among ourselves, sometimes with tragic consequences.  In the past few years targeting has gotten a lot more precise, to be sure, but I’m not certain the NSA could deliver on some of the promises I hear. 
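To illustrate one reason the silver bullet remains elusive, here is a deliberately naive lexicon-based sentiment scorer, a toy of my own for illustration and not Professor Liu’s or anyone else’s production method. Negation and sarcasm, which humans handle effortlessly, defeat it immediately.

```python
# A tiny sentiment lexicon; real lexicons are far larger but face the
# same structural problems illustrated below.
LEXICON = {"great": 1, "love": 1, "bad": -1, "terrible": -1, "waste": -1}

def naive_sentiment(text: str) -> int:
    """Sum word-level scores, ignoring all context."""
    return sum(LEXICON.get(w.strip(".,!").lower(), 0) for w in text.split())

for s in ["This product is great!",          # +1, correct
          "Not bad at all.",                 # -1, but actually positive
          "Oh great, it broke on day one.",  # +1, but actually negative
          ]:
    print(f"{naive_sentiment(s):+d}  {s}")
```

Modern methods do far better than this toy, of course, but the underlying difficulty, that meaning depends on context the scorer cannot see, is exactly why text analytics still requires care.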

Automated data mining and modeling is risky beyond a very basic level.  In his book Statistical Rethinking, Richard McElreath of the Max Planck Institute makes a very important observation: “…statisticians do not in general exactly agree on how to analyze anything but the simplest of problems.  The fact that statistical inference uses mathematics does not imply that there is only one reasonable or useful way to conduct an analysis.  Engineering uses math as well, but there are many ways to build a bridge.”  So who programs these tools?  How do they decide which procedures or options are best? A further complication is that new statistical and machine learning techniques are being developed at an ever-increasing rate, which makes it ever more difficult for automated software to keep pace.  
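A small illustration of McElreath’s point, with made-up data: two perfectly defensible analyses of the same dataset can give effects with opposite signs, and an automated tool has to make that judgment call silently. The scenario here is hypothetical, with heavier ad spend deliberately directed at weaker regions.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 1000

# Made-up data: heavier ad spend goes to weaker regions, which also
# have lower sales; the true effect of ad spend is +0.5.
region_strength = rng.normal(0, 1, n)
ad_spend = -1.5 * region_strength + rng.normal(0, 1, n)
sales = 2.0 * region_strength + 0.5 * ad_spend + rng.normal(0, 1, n)

# Analysis 1: simple regression of sales on ad spend.
b_simple = np.polyfit(ad_spend, sales, 1)[0]

# Analysis 2: multiple regression adjusting for region strength.
X = np.column_stack([np.ones(n), ad_spend, region_strength])
b_adjusted = np.linalg.lstsq(X, sales, rcond=None)[0][1]

print(f"Unadjusted ad-spend effect: {b_simple:+.2f}")    # negative!
print(f"Adjusted ad-spend effect:   {b_adjusted:+.2f}")  # close to +0.5
```

Whether to adjust, and for what, is precisely the kind of decision on which competent statisticians can disagree, and which automated tools must resolve on our behalf, often invisibly.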

Silly or misleading claims and outright falsehoods can discourage adoption of useful new tools and lead to backlash.  Big Data, the Internet of Things and Artificial Intelligence have been hyped to the point where many genuine experts in these areas have become worried.  “More hype about hype” is not an impressive response to these legitimate fears and is insulting to boot.  And let’s not forget that incremental innovations also can have a big impact – something does not need to be “disruptive” to be extremely useful.

All this said, something slickly marketed or hyped to a nauseating degree may, in fact, work as advertised and be worth its price tag. Nothing in this post should be taken to suggest that innovation and disruption are all B.S. and that it’s OK to stick with the status quo. That would be a very risky position, in my view, and unreasonable. I’m not opposed to change; I’m opposed to B.S., and any innovation that helps me do my job more effectively and more efficiently is most welcome!

Being open-minded means being open-minded, and our conclusions about a new method, process or technology should ultimately boil down to “Will I get what I’m expecting, and will our investment, including my time and my staff’s time, pay off under the constraints we work within?” Also, never forget to ask yourself the most important question of all: “So what? How will this affect my decisions?”

Use common sense, look closely and ask hard questions.

______________________________________________________________________

Parts of this article have been adapted from Innovation Or Sales Pitch? published in GreenBook on April 14, 2014.

Kevin Gray is President of Cannon Gray, a marketing science and analytics consultancy.
