Practical Questionnaire Tips

Myths and misperceptions

“Survey research is easy. Anybody can do it.” This is a misperception that seems quite widespread, to the point where it’s almost become a marketing research urban legend. As a marketing science person, I am a heavy user of survey research data and am concerned that fundamental survey research skills are actually eroding.

Survey research is not easy.  Ask the pollsters!  In the interests of disclosure, I should make it clear that none of my company’s revenue is derived from data collection, though I do frequently provide input into sample and questionnaire design.  The ax I will grind is that a considerable amount of time and budget is wasted because of poor questionnaire design.  We often spend more time and more money than we have to in order to collect less valuable data.  That’s a lose-lose-lose proposition.

Common flaws

But first things first – the objectives. Far and away, the biggest influence on survey quality is the quality of the research objectives. Unfortunately, objectives can be blurry and, like many questionnaires, essentially the product of an ad hoc committee. (I believe it was economist George Stigler who once remarked that Lindbergh’s flight across the Atlantic would have been even more remarkable had he been accompanied by a committee…) One result is that nice-to-know questions may outnumber need-to-know questions.

Excessive questionnaire length has long been an issue in marketing research and, with mobile surveys on the rise, will become even more so. I do not wish to launch a global campaign against questionnaire obesity, but in marketing research it’s a serious problem. Inspired by the Body Mass Index (BMI), I would like to propose a Questionnaire Mass Index (QMI):

QMI = (Time Wasted on Nice-to-Know Questions)² / (Time Required for Need-to-Know Questions) × 100

So, if your average interview length is 20 minutes and respondents, on average, spend 4 minutes answering questions that actually have little business meaning, then the time required for need-to-know questions is 16 minutes and your QMI score would be 4² / 16 × 100 = 100. The lower the score, the better. Though I am being tongue-in-cheek here, a simple guideline such as this can help us discipline ourselves and improve the health of our questionnaires.
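Since the QMI is just arithmetic, a minimal Python sketch using the figures from the example above makes the calculation explicit:

```python
def qmi(wasted_minutes: float, total_minutes: float) -> float:
    """Questionnaire Mass Index: wasted^2 / needed * 100,
    where needed = total interview length minus wasted time."""
    needed_minutes = total_minutes - wasted_minutes
    return wasted_minutes ** 2 / needed_minutes * 100

# The example above: a 20-minute interview with 4 wasted minutes,
# so 4^2 / 16 * 100 = 100.
print(qmi(wasted_minutes=4, total_minutes=20))  # 100.0
```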

There are countless ways to reduce the flab in our surveys. Even on a mobile device, we still have much more leeway than in the “old days” when surveys were often conducted by telephone and administered by a human interviewer.  A 10-15 minute mobile survey will generally be able to cover more ground than a 10-15 minute telephone survey. 

Moreover, members of online panels have been profiled to some degree and key demographics and some psychographics have already been collected.  There may be no need to ask these questions again.  Another way to save time is to split questionnaires into chunks so that only the most critical questions are asked of all respondents.  Clever questionnaire design can reduce questionnaire length, lower costs and improve response quality.
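To make the chunking idea concrete, here is a minimal sketch of split-questionnaire assignment; the module names and the rule that every respondent sees a core block are illustrative assumptions, not a standard recipe:

```python
import random

# Hypothetical question modules: every respondent gets the core
# (need-to-know) block plus a random subset of the optional blocks.
CORE = ["awareness", "usage", "overall_liking"]
OPTIONAL = ["media_habits", "attitudes_a", "attitudes_b", "extra_demographics"]

def assign_modules(respondent_id: int, n_optional: int = 2) -> list[str]:
    """Return the question blocks this respondent will see."""
    rng = random.Random(respondent_id)  # seeded per respondent
    return CORE + rng.sample(OPTIONAL, n_optional)

print(assign_modules(42))
```

Seeding the generator with the respondent ID keeps each person's assignment stable if they pause and resume the survey.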

I’ve noticed that very similar questions are sometimes asked in 2-3 places in the same questionnaire.  While occasionally this is deliberate – for instance, when very important questions are asked in slightly different ways – more often than not it’s an oversight.  This wastes time and can confuse or irritate respondents.

Ask yourself whether ordinary consumers will interpret the questions in your survey the way you do.  They are not brand managers or marketing researchers.  Also ask yourself if you would be able to answer your own questions accurately!  Highly detailed recall questions have always been discouraged by survey research professionals, and the folks who established consumer diary panels decades ago were well aware that even diary data are not 100% accurate.  Answers to questions about purchase, for example, should be interpreted directionally and should not be used as substitutes for actual sales figures when the latter are available.

Surveys are particularly useful for uncovering attitudes and opinions, which leave no trail at the cash register. Knowing what consumers buy is important, but knowing why they buy it is also important. Deriving the why from the what is much harder than is sometimes assumed, and this is where survey research often fails badly, usually because of poor questionnaire design. Merely copying and pasting attitudinal statements from old questionnaires or from a lengthy, brainstormed list of statements is asking for trouble.

When developing your own scales, think first of the factors – the underlying dimensions – then the items that you will use to represent these factors. For an in-depth look at how to measure attitudes and opinions, see Psychometrics: An Introduction (Furr and Bacharach), an up-to-date and very readable introduction to the field. Another good resource is Marketing Scales, an online repository of more than 3,500 attitude and opinion scales.

You don’t need to wed yourself to 5- or 7-point agree-disagree scales, which are prone to straightlining. MaxDiff, simple sorting tasks and various other alternatives often work better. However, if the statements themselves do not make sense to respondents, or mean different things to them than they do to you, you’ll have a problem regardless of the type of question you’ve settled on!
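One way to monitor the straightlining risk is a simple flag for respondents who give the identical rating to every statement in a grid. A minimal sketch, assuming the responses sit in a pandas DataFrame with one (hypothetical) column per statement:

```python
import pandas as pd

# Toy grid of 1-5 agree-disagree ratings; each row is a respondent.
grid = pd.DataFrame({
    "q1": [3, 5, 4, 2],
    "q2": [3, 1, 4, 5],
    "q3": [3, 2, 4, 1],
})

# A respondent "straightlines" when every rating in the grid is identical.
straightliners = grid.nunique(axis=1) == 1
print(grid[straightliners])  # flags respondents 0 and 2 in this toy data
```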

If you conduct an overseas project, local culture should be first and foremost on your mind.  What seems straightforward to you may be unfathomable or even offensive to those from other cultures, even when they are quite fluent in your native tongue. Don’t assume that all statements or items can be translated directly into other languages, either.  Sometimes only rough translations are possible because the corresponding vocabulary does not exist in the local language.  What may seem like a mundane concept to you may not survive the voyage to another society.

Certain types of questions are asked again and again – awareness and usage questions, for example – and there is no need to keep reinventing the wheel. In fact, doing so is bad practice that can run up costs and lower data quality. Consider building banks of standard questions and questionnaire templates for different kinds of surveys. QUAID (Question Understanding Aid) is an artificial intelligence tool that can help you improve the wording of your questions.
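As a sketch of what such a question bank might look like, here is one possible structure; the keys and wording are entirely hypothetical, and the point is reusing vetted wording rather than this particular implementation:

```python
# A hypothetical bank of standard questions, parameterized by category,
# so awareness and usage wording is written once and reused.
QUESTION_BANK = {
    "awareness_unaided": "Which brands of {category} can you think of?",
    "awareness_aided": "Which of these brands of {category} have you heard of?",
    "usage_p3m": "Which brands of {category} have you used in the past 3 months?",
}

def build_question(key: str, category: str) -> str:
    """Fill a standard question template with the study's category."""
    return QUESTION_BANK[key].format(category=category)

print(build_question("awareness_aided", "instant coffee"))
```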

Sample design is another place where survey research can go awry.  In my opinion – and I suspect Byron Sharp and his colleagues at the Ehrenberg-Bass Institute would agree – we often survey a slice of consumers that is far too narrow.  Not infrequently, a client wishes to interview women aged 18-24, for instance, when the potential consumer base for their product is vastly more diverse.

Often, these sorts of screening criteria are driven by gut feel or emerge from a few focus groups and have no true empirical foundation. Casting too narrow a net runs up research costs, increases field time and can give us a very distorted picture of reality. This is another lose-lose-lose proposition.

Though advanced analytics can be conducted after the fact, they usually work best when designed into the research. “Begin at the end and work backwards” is sound advice and especially pertinent when the data will be analyzed beyond the cross-tab level. For example, if you intend to run key driver analysis – the simplest example of which would be correlating product ratings with overall liking – make sure to ask all respondents the questions that will be used in the analysis.
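For the simplest case mentioned above, correlating product ratings with overall liking, the analysis might look like the following; the data and column names are toy examples, not a prescription:

```python
import pandas as pd

# Toy survey data: attribute ratings plus an overall-liking rating.
df = pd.DataFrame({
    "taste":   [4, 5, 2, 3, 5, 1],
    "price":   [3, 4, 2, 2, 5, 2],
    "package": [5, 2, 3, 4, 1, 2],
    "overall": [4, 5, 2, 3, 5, 1],
})

# Pearson correlation of each attribute with overall liking;
# larger values suggest stronger drivers, at least directionally.
drivers = df.drop(columns="overall").corrwith(df["overall"])
print(drivers.sort_values(ascending=False))
```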

Data imputation of many kinds is now practical but it is still preferable to have all respondents answer the most important questions.  Involving a marketing scientist in questionnaire design for projects requiring advanced analytics is highly recommended.  Ideally, this will be the person who will conduct the modeling when the data arrive.
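As one small illustration of imputation, here is mean imputation with scikit-learn's SimpleImputer; this is only the crudest of the many kinds now practical, and a marketing scientist would usually choose something more sophisticated:

```python
import numpy as np
from sklearn.impute import SimpleImputer

# Ratings with gaps (np.nan) left by skipped or unasked questions.
X = np.array([[4.0, np.nan, 5.0],
              [3.0, 2.0, np.nan],
              [5.0, 4.0, 4.0]])

# Mean imputation: fill each column's gaps with that column's mean.
X_filled = SimpleImputer(strategy="mean").fit_transform(X)
print(X_filled)
```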

There are also formatting and layout issues, as well as optimization for device type (e.g., PC, mobile), that I haven’t gotten into here; they are covered in some of the books cited below.

To sum up

Consumer surveys are not easy but, now that we have many other data sources (e.g., transaction records, social media), they can actually add more value than ever through synergy with these other data. Access to a variety of data can help you both design your survey and interpret its results.

Further reading

This has been a very small article about a very big topic.  For those who wish to learn more – and there is so much to learn – there are online sources, seminars and university courses.

There are many books as well.  Sharon Lohr has written an excellent and very popular textbook on sampling entitled Sampling: Design and Analysis.  Many excellent books have also been published on survey research and questionnaire design, and three I’ve found particularly helpful are Internet, Phone, Mail, and Mixed-Mode Surveys (Dillman et al.), The Science of Web Surveys (Tourangeau et al.) and Web Survey Methodology (Callegaro et al.). 

The Psychology of Survey Response (Tourangeau et al.) and Asking Questions (Bradburn et al.) have stood the test of time and I highly recommend them. The AAPOR journal Public Opinion Quarterly is an excellent source for the latest research on survey methodology.

A version of this article appeared in Quirk’s on August 22, 2016.

Kevin Gray is president of Cannon Gray, a marketing science and analytics consultancy.
