My opinions. Nothing more. In statistical modeling, it’s not only what you do but when you do it that’s important. The sequence can be critical. Some machine learners can give us an idea of how “important” a variable is, for example based on the Gini impurity index.
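As a minimal sketch of the kind of importance score meant here, a random forest in scikit-learn exposes Gini-based importances via `feature_importances_`. The data below is synthetic, purely for illustration: `x1` drives the outcome and `x2` is noise.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 500
x1 = rng.normal(size=n)                              # related to the outcome
x2 = rng.normal(size=n)                              # pure noise
y = (x1 + 0.1 * rng.normal(size=n) > 0).astype(int)  # binary outcome driven by x1

X = np.column_stack([x1, x2])
forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# feature_importances_ is the mean decrease in Gini impurity per variable;
# the scores sum to 1, and x1 should dominate in this setup
importances = forest.feature_importances_
print(dict(zip(["x1", "x2"], importances.round(3))))
```

Note that such scores rank variables but, unlike regression coefficients, say nothing about the direction of an effect.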
Statistical methods such as regression take this further by showing us the sign of a coefficient.
When building statistical models, we can also quickly see the effect of adding, deleting or transforming a variable on the model and on the sizes and signs of the coefficients of other variables in the model.
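To illustrate that last point with synthetic data (an illustrative sketch, not any particular study): below, `x2` is strongly correlated with `x1`, and omitting it flips the apparent sign of `x1`’s coefficient.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n = 1000
x1 = rng.normal(size=n)
x2 = x1 + 0.3 * rng.normal(size=n)      # highly correlated with x1
y = -x1 + 2 * x2 + rng.normal(size=n)   # true effects: x1 negative, x2 positive

# Model 1: x1 alone -- its coefficient absorbs x2's effect and comes out positive
m1 = LinearRegression().fit(x1.reshape(-1, 1), y)

# Model 2: x1 and x2 together -- x1's coefficient recovers its true negative sign
m2 = LinearRegression().fit(np.column_stack([x1, x2]), y)

print("x1 alone:", round(m1.coef_[0], 2))    # roughly +1
print("x1 with x2:", round(m2.coef_[0], 2))  # roughly -1
```

Watching coefficients shift like this as variables enter and leave the model is exactly the kind of diagnostic insight the automated, default-settings workflow skips.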
Statistical modeling can teach us a lot that comes in very handy later on, even in informal discussions with decision makers about future projects. 🙂
There are usually many ways to do something, some better or worse in a given situation. What is best may be a matter of judgement, and this is frequently true even in a mathematical field such as statistics.
What is felt to be best under the circumstances may be impractical due to cost, time constraints, lack of expertise or other reasons, forcing us to choose among less ideal options.
One of these alternatives may be “good enough” and the obvious choice, but life isn’t always that cooperative. We may need to think very hard about what to do.
One thing we should NOT do is to conclude that, since the ideal “textbook” way of doing something is infeasible, anything goes. Though normally phrased more delicately, I often encounter this false dichotomy in marketing research and data science.
Sometimes, unfortunately, it is merely a justification for sloppiness or even incompetence. Just because scientists working in STEM disciplines occasionally screw up does not mean standards do not apply to other fields.
Some (many?) marketing researchers appear to be under the impression that all they need is a handful of statistical procedures, and that going with the statistical software’s default settings will be OK 90% of the time.
This is badly mistaken and reflects thinking that is way out of date.
When I was beginning my career in the ’80s, the statistical landscape was less densely populated than today.
Canonical correlation analysis, repeated-measures MANOVA, state-space models, SEM and a few other of the more sophisticated techniques were in use, as were the core sampling procedures, experimental designs and inferential methods.
It was a small city compared to today’s megalopolis. Quite a lot of what I now need to earn my living did not exist or was impractical on the computers of the day. I’ve had to spend a lot of time keeping my skills up to date.
One good thing about the old days was that what you learned, you learned thoroughly, including basic research principles. With so much time spent learning new techniques and programming languages today, valuable knowledge and skills are not being transmitted.
A few steps forward, a few steps backward.
There are simple descriptive findings and data visualizations that require little skill to produce and increasingly are automated. Sometimes this is all decision makers need, and speed may be of the essence.
However, marketing researchers – both quantitative and qualitative – who are able to go beyond this basic level of analysis add value that standardized procedures and machines cannot. Human researchers who are untrained and inexperienced cannot add it, either.
In-depth analysis can seldom be done quickly, but many tactical decisions and most strategic decisions do not need to be made quickly, and often shouldn’t be. Until artificial general intelligence becomes a reality, speed will almost inevitably equate with shallowness.