Keith McNulty
Leader in People Analytics and People Measurement, Expert Psychometrician and Advanced Analytics Practitioner in HR.
At McKinsey, we recently renamed our People Analytics group and added two critical words. We are now People Analytics and Measurement. I believe this addition is a critical acknowledgement that we are determined to tackle some of the hardest problems in this field: measurement problems.
To me at least, the word ‘Analytics’ implies that there is meaningful and worthwhile data to analyze. But how many organizations out there can claim that they have meaningful data on everything they want to better understand? None, I would surmise.
My belief is that people or workforce analytics groups should view this as an opportunity. They should take up the challenge of creating better, more valid, more reliable, more discriminating measures with which to analyze their most important people problems. It’s not an easy challenge, but making progress on it will surely usher in a new era of precision and proof in the HR space.
Wait, I hear you say! Isn’t People Measurement the same as Psychometrics? Obviously Psychometrics is a very important component of People Measurement. So much so that I believe all people analytics groups should have strong psychometric expertise. It’s necessary, but not sufficient. Over time, as people analytics embraces the wide variety of problems and questions that come its way, measurement problems will extend beyond direct psychological constructs and into areas like economics, health, society and the environment, all with their own unique measurement challenges.
My belief that measurement problems lie at the heart of people analytics has grown out of my attempts to translate people related questions into mathematical structures. In particular, certain themes arise repeatedly in my work, and I regard them as critical measurement problems to tackle for the current generation of people analytics academics and practitioners. Here they are…
The first is discrimination, and I mean this term in the mathematical sense rather than the legal sense. So many organizations I have interacted with or worked with suffer from poor discrimination in their people measures. The two most common places I see this are performance measures and attitudinal measures.
Performance measures often lack depth, definition or separability. The most common model is a single numerical scale of performance, with a relatively small range, and where central tendency bias tends to group large proportions around the middle. Could the 95% of people who are performing to expectations please stand up?
Attitudinal measures, particularly surveys that use a satisfaction scale or similar, often suffer from ‘clumping’ within a particular locality on the scale. Interestingly, the obvious countermeasure of widening the scale does not seem to impact this issue the way one would hope.
Poorly discriminating measures cause a host of statistical and interpretation challenges. Range restriction, which arises when one measure is related to a severely restricted subset of another, can lead to erroneous conclusions. For example, when trying to understand whether interview scores correlate with on-the-job performance, and 90% of your performance ratings sit on the middle rating, you will in all likelihood find an extremely low correlation. Does this mean interviews do not predict job performance? No. It means we need better measures of job performance.
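The effect is easy to demonstrate in a minimal simulation sketch (all numbers here are invented for illustration): interview scores correlate 0.5 with latent performance, but once performance is collapsed onto a 1–5 rating with roughly 90% of people on the middle point, the observed correlation drops sharply.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000

# Latent performance correlates 0.5 with interview score by construction
interview = rng.normal(size=n)
performance = 0.5 * interview + np.sqrt(1 - 0.5**2) * rng.normal(size=n)

# A poorly discriminating rating: ~90% of people land on the middle
# of a 1-5 scale, with only the extreme tails rated up or down
cuts = np.quantile(performance, [0.03, 0.05, 0.95, 0.97])
rating = np.digitize(performance, cuts) + 1  # values 1..5, heavily centralized

r_latent = np.corrcoef(interview, performance)[0, 1]
r_rating = np.corrcoef(interview, rating)[0, 1]
print(f"correlation with latent performance:  {r_latent:.2f}")
print(f"correlation with centralized rating:  {r_rating:.2f}")
```

The interviews have not become less predictive; the coarse, centralized rating has simply thrown away most of the signal.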
Poor discrimination can also psychologically devalue the measure among those who work with it. For example, performance ratings can be dismissed as meaningless. Or in attitudinal surveys, we end up interested only in people who ‘totally agreed’, because it’s the only differentiation we can get.
Levers and techniques exist to experiment more with discrimination in people measures, and I would love to see more experimentation, particularly in practical settings. Example levers (to name but a few) are:
- Increasing the complexity of the measurement system, with layers of measures that potentially aggregate to a single measure (for example, ratings and sub-ratings).
- Exploring forced ranking in certain situations.
- Experimenting with respondents and respondent types – for example, who assigns performance ratings and how are they assigned.
- Influencing rater behavior by tracking their rating statistics and feeding back their rating tendencies to them.
- Better definition behind rating scales (in the spirit of structured assessment frameworks).
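As an illustration of the rater-tracking lever, here is a minimal sketch (the raters and scores are invented) of the kind of tendency statistics one might compute and feed back to raters:

```python
from statistics import mean, pstdev

# Hypothetical ratings data: each rater and the 1-5 scores they assigned
ratings = {
    "rater_a": [3, 3, 3, 3, 4],   # strong central tendency
    "rater_b": [1, 2, 3, 4, 5],   # uses the full scale
}

# Per-rater tendency statistics suitable for feedback
feedback = {
    rater: {
        "mean": mean(scores),
        "spread": pstdev(scores),
        "pct_middle": sum(s == 3 for s in scores) / len(scores),
    }
    for rater, scores in ratings.items()
}

for rater, stats in feedback.items():
    print(rater, stats)
```

Simply showing rater_a that 80% of their ratings sit on the midpoint, while a peer uses the whole scale, is often enough to start shifting behavior.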
Understanding connections between people is becoming increasingly important to our understanding of the organizational context of an individual. Big questions depend on understanding connection: team effectiveness, satisfaction and retention, for example. Yet not enough attention has been paid to how to measure ‘connection’ effectively.
Social media data seems to be the most popular approach to understanding the connections possessed by a person, but it is totally unclear how that fits into the context of a specific organization. For example, what relevance do extra-organizational Facebook connections have? How meaningful are intra-organizational connections – how often have connections even met each other in person?
In particular, how do we come to understand more meaningful connections inside an organization? With the wealth of digital information starting to flood our way, surely it’s time for more research and progress on this, particularly from practitioners. Which data helps illustrate connection and collaboration: timesheets, accounting, email and diary metadata, security data? This question overlaps substantially with questions of ethics and privacy, which also deserve further discussion.
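As a toy illustration of the email-metadata idea (names and counts are invented, and the ethics and privacy questions just raised are set aside), one might derive pairwise tie strengths and a crude per-person connectedness score like this:

```python
from collections import Counter, defaultdict

# Hypothetical email metadata: (sender, recipient) pairs inside one organization
emails = [
    ("ana", "bo"), ("ana", "bo"), ("bo", "ana"),
    ("ana", "cy"), ("cy", "dee"),
]

# Undirected, weighted tie strength: how often each pair emails each other
strength = Counter(frozenset(pair) for pair in emails)

# A crude connectedness score per person: sum of their tie strengths
connectedness = defaultdict(int)
for pair, weight in strength.items():
    for person in pair:
        connectedness[person] += weight

print(dict(connectedness))
```

Real measurement work would go well beyond counts, of course: distinguishing broadcast mail from genuine exchange, and validating that email volume actually reflects meaningful connection.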
This last one, multidimensional ability measurement, is more for the I-O geeks, but it is nevertheless extremely important. As technology now affords us the ability to set task simulations to assess people’s abilities, I strongly believe we need more accurate multidimensional measurement models to underpin them. In some ways the cart is currently before the horse.
Current approaches to ability measurement depend almost entirely on unidimensional mathematics. Item Response Theory is well developed for the scenario where each item or task associates with a single construct to be measured. But what happens when a task requires the use of multiple abilities at the same time?
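To make the contrast concrete, here is a minimal sketch (illustrative, with invented parameter values) of the unidimensional two-parameter logistic (2PL) item response function alongside a compensatory multidimensional extension, in which several abilities contribute jointly to success on a single task:

```python
import numpy as np

def irf_2pl(theta, a, b):
    """Unidimensional 2PL: probability of success given a single
    ability theta, item discrimination a and item difficulty b."""
    return 1.0 / (1.0 + np.exp(-a * (theta - b)))

def irf_m2pl(theta_vec, a_vec, d):
    """Compensatory multidimensional 2PL: a vector of abilities,
    weighted by per-dimension discriminations, plus an intercept."""
    theta_vec, a_vec = np.asarray(theta_vec), np.asarray(a_vec)
    return 1.0 / (1.0 + np.exp(-(a_vec @ theta_vec + d)))

print(irf_2pl(0.0, a=1.0, b=0.0))  # 0.5: average ability at item difficulty
print(irf_m2pl([1.0, -1.0], [1.2, 0.8], d=0.0))
```

In the multidimensional case, strength on one ability can compensate for weakness on another; estimating those ability vectors and loadings from rich task data is exactly where the hard open problems sit.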
Recent exciting developments in game-based assessment hold promise from a technological perspective, and reassure us that we can start to move away from stale, dry multiple-choice batteries and develop more engaging, task-based assessments. But without rapid further development in multidimensional item response theory (MIRT), I fear such assessments will either regress into ‘jazzed up’ multiple-choice batteries, or else struggle to demonstrate genuine validity and reliability and ultimately fail to prove their utility.
Multidimensional item response theory (MIRT) is complex mathematics, and its development sits firmly in the academic domain. Progress has been made in the underlying theory, but its use in the field has been extremely limited. Practitioners, however, hold the key to experimentation: they can help academics test and learn which models do and don’t work by providing access to test populations. And let’s face it, who wouldn’t want to take an hour off to play a game?
People Analytics and People Measurement are symbiotic. You can’t analyze if you don’t have measures, while at the same time, analytics can help construct and refine the measures you use. People Measurement problems are the hardest to solve, but progress on solutions will almost certainly have a greater impact on this field than any analytic conclusion ever could. I’m looking forward to some breakthroughs on these and other measurement problems over the next few years as more attention and resource are dedicated to this field.
I lead McKinsey’s internal People Analytics and Measurement function. Originally I was a Pure Mathematician, then I became a Psychometrician. I am passionate about applying the rigor of both those disciplines to complex people questions. I’m also a coding geek and a massive fan of Japanese RPGs. You can message me on LinkedIn or engage with our People Analytics group within McKinsey’s Organization Practice. All opinions expressed are my own and not to be associated with my employer or any other organization I am associated with.