How can People Analytics inform accurate Performance Measurement?

There’s a lot of conversation right now surrounding ratings, ongoing conversations, peer feedback, etc. However, I’m curious how people analytics provides insight into the leading indicators of objective performance (customer satisfaction scores, sales growth, marketing ROI, profitability, etc.).


Hi, Kev. Thanks for your question. I’m going to copy and paste a reply from a previous post that relates to your question about “objective performance.” Before I do, I’m going to pick up on your notion of “leading indicators” of downstream outcomes like sales growth, marketing ROI, profitability, etc.

First, as you’re likely alluding to, “performance” is often regarded as a linchpin metric that “SHOULD” associate/correlate with these downstream outcomes. Trouble is, it often doesn’t. What’s worse is I’ve seen many a program use analytics to try and fit the analysis to justify the appropriateness of the measure. This is bass ackwards. What’s better is understanding the prospective “drivers” (I’m not going to talk about language and causation here, yet just be mindful of ’em) of downstream outcomes, then asking yourself whether the appropriate measures exist. This has to be balanced with what measures (data) are available. Performance data is often available, so, as the logic goes, let’s tell a story around it. The trouble is, the analyst, people analytics professional, HR leader, or other, has to have the knowledge and fortitude to recognize what’s just poor data and poor analysis. Given my limited view of the world, I can count on one hand the use cases where performance data links, in a highly credible, confidence-inspiring way, to downstream outcomes. Why? Because the data has not been designed for that purpose.

For example, if we take Kaplan & Norton’s strategy mapping approach from years ago, we’d ask ourselves: what drives sales growth? We’d then ask, what drives that driver? We’d then ask, what drives the driver of the driver? And so on. In the end we’d have a set of hypotheses that, should the data be available (or gettable), we can test analytically. This is basic research design and, in today’s world, it can be automated so data and insights are “on-demand.” Trouble is, most organizations haven’t applied creativity to create processes, measures, and experiences unique to them. It’s really, after 40+ years or so, pretty much the same ranking system in a different wrapper. We must do better. How?
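To make the driver-mapping idea a bit more concrete, here’s a minimal sketch of how each hypothesized link in such a chain could be encoded and tested. The column names (`sales_growth`, `customer_sat`, `engagement`, `manager_quality`) are purely hypothetical placeholders, not measures recommended above; the point is that every assumed link becomes an explicit, falsifiable model.

```python
# A minimal sketch of turning a "driver map" into testable hypotheses.
# Column names are hypothetical placeholders, not prescribed measures.
import pandas as pd
import statsmodels.formula.api as smf

# Each tuple is one hypothesized link, read as "driver -> outcome".
driver_map = [
    ("sales_growth", "customer_sat"),    # What drives sales growth?
    ("customer_sat", "engagement"),      # What drives that driver?
    ("engagement", "manager_quality"),   # What drives the driver of the driver?
]

def test_driver_map(df: pd.DataFrame, links) -> pd.DataFrame:
    """Fit a simple OLS model for each hypothesized link and
    return effect sizes and p-values for review."""
    rows = []
    for outcome, driver in links:
        model = smf.ols(f"{outcome} ~ {driver}", data=df).fit()
        rows.append({
            "outcome": outcome,
            "driver": driver,
            "coef": model.params[driver],
            "p_value": model.pvalues[driver],
            "r_squared": model.rsquared,
        })
    return pd.DataFrame(rows)

# Usage, assuming a unit-level dataset with these hypothetical columns:
# results = test_driver_map(unit_level_df, driver_map)
# print(results.sort_values("r_squared", ascending=False))
```

A single regression per link obviously won’t establish causation; it simply makes each assumed link explicit and testable, which is the spirit of the strategy-map exercise.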

This is where the previous post comes in. It’s below, and it’s by no means an end to the conversation. Quite the opposite: it’s an invitation, maybe even a challenge, to innovate. Hope you find this perspective and the ideas that follow valuable. Thanks again for your question, and wishin’ you the very best!


To your question: I sense you may have consciously or subconsciously lit the fire that’s been burning inside of me for the past 25+ years. This fire relates to what is known as “performance management.” I place “performance management” in quotes because I don’t believe it’s the right language, and thus it’s not the right data, not the right process, not the right technologies, etc. Most performance management processes are looking to allocate compensation, distribute rewards, and identify “high performers.” The concept (and language) is very old in management thinking and is supposed to be a critical process of any “high performing” organization. The thing is, and this gets to your question about People Analytics (PA) failing to solve a crucial HR problem: people/employees/workers want to Contribute, not “perform.” They also want to be viewed for what they contribute over time, not over the short-sighted window of a quarter, a 6-month period, or even a year (depending). Also, considering the idiosyncratic rater effect, ratings often speak more to the rater than to the people being rated. Rating people also assumes an omniscient, unbiased perspective on the part of the rater.
Even with these truths in mind, PA still takes this data, data that is widely known to be poor or blatantly inaccurate, and tries to build a confidence-inspiring story around it. As a result, many people become disengaged. They know they, and others who might provide critical value/contributions yet lack high performance ratings, are likely being overlooked by PA and, in turn, by the leaders who consume such information; and, even when they are being considered in such data and analysis, they’re likely placed into a sub-cluster deemed less important. This approach lacks creativity and humanity; and PA is often the customer of such data and does not take the initiative to change it.

So, as we proceed over time, PA leaders and professionals, in my view, must be much more assertive, creative, and courageous to bring about positive change in the process formerly known as performance management (shout out to Prince). What should it be instead? What do we call it? The answer lies in the culture leaders want to create. I do, though, advocate that the discussions focus on (1) past contributions, (2) development, (3) future contributions (intentions), (4) resourcing, and (5) ideas. Such a structure will enhance engagement between the supervisor and direct report, and it’ll also generate data that can help identify key contributors (those who contribute more than others over time), fast movers (those who learn faster and more over time), and other insights that, to date, have proved elusive. In summary, if there’s one place I’d like to see PA assert itself more and help HR, leaders, and workers, it’s in this process. Hope this helps, and here’s to humanizing the work experience for the benefit of all!
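As a rough illustration of the kind of data such contribution-focused check-ins could generate, here’s a minimal sketch. The schema, the sample values, and the “fast mover” heuristic (the slope of a learning score across check-ins) are my own assumptions for illustration, not a method prescribed in the post.

```python
# A rough sketch of data a contribution-focused check-in might generate,
# and how "key contributors" and "fast movers" could be surfaced from it.
# Schema, sample values, and scoring are illustrative assumptions only.
import numpy as np
import pandas as pd

# One row per check-in; contribution and learning are simple summary scores.
checkins = pd.DataFrame({
    "person":       ["ana", "ana", "ana", "ben", "ben", "ben"],
    "period":       [1, 2, 3, 1, 2, 3],
    "contribution": [3.0, 3.5, 4.5, 4.0, 4.0, 4.1],  # past contributions
    "learning":     [2.5, 3.5, 4.5, 3.0, 3.1, 3.2],  # development / growth
})

def summarize(df: pd.DataFrame) -> pd.DataFrame:
    """Cumulative contribution surfaces key contributors;
    the slope of learning over periods flags fast movers."""
    rows = []
    for person, grp in df.groupby("person"):
        slope = np.polyfit(grp["period"], grp["learning"], 1)[0]
        rows.append({
            "person": person,
            "total_contribution": grp["contribution"].sum(),
            "learning_slope": slope,
        })
    return pd.DataFrame(rows).sort_values("total_contribution", ascending=False)

print(summarize(checkins))
```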


I love the idea of starting with the outcome variable (e.g., sales growth) and then mapping out the hypotheses.

This kind of work seems to be happening more often in the world of sales and customer analytics than in people analytics, but that seems to be changing, in part thanks to thought leaders such as yourself.
