In my previous post I argued that HR needs to stop focusing on “best practices” and on making its processes world class. A related problem arises when evaluating how HR is doing and deciding what criteria to use: HR too often evaluates its programs as designed, not whether they address what the business actually needs. Three examples illustrate the point: compensation, leadership development and competency models, and training and development.
Compensation. A key issue in evaluating an HR program is whether you evaluate the program’s design or its intent. For example, merit raises are supposed to motivate people to perform. The design goal is to differentiate compensation based on performance, and that differentiation is straightforward to measure. The intent, increasing motivation to perform, is separate and much harder to measure.
The problem, as Steve Kerr noted years ago, is that when it comes to compensation, what we can measure about on-the-job behavior is different from what we want to control (or at least influence). We want people to apply discretionary effort in ways that are aligned with the job objectives; that is what we want to influence. But we have only a blunt tool, merit raises, which reward people for sustained performance over an entire year. Bonuses are slightly better because they don’t reward the person forever based on one year’s performance, but they are no better from a measurement standpoint: the link between the evaluation tool (the bonus) and the behavior we want to influence is too weak.
The best we can do when evaluating whether compensation is set right is to check that we aren’t paying too little or too much relative to external benchmarks, and that raises and bonuses are larger for better performers. But there is still a big gap between that and motivating and guiding the behaviors we need for top performance, and traditional comp evaluations can’t close it.
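To make that concrete, here is a minimal sketch of what a traditional comp evaluation looks like as an analysis. Everything in it is hypothetical for illustration: the data, the column names (salary, market_median, performance_rating, merit_raise_pct), and the 0.85-1.15 compa-ratio band.

```python
import pandas as pd

# Hypothetical employee data: pay, an external market benchmark, and performance.
df = pd.DataFrame({
    "salary":             [82_000, 95_000, 70_000, 110_000, 64_000],
    "market_median":      [85_000, 90_000, 72_000, 100_000, 70_000],
    "performance_rating": [3, 5, 2, 4, 3],           # 1 (low) to 5 (high)
    "merit_raise_pct":    [2.5, 6.0, 1.0, 4.5, 3.0],
})

# Check 1: are we paying too little or too much relative to market?
# A compa-ratio near 1.0 means pay is roughly at the external benchmark.
df["compa_ratio"] = df["salary"] / df["market_median"]
out_of_band = df[(df["compa_ratio"] < 0.85) | (df["compa_ratio"] > 1.15)]
print(f"{len(out_of_band)} employees outside the 0.85-1.15 compa-ratio band")

# Check 2: do raises track performance? A positive correlation shows the
# program differentiates pay by performance. That is the design goal; it
# says nothing about the intent, i.e., whether the raises motivate anyone.
corr = df["merit_raise_pct"].corr(df["performance_rating"])
print(f"Raise-performance correlation: {corr:.2f}")
```

Both checks evaluate the program as designed. Neither tells you whether the raises changed anyone’s behavior, which is exactly the gap described above.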
Leadership development and competency models. Competency models form the backbone of how we define job requirements and performance in many roles. For individual contributor roles, competency models have proven effective at specifying the skills and behaviors needed for effective performance. Whether the role is administrative assistant, software engineer, truck driver, research scientist, nurse, machine operator or salesperson, competency models can effectively drive performance that supports strategy execution.
Yet for leadership roles, where competency models are used all the time, there is a big gap between what is measured and what the business needs to succeed. Leadership competency models specify the behaviors we want in our leaders. But they don’t address key aspects of the job such as decision making, which can matter as much for strategy execution as the behaviors leaders exhibit, or more.
The problem is that leadership competency models focus too much on the “how”, meaning how leaders act, and not enough on whether leaders carry out their core decision-making responsibilities at a world-class level. Competency models can be very effective for feedback and professional development, but they don’t close the gap with what the organization needs for better strategy execution.
Training and development. For training and development (T&D), the intent is to increase organizational capability; the program design pursues it through individually focused skill building. Kirkpatrick’s four-level model for T&D evaluation, which is widely used, is a classic example of focusing mostly on program design, not intent.
The Kirkpatrick model has four levels: (1) measure the learners’ reactions to the T&D program; (2) show that learning occurred and that skills were developed; (3) show application of the learning on the job; and (4) demonstrate business impact. People who apply the Kirkpatrick model almost never get to business impact because they get bogged down at levels 2 and 3.
The problem is that this model puts business impact at the end of the evaluation, after two difficult measurement steps. It is usually not practical, and sometimes not even feasible, to demonstrate that specific learning occurred and is being applied on the job. As a result, business impact often is never measured or demonstrated.
All three examples, compensation, leadership, and T&D, show what is wrong with how most HR evaluation happens today. There is too much emphasis on how the programs are designed, with measurements that say too little about business impact and about accomplishing strategic objectives. Elevating HR’s game means doing something different from the status quo.
The answer most often lies in taking a big step back and asking “What’s getting in the way of accomplishing the business objectives?” Rather than asking “How can this particular HR program be improved?”, look for the gaps in strategy execution that the current programs and work design aren’t addressing. That means taking a more systematic, holistic look at the strategy and organization design, and building and testing models of organizational effectiveness and strategy execution.
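As a sketch of what “building and testing models” can mean in practice: pool unit-level data on the business outcome you care about alongside candidate execution drivers, and test which drivers actually explain the outcome. The variable names below (unit_revenue_growth, decision_speed, training_hours, engagement) and the simulated data are hypothetical stand-ins for whatever your strategy says should matter.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_units = 40  # business units, stores, plants, etc.

# Hypothetical unit-level drivers; in practice these come from your own
# operational, financial, and HR systems (standardized here for simplicity).
df = pd.DataFrame({
    "decision_speed": rng.normal(0, 1, n_units),
    "training_hours": rng.normal(0, 1, n_units),
    "engagement":     rng.normal(0, 1, n_units),
})

# Simulated outcome in which decision speed matters and training hours
# don't, mimicking the kind of gap this post describes.
df["unit_revenue_growth"] = (
    0.6 * df["decision_speed"]
    + 0.3 * df["engagement"]
    + rng.normal(0, 0.5, n_units)
)

# Regress the business outcome on the candidate drivers to see which ones
# actually explain it.
X = sm.add_constant(df[["decision_speed", "training_hours", "engagement"]])
model = sm.OLS(df["unit_revenue_growth"], X).fit()
print(model.summary())
```

The point is not the particular regression; it is that the business outcome, not the HR program, sits on the left-hand side of the model.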
This post is drawn from my new book Strategic Analytics: Advancing Strategy Execution and Organizational Effectiveness, which will be published in November.