Photo: Marcus Buckingham
If you were my manager and you watched my performance for an entire year, how accurate do you think your ratings of me would be on attributes such as my “promotability” or “potential?” How about more specific attributes such as my customer focus? Do you think that you’re one of those people who, with enough time spent observing me, could reliably rate these aspects of my performance on a 1-to-5 scale?
These are critically important questions, because in the vast majority of organizations we operate as though the answer to all of them is yes: with enough training and time, people can become reliable raters of other people. We have constructed our entire edifice of HR systems and processes on this answer.
Likewise, when, as part of your performance appraisal, we ask your boss to rate you on the organization’s required competencies, we do it because of our belief that these ratings reliably reveal how well you are actually doing on these competencies. The same applies to the widespread use of 360-degree surveys. We use these surveys because we believe that other people’s ratings of you will reveal something about you that can be reliably identified, and then improved.
We’re wrong. Research reveals that neither you nor any of your peers is a reliable rater of anyone. As a result, virtually all of our people data is fatally flawed. Over the last 15 years a significant body of research has demonstrated that each of us is a disturbingly unreliable rater of other people’s performance. The effect that ruins our ability to rate others has a name: the Idiosyncratic Rater Effect, which tells us that my rating of you is driven not by who you are, but instead by my own idiosyncrasies. This effect is large and resilient. No amount of training seems able to lessen it, and on average, 61% of my rating of you is a reflection of me. Bottom line: when we look at a rating we think it reveals something about the ratee, but it doesn’t. Instead, it reveals a lot about the rater.
Despite the repeated documentation of the Idiosyncratic Rater Effect in academic journals, in the world of business we appear unaware of it. We have yet to grapple with what this effect does to our people practices. We take these ratings of performance, of potential, of competencies, and we use them to decide who gets trained on which skill, who gets promoted to which role, who gets paid which level of bonus, and even how our people strategy aligns to our business strategy. All of these decisions are based on the belief that these ratings actually reflect the people being rated. After all, if we didn’t believe that, if we thought for one minute that these ratings might be invalid, then we would have to question everything we do to and for our people. How we train, deploy, promote, pay, and reward our people, all of it would be suspect.
Is this really a surprise? You’re sitting in a year-end meeting discussing a person and you look at their performance ratings, and you think to yourself “Really? Is this person really a ‘5’ on strategic thinking? Says who, and what did they mean by ‘strategic thinking’ anyway?” You look at the behavioral definitions of strategic thinking and you see that a “5” means that the person displayed strategic thinking “constantly” whereas a “4” is only “frequently,” but still, you ask yourself, “How much weight should I really put on one manager’s ability to parse the difference between ‘constantly’ and ‘frequently’? Maybe this ‘5’ isn’t really a ‘5’. Maybe this rating isn’t real.”
Source: ATD (blog)