The Importance of Importance: The Role of the Rater
In my last blog, I surfaced a critical issue in 360 feedback processes: how to identify the behaviors/competencies that should have the highest priority (importance) for development planning. If there is one thing this field seems unanimous on, it is that focusing the leader’s efforts on 2-3 things is a best practice. So how do we determine what those 2-3 things are when leaders differ so much across functions, levels, experience, ability (competence), potential, and so on?
If we start with the instrument content (items) as defining the universe of behaviors (putting aside write-in comments for another day), then topics for discussion include 1) the need for different versions for different functions, 2) different versions for different positions (e.g., executive, supervisory, individual contributor), 3) different versions for different rater groups (e.g., direct reports vs. peers), and, finally, the length of the instrument.
I would like to set aside the question of multiple versions for now and return to it later. The question of versions is not so much about importance as it is about opportunity to perform (relevance to the job) and the raters’ opportunity to observe.
Last time, I started the discussion of the length of the instrument and raised some questions regarding guidelines in the One Page Talent Management book. I posted the blog link in the OPTM LinkedIn discussion, and Marc Effron responded with his rejoinders. I think we have agreed to disagree, though I am apprehensive about disagreeing since he feels that people who disagree with him should lose their jobs: “…the talent management leader should be the one who says that (how many items there should be) and they should be promptly fired if their answer is anything above 25.” (Note: Don’t bother to look for this thread since it has apparently been removed. I will leave it to others to surmise why.)
I would like to open up the Pandora’s Box of who is in the best position to indicate what the “most important” issues are for the participant: the raters, the manager (boss), or the coach? It doesn’t necessarily have to be an either/or proposition, but many 360 processes have a strong inclination one way or the other, including having the instrument designed to ask raters for the most important items. Some processes exclude the manager from seeing the feedback report, and I would take that as a strong signal about the perceived role of the boss. Some have reports go to the coach and not the manager.
For example, my impression from the OPTM chapter on 360 is that the authors place heavy weight on the raters. For starters, their process has the raters identify the most important behaviors (and then offer comments and suggestions). Secondly, I don’t see any specific reference to the manager and his/her potential role in using the feedback.
I attended a SIOP Workshop on Talent Management a few years ago that was led, in part, by Morgan McCall (if that name doesn’t ring a bell, start with the concept of “derailers”). In his commentary, he said that “the manager is the most important factor in an employee’s development.” I totally agree with him. The manager should be in the best position to place the feedback in context (e.g., special circumstances affecting the ratings) and then apply it to the current and future needs of the ratee, the team and the organization.
As for raters providing importance ratings (or the equivalent), I think it is a bad idea in both 360s and employee surveys. I have toyed with the metaphor of “putting the inmates in charge of the asylum,” with deserved trepidation (no, employees are not inmates and companies are not asylums), but the point, of course, is to question the wisdom of putting decision making into the hands of the less informed and questionably motivated. It is quite clear that, in employee surveys, the issues labeled as “most important” are not the issues that actually drive engagement and behavior, such as turnover.
In the context of 360s, we do acknowledge the views of raters through the ratings themselves. (The OPTM suggestion of not providing Top/Bottom score lists is interesting. When there are only 15-25 items, maybe they are indeed less necessary than with longer instruments. Of course, they also have raters pick the most important items.) I basically don’t think employees should dictate what the highest priorities are. Importance ratings are arguably more about the raters’ agendas than the ratees’. Plus, raters probably know less about the developmental plans of the ratee than the manager does. Including importance ratings (or the equivalent) in a 360 sets the explicit expectation that this is the raters’ role, i.e., to identify the highest priority development needs. That expectation should not be encouraged.
As for the coach, I will address this in a blog called, “When coaches go too far.” Let me just say that coaches are usually the least informed of all parties about the best development plans for the ratee, and are usually not around long enough to provide continuity.
Raters are not in as good a position to determine developmental priorities as the manager (boss). If the manager is not in the best position to interpret the feedback and to guide development priorities, then the system is broken. Managers need to be held accountable for the proper use of 360 feedback, for ensuring follow-through by the ratee, and for assisting in providing the developmental resources. When 360s don’t matter, this may be one of the reasons why.
©2010 David W. Bracken