Who is the customer? Take Two
In an earlier blog, I asked the question, “Who is the customer in a 360 process?” The particular focus of that discussion was the length of 360 questionnaires.
I have recently been exploring the websites of various 360 Feedback providers to see the products/services that are offered, and how they are positioned. I was surprised by how many vendors offer processing services without consulting services, which I think relates back to the potential problems with computer-driven solutions that I covered a couple of blogs ago.
What I would like to briefly address, stemming from this search exercise, is report formats and the decisions involved in their design. One of the many decisions in designing a report is whether to show the actual frequency of responses. In my search of 360 websites, when report samples were provided, they more often than not showed the mean score for each rater group (e.g., self, boss, direct reports, peers, customers) but not how the average was derived (i.e., the rating distribution).
From experience, I know where this decision comes from. Typically, the rationale is that showing the frequencies will potentially draw attention to outliers (a single rating, usually at the low end of the scale) and cause problems if the ratee misuses that information. Misuse can come in the form of making assumptions as to who gave the rating, and/or exacting some form of retribution on the supposed outlier.
These things do happen. My question is whether the best solution to this problem is to deny this potentially useful data to the ratee and other consumers of the report, such as their manager and possibly a coach.
When discussing the question of survey length, I proposed that short surveys (fewer than 25 items, for example) can be a sign that the rater is a more important “customer” of a 360 process than the ratee. The abbreviated survey supposedly makes the task easier for the raters but denies the ratee useful data on a broader range of behaviors that may be relevant to his/her situation.
Similarly, not showing frequencies identifies the rater as more important than the ratee: withholding the frequencies supposedly protects the rater from potential abuse. On the other hand, providing that information to ratees can be extremely useful in understanding the degree of consensus among raters. (Some reports provide an index of rater agreement in lieu of rating distributions, but I have consistently found those to be almost useless and frequently misunderstood.)
Distributions can also help the ratee and other users see how outliers can have a major impact on mean scores, especially when the N is small (which it often is with 360’s). I have also found it useful to be able to see cases where there is one outlier on almost every question. When possible, I have asked the data processor to verify that the outlier was the same person (without identifying who it was), and I have informed the ratee and his/her manager that the scores have been affected by one person who has abused his/her role as a feedback provider. I have also provided counsel on how to use that information, including not making assumptions as to who the outlier is and not attempting to discover who it was.
With one client who was provided with rating distributions, the report had this kind of pattern, with one outlier on every item. (I have told this story before but in a different context.) Since he met with his raters (a best practice) and shared his report (another best practice), he felt compelled to mention that apparently someone had some problems with his management style, and that he would appreciate it if that person, whoever it was, would come talk with him sometime. He left it at that. Sure enough, a member of the team did come to see him and, with embarrassment, confessed that he had accidentally filled the survey out backwards (he had misread the scale). Think how this helped the manager in so many ways by not having to make assumptions as to the cause of the feedback results. If he had not had the detail of the frequencies, he (and his boss/coach) would not have known how the averages had been artificially affected. It is also one more reason why 360’s should be used with some judgment, not just treated as a “score.”
We do need to protect raters from being abused. We also need to help them feel safe from identification so that they will continue to be honest in their responses. One way to approach that is to ensure that managers and coaches reinforce to ratees the proper ways to read, interpret, and use the feedback. There should also be processes in place to identify ratees who do behave improperly and to hold them accountable for their actions.
Ratees should be the most important “customer” in 360 processes. They need to understand how their raters responded and, by the way, on a sufficient number of items to apply to their situation. Design and implementation decisions that treat raters as being more important than ratees are misguided.
©2011 David W. Bracken