Strategic 360s

Making feedback matter

Silly Survey Formats?



My recent webinar, “Make Your 360 Matter,” led to a blog entry called “Snakes in Suits” that was primarily about 360 processes being true to their objectives. Dale Rose, a highly experienced consultant and good friend (and collaborator), was motivated to submit a comment, part of which included this thought:

This also raises one of the problems with using that silly survey format where you can list all the ratees together while answering the survey. If raters are comparing across people while rating, then they are not thinking closely about what is going on specific to that person because a bunch of their attention is focused on comparing them to someone else. What happens when the context changes and I’m rating them compared to two different people? At best, if ratees have a professional helping to interpret the data they may actually think about the implications and draw reasonable conclusions. At worst, the shift in context messes with the data so much that no one knows what the differences mean.

In communicating with Dale, I learned that he had been unable to listen in to the webinar, which had included a brief discussion of the rating format that he references. To bring everyone up to speed, the multiratee format we are addressing is (or can be) a spreadsheet with the names of the ratees on one axis and the competencies on the other. The cells are where ratings are entered; the version I shared had a drop-down list of response alternatives (e.g., strongly agree to strongly disagree). The instructions have the rater work across the ratees, which encourages comparisons. Some users do not like the idea of comparisons, and that is one of a number of reasons (besides Dale’s) that the format might not make sense to implement.
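For readers who like to see the mechanics spelled out, here is a minimal sketch (in Python, with invented ratee and competency names used purely for illustration, not the actual tool from the webinar) of how such a multiratee grid could be represented, with every cell constrained to the same drop-down scale:

```python
# Minimal sketch of the multiratee grid described above: ratees on one axis,
# competencies on the other, and each cell filled from one shared scale.
# All names and competencies here are invented for illustration only.

RESPONSE_SCALE = [
    "Strongly disagree",
    "Disagree",
    "Neither agree nor disagree",
    "Agree",
    "Strongly agree",
]

ratees = ["Ratee A", "Ratee B", "Ratee C"]
competencies = [
    "Communicates clearly",
    "Collaborates with peers",
    "Delivers on commitments",
]

# One rater's form: a cell for every (competency, ratee) pair.
ratings = {(c, r): None for c in competencies for r in ratees}

def rate(competency, ratee, response):
    """Record one cell, enforcing the shared drop-down scale."""
    if (competency, ratee) not in ratings:
        raise KeyError(f"Unknown cell: {competency!r} x {ratee!r}")
    if response not in RESPONSE_SCALE:
        raise ValueError(f"Response must be one of: {RESPONSE_SCALE}")
    ratings[(competency, ratee)] = response

# The instructions have the rater work across the ratees, one competency
# (row) at a time, which is what invites the side-by-side comparison.
for ratee, response in [("Ratee A", "Agree"),
                        ("Ratee B", "Strongly agree"),
                        ("Ratee C", "Disagree")]:
    rate("Communicates clearly", ratee, response)
```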

I have successfully used this format on a few occasions. One was with a group of anesthesiologists who wanted to give feedback to each other, and also get feedback from nurses they regularly worked with. This format worked very well since the ratees were all of relatively equal status, and there was a large number of them (19).  I have used it with other groups where raters have had to give multiple ratings.

Part of my original motivation for trying this format came from comments from raters who had to complete many forms. I remember one manager who told me he took his forms home, spread them out on his deck, and tried to consider all of the ratees at the same time. Another manager told me how she had wanted to go back and redo some of her ratings when she got to the 8th or 10th and realized that her own internal calibration had changed as she completed the ratings. In other words, she was saying that she was a different person (rater) when she did the first one than when she had more experience and perspective in doing the later questionnaires.

Another way that raters become “different” as they fill out forms is simple fatigue, which undoubtedly affects both the quality and quantity (i.e., response rate) of feedback. This becomes an issue of fairness where, by luck of the draw, the ratees later in the queue are penalized in terms of the feedback they receive.

If (and I emphasize “if”) your process supports comparisons, this multiratee format seems to solve many problems. Some users have commented on the potential problem of a ratee list whose members are not comparable in position, level, etc., and indeed care should be taken to include ratees that have similar levels of responsibility.

Now let’s consider Dale’s view that this whole notion is “silly.” Let me start by saying that Dale is very experienced, and his opinions carry a lot of weight with me and others. He and I have collaborated often and we agree more often than not, but not always. This topic is one where we don’t agree, and where there is no “right” answer but rather a perspective on how to treat raters and what we can/should expect of them.

His main point seems to be that raters should be considering the context of the ratee when providing feedback (i.e., giving a rating). This suggests that the rater should muse over the ratee’s situation (however that is defined) before making each evaluation. I would assume and hope that raters are explicitly instructed to consider this context factor so that there is some semblance of consistency in communicating our expectations for the role of rater. But that instruction then promotes inconsistency by asking raters to consider a complex situational variable and to apply it in probably unpredictable ways.

In contrast, I am an advocate of making the rater’s task as simple and straightforward as possible. In past blogs, I have positioned that thought as attempting to minimize the individual differences in raters that can create rater error (or inconsistency). Adding a “context” instruction can only make the ratings that much more complex to both give and interpret.

My position is that the “context” discussion should happen after the data is in, not during its collection. I absolutely believe (and it appears Dale agrees) that 360 results need to be couched in the ratee’s situation, whether that is by the ratee’s manager and/or coach, and especially by any other users (e.g., human resources).

In the final tabulation, I believe that this “silly” rating format has many more benefits than problems. It can be an effective solution to the rater overload issue that some consultants try to solve by making instruments shorter and shorter at the expense of providing quality information to the ratee. It also solves some of the problems that occur when raters are asked to complete multiple questionnaires, which penalize the ratees at the end of the queue.

I am quite sure that we will be hearing from Dale.

©2010 David W. Bracken


One Response


  1. Dave: Thanks for the opportunity to vent! My concern boils down to this. Yes, we have to make it easy for raters to provide feedback, but not at the expense of accurate ratings. Let me give a concrete example of how this “silly format” can go awry. If, without looking up specific performance data, I were to rate the hitting ability of Cody Ross compared to the hitting ability of Ryan Howard based on my recent observations, I would likely rate Cody near the top of the scale and Ryan near the bottom. [Note: for those who are not following the fact that the Giants just made it into the World Series: Cody is a baseball player for the SF Giants who won the MVP award for the recent NLCS]. But, if I were to rate Cody Ross, Ryan Howard, and Chase Utley side by side, then my ratings of Ryan would increase because Chase Utley looked more horrible than Ryan did, and my ratings of Cody might go up even more because against these two he looked like a superstar. So my thesis is this: if you provide raters with an explicit and salient comparison, their ratings will be influenced (which is Dave’s point too, by the way!). The effects of this comparison are where we differ. For example, in two days (Game 1 of the World Series) we will have a chance to observe the hitting of Cody Ross and the hitting of Josh Hamilton (MVP of the ALCS who plays for Texas). I would like to think Cody Ross will look like the same hitter he did last week, but based on recent observations, Josh is likely to make Cody look a bit “middle of the road.” So, you see, by using this silly rating form, my ratings are floating all over the place. By giving me an explicit, salient comparison set, the 360 form itself is encouraging this behavior. If I am thinking of promoting Cody next year, I want accurate data that isn’t just “flavor of the month.” I would much prefer the raters to use their own larger universe of comparators and provide a rating focused solely on their observations of Cody over time and in many situations. Of course, this gets even messier if next year I want to see how people think Cody is doing and compare his improvement year over year. In that case, the “silly” format might include as comparisons hitters like Daric Barton, Rajai Davis, and Kurt Suzuki (sad to say, I am a long-time A’s fan and we have some pretty weak hitters). All of a sudden, Cody’s scores increase because we changed the comparisons (but not necessarily because he became a better hitter). But that’s just my .02. All that really matters in the next week is that the Giants take Texas in 7!

    Dale Rose

    October 25, 2010 at 12:05 pm

