Strategic 360s

Making feedback matter

It’s wonderful, Dave, but…



This is one of my favorite cartoons (I hope I haven’t broken too many laws by using it here; I’m certainly not using it for profit!).  I sometimes use it to ask whether people are more “every problem has a solution” or “every solution has a problem” types. Clearly, Tom’s assistant is the latter.

I thought of this cartoon again this past week during another fun (for me, at least) debate on LinkedIn about the purpose of 360s, primarily the old decision making vs. development only debate.

Now, I don’t believe that 360 is comparable to the invention of the light bulb (though there is a metaphor lurking in there somewhere), nor did I invent 360. But, as a leading proponent of using 360 for decision making purposes (under the right conditions), I find that by far the most common retort is something along the lines of, “It’s (360) wonderful, Dave, but using it for decisions distorts the responses when raters know it might affect the ratee.”

Yes, there is some data suggesting that raters report their ratings would be affected if they knew the results might penalize the ratee in some way. And it does make intuitive sense to some degree. But I offer up these counterpoints for your consideration:

  • I don’t believe I have ever read a study (including meta-analyses) that even considers, let alone studies, rater training effects, starting with whether such training is included as part of the 360 system(s) in question. In my recent webinar (Make Your 360 Matter), I presented what I think is some compelling data from a large sample of leaders on the effects of rater training and rating scale on 360 rating distributions. (We will discuss this data again at our SIOP Pre-Conference Workshop in April.) In the spirit of “every problem has a solution,” I propose that rater training has the potential to ameliorate leniency errors.
  • There is a flip side to believing that your ratings will affect the ratee in some way, which, of course, is believing that your feedback doesn’t matter. I am not aware of any studies that directly address that question, but there is anecdotal and indirect evidence that this belief also has negative outcomes. What would you do if you thought your efforts made no difference (including not being read)? Would you even bother to respond? Or take time to read the items? Or offer write-in comments? Where is the evidence that “development only” data is more “valid” than data used for other purposes? It may be different, but different does not always mean better.

The indirect data I have in mind are the studies published by Marshall Goldsmith and associates on the effect of follow-up on reported behavioral change. (One chapter is in The Handbook of Multisource Feedback; another article is “Leadership is a Contact Sport,” which you can find at marshallgoldsmith.com.) The connection I am making here is that a lack of follow-up by the ratee can be a signal that the feedback does not matter, and the replicated finding in that case is that reported behavior change is typically zero or even negative. Conversely, when the feedback does matter (i.e., the ratee follows up with raters), behavior change is almost universally positive (and increases with the amount of follow-up reported).

It’s all too easy to be an “every solution has a problem” person. We all do it. I do it too often. But maybe it would help if we became a little more aware of when we are falling into that mode.  It may sound naïve to propose that “every problem has a solution,” but it seems like a better place to start.

©2010 David W. Bracken


2 Responses


  1. I would echo Dave’s comments and add a parallel to performance appraisal or employee survey ratings… metrics designed to create change. That is, usefulness trumps accuracy of ratings. While we draw comfort when our scores statistically relate to good outcomes, that is just a clue reflecting what we really want — we want these scores to create action and change that improve those outcomes.

    Thinking about “performance appraisal,” consider the basketball coach who benches a star player because he didn’t run his laps in practice. While being benched is often a measure of performance (“you aren’t as good as those on the court”), in this case it’s the coach (aka a manager) trying to make a developmental point with his player (employee). The “accuracy” of being benched as a measure of performance is subordinate to whether or not the star player shakes off his laziness next practice and is ultimately in better shape for the playoffs.

    Coaches, and managers, live in one world, not separate decision and development worlds. To them, these two issues often belong together. We need to do our best to understand and reduce unconscious bias. But what might be a conscious bias to you or me might be an adaptive and intelligent response on the part of the rater. The solution there is to better understand rater interests and objectives… which we hope align with being useful (like the coach above), and which we hope align with being accurate (which didn’t happen with the coach above).

    Scott Brooks

    November 19, 2010 at 12:24 pm

  2. Dear Dave,

    Excellent observations as usual. To paraphrase our 42nd President during his impeachment trial, I wonder if some of this depends on what our meaning of the word “matter” is. If 360 decision making outcomes matter in ways that can seriously affect the lives (well-being, material comfort, future prospects) of ratees, the imperative for good old-fashioned science, with all of its checks and balances, reproducibility of results, normality of distributions, and so on, becomes ever more essential. If a ratee might be denied access to important developmental experiences such as training, promotion or lateral assignments, or participation in a bonus pool or merit raise, or might suffer a general perception among those in positions of power that “Dave, according to his 360 results, is an also-ran kind of guy”… well, that, I think, is where decision making, given the current state of 360 science, starts to get onto thin ice.

    That said, I believe you are absolutely on the right track, both in terms of tackling the rater-training end of the process and in terms of the challenge that investments in such processes be able to “show” something for themselves. While I lament that American industry has been overrun with a bean-counter, flat-file, bottom-right-hand-corner mentality (with attendant damage to the competitive advantage and innovative edge of our industries), if I were paying your invoices, I would want to know that my people had engaged the material, planned some agreed-upon developmental work, and worked the developmental plan.

    I did quite a lot of struggling with how to make good on the promise of anonymity (for direct reports and peers) and yet help the ratee make sense of small-sample scale means. The frequency distribution doesn’t work because, in a ratee’s experience with, say, 5-6 subordinates, she is going to know which one seemed to provide feedback that makes Adolf Hitler look like a great humanitarian in comparison. I did come up with a scheme, which I termed a consistency index, but even still, the technical challenges attending the overall method are not trivial.

    It’s all a question of balance, I suppose. Notwithstanding, if we are going to give people guns, we must teach them how to use the things safely.

    Carl

    November 19, 2010 at 12:51 pm

