It’s wonderful, Dave, but…
This is one of my favorite cartoons (I hope I haven’t broken too many laws by using it here; I’m certainly not using it for profit!). I sometimes use it to ask whether people are more “every problem has a solution” or “every solution has a problem” types. Clearly, Tom’s assistant is the latter.
I thought of this cartoon again this past week during another fun (for me, at least) debate on LinkedIn about the purpose of 360s, primarily the old decision-making vs. development-only debate.
Now, I don’t believe that 360 is comparable to the invention of the light bulb (though there is a metaphor lurking in there somewhere), nor did I invent 360. But, as a leading proponent of using 360 for decision-making purposes (under the right conditions), I find that by far the most common retort is something along the lines of, “It’s (360) wonderful, Dave, but using it for decisions distorts the responses when raters know it might affect the ratee.”
Yes, there is some data suggesting that raters report their ratings would change if they knew the results might penalize the ratee in some way. And it does make intuitive sense to some degree. But I offer up these counterpoints for your consideration:
- I don’t believe I have ever read a study (including meta-analyses) that even considers, let alone studies, rater training effects, starting with whether such training is included as part of the 360 system(s) in question. In my recent webinar (Make Your 360 Matter), I presented what I think is some compelling data from a large sample of leaders on the effects of rater training and scale on 360 rating distributions. (We will discuss this data again at our SIOP Pre-Conference Workshop in April.) In the spirit of “every problem has a solution,” I propose that rater training has the potential to ameliorate leniency errors.
- There is a flip side to believing that your ratings will affect the ratee in some way, which, of course, is believing that your feedback doesn’t matter. I am not aware of any studies that directly address that question, but there is anecdotal and indirect evidence that this also has negative outcomes. What would you do if you thought your efforts made no difference (including not being read)? Would you even bother to respond? Or take time to read the items? Or offer write-in comments? Where is the evidence that “development only” data is more “valid” than data used for other purposes? It may be different, but that does not always mean better.
The indirect data I have in mind are the studies published by Marshall Goldsmith and associates on the effect of follow-up on reported behavioral change. (One chapter is in The Handbook of MultiSource Feedback; another article is “Leadership is a Contact Sport,” which you can find at marshallgoldsmith.com.) The connection I am making here is that a lack of follow-up by the ratee can signal that the feedback does not matter, and the replicated finding is that, in those cases, reported behavior change is typically zero or even negative. Conversely, when the feedback does matter (i.e., the ratee follows up with raters), behavior change is almost universally positive (and increases with the amount of follow-up reported).
It’s all too easy to be an “every solution has a problem” person. We all do it. I do it too often. But maybe it would help if we became a little more aware of when we are falling into that mode. It may sound naïve to propose that “every problem has a solution,” but it seems like a better place to start.
©2010 David W. Bracken
Written by David Bracken
November 19, 2010 at 9:35 am