Strategic 360s

Making feedback matter

Put your scale where your money is (or isn’t)



I had the opportunity to listen in on a web broadcast recently where Marc Effron was the featured speaker (as part of his “One Page Talent Management” approach, which I have not yet read), and the topic was advertised as “Why 360s Don’t Work (and What You Can Do About It).” I found it very interesting and thought provoking, as did others on the call, judging from the questions that were submitted. In fact, I listened to it twice! Perhaps his most controversial position is his support for using 360 for decision making, which I support under the right conditions. I plan to write a couple of blog posts about some of his positions on 360, and I hope Marc will comment if he feels I have misrepresented him and/or am incorrect in some way (which is definitely a possibility).

Marc proposed a number of approaches to 360 that he believes make the process more effective and efficient. One that caught my attention was his approach to the rating scale. In fact, it motivated me to submit a question during the session, and I will discuss his response as well.

Marc spoke of a “Do More/Less” scale he uses that ranges from “Do Much More” at one end to “Do Much Less” at the other, with “Don’t Change” as the midpoint.  I have seen a presentation where it is a 5-point scale, but I could easily see a 7-point scale as well.

During the webcast, as he was describing this scale, I believe he said, “We don’t care how good or bad you are.”  In other words, he is proposing an “ipsative” approach to measurement (if I remember my graduate training correctly), where the focus is on within-person comparison (versus a “normative” approach, which is a between-person comparison).  In this context, the ipsative scale acknowledges that we all have a stack ranking of abilities from best to worst, regardless of how well we perform in comparison to others. In a development-focused process, this has great appeal in communicating that we are all better at some things than others, and we all have a “worst at” list even if we think we are still pretty good at those things relative to others.

It is arguable whether all raters actually use the “More/Less” scale ipsatively. I am assuming that Marc intends it to be used that way based on his “don’t care how good or bad you are” comment. It would be nice if the instructions to the rater reinforced that point (i.e., don’t compare this person to others), and maybe they do.  There are other ways to generate within-person rankings, such as paired comparisons and checklists, which seem more direct but probably have their own drawbacks (I have never used them, so I am no expert).

I see ipsative approaches to 360 rating scales as potentially fantastic solutions for “development only” processes where users are forbidden from using the data for decision making (or so they say).  Many of us know of supposed “development only” programs where the data is being used for decision making in various ways, creating potential inconsistency, unfairness, and misuse. If these companies used an ipsative scale such as Marc’s, that would theoretically prevent them from using it for decision making, since the data is entirely within-person and inappropriate (or worse) to use for comparing performance across employees.

The problem with Marc’s situation is that he IS using this scale for decision making. So that was my question to him: how can you use a nonevaluative (ipsative) scale to make comparisons (i.e., decisions) among employees?  His response was basically that (a) the “Do More” list is generally indicative of areas in need of development, and (b) the 360 results are supplemented by other data. Point (a) seems to fly in the face of the “don’t care how good or bad you are” position. It would also seem to be inconsistent with the “develop your strengths” movement, where people are encouraged to leverage their strengths (in a nutshell).  Point (b) is sound advice about not using 360 results in isolation, but it doesn’t give me much faith in the rigor of his 360 data.

If we are going to use 360 results to make decisions about employees, that means that someone is going to get more of something (e.g., pay, promotions, development opportunities, training, high potential designation) and someone is going to get less based, at least in part, on the 360 data. That is what “decision making” means.

Marc speaks of transparency in the use of 360 as a central premise of his approach. If that is the case, we can start by being totally “transparent” with our raters by telling them how their ratings are being used. If it is a truly “development only” process, use a within-person (ipsative) scale with explicit directions not to compare the ratee to others when assigning ratings.

If the 360 is supporting decision making, ask the rater to help us make comparisons by using what I call a “normative” scale. I have successfully used normative scales, and they usually look something like this:

5 = Role Model (in the top 5-10%)

4 = Above Average (in the top 20-25%)

3 = Comparable to Other Leaders

2 = Below Average (in the bottom 20-25%)

1 = Far Below Average (in the bottom 5-10%)

The directions to the raters can help define the comparison group, or “other leaders.” But clearly we are creating a frame of reference for the rater that encourages something closer to a normal distribution and makes a direct attack on leniency error. I believe that the traditional leniency problem with 360 processes is at least partially attributable to the ambiguity of common rating scales, such as the Likert (Agree/Disagree) scale, where the rater is left to attach his or her own meaning to the scale points. Rater training can help combat rating errors but, as I have noted before, is rarely implemented.
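To make the ipsative/normative distinction concrete, here is a minimal sketch in Python using entirely hypothetical ratings and leader names (nothing from Marc’s OPTM360 or any real instrument). It simply contrasts how the same sort of 360 data gets summarized within one person under a “Do More/Less” scale versus across people under a normative scale like the one above.

```python
# Hypothetical illustration of ipsative (within-person) vs. normative
# (between-person) summaries of 360 ratings. All values are invented
# for demonstration only.

from statistics import mean

# Average ratings one leader received on three behaviors, using a
# 5-point "Do Much Less" (1) ... "Do Much More" (5) scale.
ipsative_ratings = {"listening": 4.2, "delegation": 3.1, "coaching": 4.6}

# Ipsative summary: rank the behaviors within this one person.
# No comparison to anyone else; it only says what to do more or less of.
do_more_order = sorted(ipsative_ratings, key=ipsative_ratings.get, reverse=True)
print("Do more of (in order):", do_more_order)

# Average overall ratings for several leaders on the normative
# 1 (Far Below Average) ... 5 (Role Model) scale.
normative_ratings = {"Leader A": 4.4, "Leader B": 3.0, "Leader C": 3.8}

# Normative summary: compare each leader to the group, which is the
# kind of between-person information decision making actually requires.
group_mean = mean(normative_ratings.values())
for leader, score in sorted(normative_ratings.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{leader}: {score:.1f} ({score - group_mean:+.1f} vs. group mean)")
```

The ipsative summary only orders behaviors for a single leader; nothing in it supports ranking Leader A against Leader B, which is exactly why a normative frame of reference is needed once decisions are on the table.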

Want to be transparent and communicate the most important decision about your 360 process, i.e., its purpose? Put your scale where your money is (or isn’t): ipsative for development only, normative for decision making.

©2010 David W. Bracken


Written by David Bracken

September 2, 2010 at 1:49 pm

5 Responses


  1. David – Thank you for your column! I’m thrilled that you liked most of what you heard on Patricia Wheeler’s webinar. I hope I can shed some light on the conundrum of ipsative vs. evaluative in relation to the OPTM360 (blatant plug: more info at http://www.onepagetm.com/optm360.html).

    We suggest a developmental, not evaluative, scale to reduce the negative reactions to the typical 360. As discussed on the call, typical 360s result in lots of emotion and very little action. It’s all about how good you are and how well you compare to others. Our “Do much more” to “Do much less” scale instantly removes that negative reaction. We’ve found in 500+ administrations that managers tremendously prefer it to getting typical 360 feedback. So our intent with that part of the design was to get managers to take in the information without it causing defensive reactions — the first step to making a 360 a useful tool.

    We aren’t saying that, because the OPTM360 is a better developmental tool, it can’t also provide valuable information for making organizational decisions. Given that behaviors are critical performance drivers in most companies, and that 360s are the most objective source of behavior information, we can’t see why a company should ignore this valuable information. We don’t see any logical inconsistency in having a tool that’s both great for development and can help companies make smarter choices about who goes into what roles, what development investments should be made, etc.

    You’re correct that we suggest 360 behavioral data be considered in key organization decisions. We’re also very clear about transparency; in fact, it is one of the three themes in our book. We believe in 100% transparency about what a 360 is used for. That’s why we love the OPTM360: we can tell a manager, “Of course your behaviors are important. We’ve been making decisions about you for years partially based on how you behave. We’ve just written it down and told you about it! Now that you know behaviors are important, we want to do everything possible to help you have great behaviors. This OPTM360 will help you understand EXACTLY what to do to change where needed.”

    I know you’re not proposing this, but the near piety with which 360s are treated by many consultants and practitioners is hilarious. It’s as if they think that by only showing the 360 to the participant, no one else will know how they behave. Of course they know how they behave — they work with them!! We in HR should end this completely non-business focused behavior immediately! Full transparency for all!

    BTW, send me your mailing address and I’ll send you a copy of the book in appreciation of your thoughtful comments. (marc@talentstrategygroup.com)

    Marc Effron

    September 2, 2010 at 6:13 pm

    • Marc, thank you very much for your comments. I believe we agree on most points, and you are doing a great service to the field of 360 feedback.
      I will say this about the scale issue at hand: If I read you correctly, you are telling raters that the purpose is development only when it is not. (I totally agree that development is not precluded by a decision making purpose.) Eventually, they will learn the truth, at the risk of creating distrust in the system and leadership in general.

      David Bracken

      September 2, 2010 at 10:53 pm

  2. Hi – No, to be clear, we are saying the tool is developmental — meaning that compared to traditional 360s, the OPTM360 helps managers develop more quickly. We are also saying the results of the OPTM360 should be transparent — to the person’s manager, HR, talent mgmt., etc. — and be used by the organization as they feel is appropriate. AND the organization should be transparent about all of it.

    Marc Effron

    September 2, 2010 at 11:10 pm

    • Thanks again, Marc. I’ll take one more run at this and then let it go. Every 360 process that is “administrative” is also developmental. So what differentiates 360 processes is not whether they are developmental or not, but whether, as you say, they are transparent and used for decision making. To suggest to raters that the purpose is “developmental” is only dropping one shoe.
      I am looking forward to reading the book!

      David Bracken

      September 7, 2010 at 12:53 pm

  3. […] have advocated for the need to have the scale to match the purpose in an earlier blog (https://dwbracken.wordpress.com/2010/09/02/put-your-scale-where-your-money-is-or-isnt/) so I will move on to another pet […]

