Strategic 360s

Making feedback matter

There is no such thing as “development only”



As I noted in my last blog, John Golden’s LinkedIn discussion generated a lot of interesting observations and opinions, none of which answered his question about legal challenges to 360 feedback. The opinions could be broadly sorted into two categories: the need for validation (which I addressed in my previous blog) and the debate over usage, developmental vs. decision making. (I continue to search for cases; the difficulty in finding any might be interpreted as general acceptance of using 360 for both development and decision making.)

As with the last blog, I am going to use some of the comments (in italics) from John’s discussion as a stimulus for some observations of my own. (I do so with some trepidation, since I was accused of taking at least one comment out of context; anyone who feels misquoted or misused should certainly point that out.) I will also cite a 2009 benchmark study by the 3D Group covering 51 organizations, with trending from a 2004 study.

I am not aware of any lawsuits, but the literature (about 5 years ago) pretty clearly pointed out that 360 assessments SHOULD NOT BE USED FOR personnel decisions.

There is literature on both sides of this debate, so saying “clearly” is a matter of perspective; there are “clear” positions on both sides. In 1996, I organized a debate at SIOP on this topic, which was then transcribed and published by the Center for Creative Leadership (Should 360-Degree Feedback Be Used Only for Developmental Purposes?, 1997). Bob Jako and I had/have a clear point of view favoring use of 360 for decision making, and Maxine Dalton and Vicky Pollman had/have a clear point of view favoring development-only use.

I (and others) have come to believe that the distinction is a false one. For starters, even companies that use 360 for decision making also use it for development. In the 3D Group study, 92% of organizations report using 360 for development, even when it is also used for decision making.

I have often wondered what practitioners are trying to communicate when they say “development only.” Sometimes they are contrasting with other uses that they see as inappropriate, and I understand that. What I don’t understand is the implication that the process needs to be less “rigorous” (as one person put it), which I take as being less reliable and/or valid (related concepts, of course). One operationalization of less rigor is to have fewer raters, maybe as few as 3-5 total.

The figure below places potential uses of 360 feedback (not necessarily an exhaustive list) on a continuum, with the criteria for placement, moving left to right, being greater:

  • Potential impact on the participant
  • Number of employees potentially affected
  • Importance to the organization

[Figure: Uses of 360]

So “development planning” is positioned as a type of decision. Even in “development only” scenarios, decisions are often being made about the participant, including access (or lack thereof) to developmental experiences (e.g., assignments, training, mentoring, coaching, career paths) that also involve organizational resources. Again, think of the decision not to provide developmental resources to an individual, a consequence rarely measured (or measurable) in my experience.

To the far left is “information only”: the practice of providing the feedback to the employee, often with explicit instructions not to share the report with anyone else. Equally ineffective is allowing the report to stay private even when that is not prescribed. In my more cynical moments (like now), I call this a “parlor game,” where it is all done for amusement and in the hope that the participant will create some internal accountability to take action. Whether or not that action is aligned with organizational objectives is also left to speculation.

“Performance assessment” refers to the use of 360 as solely a performance measure with no developmental component. While rare (as indicated by the 92% development use figure in the 3D data), it can be an appropriate use of 360 data.

As for “Downsizing” (termination), I will discuss that later in this blog.

However, given that 360s typically measure strengths and weaknesses across various competencies, I personally find it a stretch to advise clients to use it for any purpose beyond creating a development plan. Using it in this manner, too, should relieve the legal issues that could lead to a lawsuit. If the data is not used for performance reviews or for personnel decisions, then it seems that a participant would have little basis for a suit.

As noted above, the question of what a “decision” is can be debated. Let me offer an alternative perspective that I heard from a lawyer at a 360 conference. He proposed that, since 360’s have been shown to be more reliable, valid, and fair (if done correctly) than traditional data sources (e.g., supervisory evaluations), then an organization is more at risk legally if it does not use 360 for decision making. Since our professional standards do require us to search for the “best” method to use for personnel decisions, we are obligated to determine if 360’s are the best method. This logically requires that we make our best effort to implement reliable, valid 360 processes.

For those of you who have not read the first book on 360 feedback by Edwards and Ewen (1996), I suggest you do so. While they are not psychologists, they had access to a large amount of 360 data across many organizations. Their point of view was that 360 should be used for decision making. The “development only” faction should consider this source as well.

Like interviews and assessments, they should be used as a part of the decision making process–not in isolation.

I totally agree with this observation, for many reasons. I propose that the foremost reason is that the data itself is susceptible to the vagaries of 360 processes, which often rely on small numbers of raters. Reporting scores to two decimal places further compounds the perception that the data are more exact than they are.

As importantly, 360 data can be generated under widely varying circumstances that the manager/boss of the participant often understands best and should take into consideration. For example, a leader may be given a developmental assignment to turn around a poor-performing group where some tough actions are needed and maybe some feathers ruffled. It probably is not fair to attach the same meaning to those numbers as to numbers generated in a more stable context.

Let me add here that the involvement of the manager/boss in development planning is critical; I can’t be swayed from that. Not sharing 360 results with the manager threatens credibility and consistency of use. And once the manager sees the results, the “organization” (as represented by the manager) is a co-owner, and it is no longer “development only” by definition.

Due to small numbers, it is also possible that a rater might abuse his/her responsibilities and give all “1” ratings (where “1” is worst), for example, and have a significant effect on the average. I do NOT support “Olympic Scoring” where the top and bottom scores are automatically eliminated. But I do believe that there should be a mechanism for eliminating clearly inappropriate rating patterns, starting with all “1’s” and probably extending to all “anything,” including all “5’s”, which is a different form of abdicating responsibility as a rater.
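To make this concrete, here is a minimal Python sketch of the kind of screen I have in mind. The function names, the four-rater example, and the rule of dropping invariant response patterns are illustrative assumptions for this post, not a published scoring standard; it also shows how much one protest rater can move a small-n average.

```python
# Screening out invariant rating patterns (all 1s, all 5s, etc.) before averaging.
def is_invariant(ratings):
    """True if a rater gave the same score to every item they answered."""
    answered = [r for r in ratings if r is not None]  # None = "Not Observed"
    return len(answered) > 0 and len(set(answered)) == 1

def item_average(raters, item):
    """Average one item across raters, excluding invariant response patterns."""
    valid = [r[item] for r in raters
             if not is_invariant(list(r.values())) and r[item] is not None]
    return round(sum(valid) / len(valid), 1) if valid else None  # one decimal, not two

raters = [
    {"listens": 4, "delegates": 5, "coaches": 4},
    {"listens": 5, "delegates": 4, "coaches": 4},
    {"listens": 4, "delegates": 4, "coaches": 5},
    {"listens": 1, "delegates": 1, "coaches": 1},  # all-1s pattern, screened out
]
print(item_average(raters, "listens"))  # 4.3 after screening; 3.5 if all four counted
```

With only four raters, the single all-1s pattern drags the “listens” mean from 4.3 down to 3.5, which is exactly the kind of distortion a screening mechanism is meant to catch.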

Finally, as others noted in John’s discussion, performance includes many factors beyond just 360 results.  360 results should supplement other performance data.

As for termination decisions, these should NEVER be made with 360 data; termination should be based on unacceptable performance which a company should not need a 360 assessment to identify.

Just because you can do something doesn’t mean you should. I recently heard of a fan at a Cleveland Indians game wearing a Miami Heat version of a LeBron James jersey in the bleachers. Suffice it to say, he was escorted out for his own safety after a very short time. (If you don’t know what this is all about, you are lucky.) Using 360 for termination/downsizing falls into this category of “bad idea” as well.

I was getting on a plane and, due to congestion in the aisle, stopped beside a flight attendant who, for some reason, felt obliged to strike up a conversation with me. “You know,” he said, “this plane is designed to land on water.” (This was long before Sully Sullenberger.) He continued, “Of course, you can only do it once.” That’s also what it’s like if you use 360 for downsizing: you can only do it once (for any purpose). The only time I have heard of a legal challenge (third hand) was in the use of 360 for layoffs.

One thing to keep in mind is that regardless of whether multi-rater feedback is best suited for development or appraisal, the reality is that it is used for both. Today’s (well intentioned) organizations often find themselves migrating from developmental 360s to at least some degree of evaluative 360s (with administrative consequences) whether or not they originally intended to do so.

The first sentence is definitely true; the second needs some qualifiers. My experience is that there are many unauthorized uses of 360 data, which is one of the greatest legal exposures (i.e., inconsistent practices with untrained users). A couple of years ago, one of my clients decided to dig into this question with a systematic investigation of how their “development only” program was really being used. They were “shocked” to find that the data was being used for other purposes (this feels like a line out of my favorite movie, Casablanca: “I’m shocked, shocked to find that gambling is going on in here!”).

As for the trends proposed in this comment, the 3D data only partially support this contention. While use of 360 trended upward for links to pay and promotion, it trended downward with uses such as performance management, high potentials, and succession planning. Curiously, trends were also downward for career planning and training.

Regarding the use of “I don’t know” 360 response options, there is no guarantee that the rater knows that they don’t know. That said, I am a strong advocate for multi raters. We just need to be cautious on what they are measuring, how they are administered, and how they are applied.

I am not sure this fits into the discussion well, but I do want to use it to propose that “Not Observed” may be a better choice than “Don’t Know.” It reinforces the need to report only on what is observed. Regardless of the choice, there does need to be a way to opt out of answering, and leaving the response blank is not a good solution (i.e., we want every question read and answered, even if the answer is “Not Observed”). To the last sentence, we should all give an “amen.”
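As a minimal sketch, again in Python and again using names and a 5-point scale that are my own illustrative assumptions, here is how a survey might treat “Not Observed” as a first-class response: every item must be answered, but only numeric ratings feed the averages.

```python
# "Not Observed" as an explicit response rather than a blank.
NOT_OBSERVED = "NO"          # assumed sentinel value; any distinct marker works
SCALE = {1, 2, 3, 4, 5}      # assumed 5-point rating scale

def validate_response(response, items):
    """Reject submissions with blank items; return only the scorable ratings."""
    allowed = SCALE | {NOT_OBSERVED}
    missing = [i for i in items if response.get(i) not in allowed]
    if missing:
        raise ValueError(f"Unanswered items: {missing}")
    return {i: v for i, v in response.items() if v in SCALE}

items = ["listens", "delegates", "coaches"]
print(validate_response({"listens": 4, "delegates": NOT_OBSERVED, "coaches": 5}, items))
# -> {'listens': 4, 'coaches': 5}; "delegates" is simply excluded from averages
```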

It would seem that, once an organization does use them for administrative purposes, there may be no going back to their effective use for development. So much of what makes the process developmental (ownership of the data, anonymity of the raters, constructive rather than punitive purpose, etc.) would be lost. The adage has always been that organizations should maintain two systems (one for administrative, and one for developmental).

The implication that administrative use prevents developmental use is not supportable; best practice says that development should always be part of the process, and it almost always is. The second sentence is arguable, but I see the point. I do not get how anonymity is lost, though. Maintaining two systems has indeed been a reasonable solution, though it reinforces the flawed reasoning that administrative use precludes development.

My proposition is that almost all (if not all) “development only” processes result in some decisions that affect the person’s life and career, even if the “decision” is to do nothing (i.e., to provide no organizational resources). Suggesting that those decisions require less rigor is in the best interest of neither the person nor the organization. This has implications for many design and implementation factors in a 360 process, including instrument design/content, rater selection, rater training, report generation, accountability, and action planning.

3D Group (2009). Current practices in 360-degree feedback: A benchmark study of North American companies. 3D Group Technical Report #8326. Berkeley, CA: Data Driven Decisions, Inc.

Bracken, D.W., Dalton, M.A., Jako, R.A., McCauley, C.D., and Pollman, V.A. (1997). Should 360-degree feedback be used only for developmental purposes? Greensboro, NC: Center for Creative Leadership.

Edwards, M.R., and Ewen, A.J. (1996). 360 degree feedback: The powerful new model for employee assessment and performance improvement. New York: AMACOM.

©2010 David W. Bracken


Written by David Bracken

August 23, 2010 at 11:14 pm

One Response


  1. Great post David, very thorough. We’ve seen these same trends of moving away from development only. Many of our clients are starting to use the data for finding high potentials or succession planning. We’re also seeing the trend of sharing the full feedback report, or a version of it, with a participant’s manager. In any of these scenarios we like to position this ahead of time with the participants so that they aren’t surprised that their data is being shared. It can also alleviate some of the legal concerns when they acknowledge that they know it’s going to be shared.
    Was the LinkedIn discussion that sparked your posts in a particular LinkedIn group?
    Thanks,
    Tom

    Tom Kuhne

    August 24, 2010 at 2:32 pm

