Strategic 360s

Making feedback matter

There Are “Right” Answers

For those of you who might attend the next SIOP (Society for Industrial and Organizational Psychology) Conference in Chicago in April, I am pleased to note that we have been accepted to conduct a panel consisting of contributors to The Handbook of Multisource Feedback, which is approaching its 10th anniversary of publication. The panel is titled, “How has 360 degree Feedback evolved over the last 10 years?”  Panel members include Allan Church, Carol Timmreck, Janine Waclawski, David Peterson, James Farr, Manny London, Bob Jako and myself.

We received a number of thoughtful, useful comments and suggestions from the reviewers of the proposal, one of which stated this:

I would like to see a serious discussion of whether or not 360 is a singular practice. It seems as though 360 can be used with so many different interventions (succession, development, training needs analysis, supplement to coaching, …the list is HUGE) that when we say something like “is 360 legal” it is almost impossible to answer without many caveats regarding the details of the specific 360 process that was used. It’s almost as though we need to move on from ‘is 360 xyz’ to ‘if we do 360 this way, we get these outcomes and if we do 360 that way we get those outcomes.’ Can’t wait to hear the panel, this is much needed.

This is an extremely insightful observation. I have broached this topic in earlier posts regarding the alignment of purpose with design and implementation decisions. But some things are required regardless of purpose.

To look at extremes, we might consider 360 processes where N=1, i.e., where a single leader is given the opportunity to get developmental feedback. This is often in preparation for an experience such as a leadership development/training program or a high-potential program. In these instances, it is an ad hoc process where an off-the-shelf instrument may be most practical. The instrument can be lengthy since raters will only have to fill it out once. And typically there are major resources available to the participant in the form of coaches, trainers, and/or HR partners to ensure that the feedback is interpreted and used productively.

Compare the N=1 scenario to the N>1 process. By N>1, I mean 360 processes that are applied across some segment of the population, such as a function, department, or entire organization. In these cases, it becomes much more important to have a custom-designed instrument that reflects unique organization requirements (competencies, behaviors) and can create system change while simultaneously defining effective leadership to raters and ratees alike. The process requires efficiencies because many raters are involved, some of whom are asked to complete multiple forms. We also need to plan for ways to support the many ratees in their use of the feedback.

BUT, we might also say that some things are so basic as to be necessary whether N=1 or N>1. Just this week I was sent this interview with Cindy McCauley of the Center for Creative Leadership (http://www.groupstir.com/resources_assets/Why%20Reliability%20and%20Validity%20Matter%20in%20360%20Feedback.pdf). Many readers will already know who Cindy is; if not, suffice it to say she is highly respected in our field and has deep expertise in 360 Feedback. (In fact, she contributed a chapter to the book “Should 360 Feedback Be Used Only for Development Purposes?” that I was also involved with.) In this interview, Cindy makes some important points about basic requirements for reliability and validity that I interpret as applicable to all 360 processes.

What really caught my attention was this statement by Cindy:

…the scores the managers receive back mean a lot to them. They take them very seriously and are asked to make decisions and development plans based on those scores. So you want to be sure that you can rely on those scores, that they’re consistent and reflect some kind of accuracy.

I take the liberty (which Cindy probably would not) of expanding the “make decisions” part of this statement to apply more broadly: others (such as the leader’s manager) also use the feedback to make decisions. When she says that managers make decisions based on their feedback, what decisions can they make without the support of the organization (in the person of their boss, most typically)? This is the crux of my argument that there is no such thing as a “development only” process. Development requires decisions and the commitment of organization resources. This only reinforces her point about the importance of valid and reliable measurement.

So what’s my point? My point is that I believe that too many ad hoc (N=1) 360 processes fall short of meeting these requirements for validity and reliability. Another debate for another time is whether off-the-shelf instruments have sufficient validity to measure unique organization requirements.  I do believe it is accurate to say that reliable measurement is often neglected in ad hoc processes when decisions are made about number of raters and quality of ratings.

For example, research indicates that raters have different “agendas” and that subordinates are the least reliable feedback providers, followed by peers and then managers. Lack of reliability can be combated in at least two ways: rater training and number of raters. We can put aside rater training (beyond having good instructions); it rarely happens despite its power and utility.

So we can improve reliability with numbers. In fact, this is really why 360 data is superior to traditional, single-source evaluations (i.e., performance appraisals). For N>1 processes, I STRONGLY recommend that all direct reports (subordinates) participate as raters. This has multiple benefits, including beefing up the number of raters for the most unreliable rater group. Then, for peers, aim for 5-7 respondents.
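To make the “more raters, more reliable” point concrete, here is a minimal sketch using the Spearman-Brown prophecy formula; the single-rater reliability value below is a hypothetical placeholder chosen only for illustration, not an estimate from the research mentioned above.

```python
# Minimal sketch: Spearman-Brown prophecy formula, illustrating why
# averaging across more raters yields a more reliable aggregate score.
# The single-rater reliability is a hypothetical placeholder.

def aggregate_reliability(single_rater_reliability: float, n_raters: int) -> float:
    """Reliability of the mean of n_raters independent ratings."""
    r = single_rater_reliability
    return (n_raters * r) / (1 + (n_raters - 1) * r)

if __name__ == "__main__":
    r_single = 0.30  # hypothetical reliability of a single subordinate's ratings
    for n in (2, 4, 7, 10):
        print(f"{n} raters -> aggregate reliability ~ {aggregate_reliability(r_single, n):.2f}")
```

With a (hypothetical) single-rater reliability of .30, the aggregate rises from roughly .46 with two raters to about .75 with seven, which is why including all direct reports and 5-7 peers makes a practical difference.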

My contention is that the majority of ad hoc (N=1) processes do not adhere to those guidelines. (I have no data to support that assertion, just observation.)  The problem of unreliable data due to inadequate number of raters is compounded by the fact that the decisions resulting from that flawed data are magnified due to the senior level of the leaders and the considerable organization resources devoted to their development.

When I started writing this post, I was thinking of the title “There Is No ‘Right Answer,’” meaning that decisions need to fit the purpose. But actually there are some “Right Answers” that apply regardless of purpose. Don’t let the “development only” argument lead to implementation decisions that reduce the reliability and validity of the feedback. In fact, many guidelines should apply to all 360 processes, whether N=1 or N>1.

©2011 David W. Bracken

2 Responses

  1. Dave, I’m glad you liked my comment on the 360 panel at SIOP. I will be in the audience throwing out more nuggets like that. 🙂 I wonder what your thoughts are about the concept of inter-rater reliability in 360. For example, what is the conclusion if you have, say, 7 direct reports and two of them rate you low but five rate you high on an item? I suspect that some would argue this situation suggests the data are not “reliable.” My own view is that the data are accurately reflecting that you treat the two differently than you treat the five.

    Dale Rose

    January 4, 2011 at 12:27 pm

    • Hi Dale! Of course, didn’t know it was your comment; nice to know who the author is.
      I am in agreement with you on your take on inter-rater reliability, with this additional thought. Another possible cause of this situation is that the raters interpret the question differently. It’s almost impossible to make an item specific enough to prevent this from happening. For example, if you ask, “Ensures team has sufficient resources”, some might think of budget, others headcount, others equipment (e.g., computers), and so on. Creating an item for each of those possibilities would make the survey too long. So the solution, then, is to talk with the raters to get clarification. (I know I’m not saying anything you don’t know/do. Hopefully others will chime in.)

      David Bracken

      January 4, 2011 at 12:49 pm

