Strategic 360s

Making feedback matter

Who is the customer in a 360 process?



In a recent blog post I raised the question of rating scales, a topic covered by Marc Effron in a webcast a couple of weeks earlier. Marc and I had a couple of exchanges after that, which hopefully allowed readers to see our perspectives a little better. (I wish that others would join in occasionally too!)

Another subject Marc broached in his webcast was the problem of 360s being viewed as onerous, too cumbersome for users to manage. He certainly has a valid point, since practitioners are constantly being told that the instrument can’t be “too long,” that raters are being overburdened, or even that participants are being asked to do too much (like read a report).

The length of a 360 survey is just one of many decisions made in the design and implementation of a 360 process. Some of these decisions may be relatively minor by themselves in determining the effectiveness (behavior change?) and sustainability of the process, but others are potentially major factors, including rating scale, behavioral model/items, rater groups, rater selection, manager approval, rater training, report content and format, and data ownership, to name a few. To this point, I suggested in an earlier blog entry that it is these (and other) major decisions that make research (e.g., meta-analyses) difficult, since these factors vary so widely and are often not reported.

In our article, “360 Feedback from Another Angle” (Bracken, Timmreck, Fleenor and Summers, 2001), we proposed that there is no one “right” answer to most of these design/implementation decisions. As with most decisions, there are factors that come into play that go beyond cost/benefit. Our decisions are also usually affected by our values and experiences, i.e., “who we are,” despite attempts to be objective.

If you have had the opportunity to design, implement, and/or experience a 360 process, it might be useful to reflect on these decisions and why they ended up the way they did. In the “Angle” article, we propose that these decisions rest on implicit or explicit assumptions about who the most important customer is in each case. When we say “customer,” we are referring to the stakeholders in the 360 process, which most commonly include the participant (ratee), the raters, and the organization. Depending on who else is involved (another decision), stakeholders might also include a coach and the manager.

Let’s consider the questionnaire-length issue, which is a partial solution to Marc’s observation of 360s being viewed as “onerous.” I personally have never seen a one-question 360 instrument, but, if it did exist, it might ask, “How effective is this person as a leader?” I would propose that questionnaires DO exist that have the raters provide ratings at the dimension level (with the behavioral indicators listed but not rated), so that the rater only has to answer 6-10 questions (for example).

When I hear of or see organizations implementing a survey of fewer than 20 questions, and/or asking for ratings at the overall dimension level, it seems to me that they are explicitly saying that the most important customers are the raters and the organization, not the participant. This design decision indicates that the prime objective is to minimize the time the rater must invest in filling out a questionnaire (although I have heard of research suggesting that it actually takes just as long to read and consider the dimension-level ratings as it does to rate each behavior). The organization benefits from reducing soft (time) costs, as well as hard costs of processing and reporting. The organization can still get bottom-line scores if that is its objective.

The loser in this decision is usually the participant, along with the manager and coach if they are involved. (I must note that blanket statements are dangerous, and some participants couldn’t care less about their feedback, so less is just fine with them.) Let’s take the one-item survey, or even the 6-10-item survey. Participants get “feedback” data, but how useful is it? Who knows what the raters were focusing on? Were they weighting some behaviors over others? Does it just reflect a mental average with huge halo? Most importantly, what possible action(s) can be taken without just taking a wild guess? Sometimes organizations ask for write-ins to provide focus, but those vary widely in quantity and quality.

The survey-length decision is often confounded by the number of ratings per person being requested. In a one-off, ad hoc survey, length is usually something of a moot point, as raters may be willing to tolerate an excessively long (>100-item) survey if they only do it occasionally. But if a group is going through the experience, then peers and bosses may have to fill out multiple forms in a short time period. BUT most direct reports (often the largest rater group) will only complete one survey, i.e., on their boss.

Obviously we cannot ignore any customer group in our decision making relating to design and implementation. BUT it is my humble opinion that, whenever possible, the participants should have the highest priority. If that means having longer surveys (say, in the 30-50-item range), then let’s find solutions that make them less “onerous” for the raters. For example, I have had some success using formats where the rating form is basically a spreadsheet in which the rows are the items and the columns are the ratees. This format has many advantages when raters have to assess more than one person, and we can cover that topic in another blog someday soon.

Let me just add, in closing, that “onerous” is also defined by some users and practitioners as making the participant do too much “work” to analyze their data and decide on what is most important. Two “solutions” to that supposed problem are to have the raters tell the ratee what is most important, and/or to have the computer do the work. In future blogs, I will address why I believe these solutions are seriously flawed. Tune in!

©2010 David W. Bracken


Written by David Bracken

September 14, 2010 at 2:49 pm

2 Responses


  1. Great post, Dave. I do encourage you to follow up with a piece on the pros and cons of a spreadsheet format for 360 assessment when a rater has more than one person to rate. I see the benefit from a speed and convenience perspective, but I also fear that it forces comparative ratings, and if 3-4 raters have to rate the same person but each has a different mix of ratees to evaluate, isn’t that going to create some sort of bias? Let me know if you’d like to collaborate on writing a piece about this.

    Michel

    Michel Buffet

    September 14, 2010 at 3:57 pm

  2. Sometimes it is less about the nuances of building a better mousetrap and more about stepping back and making sure you have chosen the right location or conditions in which to place the device. My experience is that people are more apt to complain or outright reject long instruments when they have to complete many of them, not just one or two.

    Think of it as the following: C = (L × V) / P, where the likelihood of complaining about the 360 survey (C) is a function of the length of time to complete (L) times the volume to be completed (V), divided by the amount of preparation given to the 360 process (P).

    If you have done a good job preparing the raters, justifying the time to complete 360 surveys and anticipating the time needed, managers are more likely to buy into the process and the time, especially when they have some line of sight to value for the entire process. Conversely, if you have done a poor job in the preparation phase, then resistance (as in “Why am I wasting my time on this”) will bubble up quickly.

    Mind you, a well-designed, face-valid, and easy-to-complete instrument may compensate for greater length, while a short instrument that is too generic or vague may quickly trigger reluctance to complete it multiple times.

    Better design, absolutely. Let’s all remember that the 360 instrument is nested in the 360 implementation process and we should be looking at the design of both in tandem.

    Thanks for the stimulating post, Dave.

    Marc Sokol

    September 22, 2010 at 10:01 am

