Who is the customer in a 360 process?
In a recent blog I raised the question of rating scales, a topic covered by Marc Effron in a webcast a couple weeks earlier. Marc and I had a couple of exchanges after that, which hopefully allowed the reader to see our perspectives a little better. (I wish that others would join in occasionally too!)
Another subject Marc broached in his webcast was the problem of 360s being onerous, that is, viewed as too cumbersome for users to manage. He certainly has a valid point, since practitioners are constantly told that the instrument can’t be “too long,” that raters are being overburdened, or even that participants are being asked to do too much (like read a report).
The length of a 360 survey is just one of many decisions made in the design and implementation of a 360 process. Some of these decisions may be relatively minor by themselves in determining the effectiveness (behavior change?) and sustainability of the process, but others are potentially major factors, including the rating scale, behavioral model/items, rater groups, rater selection, manager approval, rater training, report content and format, and data ownership, to name a few. To this point, I suggested in an earlier blog entry that it is these (and other) major decisions that make research (e.g., meta-analyses) difficult, since these factors vary so widely and are often not reported.
In our article, “360 Feedback from Another Angle” (Bracken, Timmreck, Fleenor and Summers, 2001), we proposed that there is no one “right” answer to most of these design/implementation decisions. As with most decisions, there are factors that come into play that go beyond cost/benefit. Our decisions are also usually affected by our values and experiences, i.e., “who we are,” despite attempts to be objective.
If you have had the opportunity to design, implement, and/or experience a 360 process, it might be useful to reflect on these decisions and why they ended up the way they did. In the “Angle” article, we propose that these decisions seem to have implicit or explicit assumptions about who the most important customer is in each case. When we say “customer,” we are referring to the stakeholders in the 360 process, which most commonly include the participant (ratee), raters, and the organization. Depending on who else is involved (another decision), stakeholders might also include a coach and the manager.
Let’s consider the questionnaire length issue, which is a partial solution to Marc’s observation of 360s being viewed as “onerous.” I personally have never seen a one-question 360 instrument, but, if it did exist, it might ask, “How effective is this person as a leader?” I would propose that questionnaires DO exist that have the raters provide ratings at the dimension level (with the behavioral indicators listed but not rated), so that the rater only has to answer 6-10 questions (for example).
When I hear of or see organizations implementing a survey with fewer than 20 questions and/or asking for ratings at the overall dimension level, it seems to me that they are explicitly saying that the most important customers are the raters and the organization, not the participant. This design decision indicates that the prime objective is to minimize the time the rater must invest in filling out a questionnaire (although I have heard of research suggesting that it actually takes just as long to read and consider the dimension-level ratings as it does to rate each behavior). The organization benefits from reducing soft (time) costs, as well as hard costs of processing and reporting. The organization can still get bottom-line scores if that is its objective.
The loser in this decision is usually the participant, along with the manager and coach if involved. (I must note that blanket statements are dangerous, and some participants couldn’t care less about their feedback, so less is just fine with them.) Let’s take the one-item survey, or even the 6-10 item survey. Participants get “feedback” data, but how useful is it? Who knows what the raters were focusing on? Were they weighting some behaviors over others? Does it just reflect a mental average with huge halo? Most importantly, what possible action(s) can be taken without just taking a wild guess? Sometimes organizations ask for write-ins to provide focus, but those vary widely in quantity and quality.
The survey length decision is often confounded by the number of ratings per person being requested. In a one-off, ad hoc survey, length is usually something of a moot point, as raters may be willing to tolerate an excessively long (>100 item) survey if they only do it occasionally. But if a group is going through the experience, then peers and bosses may have to fill out multiple forms in a short time period. BUT most direct reports (often the largest rater group) will only complete one survey, i.e., on their boss.
Obviously we cannot ignore any customer group in our decision making regarding design and implementation. BUT it is my humble opinion that, whenever possible, the participants should have the highest priority. If that means having longer surveys (say, in the 30-50 item range), then let’s find ways to make them less “onerous” for the raters. For example, I have had some success using formats where the rating form is basically a spreadsheet in which the rows are the items and the columns are the ratees. This format has many advantages when raters have to assess more than one person, and we can cover that topic in another blog someday soon.
Let me just add, in closing, that “onerous” is also defined by some users and practitioners as making the participant do too much “work” to analyze their data and decide what is most important. Two “solutions” to that supposed problem are to have the raters tell the ratee what is most important, and/or to have the computer do the work. In future blogs, I will address why I believe these solutions are seriously flawed. Tune in!
©2010 David W. Bracken