Strategic 360s

Making feedback matter

I Need an Exorcism

Being the 360 Feedback nerd I am, I love it when some new folks get active on the LinkedIn 360 discussion group. One discussion emerged recently that caught my eye, and I have been watching it with interest, mulling over the perspectives and knowing I had to get my two cents in at some point.

Here is the question:

How many raters are too many raters?

We normally recommend 20 as a soft limit. With too many, we find the feedback gets diluted and you have too many people that don’t work closely enough with you to provide good feedback. I’d be curious if there are any suggestions for exceptions.

This is an important decision amongst the dozens that need to be made in the course of designing and implementing 360 processes. The question motivated me to pull out The Handbook of Multisource Feedback and find the excellent chapter on this topic by James Farr and Daniel Newman (2001), which reminded me of the complexity of this decision. Let me also reiterate that this is another decision that has different implications for "N=1" 360 processes (i.e., feedback for a single leader on an ad hoc basis) versus "N>1" systems (i.e., feedback for a group of participants); this blog and discussion are focused on the latter.

Usually people argue that too many surveys will cause disruption in the organization and unnecessary "soft costs" (i.e., time). The author of this question poses a different argument for limiting the rater population, which he calls "dilution" due to inviting unknowledgeable raters. For me, one of the givens of any 360 system is that the raters must have sufficient experience with the ratee to give reliable feedback. One operationalization of that concept is to require that an employee must have worked with/for the ratee for some minimum amount of time (e.g., 6 months or even 1 year), even if he/she is a direct report. Having the ratee select the raters (with manager approval) is another practice designed to help get quality raters and, in turn, to facilitate the ratee's acceptance of the feedback. So "dilution" due to unfamiliarity can be combated with that requirement, at least to some extent.

One respondent to this question offers this perspective:

The number of raters depends on the number of people that deal with this individual through important business interactions and can pass valuable feedback based on real experience. There is no one set answer.

I agree with that statement. Still, while there is no one set answer, some answers are better than others (see below).

In contrast, someone else states:

We have found effective to use minimum 3 and maximum 5 for any one rater category.

The minimum of 3 is standard practice these days as a "necessary but not sufficient" answer to the number of raters. As for the maximum of 5, this is also not uncommon, but it seems to ignore the science that supports larger numbers. When clients seek my advice on the question of number of raters, I am swayed by the research published by Greguras and Robie (1998), who studied the reliability of various rater sources (i.e., subordinates, peers and managers). They came to the conclusion that different rater groups provide differing levels of reliable feedback, probably because of the number of "agendas" lurking within the various types of raters. The least reliable are the subordinates, followed by the peers, and then the managers, the most reliable rater group.

One way to address rater unreliability is to increase the size of the group (another might be rater training). Usually there is only one manager, and best practice is to invite all direct reports (who meet the tenure guidelines), so the main question is the number of peers. This research suggests that 7-9 is where we need to aim, noting that this is the number of returns needed, so inviting more is probably a good idea if you expect less than a 100% response rate.
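
For readers who want to see the arithmetic behind that advice, here is a minimal sketch (my own illustration, not Greguras and Robie's analysis) that uses the standard Spearman-Brown formula to estimate how the reliability of an averaged rating grows with the number of raters, and how many invitations are needed to reach a target number of returns at a given response rate. The single-rater reliability values are hypothetical placeholders, not published estimates.

    import math

    def group_reliability(single_rater_r, n_raters):
        # Spearman-Brown prophecy formula: reliability of the average of
        # n_raters parallel ratings, given the reliability of one rating.
        return (n_raters * single_rater_r) / (1 + (n_raters - 1) * single_rater_r)

    def invites_needed(target_returns, expected_response_rate):
        # Invite enough raters that the expected number of returns meets the target.
        return math.ceil(target_returns / expected_response_rate)

    # Hypothetical single-rater reliabilities by source (illustrative placeholders).
    single_rater_r = {"direct reports": 0.30, "peers": 0.35, "manager": 0.50}

    for source, r in single_rater_r.items():
        estimates = {n: round(group_reliability(r, n), 2) for n in (3, 5, 7, 9)}
        print(source, estimates)

    # If 7-9 peer returns are the goal and you expect an 80% response rate:
    print(invites_needed(9, 0.80), "peer invitations for an expected 9 returns")

The takeaway from this kind of arithmetic is that the jump from 3 or 5 raters to 7-9 buys a meaningful amount of estimated reliability, which is why a hard cap of 5 gives away more than it appears to.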

Another potential rater group is external customers. Recently I was invited to participate in a forum convened by the American Board of Internal Medicine (ABIM) to discuss the use of multisource feedback in physician recertification processes. ABIM is one of 24 member Boards of the American Board of Medical Specialties (ABMS), which has directed that some sort of multisource (or 360) feedback be integrated into recertification.

The participants in this forum included many knowledgeable, interesting researchers on the use of 360 in the context of medicine (a whole new world for me, which was very energizing). I was invited to represent the industry ("outside") perspective. One of the presenters spoke to the challenge of collecting input from their customers (i.e., patients), a requirement for them. She offered up 25 as the number of patients needed to create a reliable result, using a rationale very similar to Greguras and Robie's regarding the many individual agendas of raters.

Back to LinkedIn, there was then this opinion:

I agree that having too many raters in any one rater group does dilute the feedback and make it much harder to see subtleties. There is also a risk that too many raters may ‘drown out’ key feedback.

This is when my head started spinning like Linda Blair in The Exorcist.  This perspective is SO contrary to my 25 years of experience in this field that I had to prevent myself from discounting it as my head continued to rotate.  I have often said that a good day for me includes times when I have said, “Gee, I have never thought of (insert topic) in that way.” I really do like hearing new and different views, but it’s difficult when they challenge some foundational belief.

For me, maybe THE most central tenet of 360 Feedback is the reliance on rater anonymity in the expectation (or hope) that it will promote honesty. This goes back to the first book on 360 Feedback by Edwards and Ewen (1996), which put the need for anonymity at the forefront of 360 design. That is why we use the artificial communication form of anonymous questionnaires and usually don't report results for groups of fewer than 3 raters. We know that violations of the anonymity promise result in less honesty and reduced response rates, with the grapevine (and/or social media) spreading the violated trust throughout the organization.
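
To illustrate how that anonymity rule typically shows up in report generation, here is a minimal sketch (a generic illustration of common practice, not any particular vendor's logic) that decides which rater categories can be broken out separately; the category names and threshold are assumptions for the example:

    MIN_GROUP_SIZE = 3  # common anonymity threshold; the lone manager is usually the exception

    def breakout_allowed(response_counts, min_n=MIN_GROUP_SIZE):
        # response_counts: rater category -> number of completed surveys.
        # Categories below the threshold are not reported separately; their
        # responses would typically appear only in an overall aggregate.
        return {
            category: (category == "Manager" or n >= min_n)
            for category, n in response_counts.items()
        }

    # Example: only two peers responded, so peer results are not shown separately.
    print(breakout_allowed({"Manager": 1, "Peers": 2, "Direct Reports": 5}))
    # -> {'Manager': True, 'Peers': False, 'Direct Reports': True}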

The notion that too many raters will “drown out key feedback” seems to me to be a total reversal of this philosophy of protecting anonymity. It also seems to place an incredible amount of emphasis on the report itself where the numbers become the sole source of insight. Other blog entries of mine have proposed that the report is just the conversation starter, and that true insight is achieved in the post-survey discussions with raters and manager.

I recall that in past articles (see Bracken, Timmreck, Fleenor and Summers, 2001) we made the point that every decision requires what should be a conscious value judgment as to who the most important "customer" is for that decision, whether it be the rater, ratee, or the organization. For example, limiting the number of raters to a small number (e.g., 5 per group, or not all direct reports) indicates that the raters and the organization are more important than the ratee, that is, that we believe it is more important to minimize the time required of raters than it is to provide reliable feedback for the ratee. In most cases, my values cause me to lobby on behalf of the ratee as the most important customer in design decisions. The time that I will rally to the defense of the rater as the most important customer in a decision is when anonymity (again, real or perceived) is threatened. And I see these arguments for creating more "insight" by keeping rater groups small or subdivided as misguided IF these practitioners share the common belief that anonymity is critical.

Finally (yes, it's time to wrap this up), Larry Cipolla, an extremely experienced and respected practitioner in this field, offers some sage advice in his comments, including a warning against increasing rater group size by combining rater groups; as he says, that is pure folly. But I do take issue with one of his practices:

We recommend including all 10 raters (or whatever the n-count is) and have the participant create two groups–Direct Reports A and Direct Reports B.

This seems to me to be a variation on the theme of breaking out groups and reducing group size with the risk of creating suspicions and problems with perceived (or real) anonymity. Larry, you need to show that doing this kind of subdividing creates higher reliability in a statistical sense that can overcome the threats to reliability created by using smaller N’s.
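
To make that statistical burden concrete, a back-of-the-envelope Spearman-Brown comparison (using a purely hypothetical single-rater reliability of .30) looks like this:

$$r_{10} = \frac{10(.30)}{1 + 9(.30)} \approx .81 \qquad \text{versus} \qquad r_{5} = \frac{5(.30)}{1 + 4(.30)} \approx .68$$

Under that assumption, each half of a split group of ten direct reports is estimated to be noticeably less reliable than the intact group, which is exactly the trade-off the subdividing practice would have to overcome.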

Someone please stop my head from spinning. Do I just need to get over this fixation with anonymity in 360 processes?

References

Bracken, D.W., Timmreck, C.W., and Church, A.H. (2001). The Handbook of Multisource Feedback. San Francisco: Jossey-Bass.

Bracken, D.W., Timmreck, C.W., Fleenor, J.W., and Summers, L. (2001). 360 feedback from another angle. Human Resource Management, 1, 3-20.

Edwards, M. R., and Ewen, A.J.  (1996). 360° Feedback: The powerful new model for employee assessment and performance improvement. New York: AMACOM.

Farr, J.L., and Newman, D.A. (2001). Rater selection: Sources of feedback. In Bracken, D.W., Timmreck, C.W., and Church, A.H. (eds.), The Handbook of Multisource Feedback. San Francisco: Jossey-Bass.

Greguras, G.J., and Robie, C. (1998).  A new look at within-source interrater reliability of 360-degree feedback ratings. Journal of Applied Psychology, 83, 960-968.

©2012 David W. Bracken

That’s Why We Have Amendments

I used my last blog (https://dwbracken.wordpress.com/2011/08/09/so-now-what/) to start LinkedIn discussions in the 360 Feedback and I/O Practitioners groups, asking the question: What is a "valid" 360 process? The response from the 360 group was tepid, maybe because the group has a more general population that might not be that concerned with "classic" validity issues (which is basically why I wrote the blog in the first place). But the I/O community went nuts (45 entries so far) with comments running the gamut from constructive to dismissive to deconstructive.

Here is a sample of some of the “deconstructive” comments:

…I quickly came to conclusion it was a waste of good money…and only useful for people who could (or wanted to) get a little better.

It is all probably a waste of time and money. Good luck!!

There is nothing “valid” about so-called 360 degree feedback. Technically speaking, it isn’t even feedback. It is a thinly veiled means of exerting pressure on the individual who is the focal point.

My position regarding performance appraisal is the same as it has been for many years: Scrap It. Ditto for 360.

Actually, I generally agree with these statements in that many 360 processes are a waste of time and money. It's not surprising that these sentiments are out there and probably quite prevalent. I wonder, though, if we are all on the same page. In an earlier blog, I suggested that discussions about the use and effectiveness of 360's should be separated by those that are designed for feedback to a single individual (N=1) and those that are designed to be applied to groups (N>1).

But the fact is that HR professionals have to help their management make decisions about people, starting with hiring and then progressing through placement, staffing, promotions, compensation, rewards/recognition, succession planning, potential designation, development opportunities, and maybe even termination.

Nothing is perfect, especially when it comes to matters that involve people. As an example, look to the U.S. Constitution, an enduring document that has withstood the test of time. Yet the Founding Fathers were the first to realize that they needed to make provisions for amendments to allow further refinements. Of course, some of those amendments were imperfect themselves and were later repealed.

But we haven't thrown out the Constitution because it is imperfect. Nor do we find it easy to come to agreement on what the revisions should be. But one of the many good things about humans is a seemingly natural desire to make things better.

Ever since I read Mark Edwards and Ann Ewen's seminal book, 360 Degree Feedback, I have believed that 360 Feedback has the potential to improve personnel decision making when done well. The appendix of The Handbook of Multisource Feedback, titled "Guidelines for Multisource Feedback When Used for Decision Making" and coauthored with Carol Timmreck, was our stab at defining what "done well" can mean.

In our profession, we have an obligation to constantly seek ways of improving personnel decision making. There are two major needs we are trying to meet, and they are sometimes in tension. One is to provide the organization with more accurate information on which to base these decisions, which we define as increased reliability (accurate measurement) and validity (relevance to job performance). Accurate decision making is good for both the organization and the individual.

The second need is to simultaneously use methods that promote fairness. This notion of fairness is particularly salient in the U.S. where we have “protected classes” (i.e., women, minorities, older workers), but hopefully fairness is a universal concept that applies in many cultures.

Beginning with the Edwards & Ewen book and progressing from there, we can find more and more evidence that 360 done well can provide decision makers with better information (i.e., valid and fair) than traditional sources (e.g., supervisory evaluations). I actually heard a lawyer state that organizations could be legally exposed for not using 360 feedback because it is more valid and fair than methods currently in use.

I have quoted Smither, London and Reilly (2005) before, but here it is again:

We therefore think it is time for researchers and practitioners to ask "Under what conditions and for whom is multisource feedback likely to be beneficial?" (rather than asking "Does multisource feedback work?").

©2011 David W. Bracken

So Now What?

This is the one-year anniversary of this blog and the 44th post. We have had 2,026 views, though the biggest day was the first, with 38 views. I have had fewer comments than I had hoped (only 30), though some LinkedIn discussions have resulted. Here is my question: Where to go from here? Are there topics that are of interest to readers?

Meanwhile, here is my pet peeve of the week/month/year: I was recently having an exchange with colleagues regarding a 360 topic on my personal Gmail account when ads for various 360 vendors popped up in the margin (which is interesting in itself), the first of which was from Qualtrics (www.qualtrics.com) with the heading, "Create 360s in Minutes."

The topic of technology run amok has been covered here before (When Computers Go Too Far, http://wp.me/p10Xjf-3G), but my peevery was piqued (piqued peevery?) when I explored their website and saw this claim: "USE VALIDATED QUESTIONS, FORMS and REPORTS."

What the heck does that mean?  What are “validated” forms and reports, for starters?

The bigger question is, what is "validity" in a 360 process? Colleagues and I (Bracken, Timmreck, Fleenor and Summers, 2001; contact me if you want a copy) have offered a definition of validity for 360's that holds that it consists of creating sustainable change in behaviors valued by the organization. Reliable items, user-friendly forms and sensible reports certainly help to achieve that goal, but they cannot be said to be "valid" as standalone steps in the process.

The Qualtrics people don't share much about who they are. Evidently their founder is named Scott and teaches MBA students. They appear to have a successful enterprise, so kudos! I would like to know how technology vendors can claim to have "valid" tools and what definition of validity they are using.

Hey, maybe I will get my 31st comment?

©2011 David W. Bracken

I Don’t Care

Last week I led a workshop for the Personnel Testing Council of Metropolitan Washington that was a modified reprise of the workshop Carol Jenkins and I did at the Society for Industrial and Organizational Psychology in April. I really enjoy these workshops and the opportunity to interact face-to-face with practitioners in the field of 360 degree feedback.

I do wish that participants in these workshops would engage me in a little more debate, and, to that end, I sometimes throw out comments in the hope of raising some hackles. For example, at the PTCMW session, I twice said “I don’t care” regarding two topics that I will explain below. Unfortunately, no one took the bait in the workshop, but maybe I can lure some of you into the discussion using this blog as a vehicle.

So here are the two areas where a ton of research is being done but where, as a practitioner, I don’t care:

1) The personality of the participant. I don't care. Everyone seems to want to know how the personality of the participant is going to affect his/her reaction to the feedback. In past blogs, I have fessed up to being a behaviorist, and in that respect all I really "care" about is getting the person to accept the feedback and to change, whether they want to or not. In my last blog, I used the examples of people's apparent reluctance to do simple things like apologize for a mistake and/or say "thank you." Behaviorally, those are pretty easy things to do, but evidently some internal force (e.g., personality) makes it difficult. In fact, those internal forces vary greatly across people, and I find chasing them down not to be a very fruitful use of time for the participant or for myself. If the organization and the feedback tell you that you need to modify your behavior, just do it!

Sometimes what is going on inside the person’s head is more an issue of awareness than of personality, and awareness is something we can change through 360’s. Occasionally the journey from awareness to acceptance is difficult due to personality factors. It is our job to design the 360 process to make it difficult to not accept the feedback, including ensuring that raters are knowledgeable, reliable, motivated and in sufficient quantity.

On a practical level, when many 360 processes involve dozens or hundreds of participants, it becomes very challenging to integrate personality assessment, for example, into the mix. Not to say it can’t be done. Carol Jenkins does some of that in her practice with groups of feedback recipients. But part of my “I don’t care” mentality has come from a need to get large numbers of people to use the feedback productively without being able to “get inside their head.”

2) The gap between self-ratings and "other" ratings. I don't care. As a psychologist, I do find it interesting to see how ratees approach self-ratings, especially the first time around. And they usually change their self-ratings once they see how they are perceived by others. But I am increasingly convinced that self-ratings are more a reflection of the ratee's agenda than any real self-assessment. (All raters are susceptible to using their ratings this way.) One memorable instance for me was in working with a Chief Legal Officer who gave himself all 5's and stated, "Do you think I would be crazy enough to actually document less than optimal performance?"

I DO think that participants should complete the rating process, but for other reasons. One is to ensure that they are familiar with the content and with how they are expected to behave as defined by the organization. Second, it provides some evidence of at least minimal commitment to the process.

In general, I am not very interested in why a ratee behaves in a certain way if the behavior needs to change. It is highly unlikely that we can change the "why" part of behavior (i.e., personality) other than to affect the ratee's awareness of how he/she is perceived and the importance of accepting that feedback on the way to behaving differently. What is going on in the person's head is fun for psychologists to research, but doesn't necessarily help achieve sustainable behavior change.

©2011 David W. Bracken

What You See Is What You Get

Every month or so I get an invitation/newsletter from Marshall Goldsmith and Patricia Wheeler. This month's had a couple of gems in it, and I have provided the link at the end of this article. Marshall's entry on life lessons is very much worth reading. But Patricia's offering particularly struck me since I have been thinking a lot about leader behavior. As you will see, it also relates directly to the hazards of misdiagnosis, another human flaw that is especially salient for those of us in consulting and coaching, where we are prone to jumping to conclusions too quickly.

Several years ago my mother experienced stomach pains.  Her physician, one of the best specialists in the city, ordered the usual tests and treated her with medication.  The pains continued; she returned to his office and surgery was recommended, which she had.  After discharge the pains recurred, stronger than ever; she was rushed to the emergency room, where it was determined that her physician had initially misdiagnosed her. She had further surgery; unfortunately she was unable to withstand the stress of two surgeries, fell into a coma and died several days later.  Several days after her second surgery, her physician approached me, almost tearfully, with an apology.

“I apologize,” he said, “this is my responsibility.”  He should have done one additional test, he said, requiring sedation and an invasive procedure, but he did not want to impose the pain of that procedure on her, feeling at the time that his diagnosis was correct.  “I am truly sorry and I will never make that mistake again.”  What struck me at the time and continues to stay with me is that this doctor was willing to take the risk of telling the whole difficult truth, and that taking responsibility for the situation was more important to him than the very real possibility of a malpractice suit.  I forgave him, and I believe my mother would have as well.

Real apologies have positive impact that, in most if not all cases, outweigh the risk factors.  Ask yourself, when does an apology feel heartfelt to you? When does it seem empty?  Think of a time when you heard a public or corporate figure apologize and it rang true and think of a time when it didn’t.  What made the difference? Here are a few guidelines:

Is it from the heart or the risk management office?  If your apology reads like corporate legalese, it won’t be effective.

Is it unequivocal?  Too many apologies begin with “I’m sorry, but you were at fault in this too.”  An attempt to provoke the other party into apologizing or accepting fault will fail.

Is it timely?  If you delay your apology, perhaps wishing that the issue would just go away (trust me, it won’t), its effect will diminish proportionately.

Does it acknowledge the injury and address the future?  In other words, now that you know your words or actions have caused injury, what will you do going forward?

While we can’t avoid all errors, missteps and blind spots, we can at least avoid compounding them with empty words, blaming and justification.

Patricia is focusing on a particular behavior, i.e., apologizing. This behavior, like all other behaviors, is modifiable if we are aware of the need to change and motivated to do so.  It may not be easy and you may not be comfortable doing it, but that is no excuse. And, by the way, people really don’t care what is going on inside your head to justify not changing (e.g., “they know that I’m sorry without me saying it”). Making an apology is often difficult, as Patricia points out, and maybe that’s why it can be so striking and memorable when someone does it well.

In his book, “What Got You Here Won’t Get You There,” Marshall makes a similar point about the simple behavior of saying “thank you,” which is a common shortcoming in even the most successful leaders.  Leaders find all sorts of excuses for avoiding even that seemingly easy behavior, including “that’s just not me.” The point is that what you do and what people see (i.e., behaviors) IS who you are.

The good news for us practitioners of 360 Feedback is that observing behaviors is what it is (or should be) all about. In a 360 process, the organization defines the behaviors it expects from its leaders, gives them feedback on how successful they are in doing so, and then (ideally) holds them accountable for changing.

This also means that we go to great lengths to ensure that the content of 360 instruments uses items that describe behaviors, hopefully in clear terms. We need to ensure that we are asking raters to be observers and reporters of behavior, not mind readers or psychologists. We need to be especially wary of items that include adjectives that ask the rater to peer inside the ratee's head, including asking what the ratee "knows" or "is aware of" or "believes" or even what the leader is "willing" to do.

As a behaviorist, in the end I only care what a leader does and not why (or if) he/she wants to do it. That’s the main reason why I have found personality assessments to be of little interest, with the exception of possibly providing insights into how the coaching relationship might be affected by things like openness to feedback or their preferred style for guidance and learning.

Another piece of good news for us behaviorists came out in a recent article in Personnel Psychology titled, “Trait and Behavioral Theories of Leadership: An Integration and Meta-Analytic Test of Their Relative Validity” (Derue, Nahrgang, Wellman and Humphrey, 2011).  To quote from the abstract, they report:

Leader behaviors tend to explain more variance in leadership effectiveness than leader traits, but results indicate that an integrative model where leader behaviors mediate the relationship between leader traits and effectiveness is warranted.

The last part about mediation suggests that, even when traits do a decent job (statistically) of predicting leader effectiveness, they are "filtered" through leader behaviors. For example, all the intelligence in the world doesn't do much good if you are still a jerk (or bully, or psychopath, etc.).
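
For readers who want the mediation idea spelled out, the conventional textbook formulation (a generic sketch, not the authors' exact model) treats the leader trait X, the behavior M, and effectiveness Y as:

$$M = a_0 + aX + e_1, \qquad Y = b_0 + bM + c'X + e_2$$

so the total trait effect decomposes into a direct path c' plus an indirect path ab that runs through behavior; "filtering" means that much of the trait's influence travels through that ab path.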

All of this reinforces the importance of reliably measuring leader behaviors, especially if we believe that the “how” of performance is at least as important as the “what.”

Link:  http://email.e-mailnetworks.com/hostedemail/email.htm?h=bdd4c78f38fd64341d6760533238799c&CID=4826566929&ch=487D1DD320A1A801E8ACD8949CEAC445

©2011 David W. Bracken

What I Learned at SIOP

The annual conference of the Society for Industrial and Organizational Psychology (SIOP) was held in Chicago on April 14-16 with record attendance. I had something of a "360 Feedback-intensive" experience by running two half-day continuing education workshops (with Carol Jenkins) on 360 feedback, participating on a panel discussion of the evolution of 360 in the last 10 years (with other contributors to The Handbook of Multisource Feedback), and being the discussant for a symposium regarding Implicit Leadership Theories that largely focused on cultural factors in 360 processes. Each forum gave me an opportunity to gauge some current perspectives on this field, and here are a few that I will share.

The "debate" continues but seems to be softening. The "debate" is, of course, over how 360 feedback should be used: for development only and/or for decision making. In our CE workshop, we actually had participants stand in corners of the room to indicate their stance on this issue, and, judging from that exercise, there are still many strong proponents of each side. That said, one of the conclusions the panel seemed to agree upon is that there is some blurring of the distinction between uses and some acknowledgement that 360's are successfully being used for decision making, and that 360's are far less likely to create sustainable behavior change without the accountability that comes with integration with HR systems.

We need to be sensitive to the demands we place on our leaders/participants. During our panel discussion, Janine Waclawski (who is currently an HR generalist at Pepsi) reminded us of how we typically inundate 360 participants with many data points, beginning with the number of items multiplied by the number of rater groups. (I don't believe the solution to this problem is reducing the number of items, especially below some arbitrary number like 20 items.) Later, I had the opportunity to offer commentary on four terrific research papers whose major theme was that supervisors need to be aware of the perspectives of their raters, which may well be shaped by the raters' cultural backgrounds.

As someone who is more on the practitioner end of the practitioner-scientist continuum, I tried to once again put myself in the seat of the feedback recipient (where I have been many times) and consider how this research might be put into practice. On one hand, organizations are using leadership competency models and values statements to create a unified message (and culture?) that spans all segments of the company. We can (and should) have debates about how useful and realistic this practice is, but I think most of us agree that the company has a right to define the behaviors that are expected of successful leaders. 360 processes can be a powerful way to define those expectations in behavioral terms, to help leaders become aware of their perceived performance of those behaviors, to help them get better, and to hold leaders accountable for change.

On the other hand, the symposium papers seem to suggest that leader behaviors should be molded from "the bottom up," i.e., by responding to the expectations of followers (raters) that may be attributed to their cultural backgrounds and their views of what an effective leader should be (which may differ from the leader's view and/or the organization's view of effective leadership). By the way, this bottom-up approach also applies to the use of importance ratings (which is not a cultural question).

My plea to the panel (perhaps to their dismay) was to at least consider the conundrum of the feedback recipient, who is given the potentially incredibly complex task of not only digesting the basic data that Janine was referring to, but also folding in the huge amount of information created by having to consider the needs of all the feedback providers. Their research is very interesting and useful in raising our awareness of cultural differences that can affect the effectiveness of our 360 processes. But PLEASE acknowledge the implications for putting all of this to use.

The “test” mentality is being challenged.  I used the panel discussion to offer up one of my current pet peeves, namely to challenge the treatment of 360 Feedback as a “test.”  Both in the workshops and again at the panel, I suggested that applying practices such as randomizing items and using reverse wording to “trick” the raters is not constructive and most likely is contrary to our need to help the raters provide reliable data. I was gratified to receive a smattering of applause when I made that point during the panel.  I am looking forward to hopefully discussing (debating) this stance with the Personnel Testing Council of Metropolitan Washington in a workshop I am doing in June, where I suspect some of the traditional testing people will speak their mind on this topic.

This year's SIOP was well done, once again. I was especially glad to see an ongoing interest in the evolution of the field of 360 feedback, judging from the attendance at these sessions, let alone the fact that the workshop committee identified 360 as a topic worthy of inclusion after more than 10 years since the last one. 360 Feedback is such a complex process, and we are still struggling with the most basic questions, including purpose and use.

©2011 David W. Bracken

The Death Card

A number of (pre-recession) years ago, I belonged to a firm that was operating in the black and held some very nice off-site meetings for its consultants. At one such event, we had an evening reception with some fun activities, one of which was a Tarot reader. I don't even read horoscopes, but there was no one waiting and I decided to give it a try (the first and last time). I obviously didn't know much about Tarot, but it seemed like the last card to be turned over was the most important. And, lo and behold, it was the Death card! I remember a pause from the Reader (perhaps an intake of breath?), and then a rapid clearing of the cards with some comment to the effect of, "That's not important." Session over.

Well, I guess the good news is that I am still here (most people would agree with that, I think). My purpose for bringing this up is not to discuss superstitions and the occult, but to reflect on how people react to and use 360 feedback.

In fact, I have been known to call some 360 processes "parlor games," which relates directly to my Tarot experience. That was a true "parlor game." What is a parlor game? My definition, for this context, is an activity that is fun and has no consequences, where a person can be the focus of attention with low risk of embarrassment and effort. Since I strongly believe in self-determination, I do my best not to let arbitrary events that I cannot control affect my life. That would include a turn of a card, for starters.

So how do we ensure that 360 Feedback isn’t a parlor game and does matter? I propose that two important factors are Acceptance and Accountability.

Some of the design factors that promote Acceptance would include:

  • Use a custom instrument (to create relevance)
  • Have the ratee select raters, with manager approval (to enhance credibility of feedback)
  • Enhance rater honesty and reliability (to help credibility of data)
  • Invite enough raters to enhance reliability and minimize effects of outliers
  • Be totally transparent about purpose, goals, and use (not mystical, magical, inconsistent or arbitrary)

Factors that can help create Accountability (and increase the probability of behavior change) include:

  • Require leaders to discuss results and development plans with raters (like going public with a New Year’s Resolution)
  • Include results as a component of performance management, typically in the development planning section, to create consequences for follow through, or lack thereof
  • Ensure that the leader’s manager is also held accountable for properly using results in managing and coaching
  • Conduct follow-up measures such as mini-360’s and/or annual readministrations.

Some 360 processes appear to define success as just creating awareness in the participants, hoping that the leader will be self-motivated to change. That does happen; some leaders do change, at least for a while, and maybe even in the right way. (Some people probably change based on Tarot readings too!) For those leaders who need to change the most, it usually doesn't happen without Acceptance and Accountability.

Simply giving a feedback report to a leader and stopping there seems like a parlor game to me. A very expensive one.

©2011 David W. Bracken