Strategic 360s

Making feedback matter


I Have a Dream



In the next couple of weeks, I will be leading a workshop on “Creating a Coaching Climate” for the Greater Atlanta Chapter of ASTD, and then a conversation hour at SIOP on “Strategic 360 Feedback,” which I wrote about last week (https://dwbracken.wordpress.com/2014/04/18/holes-in-the-wall-a-siop-preview/).

Clearly I am still trying to influence people about some things I feel strongly about. So I was thankful that my wife brought to my attention a TED talk by Simon Sinek that has over 16 million views (http://www.ted.com/talks/simon_sinek_how_great_leaders_inspire_action), which she thought I would find interesting because it was positioned as being about leadership. And it is. But, just as importantly, it is about influencing others (which is part of leadership). It is also about sales, and he uses the word “buy” often, which can be taken both literally (sales) and figuratively (“buy into”).

In this TED talk, Mr. Sinek proposes that the best way to influence others is not to talk about “what you do” or “how you do it,” but to express “why” you do it, i.e., the passion behind the subject. He reminded us that Martin Luther King didn’t say, “I have a plan” (though he undoubtedly had one). Instead, he said, “I have a dream,” and went on to describe what that dream looked like. There are many other examples, such as John F. Kennedy’s dream of putting a man on the moon, which was not only realized but also produced countless scientific innovations that have become part of our daily lives.

So part of my dream is captured in the tagline from The Handbook of Multisource Feedback that I also referenced in my last blog: “Large scale change occurs when a lot of people change just a little.” One of the great things about being an I/O psychologist is that we have the opportunity and the challenge to touch “a lot of people” with our work. One way we do that is by helping organizations make better decisions about people, such as decisions about who to hire, fire, promote and develop, and by constantly striving to improve the accuracy of those decisions for the benefit of the organization and the individual. And you may (or may not) know that I am a proponent of using 360 assessments to help improve the quality (i.e., reliability and validity) of decisions we have to make about many employees (e.g., development plans, training, promotions, staffing, compensation, succession plans, high potential identification).

We can also touch “a lot of people” with processes that affect employees once they are on board. The versions of 360 processes that The Handbook primarily focuses on are those that do touch “a lot of people” to create change one person at a time (but all at once). What is missing in that phrase is the critical notion of creating sustainable change. My criticism of many 360 processes is that they do not burden themselves with worrying about what it takes to create sustainable behavior change, seemingly assuming that the simple act of creating awareness of a need to change (a gap between observed and desired behaviors) will somehow make people magically change. Some people do change, but not often enough, and they are rarely the ones who need it most.

Sustained behavior change can also be thought of as a habit. Part of my dream is to have behavior change (which is a choice) become a reflex, a natural reaction.

My son-in-law, who has two daughters (with my daughter, of course), put a post on Facebook last week that asked, “Am I the only one who puts the toilet seat down in my hotel room?” I, and a few others, responded, “No, I do it too,” and I (also having two daughters) have been known to use this very behavior as an example of a voluntarily adopted behavior that becomes a habit, even when the behavior has no obvious benefit to the actor. The “benefit” to the actor is that he/she (“he” in this case) is part of an organization (the household, the family) and, by being considerate of others, helps in turn to maintain the cohesiveness of that organization.

Last year, right after Nelson Mandela’s death, I listened to an interview with a BBC journalist who had made a career out of following the life of Mandela. He shared that he was so moved by this man that he gave his son the middle name “Nelson,” and the interviewer asked how he hoped to affect his son’s life by doing so (which is an interesting question). The journalist, though, had an immediate answer: he hoped that his son would show kindness to others as a reflex (i.e., an ingrained habit, my words).

The notion of “kindness” is one I am hearing more often in organizations, sometimes in the context of the desire to be empathetic without sacrificing the need to make tough decisions about people. Then I saw this article (http://goo.gl/iz5Qdj) about “compassion” that seems to capture the idea of kindness and shared values. Compassion is defined there as “when colleagues who are together day in and day out, ask and care about each other’s work and even non-work issues,” and the research cited indicates that “to the extent that there’s a greater culture of companionate love, that culture is associated with greater satisfaction, commitment and accountability.”

This piece on compassion then goes on to say, “Management can do something about this. They should be thinking about the emotional culture. It starts with how they are treating their own employees when they see them. Are they showing these kinds of emotions? And it informs what kind of policies they put into place. This is something that can definitely be very purposeful — not just something that rises organically.”

You can create a culture through the behaviors that leaders exhibit, whether it’s a culture of compassion, kindness, quality, customer service, fear, anger, fun, feedback, or something else. The point is that these cultures can be defined by behaviors. And a behavior is a choice, i.e., whether to do it or not. And the behavior can become a habit or reflex. We shouldn’t buy the excuse, “Well, that isn’t who I am.” I/we don’t care. The type of person/leader you are is determined by what you do, not by what you think or think you think.

And when employees (at all levels) report that they want to be respected, valued, and developed, and to have trust in their leaders (see this report from APA: http://www.apaexcellence.org/resources/goodcompany/newsletter/article/530), organizations should listen and act, i.e., define the desired behaviors and hold leaders accountable. Someday those behaviors will become habits/reflexes.

 

So, what is my dream? In this context, it includes things like this:

  • That more organizations will acknowledge the intuitive and research-based advantages of treating their employees with respect and kindness, engendering trust along the way, and then do something to create sustainable change.
  • That we focus on the potential benefits of processes like 360’s to improve our decisions, rather than on the challenges of doing so.
  • Speaking of decisions, that we use tools like 360’s to identify leaders early in their careers who are poised to do damage through inappropriate behaviors, and get rid of them (or at least not promote them).
  • That we admit that human nature is such that behavior change requires not only awareness but also accountability if sustainable change is to occur.
  • That we acknowledge that sustainable culture change requires integration into HR processes to create ongoing alignment, accountability, and measurement.
  • That kindness, compassion and respect become habits for all of us.

 

That’s enough dreaming for now.

©2014 David W. Bracken

Written by David Bracken

April 23, 2014 at 6:16 pm

Holes in the Wall: A SIOP Preview



I will be leading a Conversation Hour at the upcoming SIOP Annual Conference, surprisingly titled, “Strategic 360 Feedback.” I would love to hear from any of you as to what you would like to talk about in your use of 360’s for more than “just” leadership development, whether you are going to be there or just wish you were.

One topic I do want to address is the use of 360’s in creating large scale change in organizations (climate change??), harkening back to the tagline at the beginning of The Handbook of Multisource Feedback: “Large scale change occurs when a lot of people change just a little.”

I am thinking about using a metaphor that builds off the observation (criticism?) that “when you have a hammer, everything looks like a nail,” here applied to 360’s. Of course, I look at things a little differently, as in missed opportunities. To extend the metaphor, I see many (most) organizations frustrated with their inability to sustain processes such as performance management systems or other culture change initiatives. So let’s say the “initiative” is like a picture we are trying to hang on the wall, which means we have to get a hook nailed into the wall. I believe these organizations are trying to push in nails with just their thumbs, and, of course, the picture might hang on the wall for minutes or a few hours, but then it crashes with a loud thump and lots of broken glass. And it leaves a hole in the wall, maybe adding to all the holes already there from other unsuccessful attempts to hang that picture or other pictures.

To wrap up the metaphor, let’s survey the scene (so to speak). A broken picture with lots of accompanying noise that everyone can see and refer to, including the cost of repair if they are going to try to hang it again. And of course the holes in the wall everyone will point at as evidence of all the failed attempts to hang pictures in the past. So where is the hammer (i.e., 360 feedback processes)?

Well, let’s see. We had a hammer but lost it. And someone hit their thumb with the last one. The last time we used it, it was too small (or big, take your pick). A new hammer is expensive. The person who had the hammer left the company and took it with them (and we really didn’t like that hammer anyways). The last time we used it, we used the wrong end (must have been a manager). Maybe a shoe would work next time?

Like any tool, a hammer (aka a 360) can be misused and even dangerous. Allan Church and I produced an article that tries to demonstrate how the 360 “hammer,” in the right hands, can be used to improve performance management: http://www.orgvitality.com/articles/HRPSBrackenChurch OV.pdf

And maybe hang on the wall for a long time.

Please let me know if you have any observations about how your “hammer” hasn’t worked and/or how this metaphor works or doesn’t work for you.

See you in Hawaii??

P.S.  The 3rd meeting of the Strategic 360 Forum will convene in Chicago on September 16.  Let me know if you have an interest.

©2014 David W. Bracken

What Is a “Decision”?



My good friend and collaborator, Dale Rose, dropped me a note regarding his plans to do another benchmarking study on 360 Feedback processes. His company, The 3D Group, has done a couple of these studies before, and Dale has been generous in sharing his results with me, which I have cited in some of my workshops and webinars. The studies are conducted by interviewing coordinators of active 360 systems. Because the data are collected verbally, some of the results have appeared somewhat internally inconsistent and difficult to reconcile, though the general trends are useful and informative.

Many of the topics are useful for practitioners to gauge their program design, such as the type of instrument, number of items, rating scales, rater selection, and so on. For me, the most interesting data relates to the various uses of 360 results.

Respondents in the 2004 and 2009 studies report many uses. In both studies, “development” is the most frequent response, and that’s how it should be. In fact, I’m amazed that the responses weren’t 100%, since a 360 process should be about development. The fact that in 2004 only 72% of answers included development as a purpose is troubling, whether we take the answers at face value or assume the respondents didn’t understand the question. The issue at hand here is not whether 360’s should be used for development; it is what else they should, can, and are used for in addition to “development.”

In 2004, the next most frequent use was “career development;” that makes sense. In 2009, the next most frequent was “performance management,” and career development dropped way down. Other substantial uses include high potential identification, direct link to performance measurement, succession planning, and direct link to pay.

But when asked whether the feedback is used “for decision making or just for development”, about 2/3 of the respondents indicated “development only” and only 1/3 for “decision making.” I believe these numbers understate the actual use of 360 for “decision making” (perhaps by a wide margin), though (as I will propose), it can depend on how we define what a “decision” is.

To “decide” is “to select as a course of action,” according to Merriam-Webster (in this context). I would build on that definition by noting that one course of action is to do nothing, i.e., don’t change the status quo or don’t let someone do something. It is impossible to know what goes on in a person’s mind when he/she speaks of development, but it seems reasonable to suppose that it involves doing something beyond just leaving the person alone, i.e., maintaining the status quo. But doing nothing is a decision. So almost any developmental use involves making a decision as to what needs to be done and what personal (time) and organizational (money) resources are to be devoted to that person. Conversely, denying an employee access to developmental resources that another employee does get access to is a decision, with results that are clearly impactful but difficult to measure.

To further complicate the issue, it is one thing to say your process is for “development only,” and another to know how it is actually used. Every time my clients have looked behind the curtain at the actual use of 360 data, they have unfailingly found that managers are using it for purposes that are not supported. For example, at one of my clients, anecdotal evidence repeatedly surfaced that the “development only” participants were often asked to bring their reports with them to internal interviews for new jobs within the organization. The bad news was that this was outside of policy; the good news was that leaders saw the data as useful in making decisions, though (back to bad news) they may not have been trained to interpret the reports correctly.

Which brings us to why this is an important issue. There are legitimate “development only” 360 processes where the participant has no accountability for using the results and, in fact, is often actively discouraged from sharing the results with anyone else. Since there are no consequences, there are few, if any, consequential actions or decisions required. But most 360 processes (despite the benchmark results suggesting otherwise) do result in some decisions being made, which might include doing nothing by denying an employee access to certain types of development.

The Appendix of The Handbook of Multisource Feedback is titled “Guidelines for Multisource Feedback When Used for Decision Making.” My sense is that many designers and implementers of 360 (multisource) processes feel that these Guidelines don’t apply because their system isn’t used for decision making. Most of them are wrong about that. Their systems are being used for decision making, and, even if they weren’t, why would we design an invalid process? And any system that involves the manager of the participant (which it should) creates the expectation that direct or indirect decision making will result.

So Dale’s question to me (remember Dale?) was how I would suggest wording a question in his new benchmarking study that would satisfy my curiosity regarding the use of 360 results. I proposed this wording:

“If we define a personnel decision as something that affects an employee’s access to development, training, jobs, promotions or rewards, is your 360 process used for personnel decisions?” 

Dale hasn’t committed to using this question in his study. What do you think?

©2012 David W. Bracken

I Need an Exorcism



Being the 360 Feedback nerd I am, I love it when some new folks get active on the LinkedIn 360 discussion group. One discussion emerged recently that caught my eye, and I have been watching it with interest, mulling over the perspectives and knowing I had to get my two cents in at some point.

Here is the question:

How many raters are too many raters?

We normally recommend 20 as a soft limit. With too many, we find the feedback gets diluted and you have too many people that don’t work closely enough with you to provide good feedback. I’d be curious if there are any suggestions for exceptions.

This is an important decision amongst the dozens that need to be made in the course of designing and implementing 360 processes. The question motivated me to pull out The Handbook of Multisource Feedback and find the excellent chapter on this topic by James Farr and Daniel Newman (2001), which reminded me of the complexity of this decision. Let me also reiterate that this is another decision that has different implications for “N=1” 360 processes (i.e., feedback for a single leader on an ad hoc basis) versus “N>1” systems (i.e., feedback for a group of participants); this blog and discussion are focused on the latter.

Usually people argue that too many surveys will cause disruption in the organization and unnecessary “soft costs” (i.e., time). The author of this question poses a different argument for limiting the rater population, which he calls “dilution” due to inviting unknowledgeable raters. For me, one of the givens of any 360 system is that the raters must have sufficient experience with the ratee to give reliable feedback. One operationalization of that concept is to require that an employee must have worked with/for the ratee for some minimum amount of time (e.g., 6 months or even 1 year), even if he/she is a direct report. Having the ratee select the raters (with manager approval) is another practice designed to help select quality raters, which in turn facilitates the ratee’s acceptance of the feedback. So “dilution” due to unfamiliarity can be combated with that requirement, at least to some extent.

One respondent to this question offers this perspective:

The number of raters depends on the number of people that deal with this individual through important business interactions and can pass valuable feedback based on real experience. There is no one set answer.

I agree with that statement, though while there is no one set answer, some answers are better than others (see below).

In contrast, someone else states:

We have found effective to use minimum 3 and maximum 5 for any one rater category.

The minimum of 3 is standard practice these days as a “necessary but not sufficient” answer to the number of raters. As for the maximum of 5, this is also not uncommon, but it seems to ignore the science that supports larger numbers. When clients seek my advice on the question of the number of raters, I am swayed by the research published by Greguras and Robie (1998), who examined the reliability of various rater sources (i.e., subordinates, peers and managers). They came to the conclusion that different rater groups provide differing levels of reliable feedback, probably because of the number of “agendas” lurking within the various types of raters. The least reliable are the subordinates, followed by the peers, and then the managers, the most reliable rater group.

One way to address rater unreliability is to increase the size of the group (another might be rater training, for example). Usually there is only one manager, and best practice is to invite all direct reports (who meet the tenure guidelines), so the main question is the number of peers. This research suggests that 7-9 is where we need to aim, noting that this is the number of returns needed, so inviting more is probably a good idea if you expect less than a 100% response rate.
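To make the arithmetic behind that advice concrete, here is a minimal sketch (my own illustration, not code or figures from Greguras and Robie) that uses the standard Spearman-Brown prophecy formula to show how the reliability of an averaged rater-group score grows as raters are added, and how a target number of returns translates into invitations when the expected response rate is below 100%. The single-rater reliability value is a hypothetical placeholder, not a published estimate.

```python
# Illustrative sketch only: hypothetical reliability values, not published figures.
import math

def group_reliability(single_rater_reliability: float, n_raters: int) -> float:
    """Estimated reliability of the mean of n parallel raters (Spearman-Brown)."""
    r = single_rater_reliability
    return (n_raters * r) / (1 + (n_raters - 1) * r)

def invites_needed(target_returns: int, expected_response_rate: float) -> int:
    """How many raters to invite to expect at least target_returns completed surveys."""
    return math.ceil(target_returns / expected_response_rate)

if __name__ == "__main__":
    assumed_peer_reliability = 0.30  # hypothetical single-peer reliability
    for n in (3, 5, 7, 9):
        print(f"{n} peers -> estimated group reliability "
              f"{group_reliability(assumed_peer_reliability, n):.2f}")
    # If 9 peer returns are the target and an 80% response rate is expected:
    print("Invite", invites_needed(9, 0.80), "peers to expect 9 returns")
```

The point of the sketch is simply that each additional rater buys a smaller reliability gain than the last, which is consistent with aiming for 7-9 returns rather than ever-larger groups.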

Another potential rater group is external customers. Recently I was invited to participate in a forum convened by the American Board of Internal Medicine (ABIM) to discuss the use of multisource feedback in physician recertification processes. ABIM is one of 24 member Boards of the American Board of Medical Specialties (ABMS), which has directed that some sort of multisource (or 360) feedback be integrated into recertification.

The participants in this forum included many knowledgeable, interesting researchers on the use of 360 in the context of medicine (a whole new world for me, which was very energizing). I was invited to represent the industry (“outside”) perspective. One of the presenters spoke to the challenge of collecting input from their customers (i.e., patients), a requirement for them. She offered up 25 as the number of patients needed to create a reliable result, using a rationale very similar to that of Greguras and Robie regarding the many individual agendas of raters.

Back to LinkedIn, there was then this opinion:

I agree that having too many raters in any one rater group does dilute the feedback and make it much harder to see subtleties. There is also a risk that too many raters may ‘drown out’ key feedback.

This is when my head started spinning like Linda Blair in The Exorcist.  This perspective is SO contrary to my 25 years of experience in this field that I had to prevent myself from discounting it as my head continued to rotate.  I have often said that a good day for me includes times when I have said, “Gee, I have never thought of (insert topic) in that way.” I really do like hearing new and different views, but it’s difficult when they challenge some foundational belief.

For me, maybe THE most central tenet of 360 Feedback is the reliance on rater anonymity in the expectation (or hope) that it will promote honesty. This goes back to the first book on 360 Feedback by Edwards and Ewen (1996), where 360’s were designed with this need for anonymity at the forefront. That is why we use the artificial form of communication that is the anonymous questionnaire, and why we usually don’t report on rater groups of fewer than 3. We know that violations of the anonymity promise result in less honesty and reduced response rates, with the grapevine (and/or social media) spreading word of the violated trust throughout the organization.
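As a concrete illustration of that reporting convention (a minimal sketch of my own, not code from Edwards and Ewen or any particular tool), a report generator can simply withhold any rater category whose return count falls below the anonymity threshold:

```python
# Minimal sketch of the "don't report groups of fewer than 3" convention discussed
# above. Group names, counts, and the threshold are hypothetical; real tools
# typically handle the (non-anonymous) manager category separately.
MIN_RETURNS_PER_GROUP = 3  # typical anonymity threshold

def reportable_groups(returns_by_group: dict) -> dict:
    """Flag each rater category: True if it has enough returns to be displayed."""
    return {group: count >= MIN_RETURNS_PER_GROUP
            for group, count in returns_by_group.items()}

if __name__ == "__main__":
    counts = {"Direct Reports": 5, "Peers": 2, "Other": 4}  # hypothetical returns
    print(reportable_groups(counts))
    # -> {'Direct Reports': True, 'Peers': False, 'Other': True}
    # The Peers category would be suppressed rather than shown with only 2 returns.
```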

The notion that too many raters will “drown out key feedback” seems to me to be a total reversal of this philosophy of protecting anonymity. It also seems to place an incredible amount of emphasis on the report itself where the numbers become the sole source of insight. Other blog entries of mine have proposed that the report is just the conversation starter, and that true insight is achieved in the post-survey discussions with raters and manager.

I recall that in past articles (see Bracken, Timmreck, Fleenor and Summers, 2001) we made the point that every design decision requires what should be a conscious value judgment as to who the most important “customer” is for that decision, whether it be the rater, the ratee, or the organization. For example, limiting the number of raters to a small number (e.g., 5 per group, or not all direct reports) indicates that the raters and the organization are more important than the ratee, that is, that we believe it is more important to minimize the time required of raters than it is to provide reliable feedback for the ratee. In most cases, my values cause me to lobby on behalf of the ratee as the most important customer in design decisions. The time that I will rally to the defense of the rater as the most important customer is when anonymity (again, real or perceived) is threatened. And I see the arguments for creating more “insight” by keeping rater groups small or subdivided as misguided IF these practitioners share the common belief that anonymity is critical.

Finally (yes, it’s time to wrap this up), Larry Cipolla, an extremely experienced and respected practitioner in this field, offers some sage advice in his comments, including pointing out the folly of increasing rater group size by combining rater groups. He is right about that. But I do take issue with one of his practices:

We recommend including all 10 raters (or whatever the n-count is) and have the participant create two groups–Direct Reports A and Direct Reports B.

This seems to me to be a variation on the theme of breaking out groups and reducing group size with the risk of creating suspicions and problems with perceived (or real) anonymity. Larry, you need to show that doing this kind of subdividing creates higher reliability in a statistical sense that can overcome the threats to reliability created by using smaller N’s.

Someone please stop my head from spinning. Do I just need to get over this fixation with anonymity in 360 processes?

References

Bracken, D.W., Timmreck, C.W., and Church, A.H. (2001). The Handbook of Multisource Feedback. San Francisco: Jossey-Bass.

Bracken, D.W., Timmreck, C.W., Fleenor, J.W., and Summers, L. (2001). 360 feedback from another angle. Human Resource Management, 40(1), 3-20.

Edwards, M. R., and Ewen, A.J.  (1996). 360° Feedback: The powerful new model for employee assessment and performance improvement. New York: AMACOM.

Farr, J.L., and Newman, D.A. (2001). Rater selection: Sources of feedback. In Bracken, D.W., Timmreck, C.W., and Church, A.H. (eds.), The Handbook of Multisource Feedback. San Francisco: Jossey-Bass.

Greguras, G.J., and Robie, C. (1998).  A new look at within-source interrater reliability of 360-degree feedback ratings. Journal of Applied Psychology, 83, 960-968.

©2012 David W. Bracken

That’s Why We Have Amendments



I used my last blog (https://dwbracken.wordpress.com/2011/08/09/so-now-what/) to start LinkedIn discussions in the 360 Feedback and I/O Practitioners groups, asking the question: What is a “valid” 360 process? The response from the 360 group was tepid, maybe because that group has a more general membership that might not be very concerned with “classic” validity issues (which is basically why I wrote the blog in the first place). But the I/O community went nuts (45 entries so far), with comments running the gamut from constructive to dismissive to deconstructive.

Here is a sample of some of the “deconstructive” comments:

…I quickly came to conclusion it was a waste of good money…and only useful for people who could (or wanted to) get a little better.

It is all probably a waste of time and money. Good luck!!

There is nothing “valid” about so-called 360 degree feedback. Technically speaking, it isn’t even feedback. It is a thinly veiled means of exerting pressure on the individual who is the focal point.

My position regarding performance appraisal is the same as it has been for many years: Scrap It. Ditto for 360.

Actually, I generally agree with these statements in that many 360 processes are a waste of time and money. It’s not surprising that these sentiments are out there and probably quite prevalent. I wonder, though, if we are all on the same page. In another earlier blog, I suggested that discussions about the use and effectiveness of 360’s should be separated by those that are designed for feedback to a single individual (N=1) and those that are designed to be applied to groups (N>1).

But the fact is that HR professionals have to help their management make decisions about people, starting with hiring and then progressing through placement, staffing, promotions, compensation, rewards/recognition, succession planning, potential designation, development opportunities, and maybe even termination.

Nothing is perfect, especially when it comes to matters that involve people. As an example, look to the U.S. Constitution, an enduring document that has withstood the test of time. Yet the Founding Fathers were the first to realize that they needed to make provisions for amendments so that further refinements could be made. Of course, some of those amendments were imperfect themselves and were later repealed.

But we haven’t thrown out the Constitution because it is imperfect. Nor do we find it easy to come to agreement on what the revisions should be. But one of the many good things about humans is a seemingly natural desire to make things better.

Ever since I read Mark Edwards and Ann Ewen’s seminal book, 360° Feedback, I have believed that 360 Feedback has the potential to improve personnel decision making when done well. The Appendix of The Handbook of Multisource Feedback, which I coauthored with Carol Timmreck, is titled “Guidelines for Multisource Feedback When Used for Decision Making”; there we made a stab at defining what “done well” can mean.

In our profession, we have an obligation to constantly seek ways of improving personnel decision making. There are two major needs we are trying to meet, which sometimes cause tensions. One is to provide the organization with more accurate information on which to base these decisions, which we define as increased reliability (accurate measurement) and validity (relevant to job performance). Accurate decision making is good for both the organization and the individual.

The second need is to simultaneously use methods that promote fairness. This notion of fairness is particularly salient in the U.S. where we have “protected classes” (i.e., women, minorities, older workers), but hopefully fairness is a universal concept that applies in many cultures.

Beginning with the Edwards & Ewen book and progressing from there, we can find more and more evidence that 360 done well can provide decision makers with better information (i.e., valid and fair) than traditional sources (e.g., supervisory evaluations). I actually heard a lawyer state that organizations could be legally exposed for not using 360 feedback because it is more valid and fair than the methods currently in use.

I have quoted Smither, London and Reilly (2005) before, but here it is again:

We therefore think it is time for researchers and practitioners to ask “Under what conditions and for whom is multisource feedback likely to be beneficial?” (rather than asking “Does multisource feedback work?”).

©2011 David W. Bracken

What I Learned at SIOP



The annual conference of the Society for Industrial/Organizational Psychology (SIOP) was held in Chicago April 14-16 with record attendance. I had something of a “360 Feedback-intensive” experience by running two half-day continuing education workshops (with Carol Jenkins) on 360 feedback, participating on a panel discussion of the evolution of 360 in the last 10 years (with other contributors to The Handbook of Multisource Feedback), and being the discussant for a symposium regarding Implicit Leadership Theories that largely focused on cultural factors in 360 processes. Each forum gave me an opportunity to gauge some current perspectives on this field, and here are a few that I will share.

The “debate” continues but seems to be softening. The “debate” is, of course, over how 360 feedback should be used: for development only and/or for decision making. In our CE workshop, we actually had participants stand up and move to corners of the room to indicate their stance on this issue, and, judging from that exercise, there are still many strong proponents on each side. That said, the panel seemed to agree that there is some blurring of the distinction between uses, that 360’s are being used successfully for decision making, and that 360’s are far less likely to create sustainable behavior change without the accountability that comes with integration into HR systems.

We need to be sensitive to the demands we place on our leaders/participants. During our panel discussion, Janine Waclawski (who is currently an HR generalist at Pepsi) reminded us of how we typically inundate 360 participants with many data points, beginning with the number of items multiplied by the number of rater groups. (I don’t believe the solution to this problem is reducing the number of items, especially below some arbitrary number like 20 items.) Later, I had the opportunity to offer commentary on four terrific research papers whose major theme was that supervisors need to be aware of the perspectives of their raters, which may well be shaped by the raters’ cultural backgrounds.

As someone who is more on the practitioner end of the practitioner-scientist continuum, I tried to once again put myself in the seat of the feedback recipient (where I have been many times) and consider how this research might be put into practice. On one hand, organizations are using leadership competency models and values statements to create a unified message (and culture?) that spans all segments of the company. We can (and should) have debates about how useful and realistic this practice is, but I think most of us agree that the company has a right to define the behaviors that are expected of successful leaders. 360 processes can be a powerful way to define those expectations in behavioral terms, to help leaders become aware of their perceived performance of those behaviors, to help them get better, and to hold leaders accountable for change.

On the other hand, the symposium papers seem to suggest that leader behaviors should be molded from “the bottom up,” i.e., by responding to the expectations of followers (raters) that may be attributed to their cultural backgrounds and their views of what an effective leader should be (which may differ from the leader’s view and/or the organization’s view of effective leadership). By the way, this “bottom up” approach applies also to the use of importance ratings (which is not a cultural question).

My plea to the panel (perhaps to their dismay) was to at least consider the conundrum of the feedback recipient, who is being given the potentially incredibly complex task of not only digesting the basic data that Janine was referring to, but then folding in the huge amount of additional information created by having to consider the needs of all the feedback providers. Their research is very interesting and useful in raising our awareness of cultural differences that can affect the effectiveness of our 360 processes. But PLEASE acknowledge the implications of putting all of this to use.

The “test” mentality is being challenged.  I used the panel discussion to offer up one of my current pet peeves, namely to challenge the treatment of 360 Feedback as a “test.”  Both in the workshops and again at the panel, I suggested that applying practices such as randomizing items and using reverse wording to “trick” the raters is not constructive and most likely is contrary to our need to help the raters provide reliable data. I was gratified to receive a smattering of applause when I made that point during the panel.  I am looking forward to hopefully discussing (debating) this stance with the Personnel Testing Council of Metropolitan Washington in a workshop I am doing in June, where I suspect some of the traditional testing people will speak their mind on this topic.

This year’s SIOP was well done, once again. I was especially glad to see an ongoing interest in the evolution of the field of 360 feedback, judging from the attendance at these sessions, let alone the fact that the workshop committee identified 360 as a topic worthy of inclusion more than 10 years after the last such workshop. 360 Feedback is such a complex process, and we are still struggling with the most basic questions, including purpose and use.

©2011 David W. Bracken

Has Anything Changed in 10 Years?



2011 marks the 10th anniversary of the publication of The Handbook of Multisource Feedback. To mark this occasion, we have convened a panel of contributors to The Handbook for a SIOP (Society for Industrial and Organizational Psychology) session to discuss how the field of 360 has changed (and not changed) in those 10 years. Panel members will include the editors (Carol Timmreck, who will be moderator, Allan Church, and myself), James Farr, Manny London, David Peterson, Bob Jako, and Janine Waclawski. (See http://www.siop.org for more information.)

In a “good news/bad news” kind of way, we frequently get feedback from practitioners who still use The Handbook as a reference. In that way, it seems to be holding up well (the good news). The “bad news” might be that not much has changed in 10 years and the field is not moving forward.

Maybe the most obvious changes have been in the area of technology, again for good and bad. One of the many debates in this field is whether putting 360 technology in the hands of inexperienced users really is such a great idea. That said, the fact is that it is happening, and it does have potential benefits in cost and responsiveness.

Besides technology, how else has the field of 360 feedback progressed or digressed in the last decade?

I will get the ball rolling by offering two pet peeves:

1) The lack of advancement in the development and use of rater training as a best practice, and

2) The ongoing application of a testing mindset to 360 processes.

Your thoughts?

©2011 David W. Bracken