Strategic 360s

Making feedback matter

On the Road… and Web and Print

I have a few events coming up in the next 3 weeks or so that I would like to bring to your collective attention in case you have some interest.  One is free, two are not (though I receive no remuneration). I also have an article out that I co-authored on 360 feedback.

In chronological order, on May 25 Allan Church, VP Global Talent Development at PepsiCo, and I will lead a seminar titled, "Integrating 360 & Upward Feedback into Performance and Rewards Systems" at the 2011 World at Work Conference in San Diego (www.worldatwork.org/sandiego2011). I will be offering some general observations on the appropriateness, challenges, and potential benefits of using 360 Feedback for decision making, such as performance management. I expect the audience will be especially interested in Allan's descriptions of his experiences with past and current processes that have used 360 and Upward Feedback for both developmental and decision-making purposes.

On June 8, I am looking forward to conducting a half-day workshop for the Personnel Testing Council of Metropolitan Washington (PTCMW) in Arlington, VA, titled "360-Degree Assessments: Make the Right Decisions and Create Sustainable Change" (contact Training.PTCMW@GMAIL.COM or go to WWW.PTCMW.ORG). This workshop is open to the public and costs $50. I will be building on the workshop Carol Jenkins and I conducted at the Society for Industrial and Organizational Psychology conference. That said, the word "assessments" in the title foreshadows a greater emphasis on the use of 360 Feedback in a decision-making context, and an audience that is expected to have great interest in questions of validity and measurement.

On the following day, June 9 (at 3:30 PM EDT), I will be part of an online virtual conference organized by the Institute of Human Resources and hr.com on performance management. My webinar is titled, “Using 360 Feedback in Performance Management: The Debate and Decisions,” where the “decisions” part has multiple meanings. Given the earlier two sessions I described, it should be clear that I am a proponent of using 360/Upward Feedback for decision making under the right conditions. The other take on “decisions” is the multitude of decisions that are required to create those “right conditions” in the design and implementation of a multisource process.

On that note, I am proud to say that Dale Rose and I have a new article in the Journal of Business and Psychology (June) titled, “When does 360-degree feedback create behavior change? And how would we know it when it does?” Our effort is largely an attempt to identify the critical design factors in creating 360 processes and the associated research needs.

This article is part of a special research issue (http://springerlink.com/content/w44772764751/) of JBP and you will have to pay for a copy unless you have a subscription. As a tease, here is the abstract:

360-degree feedback has great promise as a method for creating both behavior change and organization change, yet research demonstrating results to this effect has been mixed. The mixed results are, at least in part, because of the high degree of variation in design features across 360 processes. We identify four characteristics of a 360 process that are required to successfully create organization change, (1) relevant content, (2) credible data, (3) accountability, and (4) census participation, and cite the important research issues in each of those areas relative to design decisions. In addition, when behavior change is created, the data must be sufficiently reliable to detect it, and we highlight current and needed research in the measurement domain, using response scale research as a prime example.

Hope something here catches your eye/ear!

©2011 David W. Bracken

Who’s in charge here?

In my last blog, I touched on an issue that I would like to give a little more attention, namely the question of whom our leaders (i.e., the recipients of 360 Feedback) should be listening to when prioritizing their leadership skills and behaviors: the organization, their coworkers, or some combination. This topic has cultural implications that I alluded to in that earlier blog, and I will post this to the Society for Industrial/Organizational Psychology (SIOP) Going Global group on LinkedIn to see if I can get a response from someone there. But it also has broad implications for how we approach leadership selection, assessment, and development in general.

One end of the continuum is to state that an organization has the right (and need) to define leadership competencies/behaviors, ideally derived from strategy, to support its initiatives and create a unique competitive advantage (see the treatment of leadership as an intangible asset in the book, The Invisible Advantage).  Organizations regularly create leadership models that are used to align HR systems in creating the type of leader needed to succeed at the individual and organization level.  This includes values statements that often are operationalized through behavioral items in 360’s that hopefully apply to leaders and line employees alike.

In the context of 360's, I have long maintained that 360's can draw their relevance (read "validity") from a direct line of sight from strategy to leadership models to 360 content, and that it is not necessary to conduct predictive studies to demonstrate the validity of the 360 process. The content of a 360 instrument can define success, such that leaders who behave in ways consistent with the model are successful by definition. Those who do not conform to those expectations should be given the choice of changing or finding alternative employment.

That is the "top down" version of "who's in charge here?" I was influenced in my thinking many years ago by some monographs by Bill Byham at DDI on this topic; they were often framed in the context of assessment center processes but are no less applicable here. Dr. Byham has written a number of papers on aligning HR systems with competency models, and you can find much of that material on DDI's website.

The other end of the continuum of “who’s in charge here?” is the “bottoms up” view of leadership effectiveness that suggests that the leader’s behavior should be directed by coworkers, particularly direct reports. This view came into greater clarity for me at SIOP during a symposium that I briefly described in my last blog on implicit leadership models.

For decades, this "bottoms up" view of leadership has been implicit in the use of importance ratings. I have been opposed to importance ratings for as long as I can remember, partially because of the extra burden on raters but, more importantly, because raters are not in a good position to understand the needs of the leader and the organization. I still believe that importance is best determined jointly by the ratee and his/her boss. Asking raters for importance ratings feels like a customer survey; if your 360 treats raters as "customers" of leader behavior, then use a satisfaction scale and design the whole system accordingly.

The SIOP symposium included a number of interesting research studies stating, in effect, that an effective leader should understand and react to the needs of coworkers based on the coworkers' expectations of how an "effective" leader should behave, expectations that are, in turn, derived from their cultural backgrounds (i.e., nationality). This is very interesting, and cultural awareness is an important issue in our global community. As some of these papers pointed out, leaders now (especially with virtual teams) can have coworkers and direct reports from multiple nations and cultures, which means the leader somehow has to understand and adapt to the needs of each of these people. My head began to spin!

By the way, I first learned about the concept of Situational Leadership back in the 80's, and I still believe that philosophically it makes a lot of sense not to treat every subordinate the same way. But if you know Situational Leadership, you know that it focuses exclusively on the person's "maturity" (ability to perform a task) and the need to adapt leadership style depending on an assessment of that maturity level. It is very task oriented and has little (if anything) to do with the needs and expectations of the follower.

There was some sentiment on the SIOP panel for asking the leader to negotiate or compromise between the "bottoms up" and "top down" views of leadership. I'm not sure how that would work, but it probably compounds one of the main problems I cited as discussant, namely the overload we are creating for leaders by inundating them with all this information in the form of job expectations. I have to believe that leaders are asking, or will ask, who is in charge: the organization or their coworkers?

I will take a stance: I believe that the organization is "in charge." I did some consulting for a company of about 3,500 people in Dubai whose employees held 80 different passports. It was run by South Africans, and they ran the company that way. In a nutshell, they expected employees to conform to a common set of values and expectations, effectively leaving their cultural backgrounds at the door; at the very least, the company "culture" took precedence in defining effective leadership. I believe that this aligned focus on organization needs is a necessity, and that we need to make it clear to our leaders "who is in charge" when it comes to deciding how the company will leverage one of its most powerful intangible assets.

©2011 David W. Bracken

What I Learned at SIOP

The annual conference of the Society for Industrial/Organizational Psychology (SIOP) was held in Chicago April 14-16 with record attendance. I had something of a “360 Feedback-intensive” experience by running two half-day continuing education workshops (with Carol Jenkins) on 360 feedback, participating on a panel discussion of the evolution of 360 in the last 10 years (with other contributors to The Handbook of Multisource Feedback), and being the discussant for a symposium regarding Implicit Leadership Theories that largely focused on cultural factors in 360 processes. Each forum gave me an opportunity to gauge some current perspectives on this field, and here are a few that I will share.

The "debate" continues but seems to be softening. The "debate" is, of course, about how 360 feedback should be used: development only and/or for decision making. In our CE workshop, we actually had participants stand in corners of the room to indicate their stance on this issue, and, judging from that exercise, there are still many strong proponents on each side. That said, the panel seemed to agree that the distinction between uses is blurring, that 360's are successfully being used for decision making, and that 360's are far less likely to create sustainable behavior change without the accountability that comes with integration into HR systems.

We need to be sensitive to the demands we place on our leaders/participants. During our panel discussion, Janine Waclawski (who is currently an HR generalist at Pepsi) reminded us of how we typically inundate 360 participants with many data points, beginning with the number of items multiplied by the number of rater groups. (I don't believe the solution to this problem is reducing the number of items, especially below some arbitrary number like 20 items.) Later, I had the opportunity to offer commentary on four terrific research papers whose major theme was that supervisors need to be aware of the perspectives of their raters, perspectives that may well be shaped by the raters' cultural backgrounds.

As someone who is more on the practitioner end of the practitioner-scientist continuum, I tried to once again put myself in the seat of the feedback recipient (where I have been many times) and consider how this research might be put into practice. On one hand, organizations are using leadership competency models and values statements to create a unified message (and culture?) that spans all segments of the company. We can (and should) have debates about how useful and realistic this practice is, but I think most of us agree that the company has a right to define the behaviors that are expected of successful leaders. 360 processes can be a powerful way to define those expectations in behavioral terms, to help leaders become aware of their perceived performance of those behaviors, to help them get better, and to hold leaders accountable for change.

On the other hand, the symposium papers seem to suggest that leader behaviors should be molded from “the bottom up,” i.e., by responding to the expectations of followers (raters) that may be attributed to their cultural backgrounds and their views of what an effective leader should be (which may differ from the leader’s view and/or the organization’s view of effective leadership).  By the way, this “bottoms up” approach applies also to the use of importance ratings (which is not a cultural question).

My plea to the panel (perhaps to their dismay) was to at least consider the conundrum of the feedback recipient, who is given the potentially incredibly complex task of not only digesting the basic data Janine was referring to, but also folding in the huge amount of information created by having to consider the needs of all the feedback providers. Their research is very interesting and useful in raising our awareness of cultural differences that can affect the effectiveness of our 360 processes. But PLEASE acknowledge the implications for putting all of this to use.

The "test" mentality is being challenged. I used the panel discussion to offer up one of my current pet peeves, namely to challenge the treatment of 360 Feedback as a "test." Both in the workshops and again at the panel, I suggested that applying practices such as randomizing items and using reverse wording to "trick" the raters is not constructive and most likely works against our need to help raters provide reliable data. I was gratified to receive a smattering of applause when I made that point during the panel. I am looking forward to discussing (debating) this stance with the Personnel Testing Council of Metropolitan Washington in a workshop I am doing in June, where I suspect some of the traditional testing people will speak their minds on this topic.

This year's SIOP was well done, once again. I was especially glad to see an ongoing interest in the evolution of the field of 360 feedback, judging from the attendance at these sessions, not to mention the fact that the workshop committee identified 360 as a topic worthy of inclusion more than 10 years after the last such workshop. 360 Feedback is such a complex process, and we are still struggling with the most basic questions, including purpose and use.

©2011 David W. Bracken

Has Anything Changed in 10 Years?

2011 marks the 10th anniversary of the publication of The Handbook of Multisource Feedback. To mark this occasion, we have convened a panel of contributors to The Handbook for a SIOP (Society for Industrial and Organizational Psychology) session to discuss how the field of 360 has changed (and not changed) in those 10 years. Panel members will include the editors (Carol Timmreck, who will moderate; Allan Church; and myself), along with James Farr, Manny London, David Peterson, Bob Jako, and Janine Waclawski. (See http://www.siop.org for more information.)

In a “good news/bad news” kind of way, we frequently get feedback from practitioners who still use The Handbook as a reference. In that way, it seems to be holding up well (the good news). The “bad news” might be that not much has changed in 10 years and the field is not moving forward.

Maybe the most obvious changes have been in the area of technology, again for good and bad. One of the many debates in this field is whether putting 360 technology in the hands of inexperienced users really is such a great idea. That said, it is happening, and it does bring potential benefits in cost and responsiveness.

Besides technology, how else has the field of 360 feedback progressed or regressed in the last decade?

I will get the ball rolling by offering two pet peeves:

1) The lack of advancement in development and use of rater training as a best practice, and

2) The ongoing application of a testing mindset to 360 processes.

Your thoughts?

©2011 David W. Bracken

Maybe Purpose Doesn’t Matter?

While there are many discussions and debates within the 360 Feedback community (including one regarding randomizing items currently on LinkedIn that I will address in a later blog), probably none is more intense and enduring than the issue of the proper use of 360 results. In The Handbook of Multisource Feedback, a whole chapter (by Manny London) was dedicated to "The Great Debate" over using 360 for developmental vs. decision-making purposes. In fact, in the late 90's an entire book was published by the Center for Creative Leadership based on a debate I organized at SIOP.

I have argued in earlier blogs and other forums that this "either/or" choice is a false one for many reasons. For example, even "development only" uses require decisions that affect personal and organizational outcomes and resources. Also, even when 360 is used for decision making (including succession planning, staffing, promotions, and, yes, performance management), there is always a development component.

One of the aggravating blanket statements used by the "development only" crowd is that respondents will not be honest if they believe the results will be used to make decisions that might be detrimental to the ratee, resulting in inflated scores with less variability. That is, in fact, by far the most common argument from "development only" proponents, and one that is indeed supported by some research studies.

I have just become aware of an article published 3 years ago in the Journal of Business and Psychology (JBP) relating to multisource feedback, titled “Factors Influencing Employee Intentions to Provide Honest Upward Feedback Ratings” (Smith and Fortunato, 2008).  For those of you who are not familiar with JBP, it is a refereed journal of high quality that should be on your radar and, in full disclosure, a journal for which I am an occasional reviewer.

The study was conducted at a behavioral health center with a final sample of 203 respondents. The employees filled out a questionnaire about various aspects of an upward feedback process that was to be implemented in the near future.

The article is fairly technical and targeted toward the industrial/organizational community. I have pulled out one figure for the geeks in the audience to consume if desired (click on "360 Figure"). But let me summarize the findings of the study.

The outcome (dependent variable) of primary interest to the researchers is foreshadowed in the title, i.e., what factors lead to intentions to respond honestly in ratings of a supervisor (upward feedback). The most surprising result (as highlighted in the authors' discussion) was that purpose (administrative versus developmental) had no predictive value at all! Of all the predictor variables measured, it was the least influential, with no practical or statistical significance.

What does predict intentions to provide honest feedback? One major predictor is the level of cynicism, with (as you might guess) cynical attitudes resulting in less honesty. The study suggests that cynical employees fear retaliation by supervisors and are less likely to believe that the stated purpose will be followed. The authors suggest that support and visible participation by senior leaders might help reduce these negative attitudes. We also need to continue to protect both real and perceived confidentiality, and to have processes to identify cases of retaliation and hold the offending parties accountable.

The other major factor is what I would label rater self-confidence: confidence in one's ability as a feedback provider. Raters need to feel that their input is appropriate and valued, and that they know how the process will work. They also need to feel that they have sufficient opportunity to observe. The authors appropriately point to the usefulness of rater training to help accomplish these outcomes. They do not mention the rater selection process as an important determinant of opportunity to observe, but that is obviously a major factor in ensuring that the best raters are chosen.

One suggestion the authors make (seemingly out of context) that is purported to help improve the honesty of the feedback is to use reverse-worded items to keep raters from choosing only socially desirable responses (e.g., Strongly Agree). I totally disagree with practices such as reverse wording and randomization, which may actually reduce the reliability of the instrument (unless the purpose is research only). For example, at our SIOP workshop, Carol Jenkins and I will be showing an actual 360 report that uses both of those methods (reverse wording and randomization). In this report (which Carol had to try to interpret for a client), the manager ("boss") of the ratee had given the same response (Agree) to two versions of the same item, where one was reverse scored. In other words, the manager was agreeing that the ratee was both doing and not doing the same thing.

Now what? The authors of this study seem to suggest that situations like this would invalidate the input of this manager, arguably the most important rater of all. We could, of course, contact the manager and try to clarify his/her input. But the only reason we know of this situation is that the manager is not anonymous (and managers know that going into the rating process). If this same problem of rating inconsistency occurs within other rater groups, it is almost impossible to rectify, since those raters are anonymous and confidential (hopefully).
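To make the measurement problem concrete, here is a minimal sketch of the kind of consistency check a 360 scoring routine could run on reverse-worded item pairs; the item names, the 1-5 Agree scale, and the tolerance are hypothetical, not taken from any actual instrument:

```python
# Minimal sketch: flag inconsistent answers to reverse-worded item pairs.
# Assumes a 1-5 Likert scale (1 = Strongly Disagree ... 5 = Strongly Agree);
# the item names and tolerance are hypothetical.

REVERSE_PAIRS = [
    ("listens_effectively", "interrupts_others"),  # second item is reverse-worded
]

def flag_inconsistent(responses, tolerance=1):
    """Return item pairs where the two answers contradict each other.

    On a 1-5 scale, a consistent rater's answers to an item and its
    reverse-worded twin should sum to roughly 6 (e.g., 5 and 1, or 4 and 2).
    A sum far from 6, such as Agree (4) on both, gets flagged.
    """
    flags = []
    for positive, reversed_item in REVERSE_PAIRS:
        if positive in responses and reversed_item in responses:
            total = responses[positive] + responses[reversed_item]
            if abs(total - 6) > tolerance:
                flags.append((positive, reversed_item, total))
    return flags

# The manager in the example above: "Agree" (4) on both versions of the item.
print(flag_inconsistent({"listens_effectively": 4, "interrupts_others": 4}))
# -> [('listens_effectively', 'interrupts_others', 8)]
```

Of course, flagging the problem is the easy part; as noted above, for anonymous rater groups there is usually no way to go back and resolve it.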

This is only one study, though a well designed and analyzed study in a respected journal. I will not say that this study proves that purpose does not have an effect on honesty. Nor should anyone say that other studies prove that purpose does affect honesty. To be clear, I have always said that it may be appropriate to use 360 results in decision making under the right conditions, conditions that are admittedly often difficult to achieve. This is in contrast to some practitioners who contend that it is never appropriate to do so, under any conditions.

Someday, when I address the subject of organizational readiness, I will recall the survey used in this research, which was administered in anticipation of implementing an upward feedback process. This brief (31-item) survey would be a great tool for assessing readiness in all 360 systems.

One contribution of this research is to point out that the intention to be honest is as much a characteristic of the process as it is of the person. Honesty is a changeable behavior in this context, responsive to training, communication, and practice. Making blanket statements about rater behavior and how a 360 program should or shouldn't be used is not productive.

360 Figure

©2011 David W. Bracken

Not Funny

I seem to be in a bit of a rut with themes around humor and now commercials. Despite trying to bypass as many commercials as possible with my DVR, occasionally I do see one, and sometimes it is even for the better.

One that caught my eye/ear is an IBM commercial that starts with a snippet of Groucho Marx (whom I also like very much) in which he states, "This morning I shot an elephant in my pajamas." Of course, the fun part is when he follows with, "How he got in my pajamas, I will never know." Ba bump.

The commercial goes on to talk about Watson, a computer developed by IBM that will be used to compete on Jeopardy (another favorite). The point is that language has subtle meanings, euphemisms, metaphors, nuances, and unexpected twists that are difficult for machines to comprehend correctly.

In the context of 360 Feedback, the problem is that we humans are sometimes not so good at picking up the subtleties of language as well. We need to do everything we can to remove ambiguity in our survey content, acknowledging that we can never be 100% successful.

We have all learned, sometimes the hard way, how our attempts to communicate with others can go astray. How often have we had to come to grips with the fact that our seemingly clear directions have been misunderstood?

I became sensitized to this question of ambiguity in language during the quality movement of the 80’s and the work of Peter Senge as embodied in The Fifth Discipline and the accompanying Fifth Discipline Fieldbook. (Writing this blog has spurred me to pull out this book; if you youngsters are not aware of Senge’s writings, it is still worth digging out. There is a 2006 Edition which I confess I have not read yet.)

There are many lessons in these books regarding the need to raise awareness about our natural tendencies as humans to fall back on assumptions, beliefs, values, etc., often unconsciously, in making decisions, trying to influence, and taking actions. One lesson that has particularly stuck with me in the context of 360’s is the concept of mental models, which Senge defines as, “deeply ingrained assumptions, generalizations, or even pictures or images that influence how we understand the world and how we take action.”  In the Fieldbook, he uses an example of the word “chair” and how that simple word will conjure up vastly different mental images of what a “chair” is, from very austere, simple seats to very lush, padded recliners and beyond. (In fact, it might even create an image of someone running a meeting if we are to take it even farther.)

So Groucho created a "mental model" (or assumed one) of us visualizing him in his pajamas, with a gun, chasing an elephant. Then he smashed that "assumption" by telling us that the elephant was wearing the pajamas. That is funny in many ways.

Sometimes we are amused when we find we have made an incorrect assumption about what someone has told us. I have told the story before of the leader who made assumptions about his low score on “Listens Effectively.” He unexpectedly found that his assumptions were unfounded and the raters were simply telling him to put down his PDA. That could be amusing and also a relief since it is an easy thing to act on.

360 Feedback is a very artificial form of communication in which we rely on questionnaires to allow raters to "tell" the ratee something while protecting their anonymity. This also has the potential benefit of allowing us to easily quantify the responses, which, in turn, can be used to measure gaps (between rater groups, for example) and track progress over time.
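As a rough illustration of that quantification, here is a minimal sketch (with hypothetical items, rater groups, and scores) of rolling individual ratings up into rater-group means and self-other gaps, the kind of numbers a 360 report typically presents:

```python
# Minimal sketch: roll individual 360 ratings up into rater-group means and
# self-other gaps. The item name, groups, and scores are hypothetical.
from collections import defaultdict
from statistics import mean

ratings = [
    # (rater_group, item, score on a 1-5 scale)
    ("self",           "ensures_adequate_resources", 4),
    ("direct_reports", "ensures_adequate_resources", 2),
    ("direct_reports", "ensures_adequate_resources", 3),
    ("peers",          "ensures_adequate_resources", 3),
    ("peers",          "ensures_adequate_resources", 4),
]

def group_means(rows):
    """Average the scores for each (item, rater group) combination."""
    by_group = defaultdict(list)
    for group, item, score in rows:
        by_group[(item, group)].append(score)
    return {key: mean(scores) for key, scores in by_group.items()}

means = group_means(ratings)
item = "ensures_adequate_resources"
self_score = means[(item, "self")]
for group in ("direct_reports", "peers"):
    gap = means[(item, group)] - self_score
    print(f"{item} | {group}: mean={means[(item, group)]:.2f}, gap vs. self={gap:+.2f}")
```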

Of course, this artificial communication creates many opportunities for raters to misunderstand or honestly misconstrue the intent of the items and, in turn, for ratees to misinterpret the intended message from the raters. We need to do our best to keep language simple and direct, though we can never prevent raters from applying different "mental models."

Take an item like, “Ensures the team has adequate resources.” Not a bad question. But, like “chair,” “resources” can create all sorts of mental images such as people (staff), money (budget), equipment (e.g., computers), access to the leader, and who knows what else! We could create a different item for each type of resource if we had an unlimited item budget, which we don’t.

This potential problem is heightened if there will be multiple languages used, creating all sorts of issues with translations, cultural perspectives, language nuances, and so on.

In the spirit of “every problem has a solution,” I can think of at least four basic recommendations.

First, be diligent in item writing to keep confusion to a minimum (a rough automated screen along these lines is sketched after the list). For example:

  • Use simple words/language
  • Don’t use euphemisms (“does a good job”)
  • Don’t use metaphors (“thinks outside the box”)
  • Don’t use sports language (“creates benchstrength”)
  • Keep all wording positive (or cluster negatively phrased items such as derailers in one dimension with clear instructions)
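As a rough illustration of that first recommendation, here is a minimal sketch of an automated screen for draft items; the flagged phrases, the length cutoff, and the negation check are hypothetical starting points rather than a validated standard:

```python
# Minimal sketch: screen draft 360 items for wording that invites different
# "mental models." The phrase list and cutoffs are hypothetical starting points.
JARGON_AND_METAPHORS = [
    "does a good job",         # euphemism
    "thinks outside the box",  # metaphor
    "benchstrength",           # sports language
]
MAX_WORDS = 15  # arbitrary cutoff for "keep it simple"

def screen_item(item_text):
    """Return a list of warnings for a single draft item."""
    warnings = []
    lowered = item_text.lower()
    for phrase in JARGON_AND_METAPHORS:
        if phrase in lowered:
            warnings.append(f"contains flagged phrase: '{phrase}'")
    if len(item_text.split()) > MAX_WORDS:
        warnings.append("item may be too long or compound")
    if " not " in f" {lowered} " or lowered.startswith("does not"):
        warnings.append("negatively worded; cluster with clear instructions")
    return warnings

print(screen_item("Thinks outside the box to create benchstrength for the team"))
```

A screen like this is no substitute for a human review, but it catches the easy offenders before the pilot test described next.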

Second, conduct pilot tests with live raters who can give the facilitator immediate feedback on wording in terms of clarity and inferred meaning.

Third, conduct rater training. Some companies tell me that certain language is "ingrained" in their culture, such as "think outside the box." (I wonder how many people really know the origins of that metaphor; look it up in Wikipedia if you don't.) I usually have to defer to their wishes, but I still believe that such beliefs may be more aspirational than factual. Including a review of company-specific language (which does have some value in demonstrating the uniqueness of the 360 content) during rater training will have multiple benefits.

Fourth, acknowledge and communicate that it is impossible to prevent misinterpretations by the senders (raters) and the receivers (ratees). This will require that the ratee discuss results with the raters and ensure that they are all "on the same page" (metaphor intended, tongue in cheek).

I bet that some ratees do actually laugh (or at least chuckle) if/when they hear how some raters interpret the questions.  But more typically it is not funny. And it is REALLY not funny if the ratee invests time and effort (and organizational resources) taking action on false issues due to miscommunication.

(Note: For those interested, Carol Jenkins and I will be talking about these issues in our SIOP Pre-Conference workshop on 360 Feedback on April 13 in Chicago.)

©2011 David W. Bracken

There Are “Right” Answers

For those of you who might attend the next SIOP (Society for Industrial and Organizational Psychology) Conference in Chicago in April, I am pleased to note that we have been accepted to conduct a panel consisting of contributors to The Handbook of Multisource Feedback, which is approaching its 10th anniversary of publication. The panel is titled, “How has 360 degree Feedback evolved over the last 10 years?”  Panel members include Allan Church, Carol Timmreck, Janine Waclawski, David Peterson, James Farr, Manny London, Bob Jako and myself.

We received a number of thoughtful, useful comments and suggestions from the reviewers of the proposal, one of which stated this:

I would like to see a serious discussion of whether or not 360 is a singular practice. It seems as though 360 can be used with so many different interventions (succession, development, training needs analysis, supplement to coaching, …the list is HUGE) that when we say something like “is 360 legal” it is almost impossible to answer without many caveats regarding the details of the specific 360 process that was used. It’s almost as though we need to move on from ‘is 360 xyz’ to ‘if we do 360 this way, we get these outcomes and if we do 360 that way we get those outcomes.’ Can’t wait to hear the panel, this is much needed.

This is an extremely insightful observation. I have broached this topic in earlier blogs regarding alignment of purpose and decisions in design and implementation.  But there are some things that are required regardless of purpose.

To look at extremes, we might consider 360 processes where N=1, i.e., where a single leader is given the opportunity to get developmental feedback. This is often in preparation for an experience such as a leadership development/training program or some other development program (e.g., for high potentials). In these instances, it is an ad hoc process where an off-the-shelf instrument may be most practical. The instrument can be lengthy, since raters will only have to fill it out one time. And typically there are major resources available to the participant in the form of coaches, trainers, and/or HR partners to ensure that the feedback is interpreted and used productively.

Compare the N=1 scenario to the N>1 process. By N>1, I use shorthand to indicate 360 processes that are applied across some segment of the population, such as a function, department, or entire organization. In these cases, it becomes much more important to have a custom designed instrument that reflects unique organization requirements (competencies, behaviors) that can create system change while simultaneously defining effective leadership to raters and ratees alike. The process requires some efficiencies due to many raters being involved, and some being asked to complete multiple forms.  We also need to plan for ways to support the many ratees in their use of the feedback.

BUT, we might also say that some things are so basic as to be necessary whether N=1 or N>1. Just this week I was sent this interview with Cindy McCauley of the Center for Creative Leadership (http://www.groupstir.com/resources_assets/Why%20Reliability%20and%20Validity%20Matter%20in%20360%20Feedback.pdf). Many readers will already know who Cindy is; if not, suffice it to say she is highly respected in our field and has deep expertise in 360 Feedback. (In fact, she contributed a chapter to the book Should 360 Feedback Be Used Only for Development Purposes?, which I was also involved with.) In this interview, Cindy makes some important points about basic requirements for reliability and validity that I interpret to be applicable to all 360 processes.

What really caught my attention was this statement by Cindy:

…the scores the managers receive back mean a lot to them. They take them very seriously and are asked to make decisions and development plans based on those scores. So you want to be sure that you can rely on those scores, that they're consistent and reflect some kind of accuracy.

I take the liberty (which Cindy would probably not) of expanding the "make decisions" part of this statement to apply more broadly: others (such as the leader's manager) also use the feedback to make decisions. When she says that managers make decisions based on their feedback, what decisions can they make without the support of the organization (most typically in the person of their boss)? This is the crux of my argument that there is no such thing as a "development only" process. Development requires decisions and the commitment of organization resources. This only reinforces her point about the importance of validity and reliable measurement.

So what’s my point? My point is that I believe that too many ad hoc (N=1) 360 processes fall short of meeting these requirements for validity and reliability. Another debate for another time is whether off-the-shelf instruments have sufficient validity to measure unique organization requirements.  I do believe it is accurate to say that reliable measurement is often neglected in ad hoc processes when decisions are made about number of raters and quality of ratings.

For example, research indicates that raters have different "agendas" and that subordinates are the least reliable feedback providers, followed by peers and then managers. Lack of reliability can be combated in at least two ways: rater training and increasing the number of raters. We can put aside rater training (beyond having good instructions); it rarely happens, despite its power and utility.

So we can improve reliability with numbers. In fact, this is really why 360 data is superior to traditional, single source evaluations (i.e., performance appraisals).  For N>1 processes, I STRONGLY recommend that all direct reports (subordinates) participate as raters. This has multiple benefits, including beefing up the number of raters for the most unreliable rater group. Then, for peers, aiming for 5-7 respondents is recommended.
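One classical way to quantify "improving reliability with numbers" is the Spearman-Brown prophecy formula, which estimates the reliability of an average across k raters from the reliability of a single rater. Here is a minimal sketch; the single-rater reliability values are purely illustrative and not taken from the research discussed above:

```python
# Minimal sketch: how the reliability of an averaged rating grows with the
# number of raters, per the Spearman-Brown prophecy formula.
# The single-rater reliability values below are illustrative only.

def spearman_brown(single_rater_reliability, k):
    """Estimated reliability of the mean of k parallel raters."""
    r = single_rater_reliability
    return (k * r) / (1 + (k - 1) * r)

for label, r1 in [("direct reports", 0.30), ("peers", 0.40)]:
    for k in (1, 3, 5, 7):
        print(f"{label}: k={k}, estimated reliability={spearman_brown(r1, k):.2f}")
```

On those illustrative numbers, averaging across five to seven raters moves a weak single-rater reliability into a much more usable range, which is the intuition behind the recommendations above.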

My contention is that the majority of ad hoc (N=1) processes do not adhere to those guidelines. (I have no data to support that assertion, just observation.) The problem of unreliable data due to an inadequate number of raters is compounded by the senior level of the leaders involved and the considerable organization resources devoted to their development, both of which magnify the impact of decisions based on that flawed data.

When I started writing this blog, I was thinking of the title "There Is No 'Right Answer,'" meaning that decisions need to fit the purpose. But actually there are some "right answers" that apply regardless of purpose. Don't let the "development only" argument lead to implementation decisions that reduce the reliability and validity of the feedback. In fact, many guidelines should apply to all 360 processes, whether N=1 or N>1.

©2011 David W. Bracken