Strategic 360s

Making feedback matter


Built to Fail/Don’t Let Me Fail


This is a “two-sided” blog entry, like those old 45 rpm records that had hit songs on both sides (think “We Can Work It Out”/“Day Tripper” by the Beatles), though my popularity may not be quite at their level. It was precipitated by a recent blog entry (and LinkedIn discussion) from the Envisia people. The entry, “Does 360-degree feedback even work?” by Sandra Mashihi, can be found at http://results.envisialearning.com/. It would be helpful, though not necessary, to read it first.

Sandra begins by citing some useful research on the effectiveness of 360 processes and concludes that sometimes 360’s “work” and sometimes they don’t. In her words: “Obviously, the research demonstrates varied results in terms of its effectiveness.”

What is frustrating for some of us are blanket statements about failures (and terms like “obvious”) that come without acknowledging that many 360’s are “built to fail.” That is the main thesis of the article Dale Rose and I just published in the Journal of Business and Psychology: http://www.springerlink.com/content/85tp6nt57ru7x522/

http://www.ioatwork.com/ioatwork/2011/06/practical-advice-for-designing-a-360-degree-feedback-process-.html

Dale and I propose four features that a 360 process needs if it is to create sustainable behavior change:

1) Reliable measurement: Professionally developed, custom-designed instruments

2) Credible data: Input collected from trained, motivated raters who have knowledge of the ratees

3) Accountability: Methods to motivate raters and ratees to fulfill their obligations

4) Census participation: All leaders in an organizational unit are required to get feedback

We go on to cite research demonstrating how the failure to build these features into a 360 can, in some cases, almost guarantee failure and/or the inability to detect behavior change when it does occur. One such feature, for example, is whether the ratee follows up with raters (which I have mentioned in multiple prior blogs). If/when a 360 (or a collection of 360’s, as in a meta-analysis) is deemed a “failure,” I always want to know, for starters, whether raters were trained and whether follow-up was required.
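As a concrete (and admittedly simplified) illustration, here is a minimal sketch in Python of how those diagnostic questions could become a pre-launch audit of the four features. The field names and thresholds are my illustrative assumptions, not prescriptions from our article:

```python
from dataclasses import dataclass

@dataclass
class Design360:
    """Hypothetical description of a 360 process design (illustrative fields)."""
    custom_instrument: bool      # professionally developed for this organization
    raters_trained: bool         # raters received rater training
    min_raters_per_group: int    # e.g., all direct reports included
    followup_required: bool      # ratees must follow up with their raters
    participation_rate: float    # fraction of leaders in the unit participating

def built_to_fail(design: Design360) -> list[str]:
    """Return which of the four features (as operationalized above) are missing."""
    gaps = []
    if not design.custom_instrument:
        gaps.append("reliable measurement: no custom-designed instrument")
    if not design.raters_trained or design.min_raters_per_group < 3:
        gaps.append("credible data: untrained or too few raters")
    if not design.followup_required:
        gaps.append("accountability: no required follow-up with raters")
    if design.participation_rate < 1.0:
        gaps.append("census participation: some leaders opted out")
    return gaps

# Example: a "development only" process with optional participation
print(built_to_fail(Design360(True, False, 2, False, 0.6)))
```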

We are learning more and more about the facets that increase the probability that behavior change will occur as a result of 360 feedback. Yet all too often these features are not built into the process, and practitioners are surprised (“shocked, shocked!”) when it doesn’t produce the desired results.

Sandra then goes on to state: “I have found 360-degree feedback worked best when the person being rated was open to the process, when the company communicated its purpose clearly, and used it for development purposes.” I assume she means “development only,” since all 360’s are developmental. I definitely disagree with that feature. 360’s run for “development (only)” purposes usually violate one or more of the four features Dale and I propose, particularly accountability. They often fail to generate credible data because too few raters are used, falling short of even the basic best practice of including all direct reports.

The part about “being open to the process” is where I get to the flip side of my record, i.e., “don’t hurt my feelings.” In one (and only one) way, this makes sense: if a ratee doesn’t want to be in a development-only process, then by all means don’t force them; it is a waste of time and money. On the other hand, in my opinion development-only processes are a waste of money for most people. (And, by the way, “development only” is very rare if it means that no decisions are being made as a result.)

But if we ARE expecting some ROI (such as sustained behavior change) from our 360’s, then letting some people opt out so their feelings aren’t hurt is totally contrary to helping the organization manage its leadership cadre. Intuitively, we should expect that those who opt out are the leaders who need it the most, who know they are not effective and/or are afraid of being “discovered” as the bullies, jerks, and downright psychopaths we know exist out there.

I suspect this fear of telling leaders that they are less than perfect stems from a troubling trend in our culture where everyone has to succeed. I think the whole “strengths” movement is a sign of that.

Over the last couple of weeks, I have seen a few things that further sensitized me to this phenomenon. One big one is this article in The Atlantic: http://www.theatlantic.com/magazine/archive/2011/07/how-to-land-your-kid-in-therapy/8555/1/. Protecting our children from failure is not working. Protecting our leaders from failure is likewise dooming our organizations.

I swear I never watch America’s Funniest Home Videos, but during a rain delay of a baseball game recently I stumbled upon it and succumbed. AFV is all about failure, and I’m not so sure that people always learn from those failures. But one video I enjoyed showed a 2-year-old boy trying to pour apple juice from a BIG bottle into a cup. He put the cup on the floor and missed completely the first two times (with the corresponding huge mess). As a parent and grandparent, I was quite amazed that the person behind the camera just let it happen. But on the third try the task was accomplished, followed by applause and smiles! A huge amount of learning occurred in just a minute or two because the adults allowed it to happen, at the cost of a bit of a mess to clean up.

How many of us would have just poured the juice for him? His learning isn’t over; he will make more mistakes and miss the cup occasionally. But don’t we all.

As a parting note, Dale and I support census participation for a number of reasons, one of which is the point I have already made about otherwise missing the leaders who need it most. We also see 360’s as a powerful tool for organizational change, and changing some leaders and not others does not support that objective. Having all leaders participate is tangible evidence that the process has organizational support and is valued. Finally, it creates a level playing field for all leaders for both evaluation and development, communicating to ALL employees what the organization expects from its leaders.

©2011 David W. Bracken

On the Road… and Web and Print


I have a few events coming up in the next three weeks or so that I would like to bring to your collective attention in case you have some interest. One is free, two are not (though I receive no remuneration). I also have a new co-authored article out on 360 feedback.

In chronological order: on May 25, Allan Church, VP Global Talent Development at PepsiCo, and I will lead a seminar titled “Integrating 360 & Upward Feedback into Performance and Rewards Systems” at the 2011 WorldatWork Conference in San Diego (www.worldatwork.org/sandiego2011). I will offer some general observations on the appropriateness, challenges, and potential benefits of using 360 Feedback for decision making, such as performance management. The audience will be very interested in Allan’s descriptions of his experiences with past and current processes that have used 360 and Upward Feedback for both developmental and decision-making purposes.

On June 8, I am looking forward to conducting a half-day workshop for the Personnel Testing Council of Metropolitan Washington (PTCMW) in Arlington, VA, titled “360-Degree Assessments: Make the Right Decisions and Create Sustainable Change” (contact Training.PTCMW@GMAIL.COM or go to WWW.PTCMW.ORG). The workshop is open to the public and costs $50. I will be building on the workshop Carol Jenkins and I conducted at the Society for Industrial and Organizational Psychology (SIOP) conference. The word “assessments” in the title foreshadows a greater emphasis on the use of 360 Feedback in a decision-making context, and an audience that is expected to have great interest in questions of validity and measurement.

On the following day, June 9 (at 3:30 PM EDT), I will be part of an online virtual conference organized by the Institute of Human Resources and hr.com on performance management. My webinar is titled, “Using 360 Feedback in Performance Management: The Debate and Decisions,” where the “decisions” part has multiple meanings. Given the earlier two sessions I described, it should be clear that I am a proponent of using 360/Upward Feedback for decision making under the right conditions. The other take on “decisions” is the multitude of decisions that are required to create those “right conditions” in the design and implementation of a multisource process.

On that note, I am proud to say that Dale Rose and I have a new article in the Journal of Business and Psychology (June) titled, “When does 360-degree feedback create behavior change? And how would we know it when it does?” Our effort is largely an attempt to identify the critical design factors in creating 360 processes and the associated research needs.

This article is part of a special research issue (http://springerlink.com/content/w44772764751/) of JBP and you will have to pay for a copy unless you have a subscription. As a tease, here is the abstract:

360-degree feedback has great promise as a method for creating both behavior change and organization change, yet research demonstrating results to this effect has been mixed. The mixed results are, at least in part, because of the high degree of variation in design features across 360 processes. We identify four characteristics of a 360 process that are required to successfully create organization change, (1) relevant content, (2) credible data, (3) accountability, and (4) census participation, and cite the important research issues in each of those areas relative to design decisions. In addition, when behavior change is created, the data must be sufficiently reliable to detect it, and we highlight current and needed research in the measurement domain, using response scale research as a prime example.
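On the abstract’s closing point, that the data must be reliable enough to detect change, here is a minimal sketch of one standard reliability check, Cronbach’s alpha, computed from scratch on made-up ratings (the data and the scale are assumptions for illustration only):

```python
import numpy as np

def cronbach_alpha(ratings: np.ndarray) -> float:
    """Cronbach's alpha for a raters-by-items matrix of ratings."""
    k = ratings.shape[1]                                # number of items
    item_variances = ratings.var(axis=0, ddof=1).sum()  # sum of per-item variances
    total_variance = ratings.sum(axis=1).var(ddof=1)    # variance of total scores
    return (k / (k - 1)) * (1 - item_variances / total_variance)

# Made-up data: 8 raters x 4 items on a 5-point scale
rng = np.random.default_rng(0)
base = rng.integers(2, 6, size=(8, 1))   # each rater's overall impression
ratings = np.clip(base + rng.integers(-1, 2, size=(8, 4)), 1, 5).astype(float)
print(f"alpha = {cronbach_alpha(ratings):.2f}")
```

An instrument with low alpha will bury a real change of a few tenths of a scale point in noise, which is exactly the measurement problem the article raises.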

Hope something here catches your eye/ear!

©2011 David W. Bracken

What is the ROI for 360’s?


Tracy Maylett recently started a discussion in the LinkedIn 360 Feedback Surveys group by asking, “Can you show ROI on 360-degree feedback processes?” To date, no one has offered up any examples, which causes me to reflect on the topic. It will also be part of the discussion Carol Jenkins and I will lead at the Society for Industrial and Organizational Psychology (SIOP) Pre-Conference Workshop on 360 Feedback (April 13 in Chicago; see www.siop.org).

Here are some thoughts on the challenges in demonstrating ROI with 360 processes:

1) It is almost impossible to assess the value of behavior change. Whether we use actual measurements (e.g., test-retest) or observer estimations of ratee change, assigning a dollar value is extremely difficult. My experience is that, no matter what methodology you use, the resulting figures are often so large that consumers (e.g., senior management) question and discount the findings.

2) The targets for change are limited, by design. A commonly accepted best practice for 360’s is to guide participants in using the data to focus on 2-3 behaviors/competencies. If some overall measure of behavior change is used (e.g., the average of all items in the model/questionnaire), then we should expect negligible results, since the vast majority of behaviors were never addressed in the action planning (development) process (see the arithmetic sketch after this list).

3) The diversity of behaviors/competencies means they differ in ease of change (e.g., short-term vs. long-term) and in value to the organization. For example, what might be the ROI for significant change (positive or negative) in ethical behavior compared to communication? Each is very important, but with very different implications for measuring ROI.

4) Measurable change depends on the design characteristics of each 360 process. I have suggested in earlier blogs that some design decisions are potentially so powerful as to promote or negate behavior change. One source for that statement is the article by Goldsmith and Morgan called “Leadership is a contact sport,” which can be found on www.marshallgoldsmith.com. In that article (which I have also mentioned before), they share results from hundreds of global companies and thousands of leaders that strongly support the conclusion that follow-up with raters may be the single best predictor of observed behavior change.
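To make the dilution problem in point 2 concrete, here is a back-of-the-envelope sketch with assumed numbers: if a ratee targets 3 of 30 rated behaviors and genuinely improves each by half a scale point, the all-item average barely moves.

```python
# Assumed numbers, for illustration only
n_items, n_targeted, gain_per_item = 30, 3, 0.5   # gain in scale points per targeted item

overall_change = (n_targeted * gain_per_item) / n_items
print(f"Average change across all {n_items} items: {overall_change:.3f} scale points")
# -> 0.050: real, targeted improvement that an all-item average would report as "no effect"
```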

Dale Rose and I have an article in press with the Journal of Business and Psychology titled, “When does 360-degree feedback create behavior change? And how would we know it when it does?” One of our major objectives in that article is to challenge blanket statements about the effectiveness of 360 processes, since there are so many factors that directly impact the power of the system to create the desired outcomes. The article covers some of those design factors and the research (or lack thereof) associated with them.

If anyone says, for example, that a 360 process (or a cluster of them, as in a meta-analysis) shows minimal or no impact, my first question would be, “Were the participants required to follow up with their raters?” I would also ask about the reliability of the instrument, the training of raters, and accountability, as a starter list of factors whose absence can prevent behavior change from occurring and/or from being measured.

Tracy’s question regarding ROI is an excellent one, and we should be held accountable for producing results. That said, we should not be held accountable for ROI when the process has fatal flaws in design that almost certainly will result in failure and even negative ROI.

©2011 David W. Bracken

Maybe Purpose Doesn’t Matter?


While there are many discussions and debates within the 360 Feedback community (including one on LinkedIn about randomizing items that I will address in a later blog), probably none is more intense and enduring than the issue of the proper use of 360 results. In The Handbook of Multisource Feedback, a whole chapter (by Manny London) was dedicated to “The Great Debate” over using 360 for developmental vs. decision-making purposes. In fact, in the late ’90s an entire book was published by the Center for Creative Leadership based on a debate I organized at SIOP.

I have argued in earlier blogs and other forums that this “either/or” choice is a false one for many reasons. For example, even “development only” uses require decisions that affect personal and organizational outcomes and resources. And even when 360 is used for decisions (including succession planning, staffing, promotions, and, yes, performance management), there is always a development component.

One of the aggravating blanket statements used by the “development only” crowd is that respondents will not be honest if they believe the results will be used to make decisions that might be detrimental to the ratee, resulting in inflated scores with less variability. That is by far the most common argument from the “development only” proponents, and one that is indeed supported by some research studies.

I have just become aware of an article published three years ago in the Journal of Business and Psychology (JBP) on multisource feedback, titled “Factors Influencing Employee Intentions to Provide Honest Upward Feedback Ratings” (Smith and Fortunato, 2008). For those of you who are not familiar with JBP, it is a refereed journal of high quality that should be on your radar and, in full disclosure, one for which I am an occasional reviewer.

The study was conducted at a behavioral health center with a final sample of 203 respondents. The employees filled out a questionnaire about various aspects of an upward feedback process that was soon to be implemented.

The article is fairly technical and targeted toward the industrial/organizational community. I have pulled out one figure for the geeks in the audience to consume if desired (click on “360 Figure”) . But let me summarize the findings of the study.

The outcome (dependent variable) of primary interest to the researchers is foreshadowed in the title: what factors lead to intentions to respond honestly in ratings of a supervisor (upward feedback)? The most surprising result (as highlighted in the authors’ discussion) was that purpose (administrative versus developmental) had no predictive value at all! Of all the predictor variables measured, it was the least influential, with no practical or statistical significance.

What does predict intentions to provide honest feedback? One major predictor is the level of cynicism: as you might guess, cynical attitudes result in less honesty. The study suggests that cynical employees fear retaliation by supervisors and are less likely to believe that the stated purpose will be followed. The authors suggest that support and visible participation by senior leaders might help reduce these negative attitudes. We also need to continue to protect both real and perceived confidentiality, and to have processes to identify cases of retaliation and hold the offending parties accountable.

The other major factor is what I would label rater self-confidence, i.e., raters’ confidence in their ability as feedback providers. Raters need to feel that their input is appropriate and valued, that they know how the process will work, and that they have sufficient opportunity to observe. The authors appropriately point to rater training as a way to accomplish these outcomes. They do not mention rater selection as an important determinant of opportunity to observe, but it is obviously a major factor in ensuring that the best raters are chosen.
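To illustrate the shape of this kind of finding, here is a small regression sketch on simulated data (my invented numbers, emphatically NOT Smith and Fortunato’s dataset), in which a purpose variable with a true effect of zero comes out with a near-zero coefficient while cynicism and rater self-confidence carry the predictive weight:

```python
import numpy as np

# Simulated data in the spirit of the study's design; NOT the authors' data.
rng = np.random.default_rng(1)
n = 203                                                   # matches the study's sample size
cynicism = rng.normal(size=n)
self_confidence = rng.normal(size=n)
purpose_admin = rng.integers(0, 2, size=n).astype(float)  # 1 = administrative purpose

# Assumed true effects: cynicism hurts honesty, self-confidence helps, purpose does nothing
honesty = -0.4 * cynicism + 0.5 * self_confidence + 0.0 * purpose_admin \
          + rng.normal(scale=0.7, size=n)

X = np.column_stack([np.ones(n), cynicism, self_confidence, purpose_admin])
coefs, *_ = np.linalg.lstsq(X, honesty, rcond=None)
for name, b in zip(["intercept", "cynicism", "self_confidence", "purpose_admin"], coefs):
    print(f"{name:16s} {b:+.2f}")   # purpose_admin lands near 0.00
```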

One suggestion the authors make (seemingly out of context), purported to improve the honesty of the feedback, is to use reverse-worded items to keep raters from simply choosing socially desirable responses (e.g., Strongly Agree). I totally disagree with practices such as reverse wording and randomization, which may actually reduce the reliability of the instrument (unless the purpose is research only). For example, at our SIOP workshop, Carol Jenkins and I will be showing an actual 360 report that uses both methods (reverse wording and randomization). In this report (which Carol had to try to interpret for a client), the manager (“boss”) of the ratee had given the same response (Agree) to two versions of the same item, one of which was reverse-scored. In other words, the manager agreed that the ratee was both doing and not doing the same thing.
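As an aside, this kind of contradiction is easy to flag mechanically. A minimal sketch, assuming a 5-point agreement scale and a hypothetical item pairing (neither comes from the report in question):

```python
# Assumed scale: 1 = Strongly Disagree ... 4 = Agree, 5 = Strongly Agree
AGREE = 4

def inconsistent_pairs(responses: dict[str, int],
                       reversed_pairs: list[tuple[str, str]]) -> list[tuple[str, str]]:
    """Return item pairs where the rater agreed with both an item and its reverse-worded twin."""
    return [(item, twin) for item, twin in reversed_pairs
            if responses.get(item, 0) >= AGREE and responses.get(twin, 0) >= AGREE]

# Hypothetical item pairing and the manager's contradictory responses
pairs = [("communicates_openly", "withholds_information")]
manager = {"communicates_openly": 4, "withholds_information": 4}
print(inconsistent_pairs(manager, pairs))   # -> flags the contradiction
```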

Now what? The authors of this study seem to suggest that situations like this would invalidate the input of this manager, arguably the most important rater of all. We could simply contact the manager and try to clarify his/her input, but the only reason we know of the situation is that the manager is not anonymous (and knows that going into the rating process). If the same rating inconsistency occurs in other rater groups, it is almost impossible to rectify, since those raters are anonymous and confidential (hopefully).

This is only one study, though a well-designed and well-analyzed one in a respected journal. I will not say that this study proves that purpose has no effect on honesty. Nor should anyone say that other studies prove that purpose does affect honesty. To be clear, I have always said that it may be appropriate to use 360 results in decision making under the right conditions, conditions that are admittedly often difficult to achieve. This is in contrast to some practitioners who contend that it is never appropriate to do so, under any conditions.

Someday, when I address the subject of organizational readiness, I will recall the survey used in this research, which was administered in anticipation of implementing an upward feedback process. This brief (31-item) instrument would be a great tool for assessing readiness in any 360 system.

One contribution of this research is to point out that the intention to be honest is as much a characteristic of the process as of the person. In this context, honesty is a changeable behavior, amenable to training, communication, and practice. Blanket statements about rater behavior and about how a 360 program should or shouldn’t be used are not productive.

360 Figure

©2011 David W. Bracken