Strategic 360s

Making feedback matter

Archive for March 2011

What is the ROI for 360s?



Tracy Maylett recently started a LinkedIn discussion in the 360 Feedback Surveys group by asking, “Can you show ROI on 360-degree feedback processes?” To date, no one has offered up any examples, which causes me to reflect on this topic. It will also be part of the discussion Carol Jenkins and I are leading at the Society for Industrial and Organizational Psychology (SIOP) Pre-Conference Workshop on 360 Feedback (April 13 in Chicago).

Here are some thoughts on the challenges of demonstrating ROI for 360 processes:

1) It is almost impossible to assess the value of behavior change. Whether we use actual measurements (e.g., test-retest) or observer estimates of ratee change, assigning a dollar value is extremely difficult. In my experience, no matter what methodology you use, the resulting dollar figures are often so large that consumers (e.g., senior management) question and discount the findings.

2) The targets for change are limited, by design. A commonly accepted best practice for 360s is to guide participants in using the data to focus on two to three behaviors/competencies. If some overall measure of behavior change is used (e.g., the average of all items in the model/questionnaire), then we should expect negligible results, since the vast majority of behaviors have not been addressed in the action planning (development) process.

3) Behaviors/competencies are diverse, which means they differ in ease of change (e.g., short- vs. long-term change) and in their value to the organization. For example, what might be the ROI of significant change (positive or negative) in ethical behavior compared to communication? Each is very important, but with very different implications for measuring ROI.

4) Measurable change depends on the design characteristics of each 360 process. I have suggested in earlier blogs that some design decisions are potentially so powerful as to promote or negate behavior change. One source for that statement is the article by Goldsmith and Morgan called “Leadership is a contact sport.” In this article (which I have also mentioned before), they share results from hundreds of global companies and thousands of leaders that strongly support the conclusion that follow-up with raters may be the single best predictor of observed behavior change.

Dale Rose and I have an article in press at the Journal of Business and Psychology titled “When does 360-degree feedback create behavior change? And would we know it when it does?” One of our major objectives in that article is to challenge blanket statements about the effectiveness of 360 processes, since so many factors directly impact the power of the system to create the desired outcomes. The article covers some of those design factors and the research (or lack thereof) associated with them.

If anyone says, for example, that a 360 process (or a cluster of them, such as in a meta-analysis) shows minimal or no impact, my first question would be, “Were the participants required to follow up with their raters?” I would also ask about the reliability of the instrument, the training of raters, and accountability, as a starter list of factors that can result in failure to create and/or measure behavior change.

Tracy’s question regarding ROI is an excellent one, and we should be held accountable for producing results. That said, we should not be held accountable for ROI when the process has fatal flaws in design that almost certainly will result in failure and even negative ROI.

©2011 David W. Bracken

What is normal?



My good friend Jon Low brought a WSJ article to my attention that delves into the question of which behaviors are “normal” (i.e., tolerated or even encouraged) across different organizations, in this case sorted by industry. Here are a couple of brief excerpts from the article:

Fuld & Co., a competitive-intelligence consultant based in Cambridge, Mass., presented 104 business executives with hypothetical scenarios that would give the executive an opportunity to collect intel about a competitor, but straddled the ethical line. Participants could rate the scenario as “normal,” “aggressive,” “unethical,” or “illegal.”

“Companies have different senses of what’s right and wrong,” said Fuld & Co. President Leonard Fuld.

Executives in financial services and technology are the most cutthroat in collecting intelligence about competitors, while pharmaceutical executives and government officials are the most trepid, according to a recent survey.

What came to my mind in reading this interesting study was the question of the utility of comparing leadership behavior across organizations as we use the results of 360 Feedback processes to guide the development priorities and, sometimes, make other decisions based on the data as well.  Specifically, how useful are external norms as part of 360 reports?

First, a brief digression. I have proposed lately that many discussions of best practices in the 360 arena should be divided into two categories: processes where “N=1” (i.e., ad hoc, single-person administrations) vs. “N>1” (i.e., where more than one leader is going through the 360 experience at the same time). For “N=1” situations, using an off-the-shelf instrument usually makes sense, and those instruments usually have external norms, since the content is held constant across all users; usually there are no internal norms at all. That said, the points I make below highlight the need for caution in using external norms in any setting. End of digression.

I frequently use a quote from the book Execution (Bossidy and Charan, 2003):

“The culture of a company is the behavior of its leaders. Leaders get the behavior they exhibit and tolerate.”

I do not recall ever hearing anyone refute the notion that every organization has its own culture. If we accept that axiom, it follows from Bossidy and Charan that the definition of successful leadership behavior should vary across organizations as well.

If you agree with that train of thought, then it seems to follow that using external norms in 360s makes no sense. For starters, requiring external norms severely constrains the content of the instrument, since the organization must use the exact wording and response scale of the standard questionnaire.

Using external norms also flies in the face of the argument that the uniqueness of an organization (and its culture) is a source of competitive advantage.  I (and others) also argue that uniquely relevant 360 content greatly helps in creating motivation for the raters and for helping ratees accept the feedback as being important and relevant to their success.

360s can be a powerful way to create culture change. Especially when used across the whole organization, the behavioral descriptors “bring the culture to life” and communicate to all employees what it takes to be a successful member of the organization and what they should expect from their leaders. Administration across the organization will quickly generate internal norms that can be an extremely powerful tool in helping leaders understand how they compare to their peers and, in some organizations, help identify outliers at the low end who may require special attention. Conversely, leaders in the top five percent may be used as role models.
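To make the internal-norm idea concrete, here is a minimal sketch (in Python) of how percentile ranks computed against internal peers might flag low-end outliers and top role models. The leader names, the 1-5 rating scale, and the 20th/95th percentile cutoffs are all illustrative assumptions, not part of any actual 360 process described here.

```python
# Hypothetical sketch: building internal norms from 360 results.
# Each leader has a mean rating across their items (assumed 1-5 scale).

def percentile_rank(score, all_scores):
    """Percent of peer scores at or below this score."""
    at_or_below = sum(1 for s in all_scores if s <= score)
    return 100.0 * at_or_below / len(all_scores)

# Made-up mean ratings for ten leaders (illustrative data only).
leader_means = {
    "A": 4.6, "B": 3.9, "C": 2.8, "D": 4.1, "E": 3.5,
    "F": 4.4, "G": 3.0, "H": 3.8, "I": 4.0, "J": 2.5,
}

scores = list(leader_means.values())
internal_norms = {name: percentile_rank(m, scores)
                  for name, m in leader_means.items()}

# Flag the low-end outliers and top role models described above,
# using assumed cutoffs of the 20th and 95th percentiles.
needs_attention = [n for n, p in internal_norms.items() if p <= 20]
role_models = [n for n, p in internal_norms.items() if p >= 95]
```

With the made-up data above, the two lowest-rated leaders fall at or below the 20th percentile and would be flagged for special attention, while only the single highest-rated leader reaches the 95th percentile. The point of the sketch is that these norms come entirely from within the organization; no external questionnaire or external comparison group is needed.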

Returning to the study cited in the WSJ, I would have thought that ethical behavior would be one area with some consistency across organizations, at least within the same culture (e.g., Western culture). Silly me. If we find significant variance in behavioral expectations across organizations and/or industries in something as basic as ethical behavior, then we should not be surprised to find differences in other categories of behavior as well.

Do you believe that each organization has a unique culture? If so, using external norms in your 360 probably doesn’t make sense.

©2011 David W. Bracken

Has Anything Changed in 10 Years?



2011 marks the 10th anniversary of the publication of The Handbook of Multisource Feedback. To mark the occasion, we have convened a panel of contributors to The Handbook for a SIOP (Society for Industrial and Organizational Psychology) session to discuss how the field of 360 has changed (and not changed) in those 10 years. Panel members will include the editors (Carol Timmreck, who will moderate; Allan Church; and myself), James Farr, Manny London, David Peterson, Bob Jako, and Janine Waclawski.

In a “good news/bad news” kind of way, we frequently get feedback from practitioners who still use The Handbook as a reference. In that way, it seems to be holding up well (the good news). The “bad news” might be that not much has changed in 10 years and the field is not moving forward.

Maybe the most obvious changes have been in technology, again for good and bad. One of the many debates in this field is whether putting 360 technology in the hands of inexperienced users is really such a great idea. That said, it is happening, and it does have potential benefits in cost and responsiveness.

Besides technology, how else has the field of 360 feedback progressed or regressed in the last decade?

I will get the ball rolling by offering two pet peeves:

1) The lack of advancement in the development and use of rater training as a best practice, and

2) The ongoing application of a testing mindset to 360 processes.

Your thoughts?

©2011 David W. Bracken