Strategic 360s

Making feedback matter


The Key to Feedback with Dignity


Kris Duggan has another fine article in Fast Company titled, “Six Companies That Are Redefining Performance Management,” with the six being GE, Cargill, Eli Lilly, Accenture, Adobe and Google.  The common denominator is their deemphasis (or even total abandonment) of the formal appraisal process and a greater focus on more frequent feedback and development, presumably via the manager/supervisor. Each organization has its own approach to accomplishing that, and the jury is still out, though a couple of them are farther along and some preliminary results are coming in.

Kris characterizes the common denominator of these six approaches using these words:

They’re all switching their focus from dictating what employees should do at work to helping develop their skills as individuals.

Wow!  There are a couple of words in that sentence that are really thought-provoking and, in my opinion, taking this discussion in the wrong direction. The first (not in order) is “dictating.”  Since when did organizations abdicate the right (let alone the need) to “dictate” to their employees what to do? Using less pejorative words than “dictate,” we call it directing, guiding, managing, leading, and/or aligning.  Reading the word “dictate” makes me feel as if I have been taken back to the days of the union boss ranting against the evils of the management empire that has “taken away our rights and humanity,” or something to that effect.

In a couple of my earlier blogs, including my last one, also inspired by a Duggan article, I used the ALAMO model, where the first “A” stands for Alignment, the most powerful variable in the performance equation because it can be both positive and negative. People need and expect alignment. Values are a form of alignment, guiding behavior. Goals help create alignment.

In that same blog, I proposed that there is a time and place for Directing and a time for Guiding. Both are forms of Alignment, but they use different styles for different situations. Just within the last 24 hours I heard a former professional football player say that the biggest difference between college and pro football is that in college you are told what to do; in the pros, you are told why you need to do it.

On February 26 I will be giving a talk at the annual conference of the Society of Psychologists in Management (SPIM) in Atlanta titled, “Create a Feedback Culture, Create Change, Maintain Dignity.”  The “dignity” aspect of the talk is very relevant to this topic of alignment. From one angle, we show dignity to our employees by respecting them enough to provide a clear understanding of their role, their responsibilities, and how successful performance is defined. And, again, this is in terms of both tangible and intangible (behavioral) accomplishments.

I don’t agree that we protect an employee’s dignity by shielding them from negative feedback, as some would propose. But I will talk about that more at SPIM.

Very importantly, we can and should protect the dignity of the employee by placing accountability on feedback providers and designers of feedback systems to require that feedback is job related, i.e., aligned with factors that are important to the organization, not just the whimsical thoughts of individuals (at any level) who might be given free rein to inflict “feedback.”  What comes to mind are the Amazon stories reported in the NY Times about open feedback systems where employees could give anonymous comments that were, in some cases, very damaging and not job related, reportedly causing some employees to leave the company.

The second word that Kris uses in the quote that I question is “switching.” The implication is that we can’t have it both ways, i.e., that we have to give up alignment in order to have feedback and development. Maybe the most important message in the ALAMO model is that feedback and development without alignment may be worthless or even counterproductive (i.e., drawing resources away from the organization with no return).

Some may call it dictating when we set expectations as to what the organization needs from employees in order for them to be successful members. I would rather call it alignment.  But whatever you call it, your feedback and development processes need to have it.  Feedback without alignment may not only be irrelevant; it may also take away our dignity.

There Are “Right” Answers



For those of you who might attend the next SIOP (Society for Industrial and Organizational Psychology) Conference in Chicago in April, I am pleased to note that we have been accepted to conduct a panel consisting of contributors to The Handbook of Multisource Feedback, which is approaching its 10th anniversary of publication. The panel is titled, “How has 360 degree Feedback evolved over the last 10 years?”  Panel members include Allan Church, Carol Timmreck, Janine Waclawski, David Peterson, James Farr, Manny London, Bob Jako and myself.

We received a number of thoughtful, useful comments and suggestions from the reviewers of the proposal, one of which stated this:

I would like to see a serious discussion of whether or not 360 is a singular practice. It seems as though 360 can be used with so many different interventions (succession, development, training needs analysis, supplement to coaching, …the list is HUGE) that when we say something like “is 360 legal” it is almost impossible to answer without many caveats regarding the details of the specific 360 process that was used. It’s almost as though we need to move on from ‘is 360 xyz’ to ‘if we do 360 this way, we get these outcomes and if we do 360 that way we get those outcomes.’ Can’t wait to hear the panel, this is much needed.

This is an extremely insightful observation. I have broached this topic in earlier blogs regarding alignment of purpose and decisions in design and implementation.  But there are some things that are required regardless of purpose.

To look at extremes, we might consider 360 processes where N=1, i.e., where a single leader is given the opportunity to get developmental feedback. This is often in preparation for an experience such as a leadership development/training program or a high-potential program.  In these instances, it is an ad hoc process where an off-the-shelf instrument may be most practical. The instrument can be lengthy since raters will only have to fill it out one time. And typically there are major resources available to the participant in the form of coaches, trainers, and/or HR partners to ensure that the feedback is interpreted and used productively.

Compare the N=1 scenario to the N>1 process. By N>1, I use shorthand to indicate 360 processes that are applied across some segment of the population, such as a function, department, or entire organization. In these cases, it becomes much more important to have a custom designed instrument that reflects unique organization requirements (competencies, behaviors) that can create system change while simultaneously defining effective leadership to raters and ratees alike. The process requires some efficiencies due to many raters being involved, and some being asked to complete multiple forms.  We also need to plan for ways to support the many ratees in their use of the feedback.

BUT, we might also say that some things are so basic as to be necessary whether N=1 or N>1.  Just this week I was sent an interview of Cindy McCauley of the Center for Creative Leadership. Many readers will already know who Cindy is; if not, suffice it to say she is highly respected in our field and has deep expertise in 360 Feedback. (In fact, she contributed a chapter to the book, “Should 360 Feedback Be Used Only for Development Purposes?” that I was also involved with.) In this interview, Cindy makes some important points about basic requirements for reliability and validity that I interpret to be applicable to all 360 processes.

What really caught my attention was this statement by Cindy:

…the scores the managers receive back mean a lot to them. They take them very seriously and are asked to make decisions and development plans based on those scores. So you want to be sure that you can rely on those scores, that they’re consistent and reflect some kind of accuracy.

I take the liberty (which Cindy would probably not) of expanding the “make decisions” part of this statement to apply more broadly: others (such as the leader’s manager) also use the feedback to make decisions. When she says that managers make decisions based on their feedback, what decisions can they make without the support of the organization (in the person of their boss, most typically)? This is basically the crux of my argument that there is no such thing as a “development only” process. Development requires decisions and the commitment of organization resources. This only reinforces her point about the importance of valid and reliable measurement.

So what’s my point? My point is that I believe too many ad hoc (N=1) 360 processes fall short of meeting these requirements for validity and reliability. Another debate for another time is whether off-the-shelf instruments have sufficient validity to measure unique organization requirements.  I do believe it is accurate to say that reliable measurement is often neglected in ad hoc processes when decisions are made about the number of raters and the quality of ratings.

For example, research indicates that raters have different “agendas” and that subordinates are the least reliable feedback providers, followed by peers and then managers. Lack of reliability can be combated in at least two ways: rater training and number of raters. We can put aside rater training (beyond having good instructions); it rarely happens despite its power and utility.

So we can improve reliability with numbers. In fact, this is really why 360 data is superior to traditional, single-source evaluations (i.e., performance appraisals).  For N>1 processes, I STRONGLY recommend that all direct reports (subordinates) participate as raters. This has multiple benefits, including beefing up the number of raters for the most unreliable rater group. For peers, I recommend aiming for 5-7 respondents.
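To make the “reliability with numbers” point concrete, here is a minimal sketch of my own (an illustration, not an analysis from the research cited above) using the classical Spearman-Brown prophecy formula, which projects how the reliability of an averaged rating grows with the number of raters. The single-rater reliability of 0.30 is a hypothetical value chosen purely for illustration.

```python
def spearman_brown(r_single: float, k: int) -> float:
    """Projected reliability of the average of k raters,
    given the reliability r_single of a single rater
    (classical Spearman-Brown prophecy formula)."""
    return (k * r_single) / (1 + (k - 1) * r_single)

# Hypothetical single-rater reliability for a rater group:
r = 0.30
for k in (1, 3, 5, 7):
    print(f"{k} raters -> projected reliability {spearman_brown(r, k):.2f}")
# 1 rater  -> 0.30
# 3 raters -> 0.56
# 5 raters -> 0.68
# 7 raters -> 0.75
```

The curve flattens as raters are added, which is one way to see why a handful of peers (e.g., 5-7) buys most of the available gain, while a single rater leaves scores too noisy to support decisions.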

My contention is that the majority of ad hoc (N=1) processes do not adhere to those guidelines. (I have no data to support that assertion, just observation.)  The problem of unreliable data due to an inadequate number of raters is compounded by the fact that the consequences of decisions based on that flawed data are magnified by the senior level of the leaders and the considerable organization resources devoted to their development.

When I started writing this blog, I was thinking of the title, “There Is No ‘Right Answer,’” meaning that decisions need to fit the purpose. But actually there are some “right answers” that apply regardless of purpose. Don’t let the “development only” argument lead to implementation decisions that reduce the reliability and validity of the feedback. In fact, many guidelines should apply to all 360 processes, whether N=1 or N>1.

©2011 David W. Bracken