Strategic 360s

Making feedback matter

Archive for the ‘reliable measurement’ Category

Strategic New Year!!


2018 will be a seminal year for Strategic 360 Degree Feedback for several reasons.  To refresh your collective memories, in a previous post I defined it as having these characteristics:

  • The content must be derived from the organization’s strategy and values, which are unique to that organization. The values can be explicit (the ones that hang on the wall) or implicit (what some people call “culture”). To me, “strategic” and “off-the-shelf” form an oxymoron, and the two words cannot be used in the same sentence (though I just did).
  • Participation must be inclusive, i.e., a census of the leaders/managers in the organizational unit (e.g., total company, division, location, function, level). I say “leaders/managers” because a true 360 requires that subordinates are a rater group. One reason for this requirement is that I (and many others) believe 360’s, under the right circumstances, can be used to make personnel decisions and that usually requires comparing individuals, which, in turn, requires that everyone have available the same data. This requirement also enables us to use Strategic 360’s to create organizational change, as in “large scale change occurs when a lot of people change just a little.”
  • The process must be designed and implemented in such a way that the results are sufficiently reliable (we have already established content validity in requirement #1) that we can use them to make decisions about the leaders (as in #4). This is not an easy goal to achieve, even though benchmark studies continue to indicate that 360’s are the most commonly used form of assessment in both public and private sectors.
  • The results of Strategic 360’s are integrated with important talent management and development processes, such as leadership development and training, performance management, staffing (internal movement), succession planning, and high potential processes. Research indicates that properly implemented 360 results can be not only more reliable (in the statistical sense) than single-source ratings but also fairer to minorities, women, and older workers. Integration into HR systems also brings with it accountability, whether driven by the process or internally (self) driven because the leader knows that the results matter.

For this past year, I have teamed with Allan Church, John Fleenor and Dale Rose to recruit an all-star roster of practitioners in our field to contribute chapters for an edited book, The Handbook of Strategic 360 Feedback (Oxford University Press). Though a continuation of many of the themes covered in The Handbook of Multisource Feedback (Bracken, Timmreck, & Church, 2001), this Handbook will have more of a practitioner focus with several case studies and new trends in this field.

The four of us will also host a panel discussion at the Annual Conference of the Society for Industrial and Organizational Psychology (SIOP) in Chicago on April 19 at Noon. Joined by Michael Campion and Janine Waclawski (PepsiCo), we will present our learnings and observations from assembling the thirty-chapter volume.

The 3D Group and PepsiCo will also host another in our series of semi-annual meetings of the Strategic 360 Forum, a consortium of organizations that use 360 Feedback for strategic purposes and are interested in sharing best practices.  This full day meeting will be held in Chicago on April 17 with several Handbook contributors leading discussions on various topics. For more information, go to the 3D Group website.

Finally, Strategic 360 Feedback will continue to be the most powerful tool in our kit for reliably measuring leadership behaviors that form the basis for engagement, motivation, productivity and retention. Using 360’s, we can create culture change and develop leaders by defining, measuring, and holding leaders accountable for behaving consistently with organizational goals and values.

Have a Strategic New Year!

David Bracken

Our Responsibility to Help Organizations Make Good Decisions


Here are two pieces on performance management that surfaced today that motivated and informed this blog entry:

I was asked by a high school teacher to visit his class and talk to them about my profession, that is, just what does an I/O Psychologist do?  I find that a lot of us in this field struggle with a concise answer to that question, perhaps because we touch so many different parts of the interface between people and organizations.

For the purpose of this 30 minute time with the class of juniors, I landed on a common denominator for the practice of our trade: helping organizations make decisions about people. The obvious starting point is the major role we play in helping organizations decide which people to hire or not, though some of us do get involved in the employment life cycle even before that (e.g., during recruitment and advertising to draw applicants).

Moving on from employment decisions, we can move through all sorts of stages in the career of an employee where decisions are being made (and they are making decisions as well), and wouldn’t it be nice if those decisions were made based on criteria that are “valid” (to use our lingo), fair, and transparent.  And, I told them, that is a major contribution we as I/O Psychologists bring to the process, using science and experience for the benefit of both the employee and the organization to increase the probability that the decision leads to successful performance rather than being a random (e.g., flip of the coin, gut instinct, expeditious) choice.

This little discussion was a few years ago, and it came to mind now as I read some more articles on the ongoing discussion/debate regarding Performance Appraisal/Performance Management.  Depending on what version of a Performance Management Process (PMP) makes up your mental model, a PMP can have direct consequences for an employee. In the current discussion and debate on this topic, people are fretting (and rightly so) about the mechanics of evaluating an employee.  They (and we) also worry about other facets of the PMP that should include higher quality (and more frequent) interactions between the manager and his/her employees for both performance discussions and development conversations, with aspirations that such interactions happen more often than the once or twice a year that “formal” appraisal systems require.

One proposed solution to creating more frequent interactions between managers and employees is to get rid of the formal sessions, symbolically represented by the evil rating process.  One of the many problems this creates is to remove a source of information that the organization needs to make decisions about people.  It is our responsibility to give decision makers (at all levels) methods that provide reliable data. If the current PMP at an organization is not doing that, it is fixable, as suggested by Glen Kallas in his blog piece.  Dismantling the system does not help unless somehow that data can be generated by whatever is taking its place.  I don’t see that happening, at least in what I am reading.  If there are data being created in the alternate processes that involve more frequent interactions between managers and employees, then we have the same responsibility to ensure that information is as good or better than what it is replacing.

The Herena blog speaks to the many benefits of maintaining or even enhancing your PMP. Then she (and her CEO) go on to call for supplementing the PMP by making their managers into better “coaches,” which is fantastic, especially when supported from the top.  She doesn’t speak to the benefits of PMPs in terms of the data they produce, though the alignment benefit is extremely important and potentially lost when the system goes away.

If you agree that the organization needs reliable data to make decisions about people throughout the employment cycle, then no profession is better equipped than ours to provide it.  Arguing that the solution is to remove the data generator instead of fixing it seems irresponsible.

I was watching a documentary about George Harrison’s life, and they interviewed his second (and last) wife, Olivia.  They were married for 23 years until his death, and it was clear that their marriage, like many, had a lot of bumps (or whatever euphemism you want to use).  Her observation was that the secret to a long marriage is not getting divorced, which I took to mean not giving up when things are difficult.  Well, there are many reasons we should not be giving up (as Glen and Monique point out), and I hope I am adding one more reason to the mix.

We have a responsibility to help organizations make good decisions about people.  And there are decisions being made constantly, ranging from promotions to pay to job assignments, and even what developmental experience you get or don’t get.  What I suggested to those students is that there should be some comfort in knowing that there are people like us who are trying to create a level playing field and good information so that the decisions that affect them (many of which are life- and/or job-changing) are based on reliable information.  We need to consider that responsibility when we make or influence other types of decisions, including those decisions that reduce the quality of that data.  In other words, help organizations to not “divorce” their PMPs just because they might not be doing what we want them to do.

Written by David Bracken

January 29, 2016 at 6:50 pm

This Picture is Worth…?



This is the logo I have designed for my business, and it is something of an ambiguous figure (but hopefully not too ambiguous). Please take a few seconds and think about what you see in the context of our work.

Hopefully the main message is something around conflicting forces. In the business of change, whether it be individual, team, or organizational, as we attempt to create sustainable change we are always faced with opposing forces. So there are many opportunities to identify which forces are working in our favor and which are working against us.

The secondary design message I hoped to create is around the multiple triangles, or “Deltas,” that the arrows create. (How many do you see?) We use the Delta not only as a symbol of change but also as a measure of the amount of change. A major part of my business is not only to create sustainable change but to be able to reliably measure it, allowing comparisons of improvement over time as well as comparisons between individuals and organizations.

Or maybe you see a duck.

But what I want people to remember most are the Deltas and the message that change needs to be measurable and measured. Measures need numbers. Sometimes numbers are ratings. Ratings can be both reliable and valid. We can use ratings to compare scores if the scores are reliable.  Yes, it can be done.

So what do you think the picture is worth?