Strategic 360s

Making feedback matter

What Are “Strategic 360’s”?

A colleague recently asked me, “Exactly what is ‘Strategic 360 Feedback’?”  Heck, it’s only the name of this blog and of the consortium I have helped form, The Strategic 360 Forum (which meets for its fifth time in April). The concepts are also laid out pretty well in the article Dale Rose and I published in 2011 in the Journal of Business and Psychology (“When Does 360-Degree Feedback Create Behavior Change? And How Would We Know It When It Does?”).

As succinctly as I can muster, here are the four core requirements for “strategic” 360 feedback systems:

  1. The content must be derived from the organization’s strategy and values, which are unique to that organization. Those values can be explicit (the ones that hang on the wall) or implicit (what some people call “culture”). To me, “strategic” and “off-the-shelf” is an oxymoron; the two words cannot be used in the same sentence (though I just did).
  2. Participation must be inclusive, i.e., a census of the leaders/managers in the organizational unit (e.g., total company, division, location, function, level). I say “leaders/managers” because a true 360 requires that subordinates be a rater group. One reason for this requirement is that I (and many others) believe 360’s, under the right circumstances, can be used to make personnel decisions, and that usually requires comparing individuals, which, in turn, requires that the same data be available for everyone. This requirement also enables us to use Strategic 360’s to create organizational change, as in “large-scale change occurs when a lot of people change just a little.”
  3. The process must be designed and implemented in such a way that the results are sufficiently reliable (content validity having been established in requirement #1) that we can use them to make decisions about the leaders (as in #4). This is not an easy goal to achieve, even though benchmark studies continue to indicate that 360’s are the most commonly used form of assessment in both the public and private sectors.
  4. The results of Strategic 360’s are integrated with important talent management and development processes, such as leadership development and training, performance management, staffing (internal movement), succession planning, and high-potential identification. Research indicates that properly implemented 360 results can be not only more reliable (in the statistical sense) than single-source ratings, but also fairer to minorities, women, and older workers; the sketch after this list illustrates the reliability point. Integration into HR systems also brings accountability, whether driven by the process or internally (self) driven because the leader knows that the results matter.

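Since requirements #3 and #4 both lean on the reliability of pooled ratings, here is a minimal sketch of the standard Spearman-Brown logic behind the “more reliable than single-source ratings” claim. This is my illustration, not something from the article, and the single-rater reliability of .30 is an assumed value chosen only for the example:

```python
def spearman_brown(single_rater_rel: float, k: int) -> float:
    """Spearman-Brown prophecy: reliability of the average of k
    parallel raters, given the reliability of a single rater."""
    return k * single_rater_rel / (1 + (k - 1) * single_rater_rel)

# Assumed value for illustration: one rater's reliability is r = .30.
for k in (1, 3, 5, 10):
    print(f"{k:>2} raters -> reliability {spearman_brown(0.30, k):.2f}")
```

Under these assumptions, a single rater at .30 rises to roughly .56 with three raters and .81 with ten, which is the statistical core of the argument for census participation and multiple rater groups.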
Let me hasten to say that a) all 360’s, strategic or not, should have a development focus, and b) none of this minimizes the value of 360 processes that are used to support the development of leaders one at a time. There is no question that innumerable leaders have benefited from the awareness created by feedback, often with the support of a coach who not only helps manage the use of the feedback but should also create accountability for its constructive use.

Strategic 360 processes and “development only” processes can successfully coexist in a single organization. But they have different purposes, and purpose should be the primary driver of all design and implementation decisions.

What Is a “Decision”?

My good friend and collaborator, Dale Rose, dropped me a note regarding his plans to do another benchmarking study of 360 Feedback processes. His company, The 3D Group, has done a couple of these studies before, and Dale has been generous in sharing his results with me; I have cited them in some of my workshops and webinars. The studies are conducted by interviewing coordinators of active 360 systems. Because the responses are verbal, some of the results have appeared somewhat internally inconsistent and difficult to reconcile, though the general trends are useful and informative.

Many of the topics are useful for practitioners to gauge their program design, such as the type of instrument, number of items, rating scales, rater selection, and so on. For me, the most interesting data relates to the various uses of 360 results.

Respondents in the 2004 and 2009 studies report many uses. In both studies, “development” is the most frequent response, and that’s how it should be. In fact, I’m amazed that the responses weren’t 100%, since a 360 process should be about development. That only 72% of answers in 2004 included development as a purpose is troubling, whether we take the answers at face value or assume the respondents misunderstood the question. The issue at hand is not whether 360’s should be used for development; it is what else they should, can, and are used for in addition to “development.”

In 2004, the next most frequent use was “career development”; that makes sense. In 2009, the next most frequent was “performance management,” and career development dropped way down. Other substantial uses include high-potential identification, direct links to performance measurement, succession planning, and direct links to pay.

But when asked whether the feedback is used “for decision making or just for development,” about two-thirds of the respondents indicated “development only” and only one-third “decision making.” I believe these numbers understate the actual use of 360 for decision making (perhaps by a wide margin), though, as I will propose, that depends on how we define what a “decision” is.

To “decide” is “to select as a course of action,” according to Merriam-Webster (in this context). I would add that one course of action is to do nothing, i.e., don’t change the status quo or don’t let someone do something. It is impossible to know what goes on in a person’s mind when he/she speaks of development, but it seems reasonable to suppose that it involves doing something beyond just leaving the person alone, i.e., maintaining the status quo. But doing nothing is a decision. So almost any developmental use involves deciding what needs to be done and what personal (time) and organizational (money) resources are to be devoted to that person. Conversely, denying an employee access to developmental resources that another employee does receive is a decision, with results that are clearly impactful but difficult to measure.

To further complicate the issue, it is one thing to say your process is for “development only,” and another to know how it is actually used. Every time my clients have looked behind the curtain at the actual use of 360 data, they have unfailingly found managers using it for purposes that are not supported. At one client of mine, for example, anecdotal evidence repeatedly surfaced that “development only” participants were often asked to bring their reports with them to internal interviews for new jobs within the organization. The bad news was that this was outside of policy; the good news was that leaders saw the data as useful in making decisions, though (back to bad news) they may not have been trained to interpret the reports correctly.

Which brings us to why this is an important issue. There are legitimate “development only” 360 processes where the participant has no accountability for using the results and, in fact, is often actively discouraged from sharing the results with anyone else. Since there are no consequences, there are few, if any, consequential actions or decisions required. But most 360 processes (despite the benchmark results suggesting otherwise) do result in some decisions being made, which might include doing nothing by denying an employee access to certain types of development.

The Appendix of The Handbook of Multisource Feedback is titled “Guidelines for Multisource Feedback When Used for Decision Making.” My sense is that many designers and implementers of 360 (multisource) processes feel these Guidelines don’t apply because their system isn’t used for decision making. Most of them are wrong about that. Their systems are being used for decision making, and, even if they were not, why would we design an invalid process? Besides, any system that involves the manager of the participant (which it should) creates the expectation that direct or indirect decision making will result.

So Dale’s question to me (remember Dale?) was how I would suggest wording a question in his new benchmarking study that would satisfy my curiosity regarding the use of 360 results. I proposed this wording:

“If we define a personnel decision as something that affects an employee’s access to development, training, jobs, promotions or rewards, is your 360 process used for personnel decisions?” 

Dale hasn’t committed to using this question in his study. What do you think?

©2012 David W. Bracken

Full Stops, Neutrinos and Rocket Science

I don’t know why I feel compelled to respond to what I see are unreasonable positions (primarily in LinkedIn discussions). But I do, and this blog gives me a vehicle for doing so without taking up a disproportionate amount of air time on that forum.

So what got me going this time? A LinkedIn discussion (one I started on the topic of 360 validity) got diverted into the topic of the “proper” use of 360 feedback (development vs. decision making). The particular comment that set me off was, “I believe these assessments should be used for development – full stop.” (Virtually 100% of 360 processes are used for development, but the context indicates that he meant “development only.”) Having lived and worked in London for a while, I realized (or realised) that “full stop” has the same meaning as “period,” implying end of sentence and, with emphasis, that no more is worth saying. By the way, I am using this person only as an example of the many, many individuals who have expressed similar dogmatic views on this topic.

There are probably a few things on which it is appropriate to put a “full stop.” That would be an interesting blog for someone, e.g., would we include the Ten Commandments? “Thou Shalt Not Kill. Full stop.”  Hmmm… but then we have Christians who believe in capital punishment, so maybe it’s only a partial stop (or pause)? Like I said, I will let someone else take that on.

Are the physical sciences a place for “full stops”? Like, “The world is flat. Full stop.” “The Sun revolves around the Earth. Full stop.” Just this last week, we were presented with the possibility that another supposedly immutable law is under attack, i.e., “Nothing can go faster than the speed of light. Full stop.” Now we have European scientists who have observed neutrinos apparently traveling faster than the speed of light and are searching for ways to explain and confirm it. If confirmed, it would challenge many of the basics of physics, opening the door to time travel, for example. The fact that some scientists are willing to challenge the “full stop” nature of the Theory of Relativity is also fascinating, if only because they are open to exploring the supposedly impossible. And, by the way, they are begging for others to challenge and/or replicate their findings.

I firmly believe that the social sciences have no place for “full stops.”  To me, “full stop” means ceasing to explore and learn. It seems to indicate a lack of openness to considering new information or different perspectives.

I suspect there are many practitioners in the “hard” sciences who question whether what we do is a “science” at all. (I think I am running out of my quota of quotation marks.) Perhaps they see our work of understanding human behavior as a quest with no hope of ever having answers. That’s what I like about psychology. We will never fully know how to explain human behavior, and that’s a good thing. If we could explain it, then we probably could control it, and I think that is a scary thought. BUT we do try to improve our understanding and increase the probability of predicting what people will do. That is one of the basic goals of industrial/organizational psychology.

(I have been known to contend that what we do is harder than rocket science because there are no answers in our work, only probabilities. The truth is that even the hard sciences have fewer “full stops” than they would like. I just finished reading a book about the Apollo space program, Rocket Men, and it is very interesting to learn how many supposed “full stops” were bashed (e.g., “humans can’t live in weightlessness,” “the moon’s crust will collapse if we try to land on it”), how much uncertainty there was, and how amazing the accomplishment really was. I also learned that one of the reasons the astronauts’ visors were mirrored was so that aliens couldn’t see their faces. Seriously.)

Increasing the probability of predicting and influencing employee behavior requires that we also explore options. I can’t see how it is productive to assert that we already know the answer to anything, or that we shouldn’t consider options that help us serve our clients, i.e., organizations, more effectively.

On top of all that, the most recent 3D Group benchmark study indicates that about one third of organizations DO use 360 data for some sort of administrative purpose, and that almost certainly understates the real numbers. What do we tell those organizations? That they should cease doing so since our collective wisdom says that there is no way they can actually be succeeding? That we cannot (or should not) learn from what they are doing to help their organizations make better decisions about their leaders? That a few opinions should outweigh these experiences?

I don’t get it. No stop.

©2011 David W. Bracken

That’s Why We Have Amendments

I used my last blog (https://dwbracken.wordpress.com/2011/08/09/so-now-what/) to start LinkedIn discussions in the 360 Feedback and I/O Practitioners groups, asking the question: what is a “valid” 360 process? The response from the 360 group was tepid, maybe because that group has a more general population that might not be very concerned with “classic” validity issues (which is basically why I wrote the blog in the first place). But the I/O community went nuts (45 entries so far), with comments running the gamut from constructive to dismissive to deconstructive.

Here is a sample of some of the “deconstructive” comments:

…I quickly came to conclusion it was a waste of good money…and only useful for people who could (or wanted to) get a little better.

It is all probably a waste of time and money. Good luck!!

There is nothing “valid” about so-called 360 degree feedback. Technically speaking, it isn’t even feedback. It is a thinly veiled means of exerting pressure on the individual who is the focal point.

My position regarding performance appraisal is the same as it has been for many years: Scrap It. Ditto for 360.

Actually, I generally agree with these statements, in that many 360 processes are a waste of time and money. It’s not surprising that these sentiments are out there and probably quite prevalent. I wonder, though, if we are all on the same page. In an earlier blog, I suggested that discussions about the use and effectiveness of 360’s should distinguish between processes designed to give feedback to a single individual (N=1) and those designed to be applied across groups (N>1).

But the fact is that HR professionals have to help their management make decisions about people, starting with hiring and then progressing through placement, staffing, promotions, compensation, rewards/recognition, succession planning, potential designation, development opportunities, and maybe even termination.

Nothing is perfect, especially when it comes to matters that involve people. As an example, look to the U.S. Constitution, an enduring document that has withstood the test of time. Yet the Founding Fathers were the first to realize that they needed to make provision for amendments to allow further refinement. Of course, some of those amendments were imperfect themselves and were later repealed.

But we haven’t thrown out the Constitution because it is imperfect. Nor do we find it easy to come to agreement on what the revisions should be. But one of the many good things about humans is a seemingly natural desire to make things better.

Ever since I read Mark Edwards and Ann Ewen’s seminal book, 360 Degree Feedback, I have believed that 360 Feedback has the potential to improve personnel decision making when done well. The Appendix of The Handbook of Multisource Feedback, “Guidelines for Multisource Feedback When Used for Decision Making,” which I coauthored with Carol Timmreck, was our stab at defining what “done well” can mean.

In our profession, we have an obligation to constantly seek ways of improving personnel decision making. There are two major needs we are trying to meet, and they sometimes create tension. One is to provide the organization with more accurate information on which to base these decisions, which we define in terms of increased reliability (accurate measurement) and validity (relevance to job performance). Accurate decision making is good for both the organization and the individual.

The second need is to simultaneously use methods that promote fairness. This notion of fairness is particularly salient in the U.S. where we have “protected classes” (i.e., women, minorities, older workers), but hopefully fairness is a universal concept that applies in many cultures.

Beginning with the Edwards & Ewen book and progressing from there, we can find more and more evidence that 360 done well can provide decision makers with better information (i.e., more valid and fair) than traditional sources (e.g., supervisory evaluations). I actually heard a lawyer state that organizations could be legally exposed for not using 360 feedback, because it is more valid and fair than the methods currently in use.

I have quoted Smither, London and Reilly (2005) before, but here it is again:

We therefore think it is time for researchers and practitioners to ask “Under what conditions and for whom is multisource feedback likely to be beneficial?” (rather than asking “Does multisource feedback work?”).

©2011 David W. Bracken

On the Road… and Web and Print

I have a few events coming up in the next three weeks or so that I would like to bring to your collective attention in case you have some interest. One is free, two are not (though I receive no remuneration). I also have a co-authored article out on 360 feedback.

In chronological order, on May 25 Allan Church, VP Global Talent Development at PepsiCo, and I will lead a seminar titled, “Integrating 360 & Upward Feedback into Performance and Rewards Systems” at the 2011 World at Work Conference in San Diego (www.worldatwork.org/sandiego2011).  I will be offering some general observations on the appropriateness, challenges, and potential benefits of using 360 Feedback for decision making, such as performance management. The audience will be very interested in Allan’s descriptions of his experiences with past and current processes that have used 360 and Upward Feedback for both developmental and decision making purposes.

On June 8, I am looking forward to conducting a half day workshop for the Personnel Testing Council of Metropolitan Washington (PTCMW) in Arlington, VA, titled “360-Degree Assessments: Make the Right Decisions and Create Sustainable Change” (contact Training.PTCMW@GMAIL.COM or go to WWW.PTCMW.ORG). This workshop is open to the public and costs $50.  I will be building from the workshop Carol Jenkins and I conducted at The Society for Industrial and Organizational Psychology. That said, the word “assessments” in the title is a foreshadowing of a greater emphasis on the use of 360 Feedback in a decision making context and an audience that is expected to have great interest in the questions of validity and measurement.

On the following day, June 9 (at 3:30 PM EDT), I will be part of an online virtual conference organized by the Institute of Human Resources and hr.com on performance management. My webinar is titled, “Using 360 Feedback in Performance Management: The Debate and Decisions,” where the “decisions” part has multiple meanings. Given the earlier two sessions I described, it should be clear that I am a proponent of using 360/Upward Feedback for decision making under the right conditions. The other take on “decisions” is the multitude of decisions that are required to create those “right conditions” in the design and implementation of a multisource process.

On that note, I am proud to say that Dale Rose and I have a new article in the Journal of Business and Psychology (June) titled, “When does 360-degree feedback create behavior change? And how would we know it when it does?” Our effort is largely an attempt to identify the critical design factors in creating 360 processes and the associated research needs.

This article is part of a special research issue (http://springerlink.com/content/w44772764751/) of JBP and you will have to pay for a copy unless you have a subscription. As a tease, here is the abstract:

360-degree feedback has great promise as a method for creating both behavior change and organization change, yet research demonstrating results to this effect has been mixed. The mixed results are, at least in part, because of the high degree of variation in design features across 360 processes. We identify four characteristics of a 360 process that are required to successfully create organization change, (1) relevant content, (2) credible data, (3) accountability, and (4) census participation, and cite the important research issues in each of those areas relative to design decisions. In addition, when behavior change is created, the data must be sufficiently reliable to detect it, and we highlight current and needed research in the measurement domain, using response scale research as a prime example.

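As a footnote to the abstract’s last point, that the data must be sufficiently reliable to detect behavior change when it occurs, here is a minimal sketch of one standard way to frame that question: the Jacobson-Truax reliable change index, which scales an observed pre/post difference by the standard error of the difference. This is a generic psychometric illustration, not a method from our article, and every number in it is an assumed value:

```python
import math

def reliable_change_index(pre: float, post: float, sd: float, rel: float) -> float:
    """Jacobson-Truax reliable change index: the observed pre/post
    difference divided by the standard error of the difference,
    SEdiff = SD * sqrt(2 * (1 - reliability))."""
    se_diff = sd * math.sqrt(2 * (1 - rel))
    return (post - pre) / se_diff

# Assumed values for illustration: a 0.4-point gain on a 5-point scale.
rci = reliable_change_index(pre=3.2, post=3.6, sd=0.6, rel=0.85)
print(f"RCI = {rci:.2f} (|RCI| > 1.96 suggests change beyond measurement noise)")
```

The practical message is the abstract’s own: with unreliable ratings the denominator balloons, and real behavior change disappears into the noise.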
Hope something here catches your eye/ear!

©2011 David W. Bracken

What I Learned at SIOP

The annual conference of the Society for Industrial and Organizational Psychology (SIOP) was held in Chicago April 14-16 with record attendance. I had something of a “360 Feedback-intensive” experience: running two half-day continuing education workshops (with Carol Jenkins) on 360 feedback, participating in a panel discussion of the evolution of 360 over the last 10 years (with other contributors to The Handbook of Multisource Feedback), and serving as the discussant for a symposium on Implicit Leadership Theories that largely focused on cultural factors in 360 processes. Each forum gave me an opportunity to gauge some current perspectives on this field, and here are a few that I will share.

The “debate” continues but seems to be softening. The “debate” is, of course, over how 360 feedback should be used: development only and/or for decision making. In our CE workshop, we actually had participants stand in the corners of the room to indicate their stance on this issue, and, judging from that exercise, there are still many strong proponents on each side. That said, the panel seemed to agree that the distinction between uses is blurring, that 360’s are successfully being used for decision making, and that 360’s are far less likely to create sustainable behavior change without the accountability that comes with integration into HR systems.

We need to be sensitive to the demands we place on our leaders/participants. During our panel discussion, Janine Waclawski (who is currently an HR generalist at Pepsi) reminded us of how we typically inundate 360 participants with data points, beginning with the number of items multiplied by the number of rater groups. (I don’t believe the solution to this problem is reducing the number of items, especially below some arbitrary number like 20.) Later, I had the opportunity to offer commentary on four terrific research papers whose major theme was that supervisors need to be aware of the perspectives of their raters, perspectives that may well be shaped by the raters’ cultural backgrounds.

As someone who is more on the practitioner end of the practitioner-scientist continuum, I tried to once again put myself in the seat of the feedback recipient (where I have been many times) and consider how this research might be put into practice. On one hand, organizations are using leadership competency models and values statements to create a unified message (and culture?) that spans all segments of the company. We can (and should) have debates about how useful and realistic this practice is, but I think most of us agree that the company has a right to define the behaviors that are expected of successful leaders. 360 processes can be a powerful way to define those expectations in behavioral terms, to help leaders become aware of their perceived performance of those behaviors, to help them get better, and to hold leaders accountable for change.

On the other hand, the symposium papers seem to suggest that leader behaviors should be molded from “the bottom up,” i.e., by responding to the expectations of followers (raters) that may be attributed to their cultural backgrounds and their views of what an effective leader should be (which may differ from the leader’s view and/or the organization’s view of effective leadership). By the way, this “bottom up” approach also applies to the use of importance ratings (which is not a cultural question).

My plea to the panel (perhaps to their dismay) was to at least consider the conundrum of the feedback recipient, who is handed the potentially incredibly complex task of not only digesting the basic data Janine was referring to, but then folding in the huge amount of additional information created by having to consider the needs of all the feedback providers. Their research is very interesting and useful in raising our awareness of cultural differences that can affect the effectiveness of our 360 processes. But PLEASE acknowledge the implications of putting all of this to use.

The “test” mentality is being challenged. I used the panel discussion to offer up one of my current pet peeves, namely challenging the treatment of 360 Feedback as a “test.” Both in the workshops and again at the panel, I suggested that practices such as randomizing items and using reverse wording to “trick” the raters are not constructive and are most likely contrary to our need to help raters provide reliable data. I was gratified to receive a smattering of applause when I made that point during the panel. I am looking forward to discussing (debating) this stance with the Personnel Testing Council of Metropolitan Washington in a workshop I am doing in June, where I suspect some of the traditional testing people will speak their minds on this topic.

This year’s SIOP was well done, once again. I was especially glad to see ongoing interest in the evolution of the field of 360 feedback, judging from the attendance at these sessions, let alone the fact that the workshop committee identified 360 as a topic worthy of inclusion after more than 10 years since the last one. 360 Feedback is such a complex process, and we are still struggling with the most basic questions, including purpose and use.

©2011 David W. Bracken

White Broncos and Oysters

It’s interesting how perceptions are formed. A few years ago I had a hallway chat about 360 feedback with a senior leader in the consulting firm I was with at the time, and I noted that 360 might have some appeal, especially in certain Western cultures, because it draws on principles similar to those of the jury system. He just laughed out loud, uttering something to the effect of, “Yeah, and we know how well those work!” He was clearly referring to the first O.J. Simpson trial (the criminal proceeding), where the defendant was acquitted but later found liable in a civil suit. I pointed out the problem with using a single instance to create a wide generalization. As an analogy, I noted that I didn’t see him stop traveling after a major plane crash. To no avail, of course. (I have to believe that there are many people who do not fly for this reason.)

A few years ago I ventured into Wikipedia and searched on 360-Degree Feedback. What a mess! If you dare to make the journey, you may see some of my “tracks” in there as I offered some perspectives. It has very few updates, with the most recent ones referring to the notorious Watson Wyatt study.

But one of the first entries in the Discussion section includes this diatribe:

Okay: it sucks. Take it from a long-time IBMer. It started in about the mid-1980s and began to die about 2000 because of its ineffectiveness and morale-lowering difficulty. It creates a lot of paperwork and doesn’t yield much insight that the employee can’t get from his manager. Peers rate each other too generously and subordinates are afraid to criticize their superiors. Plus, how many people do you need to tell you that you should be more proactive in managing risk and look for ways to expand your skill set? However, in a modified form (such as infrequent subordinate evaluations of their managers), it may remain useful.

Kind of interesting how this person’s reasoning changes if you contrast the first and last sentences.  But which sentence do you (and I) remember? The first, of course, for many reasons.

So we have personal opinions being formed by both “big events” (like the whole White Bronco debacle) and by individual experiences like those of the IBMer. Such are human beings.

We all operate in this mode. To use another analogy, I had an “experience” many years ago at an Atlanta restaurant where I tried both Oysters Rockefeller and soft-shell crab for the first time. I got food poisoning and haven’t tried either since. All of us have a similar tale, whether it’s about food or something else we have negative associations with. Sometimes just hearing a story is sufficient to create the aversion. And sometimes these things aren’t particularly rational.

Here’s the problem I would like some help with. How do we get senior leaders (i.e., decision makers) to realize when they are using personal experiences or anecdotes, like those of the Wikipedia poster or the “jury averse” consultant, to make major decisions? Is it naïve to think that we can reasonably expect leaders (and people in general) to depersonalize decisions, like whether to implement (or continue) a program such as 360 feedback?

I was watching the movie Amadeus last night. Monarchs are infamous for their whims, and in this story Emperor Joseph II had issued a ban on ballet in operas (who knows why). Mozart had an interesting strategy for getting the Emperor to change his mind (or actions at least) by playing on the ruler’s love of music and showing the effect of removing the music from the dance sequence. Mozart didn’t directly confront the problem, just got the Emperor to see the consequences of his dictum.

The psychologists in the room might go nuts here, given our training on “desensitization” and the like. But I am interested in practical experiences and suggestions for overcoming personal biases in leaders. Please dive in!

©2011 David W. Bracken
