Posts Tagged ‘performance management’
One of my early posts was titled “Snakes in Suits” (http://dwbracken.wordpress.com/2010/10/12/snakes-in-suits/), which is also the title of a book about psychopaths in industry, specifically in leadership positions, and how skilled they are (because they are psychopaths) at escaping detection until the damage has been done. The blog post highlighted a 360 process whose primary purpose is to identify the bottom tail of the performance distribution, essentially managing the quality of the leadership cadre by fixing or removing the poorest performers/behaviors. The metaphor is pulling back the curtain, as Toto does in “The Wizard of Oz,” on the pretender/offender who has escaped discovery for many years through cleverness and deception. Of course, he cries out, “Pay no attention to that man behind the curtain.”
I got to thinking about this topic recently (no, not because of the new Wizard of Oz movie) when I got an update from Bill Gentry at the Center for Creative Leadership regarding his evolving thinking and research on the topic of Integrity (see his YouTube video, http://www.youtube.com/watch?v=4d7yQHHUL-Q&list=UU9ulOx1rJK5FMlC5gbS91cQ&index=1).
One of the possible reasons that the “Snakes in Suits” book didn’t get more traction in our field is the fact that true psychopaths are relatively rare in our society (maybe 3-5% of the population by some estimates), though their “cousins” (bullies, jerks, add your own adjectives) are much more prevalent and all can cause substantial damage. By expanding the definition of inappropriate behavior to include integrity (or lack thereof) as Dr. Gentry highlights, we now have a behavioral requirement that hopefully applies to every leader, and every employee for that matter.
One of Bill’s research articles uncovers a finding where integrity is identified as a critical trait for senior executives but much less so for mid-level executives. His hypothesis is that success in mid-management is judged much more on the “what” that is achieved (e.g., revenues, sales, budgets) than the “how” (e.g., adherence to the values of the organization). This de-emphasis on the “how” side of performance measurement causes organizations to promote leaders to the most senior levels without sufficient scrutiny of their character, resulting in some flawed leadership at the top of companies where integrity is essential (including some very high profile examples that Bill enumerates as part of his publications).
While I’m at it, I found another piece of research that relates to the significant impact that abusive management can have across large swaths of the organization. This article (cited below) suggests that employees partly attribute abusive supervision to negative valuation by the organization and, consequently, behave negatively toward and withhold positive contributions to it. In other words, employees may believe that abusive supervisors are condoned by the company, and then lose commitment and engagement to said organization. And there is probably a lot of truth in that logic.
Organizations have a responsibility to identify and to address situations where leaders are behaving badly, and the research cited above strongly suggests that it is in the best interests of organizations to do so. So how is that done? Many organizations rely on anonymous processes that encourage employees to “speak up” without fear of retribution. That is such a passive approach as to almost be amusing if it weren’t so important.
Of course, you know where I am going with this. A 360 Degree Feedback process that is consistently administered across the organization AND has provisions for the results being shared with the organization (e.g., Human Resources) is about the only way I can think of where this systemic problem can be addressed. This should be a critical aspect of Talent Management systems in organizations, and as common and ubiquitous as performance management. As the authors of “Snakes in Suits” point out, 360 feedback can be a powerful way to identify the “snakes” early in their careers. One problem is that these snakes are very skilled at avoiding detection by finding loopholes in inconsistently administered 360’s so that they don’t have to participate, or don’t have to share their feedback with anyone.
Who is that leader behind the curtain? It may be a wizard. It may be a jerk. It may be a hero to be honored. But we won’t know unless we have our Toto to pull back the curtain, hopefully before it’s too late.
Blaming the organization for abusive supervision: The roles of perceived organizational support and supervisor’s organizational embodiment. Shoss, Mindy K.; Eisenberger, Robert; Restubog, Simon Lloyd D.; Zagenczyk, Thomas J. Journal of Applied Psychology, Vol 98(1), Jan 2013, 158-168. doi: 10.1037/a0030687
©2013 David W. Bracken
I have recently had the opportunity to read two large benchmarking reports that relate to talent management, leadership development and, specifically, how 360 Feedback is being used to support those disciplines.
The first is the U.S. Office of Personnel Management “Executive Development Best Practices Guide” (November 2012), which includes both a compilation of best practices across 17 major organizations and a survey of Federal Government members of the Senior Executive Service, which was in turn a follow-up to a similar survey in 2008.
The second report was created by The 3D Group as the third benchmark study specifically related to practices in 360 Degree Feedback. This year’s study differed from the past versions by being conducted online, which had the immediate benefit of expanding the sample to over 200 organizations. This change in methodology, sample and content makes interpretation of trend scores a little dicey, but the results are compelling nonetheless. Thank you to Dale Rose and his team at 3D Group for sharing the report with me once again.
These studies have many interesting results that relate to the practice of 360 Feedback, and I want to grab the low hanging fruit for the purposes of this blog entry.
As the title teases, the debate is over, with the “debate” being whether 360 Feedback can and should be used for decision making purposes. Let me once again acknowledge that 1) all 360 Feedback should be used for leadership development, 2) some 360 processes are solely for leadership development, often one leader at a time, and 3) these development-only focused 360 processes should not be used for decision making.
But these studies demonstrate that 360 Feedback continues to be used for decision making, at a growing rate, and evidently successfully since their use is projected to increase (more on this later). The 3D report goes to some length to try to pin down what “decision making” really means so that we can guide respondents in answering how their 360 data are used. For example, is leadership development training a “decision?” I would say yes since some people get it and some don’t based on 360’s, and that affects both the individual’s career as well as how the organization uses its resources (e.g., people, time and dollars).
But let’s make it clearer and look at just a few of the reported uses for 360 results. In the 3D Group report, one of the most striking numbers is the 47% of organizations that indicate they use 360’s for performance management (despite only 31% saying in another question that they use it for personnel decisions). It may well be that “performance management” use means integrating 360 results into the development planning aspect of a PM process, which is a great way to create accountability without overdoing the measurement focus. This type of linkage of development to performance plans is also reinforced as a best practice in the highlights of the OPM study.
In the OPM study, 56% of the surveyed leaders report participating in a 360 process (up from 41% in 2008), though the purpose is not specified. 360’s are positioned as one of several assessment tools available to these leaders, and an integrated assessment strategy is encouraged in the report.
Two other messages that come out of both of these studies are 1) use of coaches (and/or managers as coaches) for post assessment follow up continues to gain momentum as a key factor in success, and 2) the 360 processes must be linked to organizational objectives, strategies and values in order to have impact and sustainability.
Finally, in the 3D study, 73% of the organizations report that their use of 360’s in the next year will either continue at the same level or increase.
These studies are extremely helpful in gauging the trends within the area of leadership development and assessment, and, to this observer, it appears that some of the research that has promoted certain best practices, such as follow up and coaching, is being considered in the design and implementation of 360 feedback processes. But it is most heartening to see some indications that organizations are also realizing the value that 360 data can bring to talent management and the decisions about leaders that are inherent in managing that critical resource.
It is no longer useful (if it ever was) to debate whether 360 feedback can be used successfully to inform and improve personnel decisions. It has and it does. It’s not necessarily easy to do right, but the investment is worth the benefits.
©2013 David W. Bracken
One question that has been at the core of best practices in 360 Feedback since its inception relates to the conditions that are most likely to create sustained behavior change (at least for those of us who believe that behavior change is the ultimate goal). Many of us believe that behavior change is not a question of ability to change but primarily one of motivation. Motivation often begins with creating awareness that some change is necessary, then accepting the feedback, and then moving on to implementing the change.
One of the more interesting examples of creating behavior change began when seat belts were included as standard equipment in all passenger vehicles in 1964. I am old enough to remember when that happened and started driving not long thereafter. So using a seat belt was part of the driver education routine since I began driving and has not been a big deal for me.
The reasons for noncompliance with seatbelt usage are as varied as human nature. Some people see it as a civil rights issue, as in, “No one is going to tell me what to do.” There is also the notion that it protects against a low probability event, as in “It won’t happen to me. I’m a careful driver.” Living in Nebraska for a while, I learned that people growing up on a farm don’t “have the time” to buckle and unbuckle seatbelts in their trucks when they are learning to drive, so they don’t get into that habit. (I also found, to my annoyance, that they also never learned how to use turn signals.)
I remember back in the ‘60’s reading about a woman who wrote a car manufacturer to ask that they make the seat belts thinner because they were uncomfortable to sit on. Really.
Some people have internal motivation to comply, which can also be due to multiple factors such as personality, demographics, training, norms (e.g., parental modeling), and so on. This is also true when we are trying to create behavior change in leaders, but we will see that these factors are not the primary determinants of compliance.
In thinking about seatbelt usage as a challenge in creating behavior change, I found a study from 2008 by the Department of Transportation. It is titled “How States Achieve High Seat Belt Use Rates” (DOT HS 810 962). (Note: This is a 170-page report with lots of tables and statistical analyses, and if any of you geeks want a copy, let me know.)
The major finding of this in-depth study states:
The statistical analyses suggest that the most important difference between the high and low seat belt use States is enforcement, not demographics or funds spent on media.
This chart, “Seatbelt Usage in US,” amongst the many in this report, seems to capture the message fairly well to support their assertion. It plots seat belt usage by state, where we see a large spread ranging from just over 60% (Mississippi) to about 95% (Hawaii). It also shows whether each state has primary seatbelt laws (where failure to use a seatbelt is a violation by itself), or secondary laws (where seatbelt usage can only be enforced if the driver is stopped for another purpose). Based on this chart alone, one might argue causality, but the study systematically shows that these data, along with others relating to law enforcement practices, are the best predictors of seatbelt usage.
One way of looking at this study is to view law enforcement as a form of external accountability, i.e., having consequences for your actions (or lack thereof). The primary versus secondary law factor largely shifts the probabilities of being caught, with the apparent desired effect on seatbelt usage.
So, back to 360 Feedback. I always have been, and continue to be, mystified as to how some implementers of 360 feedback processes believe that sustainable behavior change is going to occur in the vast majority of leaders without some form of external accountability. Processes that are supposedly “development only” (i.e., have no consequences) should not be expected to create change. In those processes, participants are often not required to, or even discouraged from, sharing their results with others, especially their manager. I have called these processes “parlor games” in the past because they are kind of fun, are all about “me,” and have no consequences.
How can we create external accountability in 360 processes? I believe that the most constructive way to create both motivation and alignment (ensuring behavior change is in synch with organizational needs/values) is to integrate the 360 feedback into Human Resource processes, such as leadership development, succession planning, high potential programs, staffing decisions, and performance management. All these uses involve some form of decision making that affects the individual (and the organization), which puts pressure on the 360 data to be reliable and valid. Note also that I include leadership development in this list as a form of decision making because it does affect the employee’s career as well as the investment (or not) of organization resources.
But external accountability can be created in other, more subtle ways as well. We all know from our kept and (more typically) unkept New Year’s resolutions about the power of going public with our commitments to change. Sharing your results and actions with your manager has many benefits, but can cause real and perceived unfairness if some people are doing it and others are not. Discussing your results with your raters and engaging them in your development plans has multiple benefits.
Another source of accountability can (and should) come from your coach, if you are fortunate enough to have one. I have always believed that the finding in the Smither et al. (2005) meta-analysis that the presence of a coach is one determinant of whether behavior change is observed is due to the accountability that coaches create by requiring the coachee to specifically state what they are going to do, and by checking back that the coachee has followed through on that commitment.
Over and over, we see evidence that, when human beings are not held accountable, more often than not they will stray from what is in their best interests and/or the interests of the group (organization, country, etc.). Whether it’s irrational (ignoring facts) or overly rational (finding ways to “get around” the system), we should not expect that people will do what is needed, and we should not rely on our friends, neighbors, peers or leaders to always do what is right if there are no consequences for inaction or bad behavior.
©2012 David W. Bracken
My good friend and collaborator, Dale Rose, dropped me a note regarding his plans to do another benchmarking study on 360 Feedback processes. His company, The 3D Group, has done a couple of these studies before and Dale has been generous in sharing his results with me, which I have cited in some of my workshops and webinars. The studies are conducted by interviewing coordinators of active 360 systems. Given that the interviews are verbal, some of the results have appeared somewhat internally inconsistent and difficult to reconcile, though the general trends are useful and informative.
Many of the topics are useful for practitioners to gauge their program design, such as the type of instrument, number of items, rating scales, rater selection, and so on. For me, the most interesting data relates to the various uses of 360 results.
Respondents in the 2004 and 2009 studies report many uses. In both studies, “development” is the most frequent response, and that’s how it should be. In fact, I’m amazed that the responses weren’t 100% since a 360 process should be about development. The fact that in 2004 only 72% of answers included development as a purpose is troubling, whether we take the answers at face value or assume respondents misunderstood the question. The issue at hand here is not whether 360’s should be used for development; it is what else they should, can, and are used for in addition to “development.”
In 2004, the next most frequent use was “career development;” that makes sense. In 2009, the next most frequent was “performance management,” and career development dropped way down. Other substantial uses include high potential identification, direct link to performance measurement, succession planning, and direct link to pay.
But when asked whether the feedback is used “for decision making or just for development”, about 2/3 of the respondents indicated “development only” and only 1/3 for “decision making.” I believe these numbers understate the actual use of 360 for “decision making” (perhaps by a wide margin), though (as I will propose), it can depend on how we define what a “decision” is.
To “decide” is “to select as a course of action,” according to Merriam-Webster (in this context). I would build on that definition by adding that one course of action is to do nothing, i.e., don’t change the status quo or don’t let someone do something. It is impossible to know what goes on in a person’s mind when he/she speaks of development, but it seems reasonable to suppose that it involves doing something beyond just leaving the person alone, i.e., maintaining the status quo. But doing nothing is a decision. So almost any developmental use involves making a decision as to what needs to be done and what personal (time) and organizational (money) resources are to be devoted to that person. Conversely, denying an employee access to developmental resources that another employee does get access to is a decision, with results that are clearly impactful but difficult to measure.
To further complicate the issue, it is one thing to say your process is for “development only,” and another to know how it is actually used. Every time my clients have looked behind the curtain of actual use of 360 data, they unfailingly find that managers are using it for purposes that are not supported. For example, at one of my clients, anecdotal evidence repeatedly surfaced that the “development only” participants were often asked to bring their reports with them to internal interviews for new jobs within the organization. The bad news was that this was outside of policy; the good news was that leaders saw the data as useful in making decisions, though (back to bad news) they may have been untrained to correctly interpret the reports.
Which brings us to why this is an important issue. There are legitimate “development only” 360 processes where the participant has no accountability for using the results and, in fact, is often actively discouraged from sharing the results with anyone else. Since there are no consequences, there are few, if any, consequential actions or decisions required. But most 360 processes (despite the benchmark results suggesting otherwise) do result in some decisions being made, which might include doing nothing by denying an employee access to certain types of development.
The Appendix of The Handbook of Multisource Feedback is titled, “Guidelines for Multisource Feedback When Used for Decision Making.” My sense is that many designers and implementers of 360 (multisource) processes feel that these Guidelines don’t apply because their system isn’t used for decision making. Most of them are wrong about that. Their systems are being used for decision making, and, even if not, why would we design an invalid process? And any system that involves the manager of the participant (which it should) creates the expectation that direct or indirect decision making will result.
So Dale’s question to me (remember Dale?) is how would I suggest wording a question in his new benchmarking study that would satisfy my curiosity regarding the use of 360 results. I proposed this wording:
“If we define a personnel decision as something that affects an employee’s access to development, training, jobs, promotions or rewards, is your 360 process used for personnel decisions?”
Dale hasn’t committed to using this question in his study. What do you think?
©2012 David W. Bracken
I don’t know why I feel compelled to respond to what I see are unreasonable positions (primarily in LinkedIn discussions). But I do, and this blog gives me a vehicle for doing so without taking up a disproportionate amount of air time on that forum.
So what got me going this time? A LinkedIn discussion (that I started on the topic of 360 validity) got diverted into the topic of “proper” use of 360 feedback (development vs decision making). The particular comment that got me going was, “I believe these assessments should be used for development – full stop.” (Virtually 100% of 360 processes are used for development, but the context indicates that he meant “development only.”) Having lived and worked in London for a while, I realized (or realised) that the “full stop” has the same meaning as “period,” implying end of sentence and, with emphasis, no more is worth saying. By the way, I am using this person only as an example of the many, many individuals who have expressed similar dogmatic views on this topic.
There are probably a few things that are appropriate to put a “full stop” on. That would be an interesting blog for someone, e.g., would we include the Ten Commandments? “Thou Shalt Not Kill. Full stop.” Hmmm… but then we have Christians who believe in capital punishment, so maybe it’s only a partial stop (or pause)? Like I said, I will let someone else take that on.
Are the physical sciences a place for “full stops?” Like, “The world is flat. Full stop.” “The Sun revolves around the Earth. Full stop.” Just this last week, we were presented with the possibility that another supposedly immutable law is under attack, i.e., “Nothing can go faster than the speed of light. Full stop.” Now we have European scientists who have observed neutrinos apparently traveling faster than the speed of light and are searching for ways to explain and confirm it. If found to be true, it would challenge many of the basics of physics, opening the door to time travel, for example. The fact that some scientists are apparently challenging the “full stop” nature of the Theory of Relativity is also fascinating, if only for the reason that they are open to exploring the supposedly impossible. And, by the way, they are begging for others to challenge and/or replicate their findings.
I firmly believe that the social sciences have no place for “full stops.” To me, “full stop” means ceasing to explore and learn. It seems to indicate a lack of openness to considering new information or different perspectives.
I suspect there are many practitioners in the “hard” sciences who question whether what we do is a “science” at all. (I think I am running out of my quota of quotation marks.) Perhaps they see our work with understanding human behavior as a quest with no hope of ever having answers. That’s what I like about psychology. We will never fully know how to explain human behavior, and that’s a good thing. If we could explain it, then we probably could control it. I think that is a scary thought. BUT we do try to improve our understanding and increase the probabilities of predicting what people will do. That is one of the basic goals of industrial/organizational psychology.
(I have been known to contend that what we do is harder than rocket science because there are no answers to what we do, only probabilities. The truth is that even the hard sciences have fewer “full stops” than even they would like. I just finished reading a book about the Apollo space program, Rocket Men, and it is very interesting to learn how many “hard stops” that used to exist were knocked down (e.g., humans can’t live in weightlessness; the moon’s crust will collapse if we try to land on it), how much uncertainty there was, and how amazing the accomplishment really was. I also learned that one of the reasons the astronauts’ visors were mirrored was so that aliens couldn’t see their faces. Seriously.)
Increasing probabilities for predicting and influencing employee behavior requires that we also explore options. I can’t see how it is productive to assert that we know the answer to anything, and that we shouldn’t consider options that help us serve our clients, i.e., organizations, more effectively.
On top of all that, the most recent 3D Group benchmark study indicates that about one third of organizations DO use 360 data for some sort of administrative purpose, and that almost certainly understates the real numbers. What do we tell those organizations? That they should cease doing so since our collective wisdom says that there is no way they can actually be succeeding? That we cannot (or should not) learn from what they are doing to help their organizations make better decisions about their leaders? That a few opinions should outweigh these experiences?
I don’t get it. No stop.
©2011 David W. Bracken
I used my last blog (http://dwbracken.wordpress.com/2011/08/09/so-now-what/) to start LinkedIn discussions in the 360 Feedback and I/O Practitioners group, asking the question: What is a “valid” 360 process? The response from the 360 group was tepid, maybe because the group has a more general population that might not be that concerned with “classic” validity issues (which is basically why I wrote the blog in the first place). But the I/O community went nuts (45 entries so far) with comments running the gamut from constructive to dismissive to deconstructive.
Here is a sample of some of the “deconstructive” comments:
…I quickly came to conclusion it was a waste of good money…and only useful for people who could (or wanted to) get a little better.
It is all probably a waste of time and money. Good luck!!
There is nothing “valid” about so-called 360 degree feedback. Technically speaking, it isn’t even feedback. It is a thinly veiled means of exerting pressure on the individual who is the focal point.
My position regarding performance appraisal is the same as it has been for many years: Scrap It. Ditto for 360.
Actually, I generally agree with these statements in that many 360 processes are a waste of time and money. It’s not surprising that these sentiments are out there and probably quite prevalent. I wonder, though, if we are all on the same page. In another earlier blog, I suggested that discussions about the use and effectiveness of 360’s should be separated by those that are designed for feedback to a single individual (N=1) and those that are designed to be applied to groups (N>1).
But the fact is that HR professionals have to help their management make decisions about people, starting with hiring and then progressing through placement, staffing, promotions, compensation, rewards/recognition, succession planning, potential designation, development opportunities, and maybe even termination.
Nothing is perfect, especially so when it comes to matters that involve people. As an example, look to the U.S. Constitution, an enduring document that has withstood the test of time. Yet the Founding Fathers were the first to realize that they needed to make provisions for the addition of amendments to make further refinements. Of course, some of those amendments were imperfect themselves and were later rescinded.
But we haven’t thrown out the Constitution because it is imperfect. Nor do we find it easy to come to agreement on what the revisions should be. But one of the many good things about humans is a seemingly natural desire to make things better.
Ever since I read Mark Edwards and Ann Ewen’s seminal book, 360 Degree Feedback, I have believed that 360 Feedback has the potential to improve personnel decision making when done well. The Appendix of The Handbook of Multisource Feedback is titled, “Guidelines for multisource feedback when used for decision making,” coauthored with Carol Timmreck, where we made a stab at defining what “done well” can mean.
In our profession, we have an obligation to constantly seek ways of improving personnel decision making. There are two major needs we are trying to meet, which sometimes cause tensions. One is to provide the organization with more accurate information on which to base these decisions, which we define as increased reliability (accurate measurement) and validity (relevant to job performance). Accurate decision making is good for both the organization and the individual.
The second need is to simultaneously use methods that promote fairness. This notion of fairness is particularly salient in the U.S. where we have “protected classes” (i.e., women, minorities, older workers), but hopefully fairness is a universal concept that applies in many cultures.
Beginning with the Edwards & Ewen book and progressing from there, we can find more and more evidence that 360 done well can provide decision makers with better information (i.e., valid and fair) than traditional sources (e.g., supervisory evaluations). I actually heard a lawyer state that organizations could be legally exposed for not using 360 feedback because it is more valid and fair than methods currently in use.
I have quoted Smither, London and Reilly (2005) before, but here it is again:
We therefore think it is time for researchers and practitioners to ask “Under what conditions and for whom is multisource feedback likely to be beneficial?” (rather than asking “Does multisource feedback work?”).
©2011 David W. Bracken
I have a few events coming up in the next 3 weeks or so that I would like to bring to your collective attention in case you have some interest. One is free, two are not (though I receive no remuneration). I also have an article out that I co-authored on 360 feedback.
In chronological order, on May 25 Allan Church, VP Global Talent Development at PepsiCo, and I will lead a seminar titled, “Integrating 360 & Upward Feedback into Performance and Rewards Systems” at the 2011 World at Work Conference in San Diego (www.worldatwork.org/sandiego2011). I will be offering some general observations on the appropriateness, challenges, and potential benefits of using 360 Feedback for decision making, such as performance management. The audience will be very interested in Allan’s descriptions of his experiences with past and current processes that have used 360 and Upward Feedback for both developmental and decision making purposes.
On June 8, I am looking forward to conducting a half day workshop for the Personnel Testing Council of Metropolitan Washington (PTCMW) in Arlington, VA, titled “360-Degree Assessments: Make the Right Decisions and Create Sustainable Change” (contact Training.PTCMW@GMAIL.COM or go to WWW.PTCMW.ORG). This workshop is open to the public and costs $50. I will be building from the workshop Carol Jenkins and I conducted at The Society for Industrial and Organizational Psychology. That said, the word “assessments” in the title is a foreshadowing of a greater emphasis on the use of 360 Feedback in a decision making context and an audience that is expected to have great interest in the questions of validity and measurement.
On the following day, June 9 (at 3:30 PM EDT), I will be part of an online virtual conference organized by the Institute of Human Resources and hr.com on performance management. My webinar is titled, “Using 360 Feedback in Performance Management: The Debate and Decisions,” where the “decisions” part has multiple meanings. Given the earlier two sessions I described, it should be clear that I am a proponent of using 360/Upward Feedback for decision making under the right conditions. The other take on “decisions” is the multitude of decisions that are required to create those “right conditions” in the design and implementation of a multisource process.
On that note, I am proud to say that Dale Rose and I have a new article in the Journal of Business and Psychology (June) titled, “When does 360-degree feedback create behavior change? And how would we know it when it does?” Our effort is largely an attempt to identify the critical design factors in creating 360 processes and the associated research needs.
This article is part of a special research issue (http://springerlink.com/content/w44772764751/) of JBP and you will have to pay for a copy unless you have a subscription. As a tease, here is the abstract:
360-degree feedback has great promise as a method for creating both behavior change and organization change, yet research demonstrating results to this effect has been mixed. The mixed results are, at least in part, because of the high degree of variation in design features across 360 processes. We identify four characteristics of a 360 process that are required to successfully create organization change: (1) relevant content, (2) credible data, (3) accountability, and (4) census participation, and cite the important research issues in each of those areas relative to design decisions. In addition, when behavior change is created, the data must be sufficiently reliable to detect it, and we highlight current and needed research in the measurement domain, using response scale research as a prime example.
Hope something here catches your eye/ear!
©2011 David W. Bracken
We at OrgVitality have a view of the “vital” organization that includes the concepts of ambidexterity, agility, and resilience. These concepts can be operationalized to promote the creation of a culture that makes those characteristics a way of life in the organization.
I recently found an article (Lengnick-Hall, Beck and Lengnick-Hall, 2010) titled “Developing a capacity for organizational resilience through strategic human resource management.” Their message, that culture can be created and sustained through human resource processes, is a powerful one.
These authors define resilience as:
“…a firm’s ability to effectively absorb, develop situation-specific responses to, and ultimately engage in transformative activities to capitalize on disruptive surprises that potentially threaten organization survival.” They go on to propose that resilience should be created through individual knowledge, skills, and abilities and organizational routines and processes.
This is good stuff, but I think they have missed an opportunity to talk about creating a culture through behavior change. Culture has a lot of definitions, but a couple are consistent with this view of behavior being a key factor. I have been drawn to an observable and measurable definition of culture offered by Bossidy and Charan (2002) in their seminal book, Execution: The Discipline of Getting Things Done:
“The culture of a company is the behavior of its leaders. Leaders get the behavior they exhibit and tolerate.”
While many traditionalists will argue with such a “superficial” treatment of culture, it was foreshadowed by Kotter and Heskett (1992) who refined their definition of culture with this statement: “…culture represents the behavior patterns or style of an organization that new employees are automatically encouraged to follow by their fellow employees.” (p. 4)
This definition is too limiting in not directly acknowledging that the “fellow employees” who have the most impact on creating the culture are the leaders of that organization.
Let’s return to the resilience article. I looked for statements of behaviors that might be useful for creating a culture of resilience, particularly defined in terms of leader behavior that could easily be fodder for a 360 or upward feedback process. Fortunately for me, there is a section called, “Behavioral elements of organizational resilience.” Their language is somewhat academic (e.g., “nonconforming strategic repertoires”), but here are some examples of behaviors that I propose support their conceptualization of resilience:
- Encourages new solutions to problems
- Finds new strategies that are different from the past and industry norms
- Takes the initiative and moves quickly to overcome challenges
- Ensures that new and creative solutions are consistent with organizational goals and values
- Challenges the status quo
- Encourages the discarding of obsolete information and practices
- Recognizes and rewards behaviors that demonstrate flexibility and resourcefulness
They list a whole raft of HR policies, principles, and practices that can support the development of resilience, including after-action reviews, open architecture, broad job descriptions, employee suggestions, and cross-departmental task forces. They reference a need to include performance reviews (“results-based appraisals”) that encourage the right activities.
But nowhere is 360 feedback mentioned as a potentially powerful tool to reinforce and create culture change. Here are a few ways that 360 processes can be integral parts of a culture change initiative:
- Defines the construct (e.g., resilience) in behavioral terms
- Communicates the construct as an organizational priority (i.e., is being measured)
- Potentially communicates to all employees (raters, ratees) on a repeated basis
- Creates a metric for tracking progress over time
- Creates a metric for identifying individual, team, and organizational gaps in performance
- Creates accountability for behavior consistent with organizational needs
- Supports aligned HR practices when integrated with other HR systems (e.g., development, staffing, succession planning, performance reviews, high potential development)
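The two “metric” bullets above can be made concrete with a small sketch. Here is one way self-other gaps might be computed from 360 ratings; the item names, the 1-5 scale, and every score below are hypothetical values invented purely for illustration, not data from any real 360 process.

```python
from statistics import mean

# Hypothetical 360 ratings for one leader on a 1-5 scale.
# Item names and all scores are invented for illustration only.
ratings = {
    "Encourages new solutions to problems": {
        "self": 4.0,
        "others": [3.0, 2.5, 3.5, 3.0],  # e.g., direct reports and peers
    },
    "Challenges the status quo": {
        "self": 3.0,
        "others": [3.5, 4.0, 3.0, 3.5],
    },
}

for item, scores in ratings.items():
    others_avg = mean(scores["others"])
    gap = scores["self"] - others_avg  # positive = self-rating runs higher than others'
    print(f"{item}: others = {others_avg:.2f}, self-other gap = {gap:+.2f}")
```

Tracked across administrations, such per-item averages and gaps become exactly the kind of individual, team, and organizational progress metrics the list describes.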
This list makes some assumptions about the design and implementation of 360 processes that support culture change. That is such a large topic that it would require an entire book. Stay tuned for that.
I am amazed and disappointed that a major treatise on what is in effect culture change would not include 360 feedback as at least worth consideration as a supporting HR practice. It makes me wonder why that is.
Bossidy, L, and Charan, R. (2002). Execution: The Discipline of Getting Things Done. New York: Crown Business.
Kotter, J.P., and Heskett, J.L. (1992). Corporate Culture and Performance. New York: Free Press.
Lengnick-Hall, C.A., Beck, T.E., and Lengnick-Hall, M.L. (2010). Developing a capacity for organizational resilience through strategic human resource management. Human Resource Management Review, doi:10.1016/j.hrmr.2010.07.001.
©2011 David W. Bracken
A number of (pre-recession) years ago, I belonged to a firm that was operating in the black and held some very nice off-site meetings for its consultants. At one such event, we had an evening reception with some fun activities, one of which was a Tarot reader. I don’t even read horoscopes, but there was no one waiting and I decided to give it a try (the first and last time). I obviously didn’t know much about Tarot, but it seemed like the last card to be turned over was the most important. And, lo and behold, it was the Death card! I remember a pause from the Reader (perhaps an intake of breath?), and then a rapid clearing of the cards with some comment to the effect of, “That’s not important.” Session over.
Well, I guess the good news is that I am still here (most people would agree with that, I think). My purpose in bringing this up is not to discuss superstition and the occult, but to reflect on how people react to and use 360 feedback.
In fact, I have been known to call some 360 processes “parlor games,” which relates directly to my Tarot experience. That was a true “parlor game.” What is a parlor game? My definition, for this context, is an activity that is fun and has no consequences, where a person can be the focus of attention with little risk of embarrassment and little effort. Since I strongly believe in self-determination, I do my best not to let arbitrary events that I cannot control affect my life. That would include the turn of a card, for starters.
So how do we ensure that 360 Feedback isn’t a parlor game and does matter? I propose that two important factors are Acceptance and Accountability.
Some of the design factors that promote Acceptance would include:
- Use a custom instrument (to create relevance)
- Have the ratee select raters, with manager approval (to enhance credibility of feedback)
- Enhance rater honesty and reliability (to help credibility of data)
- Invite enough raters to enhance reliability and minimize effects of outliers
- Be totally transparent about purpose, goals, and use (not mystical, magical, inconsistent, or arbitrary)
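The point about inviting enough raters has a classical-test-theory basis: the Spearman-Brown prophecy formula projects how the reliability of an averaged rating grows with the number of raters. The sketch below is illustrative only; the single-rater reliability of 0.30 is an assumed value, not a figure from any particular 360 study.

```python
def spearman_brown(single_rater_r: float, k: int) -> float:
    """Projected reliability of the mean of k raters, given the
    reliability (e.g., an ICC) of one rater. Classical test theory."""
    return k * single_rater_r / (1 + (k - 1) * single_rater_r)

# Assuming a single rater agrees with others at r = 0.30,
# watch reliability climb as more raters are averaged together:
for k in (1, 3, 5, 8, 12):
    print(f"{k:2d} raters -> projected reliability {spearman_brown(0.30, k):.2f}")
```

Under that assumption, five raters roughly double the reliability of a single rater, which is one reason designs often insist on a minimum rater count per category before reporting results.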
Factors that can help create Accountability (and increase the probability of behavior change) include:
- Require leaders to discuss results and development plans with raters (like going public with a New Year’s Resolution)
- Include results as a component of performance management, typically in the development planning section, to create consequences for follow through, or lack thereof
- Ensure that the leader’s manager is also held accountable for properly using results in managing and coaching
- Conduct follow-up measures such as mini-360’s and/or annual readministrations.
Some 360 processes appear to define success as just creating awareness in the participants, hoping that the leader will be self-motivated to change. That does happen; some leaders do change, at least for a while, and maybe even in the right way. (Some people probably change based on Tarot readings, too!) But for the leaders who need to change the most, it usually doesn’t happen without Acceptance and Accountability.
Simply giving a feedback report to a leader and stopping there seems like a parlor game to me. A very expensive one.
©2011 David W. Bracken
I received an interesting comment to one of my blog entries via the OrgVitality LinkedIn discussion site:
David, I have been through the 360 process a few times and have always come away with a few ego-stroking facts and a few interesting but mostly unusable ones that point out need for improvement. The tools and methods were largely suited to quantitative research but the sample size too small and too varied for any kind of generalization.
So my general take-home from 360 was “interesting but useless”.
I am in total agreement that too many 360 processes are “interesting but useless.” I sometimes put those in the category of “parlor games,” which might be described the same way.
I do not personally know this author, but I do appreciate his perspective. Interestingly, my most recent blog post was about treating problems (such as 360s being “useless”) as opportunities for exploring solutions. In fact, this person did offer up his own solution in his comment, but he still feels that overall 360 is useless.
So let’s consider some factors that might cause a 360 to be perceived as “useless,” and some possible solutions:
Problem: Feedback isn’t relevant
Solutions: Use a custom-designed instrument derived from organizational priorities (e.g., strategies, leadership competency model, values). Keep it short (no more than 50 items). Use follow-up mini-surveys that cover development priorities.
Problem: Feedback isn’t reliable/credible
Solutions: Have the ratee pick raters, with manager approval. Invite a sufficient number of raters to create reliable data, including all direct reports.
Problem: Feedback isn’t a priority
Solutions: Gain and communicate senior leadership support (and participation). Integrate feedback into HR processes (e.g., performance management, leadership development, succession planning). Conduct on a regular basis, like other HR processes. Hold participants and their supervisors accountable.
One of the more difficult problems to solve is when awareness isn’t followed by acceptance. For some people, feedback is just wasted effort. Some people don’t want feedback (see our friend, the “hoss”). Some people don’t want to change and don’t see a need. If the system tolerates that attitude, then 360 probably is useless. But whether 360 is “useless” to you personally or to the organization as a whole, there are ways to solve that problem.
Any other observations??
©2010 David W. Bracken