Strategic 360s

360s for more than just development

Big Data and Multisource Feedback

Here’s another NYTimes Corner Office offering, featuring Laszlo Bock, SVP of People Operations at Google. (http://www.nytimes.com/2013/06/20/business/in-head-hunting-big-data-may-not-be-such-a-big-deal.html?pagewanted=1).  The first half is about hiring with some interesting observations (especially if you have responsibilities in that area).  The second half describes their Upward Feedback process, along with other HR systems. And, no, they are not a client.

I offer these observations for your consideration:

  • Big Data is the new fad, but many of us have been using large databases to understand the impact of our change processes for a long time, whether at the organizational level (employee surveys) or the individual level (360 Feedback).
  • Your organization is not using “Big Data” (at least in the way Laszlo is describing) if you are using external norms.  Note that Google is using internal norms very aggressively, tracking progress in moving the norm over time AND giving percentile rankings for each leader.
  • The challenges he describes regarding hiring practices are very interesting, and it appears they are making some progress in implementing processes that are more predictive and more consistent. That said, hiring is always a challenge, which underscores the importance of using processes such as multisource (360) feedback to identify and either improve or weed out poor managers.
  • He speaks to the importance of consistency in leaders.  360 Feedback promotes consistency in a number of ways.  First, it defines the behaviors that describe successful leaders, a form of alignment. One of the behaviors can relate to consistency itself, i.e., providing feedback to the leader about whether he/she is consistent.  In addition, an organization-wide 360 process that is administered and used in a consistent manner can only help in reinforcing the views of employees that decisions are being made on a fair basis. Organization-wide implementation is the key to success in creating change, acceptance and sustainability.
  • Back to the percentile rankings.  I have found organizations strangely averse to this practice of letting the leader know where he/she ranks against peers.  As Laszlo notes, the challenge is to give the leader a realistic view of how he/she is perceived, and to create some motivation to change.  By the way, these rankings are one “solution” to leniency trends, that is, saying to the leader, “You may think you are hot stuff because you got a 4.0 rating (out of 5) on that behavior, but you are still lower than 80% of your peers.”  That scenario is common in areas such as Integrity where we expect high scores from our leaders. (A minimal sketch of this percentile arithmetic appears after this list.)
  • I am a little surprised that he believes that the managers can “self-motivate” in the way he describes. I am usually skeptical that leaders will change without accountability, and I would like to know more about that.  I have already noted the use of percentile rankings, which most organizations dismiss but which are seen as powerful motivators in this process.  Laszlo also describes a dialog of sorts with the leader at the 8th percentile. Who is that conversation with? If it is with another person (boss, coach, HR manager), that alone creates a form of accountability and an implied consequence if improvement isn’t seen. If the conversation is just in the leader’s head, it speaks to the power of the information provided by the percentile score.  Creating awareness is one thing. Awareness with context (e.g., comparison to others) is much more powerful.  (Maybe like, “That’s a nice pair of pants!  If it were the ’60s.”)
  • Lastly, Laszlo speaks to the uniqueness of his and other organizations regarding what the organization needs from its leaders and how an individual employee might fit in and contribute. This clearly speaks to the need for custom-designed content for hiring practices and then for internal assessments once an employee is onboard.
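
To make the percentile mechanics concrete, here is a minimal sketch of ranking a leader against an internal norm group. The data and function are hypothetical illustrations, not Google’s actual method:

```python
def percentile_rank(score: float, peer_scores: list[float]) -> float:
    """Percent of peers scoring at or below this leader's score."""
    at_or_below = sum(1 for s in peer_scores if s <= score)
    return 100.0 * at_or_below / len(peer_scores)

# Hypothetical internal norms on an Integrity item (1-5 scale).
# Leniency pushes most ratings high, so a seemingly strong 4.0
# can still sit near the bottom of the internal distribution.
peers = [4.6, 4.5, 4.8, 4.4, 4.7, 4.2, 3.8, 4.9, 4.3, 3.9]
print(percentile_rank(4.0, peers))  # 20.0 -> lower than 80% of peers
```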

Google is doing some very interesting research regarding leadership.  Go back and look at their work on leadership competencies that they publicized a couple of years ago. http://www.nytimes.com/2011/03/13/business/13hire.html?pagewanted=all

Beyond the research, Google is actually using its Big Data to create a culture, define the leaders it requires, and put some teeth into the theory, with upward feedback at the forefront.  Yet, at the end, he notes that all the measurement must be viewed through the lens of human insight.  The context is deeper than just the organization; it is also moderated by the current version of strategy, the team requirements, the job requirements, and the personal situation, all of which are in a constant state of flux.

©2013 David W. Bracken

Pay Attention to That Leader Behind the Curtain

One of my early posts was titled “Snakes in Suits” (http://dwbracken.wordpress.com/2010/10/12/snakes-in-suits/), which is also the title of a book about psychopaths in industry, specifically in leadership positions, and how skilled they are (because they are psychopaths) at escaping detection until the damage has been done.  The blog post highlighted a 360 process whose primary purpose is to identify the bottom tail of the performance distribution, essentially managing the quality of the leadership cadre by fixing or removing the poorest performers/behaviors. The metaphor is pulling back the curtain on the pretender/offender, as Toto does in “The Wizard of Oz” with the Wizard, who has escaped discovery for many years through cleverness and deception. Of course, he cries out, “Pay no attention to that man behind the curtain.”

I got to thinking about this topic recently (no, not because of the new Wizard of Oz movie) when I got an update from Bill Gentry at the Center for Creative Leadership regarding his evolving thinking and research on the topic of Integrity (see his YouTube video, http://www.youtube.com/watch?v=4d7yQHHUL-Q&list=UU9ulOx1rJK5FMlC5gbS91cQ&index=1).

One of the possible reasons that the “Snakes in Suits” book didn’t get more traction in our field is the fact that true psychopaths are relatively rare in our society (maybe 3-5% of the population by some estimates), though their “cousins” (bullies, jerks, add your own adjectives) are much more prevalent and all can cause substantial damage.  By expanding the definition of inappropriate behavior to include integrity (or lack thereof) as Dr. Gentry highlights, we now have a behavioral requirement that hopefully applies to every leader, and every employee for that matter.

One of Bill’s research articles uncovers a finding where integrity is identified as a critical trait for senior executives but much less so for mid-level executives. His hypothesis is that success in mid-management rests much more on the “what” that is achieved (e.g., revenues, sales, budgets) than the “how” (e.g., adherence to the values of the organization).  This de-emphasis on the “how” side of performance measurement causes organizations to promote leaders to the most senior levels without sufficient scrutiny of their character, resulting in some flawed leadership at the top of companies where integrity is essential (including some very high profile examples that Bill enumerates as part of his publications).

While I’m at it, I found another piece of research that relates to the significant impact that abusive management can have across large swaths of the organization. This article (cited below) suggests that employees partly attribute abusive supervision to negative valuation by the organization and, consequently, behave negatively toward and withhold positive contributions to it. In other words, employees may believe that abusive supervisors are condoned by the company, and then lose commitment and engagement to said organization.  And there is probably a lot of truth in that logic.

Organizations have a responsibility to identify and to address situations where leaders are behaving badly, and the research cited above strongly suggests that it is in the best interests of organizations to do so.  So how is that done?  Many organizations rely on anonymous processes that encourage employees to “speak up” without fear of retribution.  That is such a passive approach as to almost be amusing if it weren’t so important.

Of course, you know where I am going with this.  A 360 Degree Feedback process that is consistently administered across the organization AND has provisions for the results being shared with the organization (e.g., Human Resources) is about the only way I can think of to address this systemic problem.  This should be a critical aspect of Talent Management systems in organizations, as ubiquitous as performance management.  As the authors of “Snakes in Suits” point out, 360 feedback can be a powerful way to identify the “snakes” early in their careers. One problem is that these snakes are very skilled at avoiding detection by finding loopholes in inconsistently administered 360’s so that they don’t have to participate, or don’t have to share their feedback with anyone.

Who is that leader behind the curtain? It may be a wizard. It may be a jerk. It may be a hero to be honored.  But we won’t know unless we have our Toto to pull back the curtain, hopefully before it’s too late.

Reference

Shoss, M.K., Eisenberger, R., Restubog, S.L.D., and Zagenczyk, T.J. (2013). Blaming the organization for abusive supervision: The roles of perceived organizational support and supervisor’s organizational embodiment. Journal of Applied Psychology, 98(1), 158-168. doi: 10.1037/a0030687

©2013 David W. Bracken

The Debate is Over

I have recently had the opportunity to read two large benchmarking reports that relate to talent management, leadership development and, specifically, how 360 Feedback is being used to support those disciplines.

The first is the U.S. Office of Personnel Management “Executive Development Best Practices Guide” (November 2012), which includes both a compilation of best practices across 17 major organizations and a survey of Federal Government members of the Senior Executive Service, which was in turn a follow-up to a similar survey in 2008.

The second report was created by The 3D Group as the third benchmark study specifically related to practices in 360 Degree Feedback. This year’s study differed from the past versions by being conducted online, which had the immediate benefit of expanding the sample to over 200 organizations. This change in methodology, sample and content makes interpretation of trend scores a little dicey, but the results are compelling nonetheless. Thank you to Dale Rose and his team at 3D Group for sharing the report with me once again.

These studies have many interesting results that relate to the practice of 360 Feedback, and I want to grab the low hanging fruit for the purposes of this blog entry.

As the title teases, the debate is over, with the “debate” being whether 360 Feedback can and should be used for decision-making purposes.  Let me once again acknowledge that 1) all 360 Feedback should be used for leadership development, 2) some 360 processes are solely for leadership development, often one leader at a time, and 3) these development-only 360 processes should not be used for decision making.

But these studies demonstrate that 360 Feedback continues to be used for decision making, at a growing rate, and evidently successfully since their use is projected to increase (more on this later).  The 3D report goes to some length to try to pin down what “decision making” really means so that we can guide respondents in answering how their 360 data are used.  For example, is leadership development training a “decision?” I would say yes since some people get it and some don’t based on 360’s, and that affects both the individual’s career as well as how the organization uses its resources (e.g., people, time and dollars).

But let’s make it clearer and look at just a few of the reported uses for 360 results.  In the 3D Group report, one of the most striking numbers is the 47% of organizations that indicate they use 360’s for performance management (despite only 31% saying in another question that they use it for personnel decisions).  It may well be that “performance management” use means integrating 360 results into the development planning aspect of a PM process, which is a great way to create accountability without overdoing the measurement focus. This type of linkage of development to performance plans is also reinforced as a best practice in the highlights of the OPM study.

In the OPM study, 56% of the surveyed leaders report participating in a 360 process (up from 41% in 2008), though the purpose is not specified.  360’s are positioned as one of several assessment tools available to these leaders, and an integrated assessment strategy is encouraged in the report.

Two other messages that come out of both of these studies are 1) use of coaches (and/or managers as coaches) for post assessment follow up continues to gain momentum as a key factor in success, and 2) the 360 processes must be linked to organizational objectives, strategies and values in order to have impact and sustainability.

Finally, in the 3D study, 73% of the organizations report that their use of 360’s in the next year will either continue at the same level or increase.

These studies are extremely helpful in gauging the trends within the area of leadership development and assessment, and, to this observer, it appears that some of the research that has promoted certain best practices, such as follow up and coaching, is being considered in the design and implementation of 360 feedback processes.  But it is most heartening to see some indications that organizations are also realizing the value that 360 data can bring to talent management and the decisions about leaders that are inherent in managing that critical resource.

It is no longer useful (if it ever was) to debate whether 360 feedback can be used successfully to inform and improve personnel decisions. It has and it does. It’s not necessarily easy to do right, but the investment is worth the benefits.

©2013 David W. Bracken

The Manager-Coach

A recent posting on the 360 Degree Feedback group in LinkedIn posed this question to the group:

I have been interested in more fine-grained detail about what it is that manager-coaches actually do that leads to perceptions on the part of the coachee that their managers are effective and supportive coaches. I see a lot of speculation and ‘armchair’ theorizing, but I cannot find specific, rigorous empirical research. Have I overlooked some references?

I have been similarly interested in this topic, largely due to my bias that 360 Feedback is most effective when the manager (boss) is involved in the use of the results, contrary to some practitioners who advise against it.

To that end, the work of Dr. Brodie Gregory caught my eye, particularly the instrument she developed and researched as part of her doctoral dissertation under the direction of Dr. Paul Levy at the University of Akron. Brodie has made a major contribution in identifying four constructs that she believes define the effective manager-coach:

  • Genuineness of the Relationship
  • Effective Communication
  • Comfort with the Relationship
  • Facilitating Development

Dr. Gregory’s research, though, may not fully answer the LinkedIn questioner since she doesn’t as yet have performance data on managers and the relationship to effective coaching.

I am posting two publications by Dr. Gregory in order to provide easy access to those of you who are interested in this topic:

Employee coaching relationships: enhancing construct clarity and measurement

It’s Not Me, It’s You: A Multilevel Examination of Variables That Impact Employee Coaching Relationships

I have also developed a workshop called The ManagerCoach©, designed to be delivered for organizations that wish to make their managers better coaches. The workshop integrates a feedback instrument that includes, with Dr. Gregory’s permission (the instrument is copyrighted), the item content she has developed along with some other constructs.

For your information, I will be giving a webinar that describes the concept of The ManagerCoach and introduces the content of the workshop. I will next deliver the webinar on September 6 at 12:30 EDT. Let me know if you would like to register (free), or check at http://www.orgvitality.com.

I consistently see, when organizations have the nerve to ask, that the lowest scores managers receive on both surveys and 360 feedback are often related to employee development and/or, more specifically, coaching abilities. This is a fixable and measurable area of leadership development.

©2012 David W. Bracken

It’s Human Nature

One question that has been at the core of best practices in 360 Feedback since its inception relates to the conditions that are most likely to create sustained behavior change (at least for those of us who believe that behavior change is the ultimate goal).  Many of us believe that behavior change is not a question of ability to change but primarily one of motivation. Motivation often begins with creating awareness that some change is necessary, moves to accepting the feedback, and then on to implementing the change.

One of the more interesting examples of creating behavior change began when seat belts were included as standard equipment in all passenger vehicles in 1964.  I am old enough to remember when that happened and started driving not long thereafter. So using a seat belt has been part of the driver education routine since I began driving and has never been a big deal for me.

The reasons for noncompliance with seatbelt usage are as varied as human nature. Some people see it as a civil rights issue, as in, “No one is going to tell me what to do.” There is also the notion that it protects against a low probability event, as in “It won’t happen to me. I’m a careful driver.” Living in Nebraska for a while, I learned that people growing up on a farm don’t “have the time” to buckle and unbuckle seatbelts in their trucks when they are learning to drive, so they don’t get into that habit. (I also found, to my annoyance, that they also never learned how to use turn signals.)

I remember back in the ‘60’s reading about a woman who wrote a car manufacturer to ask that they make the seat belts thinner because they were uncomfortable to sit on.  Really.

Some people have internal motivation to comply, which can be due to multiple factors such as personality, demographics, training, and norms (e.g., parental modeling). This is also true when we are trying to create behavior change in leaders, but we will see that these factors are not the primary determinants of compliance.

In thinking about seatbelt usage as a challenge in creating behavior change, I found a study from 2008 by the Department of Transportation titled “How States Achieve High Seat Belt Use Rates” (DOT HS 810 962).  (Note: This is a 170-page report with lots of tables and statistical analyses, and if any of you geeks want a copy, let me know.)

The major finding of this in-depth study states:

The statistical analyses suggest that the most important difference between the high and low seat belt use States is enforcement, not demographics or funds spent on media.

One chart among the many in this report, “Seatbelt Usage in US,” seems to capture the message fairly well and support their assertion.  It plots seat belt usage by state, where we see a large spread ranging from just over 60% (Mississippi) to about 95% (Hawaii).  It also shows whether each state has a primary seatbelt law (where failing to wear a seatbelt is a violation by itself) or a secondary law (where nonuse can only be cited if the driver is stopped for another reason). Based on this chart alone, one might argue about causality, but the study systematically shows that these data, along with others relating to law enforcement practices, are the best predictors of seatbelt usage.

One way of looking at this study is to view law enforcement as a form of external accountability, i.e., having consequences for your actions (or lack thereof). The primary versus secondary law factor largely shifts the probabilities of being caught, with the apparent desired effect on seatbelt usage.

So, back to 360 Feedback. I always have been, and continue to be, mystified as to how some implementers of 360 feedback processes believe that sustainable behavior change is going to occur in the vast majority of leaders without some form of external accountability. Processes that are supposedly “development only” (i.e., have no consequences) should not be expected to create change. In those processes, participants are often not required to, or even discouraged from, sharing their results with others, especially their manager. I have called these processes “parlor games” in the past because they are kind of fun, are all about “me,” and have no consequences.

How can we create external accountability in 360 processes?  I believe that the most constructive way to create both motivation and alignment (ensuring behavior change is in synch with organizational needs/values) is to integrate the 360 feedback into Human Resource processes, such as leadership development, succession planning, high potential programs, staffing decisions, and performance management.  All these uses involve some form of decision making that affects the individual (and the organization), which puts pressure on the 360 data to be reliable and valid. Note also that I include leadership development in this list as a form of decision making because it does affect the employee’s career as well as the investment (or not) of organization resources.

But external accountability can be created by other, more subtle ways as well. We all know from our kept and (more typically) unkept New Year’s resolutions about the power of going public with our commitments to change. Sharing your results and actions with your manager has many benefits, but can cause real and perceived unfairness if some people are doing it and others not. Discussing your results with your raters and engaging them in your development plans has multiple benefits.

Another source of accountability can (and should) come from your coach, if you are fortunate enough to have one.  The Smither et al. (2005) meta-analysis found that the presence of a coach is one determinant of whether behavior change is observed; I have always believed that this is due to the accountability coaches create by requiring the coachee to state specifically what he/she is going to do and then checking back that the coachee has followed through on that commitment.

Over and over, we see evidence that, when human beings are not held accountable, more often than not they will stray from what is in their best interests and/or the interests of the group (organization, country, etc.).  Whether it’s irrational (ignoring facts) or overly rational (finding ways to “get around” the system), we should not expect that people will do what is needed, and we should not rely on our friends, neighbors, peers or leaders to always do what is right if there are no consequences for inaction or bad behavior.

©2012 David W. Bracken

I Need an Exorcism

Being the 360 Feedback nerd I am, I love it when some new folks get active on the LinkedIn 360 discussion group. One discussion emerged recently that caught my eye, and I have been watching it with interest, mulling over the perspectives and knowing I had to get my two cents in at some point.

Here is the question:

How many raters are too many raters?

We normally recommend 20 as a soft limit. With too many, we find the feedback gets diluted and you have too many people that don’t work closely enough with you to provide good feedback. I’d be curious if there are any suggestions for exceptions.

This is an important decision amongst the dozens that need to be made in the course of designing and implementing 360 processes. The question motivated me to pull out The Handbook of Multisource Feedback and find the excellent chapter on this topic by James Farr and Daniel Newman (2001), which reminded me of the complexity of this decision. Let me also reiterate that this is another decision that has different implications for “N=1” 360 processes (i.e., feedback for a single leader on an ad hoc basis) versus “N>1” systems (i.e., feedback for a group of participants); this blog and discussion is focused on the latter.

Usually people argue that too many surveys will cause disruption in the organization and unnecessary “soft costs” (i.e., time). The author of this question poses a different argument for limiting the rater population, which he calls “dilution” due to inviting unknowledgeable raters.  For me, one of the givens of any 360 system is that the raters must have sufficient experience with the ratee to give reliable feedback. One operationalization of that concept is to require that an employee must have worked with/for the ratee for some minimum amount of time (e.g., 6 months or even 1 year), even if he/she is a direct report. Having the ratee select the raters (with manager approval) is another practice that is designed to help get quality raters that then also facilitate the acceptance of the feedback by the ratee. So “dilution” due to unfamiliarity can be combated with that requirement, at least to some extent.
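
As a concrete illustration of that operationalization, here is a minimal eligibility check. The field names and the 6-month threshold are hypothetical; each process would set its own rules:

```python
from datetime import date

MIN_MONTHS_TOGETHER = 6  # hypothetical threshold; some processes use 12

def months_together(start: date, as_of: date) -> int:
    """Calendar months between start and as_of (day of month ignored)."""
    return (as_of.year - start.year) * 12 + (as_of.month - start.month)

def is_eligible_rater(relationship_start: date, as_of: date,
                      approved_by_manager: bool) -> bool:
    """Require both sufficient shared tenure and manager approval of the
    ratee's nomination, the two screens described above."""
    return (months_together(relationship_start, as_of) >= MIN_MONTHS_TOGETHER
            and approved_by_manager)

# A direct report with only 4 months of shared tenure is screened out.
print(is_eligible_rater(date(2012, 3, 1), date(2012, 7, 1), True))  # False
```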

One respondent to this question offers this perspective:

The number of raters depends on the number of people that deal with this individual through important business interactions and can pass valuable feedback based on real experience. There is no one set answer.

I agree with that statement, though while there is no one set answer, some answers are better than others (see below).

In contrast, someone else states:

We have found effective to use minimum 3 and maximum 5 for any one rater category.

The minimum of 3 is standard practice these days as a “necessary but not sufficient” answer to the number of raters. As for the maximum of 5, this is also not uncommon but seems to ignore the science that supports larger numbers.  When clients seek my advice on this question of the number of raters, I am swayed by the research published by Greguras and Robie (1998), who researched the question of the reliability of various rater sources (i.e., subordinates, peers and managers). They came to the conclusion that different rater groups provide differing levels of reliable feedback, probably because of the number of “agendas” lurking within the various types of raters. The least reliable are the subordinates, followed by the peers, and then the managers, the most reliable rater group.

One way to address rater unreliability is to increase the size of the group (another might be rater training, for example). Usually there is only one manager and best practice is to invite all direct reports (who meet the tenure guidelines), so the main question is the number of peers. This research suggests that 7-9 is where we need to aim, noting also that that is the number of returns needed, so inviting more is probably a good idea if you expect less than a 100% response rate.
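
To see why group size matters, here is a back-of-envelope sketch using the standard Spearman-Brown prophecy formula. This is my illustration, not the analysis Greguras and Robie actually ran, and the single-rater reliability values are hypothetical:

```python
def spearman_brown(single_rater_r: float, k: int) -> float:
    """Reliability of the average of k raters, given single-rater reliability."""
    return k * single_rater_r / (1 + (k - 1) * single_rater_r)

# Hypothetical single-rater reliabilities, ordered as the research suggests:
# subordinates < peers < manager.
for source, r in [("subordinates", 0.30), ("peers", 0.37), ("manager", 0.50)]:
    print(source, [round(spearman_brown(r, k), 2) for k in (3, 5, 9)])
# subordinates [0.56, 0.68, 0.79]  <- approaches 0.8 only with ~9 raters
# peers        [0.64, 0.75, 0.84]
# manager      [0.75, 0.83, 0.9]
```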

Another potential rater group is external customers. Recently I was invited to participate in a forum convened by the American Board of Internal Medicine (ABIM) to discuss the use of multisource feedback in physician recertification processes. ABIM is one of 24 member Boards of the American Board of Medical Specialties (ABMS), which has directed that some sort of multisource (or 360) feedback be integrated into recertification.

The participants in this forum included many knowledgeable, interesting researchers on the use of 360 in the context of medicine (a whole new world for me, which was very energizing). I was invited to represent the industry (“outside”) perspective. One of the presenters spoke to the challenge of collecting input from their customers (i.e., patients), a requirement for them. She offered up 25 as the number of patients needed to create a reliable result, using very similar rationale as Greguras and Robie regarding the many individual agendas of raters.

Back to LinkedIn, there was then this opinion:

I agree that having too many raters in any one rater group does dilute the feedback and make it much harder to see subtleties. There is also a risk that too many raters may ‘drown out’ key feedback.

This is when my head started spinning like Linda Blair in The Exorcist.  This perspective is SO contrary to my 25 years of experience in this field that I had to prevent myself from discounting it as my head continued to rotate.  I have often said that a good day for me includes times when I have said, “Gee, I have never thought of (insert topic) in that way.” I really do like hearing new and different views, but it’s difficult when they challenge some foundational belief.

For me, maybe THE most central tenet of 360 Feedback is the reliance on rater anonymity in the expectation (or hope) that it will promote honesty. This goes back to the first book on 360 Feedback by Edwards and Ewen (1996), where 360’s were designed with this need for anonymity at the forefront. That is why we use an artificial form of communication, the anonymous questionnaire, and usually don’t report on groups of fewer than 3. We know that violations of the anonymity promise result in less honesty and reduced response rates, with the grapevine (and/or social media) spreading the violated trust throughout the organization.

The notion that too many raters will “drown out key feedback” seems to me to be a total reversal of this philosophy of protecting anonymity. It also seems to place an incredible amount of emphasis on the report itself where the numbers become the sole source of insight. Other blog entries of mine have proposed that the report is just the conversation starter, and that true insight is achieved in the post-survey discussions with raters and manager.

I recall that in past articles (see Bracken, Timmreck, Fleenor and Summers, 2001) we made the point that every decision requires what should be a conscious value judgment as to who the most important “customer” is for that decision, whether it be the rater, ratee, or the organization. For example, limiting the number of raters to a small number (e.g., 5 per group, or not all Direct Reports) indicates that the raters and organization are more important than the ratee, that is, that we believe it is more important to minimize the time required of raters than it is to provide reliable feedback for the ratee. In most cases, my values cause me to lobby on behalf of the ratee as the most important customer in design decisions.  The time that I will rally to the defense of the rater as the most important customer in a decision is when anonymity (again, real or perceived) is threatened. And I see these arguments for creating more “insight” by keeping rater groups small or subdivided as misguided IF these practitioners share the common belief that anonymity is critical.

Finally (yes, it’s time to wrap this up), Larry Cipolla, an extremely experienced and respected practitioner in this field, offers some sage advice in his comments, including a warning against increasing rater group size by combining rater groups; as he says, that is pure folly. But I do take issue with one of his practices:

We recommend including all 10 raters (or whatever the n-count is) and have the participant create two groups–Direct Reports A and Direct Reports B.

This seems to me to be a variation on the theme of breaking out groups and reducing group size with the risk of creating suspicions and problems with perceived (or real) anonymity. Larry, you need to show that doing this kind of subdividing creates higher reliability in a statistical sense that can overcome the threats to reliability created by using smaller N’s.

Someone please stop my head from spinning. Do I just need to get over this fixation with anonymity in 360 processes?

References

Bracken, D.W., Timmreck, C.W., and Church, A.H. (2001). The Handbook of Multisource Feedback. San Francisco: Jossey-Bass.

Bracken, D.W., Timmreck, C.W., Fleenor, J.W., and Summers, L. (2001). 360 feedback from another angle. Human Resource Management, 40(1), 3-20.

Edwards, M. R., and Ewen, A.J.  (1996). 360° Feedback: The powerful new model for employee assessment and performance improvement. New York: AMACOM.

Farr, J.L., and Newman, D.A. (2001). Rater selection: Sources of feedback. In Bracken, D.W., Timmreck, C.W., and Church, A.H. (eds.), The Handbook of Multisource Feedback. San Francisco: Jossey-Bass.

Greguras, G.J., and Robie, C. (1998).  A new look at within-source interrater reliability of 360-degree feedback ratings. Journal of Applied Psychology, 83, 960-968.

©2012 David W. Bracken

Full Stops, Neutrinos and Rocket Science

I don’t know why I feel compelled to respond to what I see are unreasonable positions (primarily in LinkedIn discussions). But I do, and this blog gives me a vehicle for doing so without taking up a disproportionate amount of air time on that forum.

So what got me going this time? A LinkedIn discussion (that I started on the topic of 360 validity) got diverted into the topic of “proper” use of 360 feedback (development vs decision making).  The particular comment that got me going was, “I believe these assessments should be used for development – full stop.”  (Virtually 100% of 360 processes are used for development, but the context indicates that he meant “development only.”) Having lived and worked in London for a while, I realized (or realised) that the “full stop” has the same meaning as “period,” implying end of sentence and, with emphasis, no more is worth saying.  By the way, I am using this person only as an example of the many, many individuals who have expressed similar dogmatic views on this topic.

There are probably a few things that it is appropriate to put a “full stop” on. That would be an interesting blog for someone, e.g., would we include the Ten Commandments? “Thou shalt not kill. Full stop.”  Hmmm… but then we have Christians who believe in capital punishment, so maybe it’s only a partial stop (or pause)?  Like I said, I will let someone else take that on.

Are the physical sciences a place for “full stops”?  Like, “The world is flat. Full stop.”  “The Sun revolves around the Earth. Full stop.”  Just this last week, we were presented with the possibility that another supposedly immutable law is under attack, i.e., “Nothing can go faster than the speed of light. Full stop.”  Now we have European scientists who have observed neutrinos apparently traveling faster than the speed of light and are searching for ways to explain and confirm it. If found to be true, it would challenge many of the basics of physics, opening the door to time travel, for example.  The fact that some scientists are challenging the “full stop” nature of the Theory of Relativity is also fascinating, if only because they are open to exploring the supposedly impossible. And, by the way, they are begging for others to challenge and/or replicate their findings.

I firmly believe that the social sciences have no place for “full stops.”  To me, “full stop” means ceasing to explore and learn. It seems to indicate a lack of openness to considering new information or different perspectives.

I suspect there are many practitioners in the “hard” sciences who question whether what we do is a “science” at all. (I think I am running out of my quota of quotation marks.)  Perhaps they see our work on understanding human behavior as a quest with no hope of ever having answers. That’s what I like about psychology. We will never fully know how to explain human behavior, and that’s a good thing. If we could explain it, then we probably could control it, and I think that is a scary thought. BUT we do try to improve our understanding and increase the probabilities of predicting what people will do. That is one of the basic goals of industrial/organizational psychology.

(I have been known to contend that what we do is harder than rocket science because there are no answers to what we do, only probabilities.  The truth is that even the hard sciences have fewer “full stops” than they would like. I just finished reading a book about the Apollo space program, Rocket Men, and it is very interesting to learn how many supposed “full stops” were bashed (e.g., humans can’t live in weightlessness; the moon’s crust will collapse if we try to land on it), how much uncertainty there was, and how amazing the accomplishment really was.  I also learned that one of the reasons the astronauts’ visors were mirrored was so that aliens couldn’t see their faces. Seriously.)

Increasing probabilities for predicting and influencing employee behavior requires that we also explore options.  I can’t see how it is productive to assert that we know the answer to anything, and that we shouldn’t consider options that help us serve our clients, i.e., organizations, more effectively.

On top of all that, the most recent 3D Group benchmark study indicates that about one third of organizations DO use 360 data for some sort of administrative purpose, and that almost certainly understates the real numbers. What do we tell those organizations? That they should cease doing so since our collective wisdom says that there is no way they can actually be succeeding? That we cannot (or should not) learn from what they are doing to help their organizations make better decisions about their leaders? That a few opinions should outweigh these experiences?

I don’t get it. No stop.

©2011 David W. Bracken

I Don’t Care

Last week I led a workshop for the Personnel Testing Council of Metropolitan Washington that was a modified reprise of the workshop Carol Jenkins and I did at the Society for Industrial and Organizational Psychology in April. I really enjoy these workshops and the opportunity to interact face-to-face with practitioners in the field of 360 degree feedback.

I do wish that participants in these workshops would engage me in a little more debate, and, to that end, I sometimes throw out comments in the hope of raising some hackles. For example, at the PTCMW session, I twice said “I don’t care” regarding two topics that I will explain below. Unfortunately, no one took the bait in the workshop, but maybe I can lure some of you into the discussion using this blog as a vehicle.

So here are the two areas where a ton of research is being done but where, as a practitioner, I don’t care:

1)      The personality of the participant. I don’t care. Everyone seems to want to know how the personality of the participant is going to affect his/her reaction to the feedback.  In past blogs, I have fessed up to being a behaviorist, and in that respect all I really “care” about is getting the person to accept the feedback and to change, whether they want to or not. In my last blog, I used the examples of people’s apparent reluctance to do simple things like apologize for a mistake and/or say “thank you.”  Behaviorally, those are pretty easy things to do, but evidently some internal force (e.g., personality) makes them difficult.  In fact, those internal forces vary greatly across people, and I find chasing them down not to be a very fruitful use of time for the participant or for myself. If the organization and feedback tell you that you need to modify your behavior, just do it!

Sometimes what is going on inside the person’s head is more an issue of awareness than of personality, and awareness is something we can change through 360’s. Occasionally the journey from awareness to acceptance is difficult due to personality factors. It is our job to design the 360 process to make it difficult to not accept the feedback, including ensuring that raters are knowledgeable, reliable, motivated and in sufficient quantity.

On a practical level, when many 360 processes involve dozens or hundreds of participants, it becomes very challenging to integrate personality assessment, for example, into the mix. Not to say it can’t be done. Carol Jenkins does some of that in her practice with groups of feedback recipients. But part of my “I don’t care” mentality has come from a need to get large numbers of people to use the feedback productively without being able to “get inside their head.”

2)      The gap between self-ratings and “other” ratings. I don’t care. As a psychologist, I do find it interesting to see how ratees approach self-ratings, especially the first time around. And they usually change their self-ratings once they see how they are perceived by others. But I am increasingly convinced that self-ratings are more a reflection of the ratee’s agenda than any real self-assessment. (All raters are susceptible to this kind of error, using their ratings to advance an agenda.) One memorable instance for me was in working with a Chief Legal Officer who gave himself all 5’s and stated, “Do you think I would be crazy enough to actually document less than optimal performance?”

I DO think that participants should complete the rating process, but for other reasons. One is to ensure that they are familiar with the content and with how they are expected to behave as defined by the organization. Second, it is some evidence of at least minimal commitment to the process.

In general, I am not very interested in why a ratee behaves in a certain way if the behavior needs to change. It is highly unlikely that we can change the “why” part of behavior (i.e., personality) other than to affect the ratee’s awareness of how he/she is perceived and the importance of accepting that feedback on the way to behaving differently. What is going on in the person’s head is fun for psychologists to research, but doesn’t necessarily help achieve sustainable behavior change.

©2011 David W. Bracken

On the Road… and Web and Print

I have a few events coming up in the next 3 weeks or so that I would like to bring to your collective attention in case you have some interest.  One is free, two are not (though I receive no remuneration). I also have an article out that I co-authored on 360 feedback.

In chronological order, on May 25 Allan Church, VP Global Talent Development at PepsiCo, and I will lead a seminar titled, “Integrating 360 & Upward Feedback into Performance and Rewards Systems” at the 2011 World at Work Conference in San Diego (www.worldatwork.org/sandiego2011).  I will be offering some general observations on the appropriateness, challenges, and potential benefits of using 360 Feedback for decision making, such as performance management. The audience will be very interested in Allan’s descriptions of his experiences with past and current processes that have used 360 and Upward Feedback for both developmental and decision making purposes.

On June 8, I am looking forward to conducting a half day workshop for the Personnel Testing Council of Metropolitan Washington (PTCMW) in Arlington, VA, titled “360-Degree Assessments: Make the Right Decisions and Create Sustainable Change” (contact Training.PTCMW@GMAIL.COM or go to WWW.PTCMW.ORG). This workshop is open to the public and costs $50.  I will be building from the workshop Carol Jenkins and I conducted at The Society for Industrial and Organizational Psychology. That said, the word “assessments” in the title is a foreshadowing of a greater emphasis on the use of 360 Feedback in a decision making context and an audience that is expected to have great interest in the questions of validity and measurement.

On the following day, June 9 (at 3:30 PM EDT), I will be part of an online virtual conference organized by the Institute of Human Resources and hr.com on performance management. My webinar is titled, “Using 360 Feedback in Performance Management: The Debate and Decisions,” where the “decisions” part has multiple meanings. Given the earlier two sessions I described, it should be clear that I am a proponent of using 360/Upward Feedback for decision making under the right conditions. The other take on “decisions” is the multitude of decisions that are required to create those “right conditions” in the design and implementation of a multisource process.

On that note, I am proud to say that Dale Rose and I have a new article in the Journal of Business and Psychology (June) titled, “When does 360-degree feedback create behavior change? And how would we know it when it does?” Our effort is largely an attempt to identify the critical design factors in creating 360 processes and the associated research needs.

This article is part of a special research issue (http://springerlink.com/content/w44772764751/) of JBP and you will have to pay for a copy unless you have a subscription. As a tease, here is the abstract:

360-degree feedback has great promise as a method for creating both behavior change and organization change, yet research demonstrating results to this effect has been mixed. The mixed results are, at least in part, because of the high degree of variation in design features across 360 processes. We identify four characteristics of a 360 process that are required to successfully create organization change, (1) relevant content, (2) credible data, (3) accountability, and (4) census participation, and cite the important research issues in each of those areas relative to design decisions. In addition, when behavior change is created, the data must be sufficiently reliable to detect it, and we highlight current and needed research in the measurement domain, using response scale research as a prime example.

Hope something here catches your eye/ear!

©2011 David W. Bracken

What is the ROI for 360’s?

Tracy Maylett recently started a LinkedIn discussion in the 360 Feedback Surveys group by asking, “Can you show ROI on 360-degree feedback processes?” To date, no one has offered up any examples, and this causes me to reflect on this topic. It will also be part of our (Carol Jenkins and myself) discussion at the Society for Industrial and Organizational Psychology (SIOP) Pre-Conference Workshop on 360 Feedback (April 13 in Chicago; see www.siop.org).

Here are some thoughts on the challenges in demonstrating ROI with 360 processes:

1)      It is almost impossible to assess the value of behavior change. Whether we use actual measurements (e.g., test-retest) or just observer estimations of ratee change, assigning a dollar value is extremely difficult. My experience is that, no matter what methodology you use, the resulting estimates are often so large that they cause consumers (e.g., senior management) to question and discount the findings.

2)      The targets for change are limited, by design. A commonly accepted best practice for 360’s is to guide participants in using the data to focus on 2-3 behaviors/competencies. If some overall measure of behavior change is used (e.g., the average of all items in the model/questionnaire), then we should expect negligible results since the vast majority of behaviors have not been addressed in the action planning (development) process. (A quick arithmetic sketch of this dilution follows this list.)

3)      The diversity of behaviors/competencies will mean that they have differential ease of change (e.g., short vs. long term change) and different value to the organization. For example, what might be the ROI for significant change (positive or negative) in ethical behavior compared to communication? Each is very important but with very different implications for measuring ROI.

4)      Measurable change is dependent on the design characteristics of each 360 process.  I have suggested in earlier blogs that there are design decisions that are potentially so powerful as to promote or negate behavior change. One source for that statement is the article by Goldsmith and Morgan called “Leadership is a contact sport,” which can be found on www.marshallgoldsmith.com.  In this article (which I have also mentioned before), they share results from hundreds of global companies and thousands of leaders that strongly support the conclusion that follow-up with raters may be the single best predictor of observed behavior change.
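
Here is the dilution arithmetic from point 2 made concrete, with hypothetical numbers:

```python
# Toy illustration: change concentrated in a few targeted behaviors
# nearly vanishes when averaged across the whole questionnaire.
ITEMS = 30               # hypothetical questionnaire length
TARGETED = 3             # behaviors addressed in the development plan
GAIN_PER_TARGET = 0.5    # hypothetical rating gain on each targeted item

overall_change = TARGETED * GAIN_PER_TARGET / ITEMS
print(f"Average change on targeted items: {GAIN_PER_TARGET:.2f}")  # 0.50
print(f"Average change across all items:  {overall_change:.2f}")   # 0.05
```

A half-point gain on the targeted behaviors shows up as a 0.05 shift in the overall average, which is easy to dismiss as noise.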

Dale Rose and I have an article in press with the Journal of Business and Psychology titled, “When does 360-degree feedback create behavior change? And how would we know it when it does?” One of our major objectives in that article is to challenge blanket statements about the effectiveness of 360 processes since there are so many factors that directly impact the power of the system to create the desired outcomes. The article covers some of those design factors and the research (or lack thereof) associated with them.

If anyone says, for example, that a 360 process (or a cluster of them, such as in a meta-analysis) shows minimal or no impact, my first question would be, “Were the participants required to follow up with their raters?” I would also ask about things like the reliability of the instrument, training of raters, and accountability as a starter list of factors that can undermine the ability to create and/or measure behavior change.

Tracy’s question regarding ROI is an excellent one, and we should be held accountable for producing results. That said, we should not be held accountable for ROI when the process has fatal flaws in design that almost certainly will result in failure and even negative ROI.

©2011 David W. Bracken
