Strategic 360s

360s for more than just development


Strategic 360 Forum


I am organizing a one-day event in New York City on July 24 for organizations that are using 360 Feedback processes for purposes beyond just leadership development. There is no cost. Any organization wishing to be considered for attendance should contact me. The representative must be a senior leader with responsibility for both implementation of the process and influence over its strategic use in the company.

Strategic 360 Forum

July 24, 2013

Description

One-day meeting, coordinated by David Bracken (OrgVitality), of organizations using 360 Assessments for strategic purposes, including support of human resource processes (e.g., talent management, staffing, performance management, succession planning, high potential programs). Attendees will be senior leaders with responsibilities for both process implementation and strategic applications. Larger organizations (5000+ employees) will be given priority consideration for inclusion.

If there is sufficient interest and support from the participating companies, the Forum would continue to meet on a semi-annual basis.

Location and Date:  July 24 at KPMG, 345 Park Avenue, New York, NY 10154-0102

Tentative Participant Organizations:  Guardian, Bank of America, GlaxoSmithKline, KPMG, Starwood, PepsiCo, Federal Reserve NY, JP Morgan Chase

Benefits for Participants

  • Learn about best practices in the use of 360 Assessments in progressive organizations
  • Discover ways that 360 Assessments support human resource initiatives, including problems and solutions
  • Build personal networks for future collaboration
  • Create opportunities for future professional contributions, including the 2014 SIOP Conference

NOTE: The specific process and agenda will evolve as the organizers interact with the participants and discover their expectations and ways that they can best contribute to the event.

Cost

There is no cost for participants beyond their active contribution. Lunch is provided.

Content

The core content will consist of brief presentations by select attendees. For attendees interested in participating in a submission for the 2014 SIOP Conference, we will use the format and content of their presentations to shape the proposal.

Presentations will be followed by a group discussion where questions can be asked of the presenter and alternative viewpoints shared.

Depending on the programs and interests of the participating organizations, we will explore selected themes of high relevance to the use of 360 Feedback. These topics may include:

  • Performance Management
  • Succession Planning
  • High Potential Identification and Development
  • Staffing/Promotions
  • Coaching Programs
  • Sustainability

The Forum meeting will also include a presentation by Bracken and Church based on their People & Strategy (HRPS) article on the use of 360 Assessments in support of (or replacing) performance management processes, followed by discussion.

Outputs

1) As noted above, the content will be the basis for a proposal for inclusion in the 2014 SIOP Conference.

2) The presentations and discussions will be organized and reported to participants.

Contact Information

Interested organizations should email me with a brief description of the 360 process(es) they wish to highlight/share (purpose, size, longevity, innovations), along with the representative’s role and responsibilities.

David W. Bracken, Ph.D.

Vice President, Leadership Development and Assessment

OrgVitality, LLC

402-617-5152 (cell)

david.bracken@orgvitality.com

Pay Attention to That Leader Behind the Curtain


One of my early posts was titled “Snakes in Suits” (http://dwbracken.wordpress.com/2010/10/12/snakes-in-suits/), which is also the title of a book about psychopaths in industry, specifically in leadership positions, and how skilled they are (because they are psychopaths) at escaping detection until the damage has been done. The blog post highlighted a 360 process whose primary purpose is to identify the bottom tail of the performance distribution, essentially managing the quality of the leadership cadre by fixing or removing the poorest performers/behaviors. The metaphor is pulling back the curtain on the pretender/offender, as Toto does in “The Wizard of Oz,” exposing a man who has escaped discovery for many years through cleverness and deception. Of course, he cries out, “Pay no attention to that man behind the curtain.”

I got to thinking about this topic recently (no, not because of the new Wizard of Oz movie) when I got an update from Bill Gentry at the Center for Creative Leadership regarding his evolving thinking and research on the topic of Integrity (see his YouTube video, http://www.youtube.com/watch?v=4d7yQHHUL-Q&list=UU9ulOx1rJK5FMlC5gbS91cQ&index=1).

One of the possible reasons that the “Snakes in Suits” book didn’t get more traction in our field is the fact that true psychopaths are relatively rare in our society (maybe 3-5% of the population by some estimates), though their “cousins” (bullies, jerks, add your own adjectives) are much more prevalent and all can cause substantial damage.  By expanding the definition of inappropriate behavior to include integrity (or lack thereof) as Dr. Gentry highlights, we now have a behavioral requirement that hopefully applies to every leader, and every employee for that matter.

One of Bill’s research articles uncovers a finding where integrity is identified as a critical trait for senior executives but much less so for mid-level executives. His hypothesis is that success in mid-management is based much more on the “what” that is achieved (e.g., revenues, sales, budgets) than on the “how” (e.g., adherence to the values of the organization). This de-emphasis on the “how” side of performance measurement leads organizations to promote leaders to the most senior levels without sufficient scrutiny of their character, resulting in some flawed leadership at the top of companies where integrity is essential (including some very high profile examples that Bill enumerates in his publications).

While I’m at it, I found another piece of research that relates to the significant impact that abusive management can have across large swaths of the organization. This article (cited below) suggests that employees partly attribute abusive supervision to negative valuation by the organization and, consequently, behave negatively toward it and withhold positive contributions. In other words, employees may believe that abusive supervisors are condoned by the company, and then lose commitment to and engagement with said organization. And there is probably a lot of truth in that logic.

Organizations have a responsibility to identify and address situations where leaders are behaving badly, and the research cited above strongly suggests that it is in the best interests of organizations to do so. So how is that done? Many organizations rely on anonymous processes that encourage employees to “speak up” without fear of retribution. That approach is so passive as to be almost amusing if the stakes weren’t so high.

Of course, you know where I am going with this. A 360 Degree Feedback process that is consistently administered across the organization AND has provisions for the results being shared with the organization (e.g., Human Resources) is about the only way I can think of that this systemic problem can be addressed. This should be a critical aspect of Talent Management systems in organizations, as common and ubiquitous as performance management. As the authors of “Snakes in Suits” point out, 360 feedback can be a powerful way to identify the “snakes” early in their careers. One problem is that these snakes are very skilled at avoiding detection, finding loopholes in inconsistently administered 360’s so that they don’t have to participate or don’t have to share their feedback with anyone.

Who is that leader behind the curtain? It may be a wizard. It may be a jerk. It may be a hero to be honored.  But we won’t know unless we have our Toto to pull back the curtain, hopefully before it’s too late.

Reference

Shoss, M.K., Eisenberger, R., Restubog, S.L.D., and Zagenczyk, T.J. (2013). Blaming the organization for abusive supervision: The roles of perceived organizational support and supervisor’s organizational embodiment. Journal of Applied Psychology, 98(1), 158-168. doi: 10.1037/a0030687

©2013 David W. Bracken

It’s Human Nature


One question that has been at the core of best practices in 360 Feedback since its inception relates to the conditions that are most likely to create sustained behavior change (at least for those of us who believe that behavior change is the ultimate goal). Many of us believe that behavior change is not a question of ability to change but primarily one of motivation. Motivation often begins with the creation of awareness that some change is necessary, followed by acceptance of the feedback, and then moves on to implementation of the change.

One of the more interesting examples of creating behavior change began when seat belts became standard equipment in all passenger vehicles in 1964. I am old enough to remember when that happened, and I started driving not long thereafter. Using a seat belt has been part of the driver education routine since I began driving and has never been a big deal for me.

The reasons for noncompliance with seatbelt usage are as varied as human nature. Some people see it as a civil rights issue, as in, “No one is going to tell me what to do.” There is also the notion that it protects against a low probability event, as in “It won’t happen to me. I’m a careful driver.” Living in Nebraska for a while, I learned that people growing up on a farm don’t “have the time” to buckle and unbuckle seatbelts in their trucks when they are learning to drive, so they don’t get into that habit. (I also found, to my annoyance, that they also never learned how to use turn signals.)

I remember back in the ’60s reading about a woman who wrote a car manufacturer to ask that they make the seat belts thinner because they were uncomfortable to sit on. Really.

Some people have internal motivation to comply, which can be due to multiple factors such as personality, demographics, training, and norms (e.g., parental modeling). This is also true when we are trying to create behavior change in leaders, but we will see that these factors are not the primary determinants of compliance.

In thinking about seatbelt usage as a challenge in creating behavior change, I found a 2008 study by the Department of Transportation titled “How States Achieve High Seat Belt Use Rates” (DOT HS 810 962). (Note: This is a 170-page report with lots of tables and statistical analyses; if any of you geeks want a copy, let me know.)

The major finding of this in-depth study states:

The statistical analyses suggest that the most important difference between the high and low seat belt use States is enforcement, not demographics or funds spent on media.

One chart among the many in this report, plotting seatbelt usage in the US, seems to capture the message fairly well and support their assertion. It plots seat belt usage by state, where we see a large spread ranging from just over 60% (Mississippi) to about 95% (Hawaii). It also shows whether each state has a primary seatbelt law (where failure to use a seatbelt is a violation by itself) or a secondary law (where nonusage can only be cited if the driver is stopped for another reason). Based on this chart alone, one might argue about causality, but the study systematically shows that these data, along with others relating to law enforcement practices, are the best predictors of seatbelt usage.

One way of looking at this study is to view law enforcement as a form of external accountability, i.e., having consequences for your actions (or lack thereof). The primary versus secondary law factor largely shifts the probabilities of being caught, with the apparent desired effect on seatbelt usage.

So, back to 360 Feedback. I always have been, and continue to be, mystified as to how some implementers of 360 feedback processes believe that sustainable behavior change is going to occur in the vast majority of leaders without some form of external accountability. Processes that are supposedly “development only” (i.e., have no consequences) should not be expected to create change. In those processes, participants are often not required to, or even discouraged from, sharing their results with others, especially their manager. I have called these processes “parlor games” in the past because they are kind of fun, are all about “me,” and have no consequences.

How can we create external accountability in 360 processes?  I believe that the most constructive way to create both motivation and alignment (ensuring behavior change is in synch with organizational needs/values) is to integrate the 360 feedback into Human Resource processes, such as leadership development, succession planning, high potential programs, staffing decisions, and performance management.  All these uses involve some form of decision making that affects the individual (and the organization), which puts pressure on the 360 data to be reliable and valid. Note also that I include leadership development in this list as a form of decision making because it does affect the employee’s career as well as the investment (or not) of organization resources.

But external accountability can be created by other, more subtle ways as well. We all know from our kept and (more typically) unkept New Year’s resolutions about the power of going public with our commitments to change. Sharing your results and actions with your manager has many benefits, but can cause real and perceived unfairness if some people are doing it and others not. Discussing your results with your raters and engaging them in your development plans has multiple benefits.

Another source of accountability can (and should) come from your coach, if you are fortunate enough to have one. The Smither et al. (2005) meta-analysis found that the presence of a coach is one determinant of whether behavior change is observed. I have always believed that this is due to the accountability coaches create by requiring coachees to state specifically what they are going to do, and by checking back that they have followed through on that commitment.

Over and over, we see evidence that, when human beings are not held accountable, more often than not they will stray from what is in their best interests and/or the interests of the group (organization, country, etc.).  Whether it’s irrational (ignoring facts) or overly rational (finding ways to “get around” the system), we should not expect that people will do what is needed, and we should not rely on our friends, neighbors, peers or leaders to always do what is right if there are no consequences for inaction or bad behavior.

©2012 David W. Bracken

What Is a “Decision”?


My good friend and collaborator, Dale Rose, dropped me a note regarding his plans to do another benchmarking study on 360 Feedback processes. His company, The 3D Group, has done a couple of these studies before, and Dale has been generous in sharing his results with me; I have cited them in some of my workshops and webinars. The studies are conducted by interviewing coordinators of active 360 systems. Because the data are collected verbally, some of the results have appeared somewhat internally inconsistent and difficult to reconcile, though the general trends are useful and informative.

Many of the topics are useful for practitioners to gauge their program design, such as the type of instrument, number of items, rating scales, rater selection, and so on. For me, the most interesting data relates to the various uses of 360 results.

Respondents in the 2004 and 2009 studies report many uses. In both studies, “development” is the most frequent response, and that’s how it should be. In fact, I’m amazed that the responses weren’t 100%, since a 360 process should be about development. The fact that in 2004 only 72% of answers included development as a purpose is troubling, whether we take the answers at face value or assume the respondents misunderstood the question. The issue at hand here is not whether 360’s should be used for development; it is what else they should, can, and are used for in addition to “development.”

In 2004, the next most frequent use was “career development;” that makes sense. In 2009, the next most frequent was “performance management,” and career development dropped way down. Other substantial uses include high potential identification, direct link to performance measurement, succession planning, and direct link to pay.

But when asked whether the feedback is used “for decision making or just for development,” about 2/3 of the respondents indicated “development only” and only 1/3 “decision making.” I believe these numbers understate the actual use of 360 for “decision making” (perhaps by a wide margin), though (as I will propose) it can depend on how we define what a “decision” is.

To “decide” is “to select as a course of action,” according to Merriam-Webster (in this context). I would add to that definition that one course of action is to do nothing, i.e., don’t change the status quo or don’t let someone do something. It is impossible to know what goes on in a person’s mind when he/she speaks of development, but it seems reasonable to suppose that it involves doing something beyond just leaving the person alone, i.e., maintaining the status quo. But doing nothing is a decision. So almost any developmental use involves making a decision as to what needs to be done and what personal (time) and organizational (money) resources are to be devoted to that person. Conversely, denying an employee access to developmental resources that another employee does get access to is a decision, with results that are clearly impactful but difficult to measure.

To further complicate the issue, it is one thing to say your process is for “development only,” and another to know how it is actually used. Every time my clients have looked behind the curtain of actual use of 360 data, they have unfailingly found that managers are using it for purposes that are not supported. For example, at one of my clients, anecdotal evidence repeatedly surfaced that the “development only” participants were often asked to bring their reports with them to internal interviews for new jobs within the organization. The bad news was that this was outside of policy; the good news was that leaders saw the data as useful in making decisions, though (back to bad news) they may not have been trained to correctly interpret the reports.

Which brings us to why this is an important issue. There are legitimate “development only” 360 processes where the participant has no accountability for using the results and, in fact, is often actively discouraged from sharing the results with anyone else. Since there are no consequences, there are few, if any, consequential actions or decisions required. But most 360 processes (despite the benchmark results suggesting otherwise) do result in some decisions being made, which might include doing nothing, such as denying an employee access to certain types of development.

The Appendix of The Handbook of Multisource Feedback is titled “Guidelines for Multisource Feedback When Used for Decision Making.” My sense is that many designers and implementers of 360 (multisource) processes feel that these Guidelines don’t apply because their system isn’t used for decision making. Most of them are wrong about that. Their systems are being used for decision making, and, even if not, why would we design an invalid process? And any system that involves the manager of the participant (which it should) creates the expectation that direct or indirect decision making will result.

So Dale’s question to me (remember Dale?) is how would I suggest wording a question in his new benchmarking study that would satisfy my curiosity regarding the use of 360 results. I proposed this wording:

“If we define a personnel decision as something that affects an employee’s access to development, training, jobs, promotions or rewards, is your 360 process used for personnel decisions?” 

Dale hasn’t committed to using this question in his study. What do you think?

©2012 David W. Bracken

I Need an Exorcism


Being the 360 Feedback nerd I am, I love it when some new folks get active on the LinkedIn 360 discussion group. One discussion emerged recently that caught my eye, and I have been watching it with interest, mulling over the perspectives and knowing I had to get my two cents in at some point.

Here is the question:

How many raters are too many raters?

We normally recommend 20 as a soft limit. With too many, we find the feedback gets diluted and you have too many people that don’t work closely enough with you to provide good feedback. I’d be curious if there are any suggestions for exceptions.

This is an important decision amongst the dozens that need to be made in the course of designing and implementing 360 processes. The question motivated me to pull out The Handbook of Multisource Feedback and find the excellent chapter on this topic by James Farr and Daniel Newman (2001), which reminded me of the complexity of this decision. Let me also reiterate that this is another decision that has different implications for “N=1” 360 processes (i.e., feedback for a single leader on an ad hoc basis) versus “N>1” systems (i.e., feedback for a group of participants); this blog and discussion are focused on the latter.

Usually people argue that too many surveys will cause disruption in the organization and unnecessary “soft costs” (i.e., time). The author of this question poses a different argument for limiting the rater population, which he calls “dilution” due to inviting unknowledgeable raters. For me, one of the givens of any 360 system is that the raters must have sufficient experience with the ratee to give reliable feedback. One operationalization of that concept is to require that an employee have worked with/for the ratee for some minimum amount of time (e.g., 6 months or even 1 year), even if he/she is a direct report. Having the ratee select the raters (with manager approval) is another practice designed to help get quality raters who then also facilitate the acceptance of the feedback by the ratee. So “dilution” due to unfamiliarity can be combated with that requirement, at least to some extent.

One respondent to this question offers this perspective:

The number of raters depends on the number of people that deal with this individual through important business interactions and can pass valuable feedback based on real experience. There is no one set answer.

I agree with that statement. That said, while there is no one set answer, some answers are better than others (see below).

In contrast, someone else states:

We have found effective to use minimum 3 and maximum 5 for any one rater category.

The minimum of 3 is standard practice these days as a “necessary but not sufficient” answer to the number of raters. As for the maximum of 5, this is also not uncommon but seems to ignore the science that supports larger numbers. When clients seek my advice on the number of raters, I am swayed by the research published by Greguras and Robie (1998), who studied the reliability of the various rater sources (i.e., subordinates, peers and managers). They came to the conclusion that different rater groups provide differing levels of reliable feedback, probably because of the number of “agendas” lurking within the various types of raters. The least reliable are the subordinates, followed by the peers, and then the managers, the most reliable rater group.

One way to address rater unreliability is to increase the size of the group (another might be rater training, for example). Usually there is only one manager, and best practice is to invite all direct reports (who meet the tenure guidelines), so the main question is the number of peers. This research suggests that 7-9 is where we need to aim, noting that this is the number of returns needed, so inviting more is probably a good idea if you expect less than a 100% response rate.
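The arithmetic behind aims like this can be sketched with the Spearman-Brown prophecy formula, which projects how the reliability of an averaged score grows as raters are added. Below is a minimal sketch in Python; the single-rater reliability values are hypothetical placeholders for illustration, not the estimates reported by Greguras and Robie.

```python
# A minimal sketch of the rater-count logic, using the Spearman-Brown
# prophecy formula. The single-rater reliabilities below are hypothetical
# illustrations, not the estimates reported by Greguras and Robie (1998).

def group_reliability(r_single: float, k: int) -> float:
    """Reliability of a mean score across k raters (Spearman-Brown)."""
    return k * r_single / (1 + (k - 1) * r_single)

def raters_needed(r_single: float, target: float = 0.70) -> int:
    """Smallest rater group whose averaged score reaches the target reliability."""
    k = 1
    while group_reliability(r_single, k) < target:
        k += 1
    return k

# Less reliable sources need larger groups to reach the same standard.
for source, r in [("subordinates", 0.20), ("peers", 0.25), ("manager", 0.45)]:
    print(f"{source}: ~{raters_needed(r)} returns needed for 0.70 reliability")
```

Whatever the exact inputs, the pattern holds: the less reliable the source, the more returns you need, which is one reason trimming peer groups down to five works against the ratee.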

Another potential rater group is external customers. Recently I was invited to participate in a forum convened by the American Board of Internal Medicine (ABIM) to discuss the use of multisource feedback in physician recertification processes. ABIM is one of 24 member Boards of the American Board of Medical Specialties (ABMS), which has directed that some sort of multisource (or 360) feedback be integrated into recertification.

The participants in this forum included many knowledgeable, interesting researchers on the use of 360 in the context of medicine (a whole new world for me, which was very energizing). I was invited to represent the industry (“outside”) perspective. One of the presenters spoke to the challenge of collecting input from their customers (i.e., patients), a requirement for them. She offered up 25 as the number of patients needed to create a reliable result, using a rationale very similar to that of Greguras and Robie regarding the many individual agendas of raters.

Back to LinkedIn, there was then this opinion:

I agree that having too many raters in any one rater group does dilute the feedback and make it much harder to see subtleties. There is also a risk that too many raters may ‘drown out’ key feedback.

This is when my head started spinning like Linda Blair in The Exorcist.  This perspective is SO contrary to my 25 years of experience in this field that I had to prevent myself from discounting it as my head continued to rotate.  I have often said that a good day for me includes times when I have said, “Gee, I have never thought of (insert topic) in that way.” I really do like hearing new and different views, but it’s difficult when they challenge some foundational belief.

For me, maybe THE most central tenet of 360 Feedback is the reliance on rater anonymity in the expectation (or hope) that it will promote honesty. This goes back to the first book on 360 Feedback by Edwards and Ewen (1996), where 360’s were designed with this need for anonymity at the forefront. That is why we use the artificial form of communication that is the anonymous questionnaire and usually don’t report on groups of fewer than 3. We know that violations of the anonymity promise result in less honesty and reduced response rates, with the grapevine (and/or social media) spreading word of the violated trust throughout the organization.

The notion that too many raters will “drown out key feedback” seems to me to be a total reversal of this philosophy of protecting anonymity. It also seems to place an incredible amount of emphasis on the report itself where the numbers become the sole source of insight. Other blog entries of mine have proposed that the report is just the conversation starter, and that true insight is achieved in the post-survey discussions with raters and manager.

I recall that in past articles (see Bracken, Timmreck, Fleenor and Summers, 2001) we made the point that every decision requires what should be a conscious value judgment as to who the most important “customer” is for that decision, whether it be the rater, ratee, or the organization. For example, limiting the number of raters to a small number (e.g., 5 per group, or not all direct reports) indicates that the raters and organization are more important than the ratee, that is, that we believe it is more important to minimize the time required of raters than it is to provide reliable feedback for the ratee. In most cases, my values cause me to lobby on behalf of the ratee as the most important customer in design decisions. The time that I will rally to the defense of the rater as the most important customer is when anonymity (again, real or perceived) is threatened. And I see these arguments for creating more “insight” by keeping rater groups small or subdivided as misguided IF these practitioners share the common belief that anonymity is critical.

Finally (yes, it’s time to wrap this up), Larry Cipolla, an extremely experienced and respected practitioner in this field, offers some sage advice in his comments, including a warning against increasing rater group size by combining rater categories. As he says, that is pure folly. But I do take issue with one of his practices:

We recommend including all 10 raters (or whatever the n-count is) and have the participant create two groups–Direct Reports A and Direct Reports B.

This seems to me to be a variation on the theme of breaking out groups and reducing group size with the risk of creating suspicions and problems with perceived (or real) anonymity. Larry, you need to show that doing this kind of subdividing creates higher reliability in a statistical sense that can overcome the threats to reliability created by using smaller N’s.

Someone please stop my head from spinning. Do I just need to get over this fixation with anonymity in 360 processes?

References

Bracken, D.W., Timmreck, C.W., and Church, A.H. (2001). The Handbook of Multisource Feedback. San Francisco: Jossey-Bass.

Bracken, D.W., Timmreck, C.W., Fleenor, J.W., and Summers, L. (2001). 360 feedback from another angle. Human Resource Management, 40(1), 3-20.

Edwards, M. R., and Ewen, A.J.  (1996). 360° Feedback: The powerful new model for employee assessment and performance improvement. New York: AMACOM.

Farr, J.L., and Newman, D.A. (2001). Rater selection: Sources of feedback. In Bracken, D.W., Timmreck, C.W., and Church, A.H. (eds.), The Handbook of Multisource Feedback. San Francisco: Jossey-Bass.

Greguras, G.J., and Robie, C. (1998).  A new look at within-source interrater reliability of 360-degree feedback ratings. Journal of Applied Psychology, 83, 960-968.

©2012 David W. Bracken

A Dangerous Place


The world is a dangerous place to live; not because of the people who are evil, but because of the people who don’t do anything about it.

-Albert Einstein

I hadn’t heard this quote before this weekend. It happened to be on a sign carried by a lone protester outside the entrance to the Penn State football game, standing by the Joe Paterno statue. Needless to say, his presence and message weren’t appreciated by some of the PSU faithful, though he stated that he was once “one of them” but now had a different perspective as a family man in the wake of these recent events.

Another article in today’s (Nov. 12) NY Times, titled “For Disabled Care Complaints, Vow of Anonymity Was False” (http://www.nytimes.com/2011/11/12/nyregion/ombudsmen-gave-whistle-blowers-names-to-state-agency.html?src=me&ref=nyregion), also caught my eye as having a related message. A spokesman for the agency, Travis Proulx, said in an interview in August that “there is no confidentiality for any employee who is reporting abuse or neglect, even to the ombudsman.” Is it any wonder that people are afraid to step forward?

Organizations, including universities, are in many ways closed systems with their own methods for defining and living values (http://www.nytimes.com/2011/11/12/us/on-college-campuses-athletes-often-get-off-easy.html?ref=ncaafootball). See also the recent news story about the Texas judge who was exposed via YouTube for his own brand of values inside his “organization,” i.e., his family (http://www.nytimes.com/2011/11/13/us/ruling-against-judge-seen-beating-daughter.html?scp=1&sq=texas%20judge%20belt&st=cse). Without getting into legalities and regulation and such, let us focus on the fact that organizations (of any kind) need some sort of internal processes, formal and/or informal, to define proper behavior and to rectify instances of wrongdoing.

Whatever the unit of analysis, the definition of “evil” is a very subjective process. In an earlier blog, I pointed to some research suggesting that some questionable practices are more acceptable in some industries than in others (http://dwbracken.wordpress.com/2011/03/15/what-is-normal/). And I do believe that organizations have the right to define their values and to hold employees accountable for behaving in ways consistent with those values. Some actions are so egregious that they are universally rejected (at least within certain cultures), including those exhibited by psychopaths as described in the book, Snakes in Suits.

One of the many benefits of doing a system-wide (e.g., company, department) 360 feedback process is the opportunity it creates for miscreants to be identified through anonymous input from coworkers. A system-wide process can hopefully also detect psychopaths and the like, who are very skilled at escaping detection. Unlike other “whistle blowing,” 360’s rely on a consensus from feedback providers that theoretically protects both the raters and the ratees. The data generated by 360’s are reported in aggregate form, usually requiring a minimum of three respondents to create a mean score. Assuming the organization has access to these scores, they can be analyzed to detect particularly low mean scores indicating that the leader in question is being cited by multiple coworkers as out of synch with the rest of the organization.
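As a rough illustration of that screening logic, here is a minimal sketch: aggregate ratings per leader, suppress any score based on fewer than three respondents, and flag unusually low means. The names, ratings, and the one-standard-deviation threshold are all hypothetical; a real process would set these rules deliberately.

```python
# A minimal sketch of the screening logic described above, with made-up data:
# aggregate ratings per leader, suppress any mean based on fewer than three
# respondents (protecting anonymity), and flag unusually low scores.
# The names, ratings, and flagging threshold are all hypothetical.
from statistics import mean, stdev

MIN_RESPONDENTS = 3  # minimum group size for reporting a mean score

ratings = {  # leader -> coworker ratings on a 1-5 scale
    "leader_a": [4.2, 4.5, 4.0, 4.4],
    "leader_b": [2.1, 1.8, 2.4, 2.0, 1.9],
    "leader_c": [4.1, 3.9],  # only two raters: score suppressed
    "leader_d": [3.8, 4.0, 3.9],
    "leader_e": [4.0, 4.2, 4.1],
}

reportable = {who: mean(rs) for who, rs in ratings.items()
              if len(rs) >= MIN_RESPONDENTS}

# Flag anyone more than one standard deviation below the organization mean.
org_mean = mean(reportable.values())
org_sd = stdev(reportable.values())
flagged = [who for who, m in reportable.items() if m < org_mean - org_sd]

print(flagged)  # ['leader_b'] with these illustrative numbers
```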

So what do we need to do to make our 360 processes useful for detecting misbehavior and protecting raters and ratees alike? Some suggestions include:

  • Be clear as to the purpose of the process
  • Require participation by all organizational leaders
  • Give access to results to the organization (including HR and management)
  • Strictly adhere to minimum group size requirements for reporting results (e.g., minimum of 3)
  • Use a well-designed behavioral model to form the basis for the content
  • Include write-in comments
  • Train users (managers, HR) on the proper interpretation and use of results
  • Administer on a regular (annual) basis
  • Immediately address instances of leaders seeking retribution against raters (real or inferred)

Any other suggestions?

©2011 David W. Bracken

What Is a “5”?


The Sunday New York Times Business section typically has a feature called “Corner Office” where a CEO is interviewed. These CEOs often seem to be from small businesses. The one today (October 16, 2011), for example, is the CEO of a 25-person, $6 million consulting firm. The questions are often the same, having to do with the process of becoming a leader, lessons learned, and hiring and promotion strategies. I have referenced these in a couple of earlier blogs since they touch on topics related to behavior change, leadership development and culture change that are relevant to aspects of 360 processes.

In today’s column, the CEO was asked how he hires. He notes that part of the interview often includes asking applicants to rate themselves on a five-point scale regarding various areas of knowledge and experience from their resume. If applicants rate themselves as a “5,” he then asks whether there is nothing else they could learn about whatever it is. Of course, they say, “Oh, no, no. There is.” To which the CEO asks, “Then why did you rate yourself a five?” And he goes on to say he has never hired someone who rated themselves a five.

While this CEO’s decision not to hire these high self-raters may seem arbitrary, think of the culture he is trying to create through the people he selects and the message this communicates to new hires during the process. He says that he appreciates humility and an applicant’s understanding that they don’t know everything. (In an earlier column, a CEO asks applicants if they are “nice,” and then explains what “nice” means in their culture, i.e., a good team player.)

(Someone told me of a senior executive who greeted applicants in the lobby and asked them whether they should take the stairs or the elevator. If they said elevator, he didn’t hire them. That seems less job related, but is our “5” CEO doing a similar thing? Food for thought.)

We don’t use 360’s to hire people (though I have heard of multisource reference checks from past coworkers being positioned as 360’s), but we do have an opportunity with 360’s to create or support a culture when they involve many people. We also know that 360’s are notorious for severe leniency, i.e., mostly 4’s and 5’s on a five-point scale.

Do all these 5’s that we collect mean that our leaders can’t get any better at what they do? Of course not. But that seems to be the message that we allow and even reward (even if not tangibly).

The vast majority of 360 processes use an Agree/Disagree (Likert) scale where “Strongly Agree” is scored as a “5” (scales that score it as a “1” seem confusing and counterintuitive to me). The VAST majority of processes also do not include rater training, which could be used to stop raters (and ratees, for that matter) from attaching whatever meaning to “Strongly Agree” they wish. Which they currently do.

I have used a rating scale where “5” is defined as “role model, in the top 5-10%,” which attempts to create a frame of reference for raters (and ratees) and does help reduce leniency effects.

What if we defined “5” as “can’t get any better,” or something equivalent to that? I think “role model” implies that this person can be a teacher as well as an example to others, and perhaps doesn’t need to get better (i.e., can focus on other areas of improvement). Some raters will undoubtedly ignore those directions, but rater training can help drive home the need for everyone to reconfigure their conceptualization of what optimal behavior is and, by the way, foster the learning and development culture that our CEO seems to be nurturing.

A recalibration of rating scales is badly needed in this field. We need to stop raters from giving all 5’s and ratees from giving self-ratings of all 5’s. With our current mentality on rating scales, there is really nothing to stop rating inflation. It should be no surprise that senior leaders find our 360 programs difficult to use and support.

©2011 David W. Bracken

What does “beneficial” mean?


My friend, Joan Glaman, dropped me a note after my last blog (http://dwbracken.wordpress.com/2011/08/30/thats-why-we-have-amendments/) with this suggestion:

“I think your closing question below would be a great next topic for general discussion: ‘Under what conditions and for whom is multisource feedback likely to be beneficial?’”

To refresh (or create) your memory, the question that Joan cites is from the Smither, London and Reilly (2005) meta-analysis. The article abstract states:

“…improvement is most likely to occur when feedback indicates that change is necessary, recipients have a positive feedback orientation, perceive a need to change their behavior, react positively to the feedback, believe change is feasible, set appropriate goals to regulate their behavior, and take actions that lead to skill and performance improvement.”

Before we answer Joan’s question, we should have a firm grasp on what we mean by “beneficial.” I don’t think we all would agree on that in this context.  Clearly, Smither et al. define it as “improvement,” i.e., positive behavior change. That is the criterion (outcome) measure that they use in their aggregation of 360 studies. I am in total agreement that behavior change is the primary use for 360 feedback, and we (Bracken, Timmreck, Fleenor and Summers, 2001) defined a valid 360 process as one that creates sustainable behavior change in behaviors valued by the organization.

Not everyone will agree that behavior change is the primary goal of a 360 process. Some practitioners seem to believe that creating awareness alone is a sufficient outcome; they do not support any activity or accountability, propose that simply giving the report to the leader goes far enough, and in fact discourage the sharing of results with anyone else.

If you will permit a digression, I will bring to your attention a recent blog by Sandra Mashihi (http://results.envisialearning.com/5-criteria-a-360-degree-feedback-must-meet-to-be-valid-and-reliable/) where one of her lists of “musts” (arrrgh!) is criterion-related validity, which she defines as, “…does the customized instrument actually predict anything meaningful like performance?” Evidently she would define “beneficial” not as behavior change but as the ability to measure performance to make decisions about people. This testing mentality just doesn’t work for me, since 360’s are not tests (http://dwbracken.wordpress.com/2010/08/31/this-is-not-a-test/) and it is not realistic to expect them to predict behavior, especially if we hope to actually change behavior.

Let’s get back to Joan’s question (finally). I want to make a couple of comments, and then hopefully others will weigh in. The list of characteristics that Smither et al. provide in the abstract is indeed an accumulation of individual and organizational factors. This is not an “and” list that says a “beneficial” process will have all these things. It is an “or” list where each characteristic can have benefits. The last two (setting appropriate goals and taking actions) can be built into the process as requirements, regardless of whether the individual reacts positively and/or perceives the need to change. Research shows that following up and taking action are powerful predictors of behavior change, and I don’t believe it is important (or matters) to know whether the leader wants to change or not. What if he/she doesn’t want to change? Do they get a pass? Some practitioners would probably say yes, and point to this study as an indication that it is not worth the effort to try to get them to change.

I suggest that these factors that lead to behavior change are not independent of each other. In our profession, we speak of “covariates,” i.e., things that are likely to occur together across a population. A simple example is gender and weight, where men are, on average, heavier than women. But we don’t conclude that men as a gender manage their weight less well than women; the difference is largely due to their being taller (and other factors, like bone structure).

My daughter, Anne, mentioned in passing an article she read about people who don’t brush their teeth twice a day having a shorter life expectancy than those who do. So the obvious conclusion is that brushing teeth more often will make us live longer. There is certainly some benefit to regularly brushing teeth, but it’s more likely that people with poor dental hygiene also exhibit covarying behaviors that have a more direct impact on health. While I don’t have data to support it, it seems likely that people who don’t brush regularly also don’t go to the dentist regularly, for starters. It seems reasonable to surmise that, on average, those same people don’t go to their doctor for a regular checkup either.
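For the statistically inclined, that covariate logic is easy to demonstrate with a toy simulation: let a latent factor drive two behaviors, let only one of them affect the outcome, and the other will still “predict” the outcome. The model below is purely illustrative, not based on any real dental or health data.

```python
# Illustrative simulation of the covariate argument above: a latent
# "conscientiousness" factor drives both tooth-brushing and doctor visits,
# and only the doctor visits affect health in this toy model. Brushing
# still "predicts" health even though it has no direct effect here.
import random

random.seed(1)
n = 10_000
health_brushers, health_others = [], []
for _ in range(n):
    conscientiousness = random.gauss(0, 1)  # latent covariate
    brushes = conscientiousness + random.gauss(0, 1) > 0
    sees_doctor = conscientiousness + random.gauss(0, 1) > 0
    health = (1.0 if sees_doctor else 0.0) + random.gauss(0, 1)
    (health_brushers if brushes else health_others).append(health)

print(sum(health_brushers) / len(health_brushers))  # noticeably higher...
print(sum(health_others) / len(health_others))      # ...than for non-brushers
```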

My hypothesis is that 360 participants who aren’t open to feedback, don’t perceive a need to change, don’t feel that they can change, etc., are also the people who are less likely to set goals and take action (follow up) if given the option to not do those things.  In other words, it’s not necessarily their attitudes that “cause” lack of behavior change, but the lower likelihood that they will do what is necessary, i.e., set goals and follow through, in order to be perceived as changing their behavior. Those “behaviors” can be modified/changed while their attitudes are likely to be less modifiable, at least until they have had a positive experience with change and its benefits.

One last point of view about “beneficial.” Another definition could be change that helps the entire organization. That is the focus of the recent publication by Dale Rose and myself, where (in answer to Joan’s question) we state:

“…four characteristics of a 360 process that are required to successfully create organization change, (1) relevant content, (2) credible data, (3) accountability, and (4) census participation…”

We go on to offer the existing research that supports that position, and a wish list for future research. One way of looking at this view of what is “beneficial” is to extrapolate what works for the individual and apply it across the organization (which is where the census, i.e., whole population, part comes into play).

I will stop there, and then also post this on LinkedIn to see if we can get some other perspectives.

Thanks, Joan!

©2011 David W. Bracken

That’s Why We Have Amendments


I used my last blog (http://dwbracken.wordpress.com/2011/08/09/so-now-what/) to start LinkedIn discussions in the 360 Feedback and I/O Practitioners groups, asking the question: What is a “valid” 360 process? The response from the 360 group was tepid, maybe because that group has a more general population that might not be that concerned with “classic” validity issues (which is basically why I wrote the blog in the first place). But the I/O community went nuts (45 entries so far), with comments running the gamut from constructive to dismissive to deconstructive.

Here is a sample of some of the “deconstructive” comments:

…I quickly came to conclusion it was a waste of good money…and only useful for people who could (or wanted to) get a little better.

It is all probably a waste of time and money. Good luck!!

There is nothing “valid” about so-called 360 degree feedback. Technically speaking, it isn’t even feedback. It is a thinly veiled means of exerting pressure on the individual who is the focal point.

My position regarding performance appraisal is the same as it has been for many years: Scrap It. Ditto for 360.

Actually, I generally agree with these statements in that many 360 processes are a waste of time and money. It’s not surprising that these sentiments are out there and probably quite prevalent. I wonder, though, if we are all on the same page. In another earlier blog, I suggested that discussions about the use and effectiveness of 360’s should be separated into those that are designed for feedback to a single individual (N=1) and those that are designed to be applied to groups (N>1).

But the fact is that HR professionals have to help their management make decisions about people, starting with hiring and then progressing through placement, staffing, promotions, compensation, rewards/recognition, succession planning, potential designation, development opportunities, and maybe even termination.

Nothing is perfect, especially when it comes to matters that involve people. As an example, look to the U.S. Constitution, an enduring document that has withstood the test of time. Yet the Founding Fathers were the first to realize that they needed to make provisions for the addition of amendments to allow further refinements. Of course, some of those amendments were imperfect themselves and were later repealed.

But we haven’t thrown out the Constitution because it is imperfect. Nor do we find it easy to come to agreement on what the revisions should be. But one of the many good things about humans is a seemingly natural desire to make things better.

Ever since I read Mark Edwards and Ann Ewen’s seminal book, 360 Degree Feedback, I have believed that 360 Feedback has the potential to improve personnel decision making when done well. The Appendix of The Handbook of Multisource Feedback is titled, “Guidelines for multisource feedback when used for decision making,” coauthored with Carol Timmreck, where we made a stab at defining what “done well” can mean.

In our profession, we have an obligation to constantly seek ways of improving personnel decision making. There are two major needs we are trying to meet, which sometimes cause tensions. One is to provide the organization with more accurate information on which to base these decisions, which we define as increased reliability (accurate measurement) and validity (relevant to job performance). Accurate decision making is good for both the organization and the individual.

The second need is to simultaneously use methods that promote fairness. This notion of fairness is particularly salient in the U.S. where we have “protected classes” (i.e., women, minorities, older workers), but hopefully fairness is a universal concept that applies in many cultures.

Beginning with the Edwards & Ewen book and progressing from there, we can find more and more evidence that 360 done well can provide decision makers with better information (i.e., valid and fair) than traditional sources (e.g., supervisory evaluations). I actually heard a lawyer state that organizations could be legally exposed for not using 360 feedback, because it is more valid and fair than methods currently in use.

I have quoted Smither, London and Reilly (2005) before, but here it is again:

We therefore think it is time for researchers and practitioners to ask “Under what conditions and for whom is multisource feedback likely to be beneficial?” (rather than asking “Does multisource feedback work?”).

©2011 David W. Bracken

So Now What?


This is the one-year anniversary of this blog. This is the 44th post. We have had 2,026 views, though the biggest day was the first, with 38 views. I have had fewer comments than I had hoped (only 30), though some LinkedIn discussions have resulted. Here is my question: Where to go from here? Are there topics that are of interest to readers?

Meanwhile, here is my pet peeve of the week/month/year: I was recently having an exchange with colleagues regarding a 360 topic on my personal Gmail account, and up popped ads in the margin for various 360 vendors (which is interesting in itself), the first of which was from Qualtrics (www.qualtrics.com) with the heading, “Create 360s in Minutes.”

The topic of technology run amok has been covered here before (When Computers Go Too Far, http://wp.me/p10Xjf-3G), but my peevery was piqued (piqued peevery?) when I explored their website and saw this claim: “USE VALIDATED QUESTIONS, FORMS and REPORTS.”

What the heck does that mean?  What are “validated” forms and reports, for starters?

The bigger question is, what is “validity” in a 360 process? Colleagues and I (Bracken, Timmreck, Fleenor and Summers, 2001; contact me if you want a copy) have offered up a definition of validity for 360’s that holds that it consists of creating sustainable change in behaviors valued by the organization. Reliable items, user-friendly forms and sensible reports certainly help to achieve that goal, but they cannot be said to be “valid” as standalone steps in the process.

The Qualtrics people don’t share much about who they are. Evidently their founder is named Scott and teaches MBAs. They appear to have a successful enterprise, so kudos! I would like to know how technology vendors can claim to have “valid” tools and what definition of validity they are using.

Hey, maybe I will get my 31st comment?

©2011 David W. Bracken
