Strategic 360s

360s for more than just development

Archive for the ‘Validity of 360 Processes’ Category

It’s Human Nature


One question that has been at the core of best practices in 360 Feedback since its inception is what conditions are most likely to create sustained behavior change (at least for those of us who believe that behavior change is the ultimate goal). Many of us believe that behavior change is not primarily a question of ability to change but one of motivation. Motivation often begins with creating awareness that some change is necessary, moves to accepting the feedback, and ends with implementing the change.

One of the more interesting examples of creating behavior change began when seat belts became standard equipment in all passenger vehicles in 1964. I am old enough to remember when that happened, and I started driving not long thereafter. Using a seat belt was part of the driver education routine from the time I began driving, so it has never been a big deal for me.

The reasons for noncompliance with seatbelt usage are as varied as human nature. Some people see it as a civil rights issue, as in, “No one is going to tell me what to do.” There is also the notion that it protects against a low probability event, as in “It won’t happen to me. I’m a careful driver.” Living in Nebraska for a while, I learned that people growing up on a farm don’t “have the time” to buckle and unbuckle seatbelts in their trucks when they are learning to drive, so they don’t get into that habit. (I also found, to my annoyance, that they also never learned how to use turn signals.)

I remember back in the ’60s reading about a woman who wrote to a car manufacturer asking them to make the seat belts thinner because they were uncomfortable to sit on. Really.

Some people have internal motivation to comply, which can be due to multiple factors such as personality, demographics, training, and norms (e.g., parental modeling). This is also true when we are trying to create behavior change in leaders, but we will see that these factors are not the primary determinants of compliance.

In thinking about seatbelt usage as a challenge in creating behavior change, I found a 2008 study by the Department of Transportation titled “How States Achieve High Seat Belt Use Rates” (DOT HS 810 962). (Note: This is a 170-page report with lots of tables and statistical analyses; if any of you geeks want a copy, let me know.)

The major finding of this in-depth study states:

The statistical analyses suggest that the most important difference between the high and low seat belt use States is enforcement, not demographics or funds spent on media.

This chart, “Seatbelt Usage in US,” among the many in this report, seems to capture the message fairly well in support of their assertion. The chart plots seat belt usage by state, where we see a large spread ranging from just over 60% (Mississippi) to about 95% (Hawaii). It also shows whether each state has a primary seatbelt law (where failure to use a seatbelt is a violation by itself) or a secondary law (where seatbelt usage can only be enforced if the driver is stopped for another reason). Based on the chart alone, one might merely argue causality, but the study systematically shows that these data, along with others relating to law enforcement practices, are the best predictors of seatbelt usage.
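If it helps to make that comparison concrete, here is a minimal Python sketch of the kind of descriptive contrast such a chart summarizes. The state labels and usage rates below are made up for illustration, not taken from the DOT report:

```python
# Illustrative sketch only: hypothetical state-level figures, not the DOT data.
# Compares mean seat belt usage for primary-law vs. secondary-law states.
from statistics import mean

# (state, observed usage rate, has primary enforcement law) -- made-up values
states = [
    ("A", 0.95, True), ("B", 0.90, True), ("C", 0.88, True),
    ("D", 0.72, False), ("E", 0.66, False), ("F", 0.62, False),
]

primary = [usage for _, usage, law in states if law]
secondary = [usage for _, usage, law in states if not law]

print(f"Primary-law states:   mean usage {mean(primary):.1%}")
print(f"Secondary-law states: mean usage {mean(secondary):.1%}")
print(f"Descriptive gap:      {mean(primary) - mean(secondary):.1%}")
```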

One way of looking at this study is to view law enforcement as a form of external accountability, i.e., having consequences for your actions (or lack thereof). The primary versus secondary law factor largely shifts the probabilities of being caught, with the apparent desired effect on seatbelt usage.

So, back to 360 Feedback. I always have been, and continue to be, mystified as to how some implementers of 360 feedback processes believe that sustainable behavior change is going to occur in the vast majority of leaders without some form of external accountability. Processes that are supposedly “development only” (i.e., have no consequences) should not be expected to create change. In those processes, participants are often not required to, or even discouraged from, sharing their results with others, especially their manager. I have called these processes “parlor games” in the past because they are kind of fun, are all about “me,” and have no consequences.

How can we create external accountability in 360 processes?  I believe that the most constructive way to create both motivation and alignment (ensuring behavior change is in synch with organizational needs/values) is to integrate the 360 feedback into Human Resource processes, such as leadership development, succession planning, high potential programs, staffing decisions, and performance management.  All these uses involve some form of decision making that affects the individual (and the organization), which puts pressure on the 360 data to be reliable and valid. Note also that I include leadership development in this list as a form of decision making because it does affect the employee’s career as well as the investment (or not) of organization resources.

But external accountability can be created in other, more subtle ways as well. We all know, from our kept and (more typically) unkept New Year’s resolutions, the power of going public with our commitments to change. Sharing your results and actions with your manager has many benefits, but it can cause real and perceived unfairness if some people are doing it and others are not. Discussing your results with your raters and engaging them in your development plans has multiple benefits.

Another source of accountability can (and should) come from your coach, if you are fortunate enough to have one. The Smither et al. (2005) meta-analysis found that the presence of a coach is one determinant of whether behavior change is observed. I have always believed that this is due to the accountability coaches create by requiring coachees to state specifically what they are going to do, and by checking back that they have followed through on that commitment.

Over and over, we see evidence that, when human beings are not held accountable, more often than not they will stray from what is in their best interests and/or the interests of the group (organization, country, etc.).  Whether it’s irrational (ignoring facts) or overly rational (finding ways to “get around” the system), we should not expect that people will do what is needed, and we should not rely on our friends, neighbors, peers or leaders to always do what is right if there are no consequences for inaction or bad behavior.

©2012 David W. Bracken

What Is a “Decision”?


My good friend and collaborator, Dale Rose, dropped me a note regarding his plans to do another benchmarking study on 360 Feedback processes. His company, The 3D Group, has done a couple of these studies before, and Dale has been generous in sharing his results with me; I have cited them in some of my workshops and webinars. The studies are conducted by interviewing coordinators of active 360 systems. Because the interviews are verbal, some of the results have appeared somewhat internally inconsistent and difficult to reconcile, though the general trends are useful and informative.

Many of the topics are useful for practitioners to gauge their program design, such as the type of instrument, number of items, rating scales, rater selection, and so on. For me, the most interesting data relates to the various uses of 360 results.

Respondents in the 2004 and 2009 studies report many uses. In both studies, “development” is the most frequent response, and that’s how it should be. In fact, I’m amazed that the responses weren’t 100%, since a 360 process should be about development. The fact that in 2004 only 72% of answers included development as a purpose is troubling, whether we take the answers as factual or assume the respondents misunderstood the question. The issue at hand is not whether 360’s should be used for development; it is what else they should, can, and are used for in addition to “development.”

In 2004, the next most frequent use was “career development;” that makes sense. In 2009, the next most frequent was “performance management,” and career development dropped way down. Other substantial uses include high potential identification, direct link to performance measurement, succession planning, and direct link to pay.

But when asked whether the feedback is used “for decision making or just for development,” about 2/3 of the respondents indicated “development only” and only 1/3 “decision making.” I believe these numbers understate the actual use of 360 for decision making (perhaps by a wide margin), though, as I will propose, it can depend on how we define what a “decision” is.

To “decide” is “to select as a course of action,” according to Merriam-Webster (in this context). I would add to that definition that one course of action is to do nothing, i.e., don’t change the status quo or don’t let someone do something. It is impossible to know what goes on in a person’s mind when he/she speaks of development, but it seems reasonable to suppose that it involves doing something beyond just leaving the person alone, i.e., maintaining the status quo. But doing nothing is a decision. So almost any developmental use involves deciding what needs to be done and what personal (time) and organizational (money) resources are to be devoted to that person. Conversely, denying an employee access to developmental resources that another employee does get access to is a decision, with results that are clearly impactful but difficult to measure.

To further complicate the issue, it is one thing to say your process is for “development only,” and another to know how it is actually used. Every time my clients have looked behind the curtain of actual use of 360 data, they have unfailingly found managers using it for purposes that are not supported. For example, at one of my clients, anecdotal evidence repeatedly surfaced that the “development only” participants were often asked to bring their reports with them to internal interviews for new jobs within the organization. The bad news was that this was outside of policy; the good news was that leaders saw the data as useful in making decisions, though (back to bad news) they may not have been trained to interpret the reports correctly.

Which brings us to why this is an important issue. There are legitimate “development only” 360 processes where the participant has no accountability for using the results and, in fact, is often actively discouraged from sharing the results with anyone else. Since there are no consequences, there are few, if any, consequential actions or decisions required. But most 360 processes (despite the benchmark results suggesting otherwise) do result in some decisions being made, which might include doing nothing by denying an employee access to certain types of development.

The Appendix of The Handbook of Multisource Feedback is titled, “Guidelines for Multisource Feedback When Used for Decision Making.” My sense is that many designers and implementers of 360 (multisource) processes feel that these Guidelines don’t apply because their system isn’t used for decision making. Most of them are wrong about that. Their systems are being used for decision making, and, even if they weren’t, why would we design an invalid process? Any system that involves the manager of the participant (which it should) creates the expectation that direct or indirect decision making will result.

So Dale’s question to me (remember Dale?) is how I would suggest wording a question in his new benchmarking study to satisfy my curiosity regarding the use of 360 results. I proposed this wording:

“If we define a personnel decision as something that affects an employee’s access to development, training, jobs, promotions or rewards, is your 360 process used for personnel decisions?” 

Dale hasn’t committed to using this question in his study. What do you think?

©2012 David W. Bracken

Full Stops, Neutrinos and Rocket Science


I don’t know why I feel compelled to respond to what I see are unreasonable positions (primarily in LinkedIn discussions). But I do, and this blog gives me a vehicle for doing so without taking up a disproportionate amount of air time on that forum.

So what got me going this time? A LinkedIn discussion (that I started on the topic of 360 validity) got diverted into the topic of “proper” use of 360 feedback (development vs decision making).  The particular comment that got me going was, “I believe these assessments should be used for development – full stop.”  (Virtually 100% of 360 processes are used for development, but the context indicates that he meant “development only.”) Having lived and worked in London for a while, I realized (or realised) that the “full stop” has the same meaning as “period,” implying end of sentence and, with emphasis, no more is worth saying.  By the way, I am using this person only as an example of the many, many individuals who have expressed similar dogmatic views on this topic.

There are probably a few things on which it is appropriate to put a “full stop.” That would be an interesting blog for someone, e.g., would we include the Ten Commandments? “Thou Shalt Not Kill. Full stop.” Hmmm… but then we have Christians who believe in capital punishment, so maybe it’s only a partial stop (or pause)? Like I said, I will let someone else take that on.

Are the physical sciences a place for “full stops”? Like, “The world is flat. Full stop.” “The Sun revolves around the Earth. Full stop.” Just this last week, we were presented with the possibility that another supposedly immutable law is under attack, i.e., “Nothing can go faster than the speed of light. Full stop.” Now we have European scientists who have observed neutrinos apparently traveling faster than the speed of light and are searching for ways to explain and confirm it. If found to be true, it would challenge many of the basics of physics, opening the door to time travel, for example. The fact that some scientists are challenging the “full stop” nature of the Theory of Relativity is also fascinating, if only because they are open to exploring the supposedly impossible. And, by the way, they are begging for others to challenge and/or replicate their findings.

I firmly believe that the social sciences have no place for “full stops.”  To me, “full stop” means ceasing to explore and learn. It seems to indicate a lack of openness to considering new information or different perspectives.

I suspect there are many practitioners in the “hard” sciences who question whether what we do is a “science” at all. (I think I am running out of my quota of quotation marks.) Perhaps they see our work with understanding human behavior as a quest with no hope of ever having answers. That’s what I like about psychology. We will never fully know how to explain human behavior, and that’s a good thing. If we could explain it, then we probably could control it, and I think that is a scary thought. BUT we do try to improve our understanding and increase the probabilities of predicting what people will do. That is one of the basic goals of industrial/organizational psychology.

(I have been known to contend that what we do is harder than rocket science because there are no answers in our field, only probabilities. The truth is that even the hard sciences have fewer “full stops” than they would like. I just finished reading a book about the Apollo space program, Rocket Men, and it is very interesting to learn how many supposed “full stops” were bashed (e.g., humans can’t live in weightlessness; the moon’s crust will collapse if we try to land on it), how much uncertainty there was, and how amazing the accomplishment really was. I also learned that one of the reasons the astronauts’ visors were mirrored was so that aliens couldn’t see their faces. Seriously.)

Increasing probabilities for predicting and influencing employee behavior requires that we also explore options.  I can’t see how it is productive to assert that we know the answer to anything, and that we shouldn’t consider options that help us serve our clients, i.e., organizations, more effectively.

On top of all that, the most recent 3D Group benchmark study indicates that about one third of organizations DO use 360 data for some sort of administrative purpose, and that almost certainly understates the real numbers. What do we tell those organizations? That they should cease doing so since our collective wisdom says that there is no way they can actually be succeeding? That we cannot (or should not) learn from what they are doing to help their organizations make better decisions about their leaders? That a few opinions should outweigh these experiences?

I don’t get it. No stop.

©2011 David W. Bracken

What does “beneficial” mean?


My friend, Joan Glaman, dropped me a note after my last blog (http://dwbracken.wordpress.com/2011/08/30/thats-why-we-have-amendments/) with this suggestion:

“I think your closing question below would be a great next topic for general discussion: ‘Under what conditions and for whom is multisource feedback likely to be beneficial?’”

To refresh (or create) your memory, the question Joan cites is from the Smither, London and Reilly (2005) meta-analysis. The article abstract states:

“…improvement is most likely to occur when feedback indicates that change is necessary, recipients have a positive feedback orientation, perceive a need to change their behavior, react positively to the feedback, believe change is feasible, set appropriate goals to regulate their behavior, and take actions that lead to skill and performance improvement.”

Before we answer Joan’s question, we should have a firm grasp on what we mean by “beneficial.” I don’t think we all would agree on that in this context.  Clearly, Smither et al. define it as “improvement,” i.e., positive behavior change. That is the criterion (outcome) measure that they use in their aggregation of 360 studies. I am in total agreement that behavior change is the primary use for 360 feedback, and we (Bracken, Timmreck, Fleenor and Summers, 2001) defined a valid 360 process as one that creates sustainable behavior change in behaviors valued by the organization.

Not everyone will agree that behavior change is the primary goal of a 360 process. Some practitioners seem to believe that creating awareness alone is a sufficient outcome: they do not support any follow-up activity or accountability, propose that simply giving the report to the leader goes far enough, and in fact discourage the sharing of results with anyone else.

If you will permit a digression, I will bring to your attention a recent blog by Sandra Mashihi (http://results.envisialearning.com/5-criteria-a-360-degree-feedback-must-meet-to-be-valid-and-reliable/) where one of her lists of “musts” (arrrgh!) is criterion-related validity, which she defines as, “…does the customized instrument actually predict anything meaningful like performance?” Evidently she would define “beneficial” not as behavior change but as the ability to measure performance to make decisions about people. This testing mentality just doesn’t work for me, since 360’s are not tests (http://dwbracken.wordpress.com/2010/08/31/this-is-not-a-test/) and it is not realistic to expect them to predict behavior, especially if we hope to actually change behavior.

Let’s get back to Joan’s question (finally). I want to make a couple of comments and then hopefully others will weigh in. The list of characteristics that Smither et al. provide in the abstract is indeed an accumulation of individual and organizational factors. It is not an “and” list, where a “beneficial” process must have all these things; it is an “or” list, where each characteristic can have benefits. The last two (setting appropriate goals and taking action) can be built into the process as requirements, regardless of whether the individual reacts positively and/or perceives the need to change. Research shows that following up and taking action are powerful predictors of behavior change, and I don’t believe it is important to know whether the leader wants to change or not. What if he/she doesn’t want to change? Do they get a pass? Some practitioners would probably say yes, and point to this study as an indication that it is not worth the effort to try to get them to change.

I suggest that the factors on this list that lead to behavior change are not independent of each other. In our profession, we speak of “covariates,” i.e., things that are likely to occur together across a population. A simple example is gender and weight: men are, on average, heavier than women. But we don’t conclude that men as a gender manage their weight less well than women; the difference is largely due to men being taller (and other factors, like bone structure).

My daughter, Anne, mentioned in passing an article she read about people who don’t brush their teeth twice a day having a shorter life expectancy than those who do. The obvious conclusion is that brushing teeth more often will make us live longer. There is certainly some benefit to regularly brushing teeth, but it is more likely that people with poor dental hygiene have covariates of behavior that have a more direct impact on health. While I don’t have data to support it, it seems likely that people who don’t brush regularly also don’t go to the dentist regularly, for starters. It seems reasonable to surmise that, on average, those same people don’t go to their doctor for regular checkups.
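To make the covariate logic concrete, here is a small simulation sketch in Python. Everything in it is hypothetical (there is no real dental data here): a hidden trait drives both brushing and checkup attendance, only checkups affect the outcome in this toy model, and yet brushing still “predicts” health:

```python
# Hypothetical simulation of a covariate: a hidden "health-consciousness"
# trait drives both brushing and checkups; only checkups affect the outcome
# in this toy model, yet brushing still correlates with the outcome.
import random

random.seed(1)

def simulate_person():
    health_conscious = random.random()                   # hidden covariate
    brushes = random.random() < health_conscious         # brushing tracks the trait
    gets_checkups = random.random() < health_conscious   # so does checkup attendance
    healthy = random.random() < (0.8 if gets_checkups else 0.5)
    return brushes, healthy

people = [simulate_person() for _ in range(10_000)]

def healthy_rate(group):
    return sum(is_healthy for _, is_healthy in group) / len(group)

brushers = [p for p in people if p[0]]
non_brushers = [p for p in people if not p[0]]
print(f"P(healthy | brushes)       = {healthy_rate(brushers):.2f}")
print(f"P(healthy | doesn't brush) = {healthy_rate(non_brushers):.2f}")
# Brushing "predicts" health here even though it plays no causal role.
```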

My hypothesis is that 360 participants who aren’t open to feedback, don’t perceive a need to change, don’t feel that they can change, etc., are also the people who are less likely to set goals and take action (follow up) if given the option to not do those things.  In other words, it’s not necessarily their attitudes that “cause” lack of behavior change, but the lower likelihood that they will do what is necessary, i.e., set goals and follow through, in order to be perceived as changing their behavior. Those “behaviors” can be modified/changed while their attitudes are likely to be less modifiable, at least until they have had a positive experience with change and its benefits.

One last point of view about “beneficial”: another definition could be change that helps the entire organization. That is the focus of the recent publication by Dale Rose and me, where (in answer to Joan’s question) we state:

“…four characteristics of a 360 process that are required to successfully create organization change, (1) relevant content, (2) credible data, (3) accountability, and (4) census participation…”

We go on to offer the existing research that supports that position, along with a wish list for future research. One way of looking at this view of what is “beneficial” is to extrapolate what works for the individual and apply it across the organization, which is where the census (i.e., whole population) part comes into play.

I will stop there, and then also post this on LinkedIn to see if we can get some other perspectives.

Thanks, Joan!

©2011 David W. Bracken

That’s Why We Have Amendments


I used my last blog (http://dwbracken.wordpress.com/2011/08/09/so-now-what/) to start LinkedIn discussions in the 360 Feedback and I/O Practitioners groups, asking the question: What is a “valid” 360 process? The response from the 360 group was tepid, maybe because that group has a more general population that might not be that concerned with “classic” validity issues (which is basically why I wrote the blog in the first place). But the I/O community went nuts (45 entries so far), with comments running the gamut from constructive to dismissive to deconstructive.

Here is a sample of some of the “deconstructive” comments:

…I quickly came to conclusion it was a waste of good money…and only useful for people who could (or wanted to) get a little better.

It is all probably a waste of time and money. Good luck!!

There is nothing “valid” about so-called 360 degree feedback. Technically speaking, it isn’t even feedback. It is a thinly veiled means of exerting pressure on the individual who is the focal point.

My position regarding performance appraisal is the same as it has been for many years: Scrap It. Ditto for 360.

Actually, I generally agree with these statements, in that many 360 processes are a waste of time and money. It’s not surprising that these sentiments are out there and probably quite prevalent. I wonder, though, if we are all on the same page. In an earlier blog, I suggested that discussions about the use and effectiveness of 360’s should be separated into those designed for feedback to a single individual (N=1) and those designed to be applied to groups (N>1).

But the fact is that HR professionals have to help their management make decisions about people, starting with hiring and then progressing through placement, staffing, promotions, compensation, rewards/recognition, succession planning, potential designation, development opportunities, and maybe even termination.

Nothing is perfect, especially when it comes to matters that involve people. As an example, look to the U.S. Constitution, an enduring document that has withstood the test of time. Yet the Founding Fathers were the first to realize that they needed to make provisions for the addition of amendments to allow further refinements. Of course, some of those amendments were imperfect themselves and were later rescinded.

But we haven’t thrown out the Constitution because it is imperfect. Nor do we find it easy to come to agreement on what the revisions should be. But one of the many good things about humans is a seemingly natural desire to make things better.

Ever since I read Mark Edwards and Ann Ewen’s seminal book, 360 Degree Feedback, I have believed that 360 Feedback has the potential to improve personnel decision making when done well. In the Appendix of The Handbook of Multisource Feedback, “Guidelines for Multisource Feedback When Used for Decision Making,” which I coauthored with Carol Timmreck, we made a stab at defining what “done well” can mean.

In our profession, we have an obligation to constantly seek ways of improving personnel decision making. There are two major needs we are trying to meet, which sometimes cause tensions. One is to provide the organization with more accurate information on which to base these decisions, which we define as increased reliability (accurate measurement) and validity (relevant to job performance). Accurate decision making is good for both the organization and the individual.

The second need is to simultaneously use methods that promote fairness. This notion of fairness is particularly salient in the U.S. where we have “protected classes” (i.e., women, minorities, older workers), but hopefully fairness is a universal concept that applies in many cultures.

Beginning with the Edwards & Ewen book and progressing from there, we can find more and more evidence that 360 done well can provide decision makers with better information (i.e., valid and fair) than traditional sources (e.g., supervisory evaluations). I actually heard a lawyer state that organizations could be legally exposed for not using 360 feedback because it is more valid and fair than the methods currently in use.

I have quoted Smither, London and Reilly (2005) before, but here it is again:

We therefore think it is time for researchers and practitioners to ask “Under what conditions and for whom is multisource feedback likely to be beneficial?” (rather than asking “Does multisource feedback work?”).

©2011 David W. Bracken

So Now What?


This is the one-year anniversary of this blog, and this is the 44th post. We have had 2,026 views, though the biggest day was the first, with 38 views. I have had fewer comments than I had hoped (only 30), though some LinkedIn discussions have resulted. Here is my question: Where to go from here? Are there topics that are of interest to readers?

Meanwhile, here is my pet peeve(s) of the week/month/year: I was recently having an exchange with colleagues regarding a 360 topic on my personal Gmail account when up popped ads in the margin for various 360 vendors (which is interesting in itself), the first of which was from Qualtrics (www.qualtrics.com) with the heading, “Create 360s in Minutes.”

The topic of technology run amok has been covered here before (When Computers Go Too Far, http://wp.me/p10Xjf-3G), but my peevery was piqued (piqued peevery?) when I explored their website and saw this claim: “USE VALIDATED QUESTIONS, FORMS and REPORTS.”

What the heck does that mean?  What are “validated” forms and reports, for starters?

The bigger question is, what is “validity” in a 360 process?  Colleagues and I (Bracken, Timmreck, Fleenor and Summers, 2001; contact me if you want a copy) have offered up a definition of validity for 360’s that holds that it consists of creating sustainable change in behaviors valued by the organization.  Reliable items, user friendly forms and sensible reports certainly help to achieve that goal, but certainly cannot be said to be “valid” as standalone steps in the process.

The Qualtrics people don’t share much about who they are. Evidently their founder is named Scott and teaches MBAs. They appear to have a successful enterprise, so kudos! I would like to know how technology vendors can claim to have “valid” tools and what definition of validity they are using.

Hey maybe I will get my 31st comment?

©2011 David W. Bracken

On the Road… and Web and Print


I have a few events coming up in the next 3 weeks or so that I would like to bring to your collective attention in case you have some interest.  One is free, two are not (though I receive no remuneration). I also have an article out that I co-authored on 360 feedback.

In chronological order, on May 25 Allan Church, VP Global Talent Development at PepsiCo, and I will lead a seminar titled, “Integrating 360 & Upward Feedback into Performance and Rewards Systems” at the 2011 World at Work Conference in San Diego (www.worldatwork.org/sandiego2011).  I will be offering some general observations on the appropriateness, challenges, and potential benefits of using 360 Feedback for decision making, such as performance management. The audience will be very interested in Allan’s descriptions of his experiences with past and current processes that have used 360 and Upward Feedback for both developmental and decision making purposes.

On June 8, I am looking forward to conducting a half day workshop for the Personnel Testing Council of Metropolitan Washington (PTCMW) in Arlington, VA, titled “360-Degree Assessments: Make the Right Decisions and Create Sustainable Change” (contact Training.PTCMW@GMAIL.COM or go to WWW.PTCMW.ORG). This workshop is open to the public and costs $50.  I will be building from the workshop Carol Jenkins and I conducted at The Society for Industrial and Organizational Psychology. That said, the word “assessments” in the title is a foreshadowing of a greater emphasis on the use of 360 Feedback in a decision making context and an audience that is expected to have great interest in the questions of validity and measurement.

On the following day, June 9 (at 3:30 PM EDT), I will be part of an online virtual conference organized by the Institute of Human Resources and hr.com on performance management. My webinar is titled, “Using 360 Feedback in Performance Management: The Debate and Decisions,” where the “decisions” part has multiple meanings. Given the earlier two sessions I described, it should be clear that I am a proponent of using 360/Upward Feedback for decision making under the right conditions. The other take on “decisions” is the multitude of decisions that are required to create those “right conditions” in the design and implementation of a multisource process.

On that note, I am proud to say that Dale Rose and I have a new article in the Journal of Business and Psychology (June) titled, “When does 360-degree feedback create behavior change? And how would we know it when it does?” Our effort is largely an attempt to identify the critical design factors in creating 360 processes and the associated research needs.

This article is part of a special research issue (http://springerlink.com/content/w44772764751/) of JBP and you will have to pay for a copy unless you have a subscription. As a tease, here is the abstract:

360-degree feedback has great promise as a method for creating both behavior change and organization change, yet research demonstrating results to this effect has been mixed. The mixed results are, at least in part, because of the high degree of variation in design features across 360 processes. We identify four characteristics of a 360 process that are required to successfully create organization change, (1) relevant content, (2) credible data, (3) accountability, and (4) census participation, and cite the important research issues in each of those areas relative to design decisions. In addition, when behavior change is created, the data must be sufficiently reliable to detect it, and we highlight current and needed research in the measurement domain, using response scale research as a prime example.

Hope something here catches your eye/ear!

©2011 David W. Bracken

Who’s in charge here?


In my last blog, I touched on an issue that I would like to give a little more attention, namely the question of who our leaders (i.e., the recipients of 360 Feedback) should be listening to when prioritizing their leadership skills and behaviors: the organization, their coworkers, and/or some combination?  This is a topic that has cultural implications that I alluded to in that earlier blog, and I will post this out on the Society for Industrial/Organizational Psychology (SIOP) Going Global group on LinkedIn to see if I can get a response from someone there. But it also has broad implications for how we approach leadership selection, assessment and development in general.

One end of the continuum is to state that an organization has the right (and need) to define leadership competencies/behaviors, ideally derived from strategy, to support its initiatives and create a unique competitive advantage (see the treatment of leadership as an intangible asset in the book, The Invisible Advantage).  Organizations regularly create leadership models that are used to align HR systems in creating the type of leader needed to succeed at the individual and organization level.  This includes values statements that often are operationalized through behavioral items in 360’s that hopefully apply to leaders and line employees alike.

In the context of 360’s, I have long maintained that 360’s can draw their relevance (read “validity”) from a direct line of sight from strategy to leadership models to 360 content, and that it is not necessary to conduct predictive studies to demonstrate the validity of the 360 process. The content of a 360 instrument can define success, and leaders who behave in ways consistent with the model are successful by definition. Those who do not conform to those expectations should be given the choice of changing or finding alternative employment.

That is the “top down” version of “who’s in charge here?”  I was influenced in my thinking many years ago by some monographs by Bill Byham at DDI on this topic that were often in the context of assessment center processes but certainly no less applicable. Dr. Byham has written a number of papers on aligning HR systems with competency models, and you can find much of that on their web site.

The other end of the continuum of “who’s in charge here?” is the “bottoms up” view of leadership effectiveness that suggests that the leader’s behavior should be directed by coworkers, particularly direct reports. This view came into greater clarity for me at SIOP during a symposium that I briefly described in my last blog on implicit leadership models.

For decades, this “bottoms up” view of leadership has been implicit in the use of importance ratings. I have been opposed to importance ratings for as long as I can remember, partially because of the extra burden to raters, but, more importantly, because raters are not in a good position to understand the needs of the leader and the organization. I still believe that importance is best determined by the ratee and his/her boss in conjunction.  Asking raters for importance ratings feels like a customer survey. If your 360 treats raters as “customers” of leader behavior, then use a satisfaction scale and design the whole system accordingly.

The SIOP symposium had a number of interesting research studies that explicitly state that an effective leader should understand and react to the needs of coworkers based on the coworkers’ expectations of how an “effective” leader should behave which is, in turn, derived from their cultural backgrounds (i.e., nationality). This is very interesting and cultural awareness is an important issue in our global community.  As some of these papers pointed out, leaders now (especially with virtual teams) can have coworkers and direct reports from multiple nations and cultures, which creates a requirement that somehow the leader has to understand and adapt to the needs of each of these people. My head began to spin!

By the way, I first learned about the concept of Situational Leadership back in the 80’s and I still believe that philosophically it makes a lot of sense to not treat every subordinate the same way. But if you know Situational Leadership, it focuses exclusively on the person’s “maturity” (ability to perform a task) and the need to adapt leadership style depending on an assessment of that maturity level. It is very task oriented, and has little (if anything) to do with the needs and expectations of the follower.

There was some sentiment on the SIOP panel for asking the leader to negotiate or compromise between the “bottoms up” and “top down” views of leadership. I’m not sure how that would work, but it probably compounds one of the main problems I cited as discussant, namely the overload we are creating for leaders by inundating them with all this information in the form of job expectations. I have to believe that leaders are asking, or will ask, this question of “who is in charge?”, the organization or their coworkers?

I will take a stance. I believe that the organization is “in charge.” I did some consulting for a company of about 3500 people in Dubai that had employees with 80 different passports. They were run by South Africans, and ran the company that way. In a nutshell, they expected employees to conform to a common set of values and expectations, effectively leaving their cultural backgrounds at the door. Or at least the company “culture” should take precedence in defining effective leadership. I believe that this aligned focus on organization needs is a necessity, and that we need to make it clear to our leaders “who is in charge” when it comes to deciding how the company will leverage one of its most powerful intangible assets.

©2011 David W. Bracken

What is normal?


My good friend, Jon Low (http://thelowdownblog.blogspot.com/), brought a WSJ article to my attention that delves into the question of what behaviors are “normal” (i.e., tolerated or even encouraged) across different organizations, in this case sorted by industry.  Here are a couple brief excerpts from the article and a link to access it:

Fuld & Co., a competitive-intelligence consultant based in Cambridge, Mass., presented 104 business executives with hypothetical scenarios that would give the executive an opportunity to collect intel about a competitor, but straddled the ethical line. Participants could rate the scenario as “normal,” “aggressive,” “unethical,” or “illegal.”

“Companies have different senses of what’s right and wrong,” said Fuld & Co. President Leonard Fuld.

Executives in financial services and technology are the most cutthroat in collecting intelligence about competitors, while pharmaceutical executives and government officials are the most trepid, according to a recent survey.

http://online.wsj.com/article/SB10001424052748704728004576176711042012064.html?KEYWORDS=joe+light

What came to my mind in reading this interesting study was the question of the utility of comparing leadership behavior across organizations as we use the results of 360 Feedback processes to guide the development priorities and, sometimes, make other decisions based on the data as well.  Specifically, how useful are external norms as part of 360 reports?

First a brief digression. I have proposed lately that many discussions of best practices in the 360 arena should be divided into two categories, i.e., processes where “N=1” (i.e., ad hoc, single person administrations) vs. “N>1” (i.e., where more than one leader is going through the 360 experience at the same time).  For “N=1” situations, using an off-the-shelf instrument usually makes sense, and usually those instruments have external norms since the content is held constant across all users. Usually there are no internal norms as well. That said, the points I make below highlight the need for caution in using external norms in any setting. End of digression.

I frequently use a quote taken from the book Execution (Bossidy and Charan, 2003) that reads:

“The culture of a company is the behavior of its leaders. Leaders get the behavior they exhibit and tolerate.”

I do not recall ever hearing anyone refute the notion that every organization has its own culture. Since that is an accepted axiom, then it would follow from Bossidy and Charan that the definition of successful leadership behaviors should vary across organizations as well.

If you agree with that train of thought, then it seems to follow that using external norms in 360’s makes no sense. For starters, wanting external norms severely constrains the content of the instrument since the organization will be required to use the exact wording and response scale from the standard questionnaire.

Using external norms also flies in the face of the argument that the uniqueness of an organization (and its culture) is a source of competitive advantage.  I (and others) also argue that uniquely relevant 360 content greatly helps in creating motivation for the raters and for helping ratees accept the feedback as being important and relevant to their success.

360’s can be a powerful way to create culture change. Especially when used across the whole organization, the behavioral descriptors “bring the culture to life” and communicate to all employees what it takes to be a successful member of organization and what they should expect from their leaders. Administration across the organization will quickly generate internal norms that can be an extremely powerful tool in helping leaders understand how they compare to their peers, and, in some organizations, help identify outliers at the low end who may require special attention. Conversely, leaders at the top 5th percentile may be used as role models.
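As a sketch of what “quickly generate internal norms” might look like in practice, here is a minimal Python example. The leader names, scores, and flagging thresholds are all hypothetical, chosen only to illustrate percentile ranks against an internal population:

```python
# Illustrative sketch: internal norms (percentile ranks) computed from
# overall 360 scores across an organization. All data are hypothetical.
scores = {  # leader -> mean 360 rating on a 5-point scale
    "leader_01": 4.2, "leader_02": 3.1, "leader_03": 4.6,
    "leader_04": 2.8, "leader_05": 3.9, "leader_06": 4.4,
}

def percentile_rank(value, population):
    """Percent of the population scoring at or below this value."""
    return 100 * sum(s <= value for s in population) / len(population)

population = list(scores.values())
for leader, score in sorted(scores.items(), key=lambda kv: kv[1]):
    pct = percentile_rank(score, population)
    flag = ("may need attention" if pct <= 20
            else "potential role model" if pct >= 95 else "")
    print(f"{leader}: {score:.1f} ({pct:.0f}th percentile) {flag}")
```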

Returning to the study cited in the WSJ, I might have thought that ethical behavior might be one area where there would be some consistency across organizations, at least within the same culture (e.g., Western culture). Silly me. If we have significant variance in behavioral expectations across organizations and/or industries in something as basic as ethical behavior, then we similarly should not be surprised if we find differences in other categories of behavior as well.

Do you believe that each organization has a unique culture? If so, using external norms in your 360 probably doesn’t make sense.

©2011 David W. Bracken

Not Funny


I seem to be in a bit of a rut with themes around humor and now commercials. Despite trying to bypass as many commercials as possible with my DVR, occasionally I do see one, and sometimes it is even for the better.

One that caught my eye/ear is an IBM commercial that starts with a snippet of Groucho Marx (whom I also like very much) in which he states, “This morning I shot an elephant in my pajamas.” Of course, the fun part is when he follows with, “How he got in my pajamas, I will never know.” Ba bump.

The commercial goes on to talk about a computer called Watson that has been developed by IBM with capabilities that will be used to compete on Jeopardy (another favorite). The point is that language has subtle meanings, euphemisms, metaphors, nuances and unexpected twists that are difficult for machines to correctly comprehend.

In the context of 360 Feedback, the problem is that we humans are sometimes not so good at picking up the subtleties of language as well. We need to do everything we can to remove ambiguity in our survey content, acknowledging that we can never be 100% successful.

We have all learned, sometimes the hard way, how our attempts to communicate with others can go astray. How often have we had to come to grips with our seemingly clear directions being misunderstood by others?

I became sensitized to this question of ambiguity in language during the quality movement of the 80’s and the work of Peter Senge as embodied in The Fifth Discipline and the accompanying Fifth Discipline Fieldbook. (Writing this blog has spurred me to pull out this book; if you youngsters are not aware of Senge’s writings, it is still worth digging out. There is a 2006 Edition which I confess I have not read yet.)

There are many lessons in these books regarding the need to raise awareness about our natural tendencies as humans to fall back on assumptions, beliefs, values, etc., often unconsciously, in making decisions, trying to influence, and taking actions. One lesson that has particularly stuck with me in the context of 360’s is the concept of mental models, which Senge defines as, “deeply ingrained assumptions, generalizations, or even pictures or images that influence how we understand the world and how we take action.”  In the Fieldbook, he uses an example of the word “chair” and how that simple word will conjure up vastly different mental images of what a “chair” is, from very austere, simple seats to very lush, padded recliners and beyond. (In fact, it might even create an image of someone running a meeting if we are to take it even farther.)

So Groucho created a “mental model” (or assumed one) of us visualizing him in his pajamas with a gun chasing an elephant. Then he smashes that “assumption” we made by telling us that the elephant was wearing the pajamas. That is funny in many ways.

Sometimes we are amused when we find we have made an incorrect assumption about what someone has told us. I have told the story before of the leader who made assumptions about his low score on “Listens Effectively.” He unexpectedly found that his assumptions were unfounded and the raters were simply telling him to put down his PDA. That could be amusing and also a relief since it is an easy thing to act on.

360 Feedback is a very artificial form of communication where we rely on questionnaires to allow raters to “tell” the ratee something while protecting their anonymity. This also has the potential benefit of allowing us to easily quantify the responses which, in turn, can be used to measure gaps (between rater groups, for example) and track progress over time.
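As a tiny illustration of that quantification, here is a hedged Python sketch of computing rater-group means and the gap of each group from self-ratings on a single item. The ratings and group compositions are hypothetical; the item name borrows the “Listens Effectively” example from the story above:

```python
# Illustrative sketch: quantifying gaps between rater groups on one 360 item.
# All ratings below are made-up values on a 5-point scale.
from statistics import mean

ratings = {  # rater group -> ratings on "Listens effectively"
    "self":           [4.0],
    "manager":        [3.0],
    "peers":          [3.2, 2.8, 3.5],
    "direct_reports": [2.5, 2.0, 3.0, 2.4],
}

self_score = mean(ratings["self"])
for group, values in ratings.items():
    gap = mean(values) - self_score
    print(f"{group:>14}: mean {mean(values):.2f}  gap vs. self {gap:+.2f}")
```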

Of course, this artificial communication creates many opportunities for raters to misunderstand or honestly misuse the intent of the items and, in turn, for ratees to misinterpret the intended message from the raters. We need to do our best to keep language simple and direct, though we can never prevent raters from applying different “mental models.”

Take an item like, “Ensures the team has adequate resources.” Not a bad question. But, like “chair,” “resources” can create all sorts of mental images such as people (staff), money (budget), equipment (e.g., computers), access to the leader, and who knows what else! We could create a different item for each type of resource if we had an unlimited item budget, which we don’t.

This potential problem is heightened if there will be multiple languages used, creating all sorts of issues with translations, cultural perspectives, language nuances, and so on.

In the spirit of “every problem has a solution,” I can think of at least four basic recommendations.

First, be diligent in item writing to keep confusion to a minimum.  For example:

  • Use simple words/language
  • Don’t use euphemisms (“does a good job”)
  • Don’t use metaphors (“thinks outside the box”)
  • Don’t use sports language (“creates benchstrength”)
  • Keep all wording positive (or cluster negatively phrased items such as derailers in one dimension with clear instructions)

Second, conduct pilot tests with live raters who can give the facilitator immediate feedback on wording in terms of clarity and inferred meaning.

Third, conduct rater training. Some companies tell me that certain language is “ingrained” in their culture, such as “think outside the box.” (I really wonder how many people really know the origins of that metaphor. Look it up in Wikipedia if you don’t.)  I usually have to defer to their wishes, but still believe that their beliefs may be more aspirational than factual. Including a review of company-specific language (which does have some value in demonstrating the uniqueness of the 360 content) during rater training will have multiple benefits.

Fourth, acknowledge and communicate that it is impossible to prevent misinterpretations by the senders (raters) and the receivers (ratees). This will require that the ratee discuss results with the raters and ensure that they are all “on the same page” (metaphor intended, with tongue in cheek).

I bet that some ratees do actually laugh (or at least chuckle) if/when they hear how some raters interpret the questions.  But more typically it is not funny. And it is REALLY not funny if the ratee invests time and effort (and organizational resources) taking action on false issues due to miscommunication.

(Note: For those interested, Carol Jenkins and I will be talking about these issues in our SIOP Pre-Conference workshop on 360 Feedback on April 13 in Chicago.)

©2011 David W. Bracken
