Strategic 360s

360s for more than just development

Posts Tagged ‘Dale Rose’

The Debate is Over

I have recently had the opportunity to read two large benchmarking reports that relate to talent management, leadership development and, specifically, how 360 Feedback is being used to support those disciplines.

The first is the U.S. Office of Personnel Management's "Executive Development Best Practices Guide" (November 2012), which includes both a compilation of best practices across 17 major organizations and a survey of Federal Government members of the Senior Executive Service, itself a follow-up to a similar survey in 2008.

The second report was created by The 3D Group as the third benchmark study specifically related to practices in 360 Degree Feedback. This year’s study differed from the past versions by being conducted online, which had the immediate benefit of expanding the sample to over 200 organizations. This change in methodology, sample and content makes interpretation of trend scores a little dicey, but the results are compelling nonetheless. Thank you to Dale Rose and his team at 3D Group for sharing the report with me once again.

These studies have many interesting results that relate to the practice of 360 Feedback, and I want to grab the low-hanging fruit for the purposes of this blog entry.

As the title teases, the debate is over, with the "debate" being whether 360 Feedback can and should be used for decision making purposes. Let me once again acknowledge that 1) all 360 Feedback should be used for leadership development, 2) some 360 processes are solely for leadership development, often one leader at a time, and 3) these development-only focused 360 processes should not be used for decision making.

But these studies demonstrate that 360 Feedback continues to be used for decision making, at a growing rate, and evidently successfully, since its use is projected to increase (more on this later). The 3D report goes to some length to try to pin down what "decision making" really means so that we can guide respondents in answering how their 360 data are used. For example, is leadership development training a "decision"? I would say yes, since some people get it and some don't based on 360's, and that affects both the individual's career and how the organization uses its resources (e.g., people, time and dollars).

But let's make it clearer and look at just a few of the reported uses for 360 results. In the 3D Group report, one of the most striking numbers is the 47% of organizations that indicate they use 360's for performance management (despite only 31% saying in another question that they use it for personnel decisions). It may well be that "performance management" use means integrating 360 results into the development planning aspect of a PM process, which is a great way to create accountability without overdoing the measurement focus. This type of linkage of development to performance plans is also reinforced as a best practice in the highlights of the OPM study.

In the OPM study, 56% of the surveyed leaders report participating in a 360 process (up from 41% in 2008), though the purpose is not specified. 360's are positioned as one of several assessment tools available to these leaders, and an integrated assessment strategy is encouraged in the report.

Two other messages that come out of both of these studies are 1) the use of coaches (and/or managers as coaches) for post-assessment follow-up continues to gain momentum as a key factor in success, and 2) 360 processes must be linked to organizational objectives, strategies and values in order to have impact and sustainability.

Finally, in the 3D study, 73% of the organizations report that their use of 360’s in the next year will either continue at the same level or increase.

These studies are extremely helpful in gauging the trends within the area of leadership development and assessment, and, to this observer, it appears that some of the research that has promoted certain best practices, such as follow up and coaching, is being considered in the design and implementation of 360 feedback processes.  But it is most heartening to see some indications that organizations are also realizing the value that 360 data can bring to talent management and the decisions about leaders that are inherent in managing that critical resource.

It is no longer useful (if it ever was) to debate whether 360 feedback can be used successfully to inform and improve personnel decisions. It has and it does. It’s not necessarily easy to do right, but the investment is worth the benefits.

©2013 David W. Bracken

What Is a “Decision”?

My good friend and collaborator, Dale Rose, dropped me a note regarding his plans to do another benchmarking study on 360 Feedback processes. His company, The 3D Group, has done a couple of these studies before and Dale has been generous in sharing his results with me, which I have cited in some of my workshops and webinars. The studies are conducted by interviewing coordinators of active 360 systems. Because the interviews are conducted verbally, some of the results have appeared somewhat internally inconsistent and difficult to reconcile, though the general trends are useful and informative.

Many of the topics are useful for practitioners to gauge their program design, such as the type of instrument, number of items, rating scales, rater selection, and so on. For me, the most interesting data relates to the various uses of 360 results.

Respondents in the 2004 and 2009 studies report many uses. In both studies, "development" is the most frequent response, and that's how it should be. In fact, I'm amazed that the responses weren't 100% since a 360 process should be about development. The fact that in 2004 only 72% of answers included development as a purpose is troubling, whether we take the answers as factual or assume the respondents didn't understand the question. The issue at hand here is not whether 360's should be used for development; it is what else they should, can, and are used for in addition to "development."

In 2004, the next most frequent use was “career development;” that makes sense. In 2009, the next most frequent was “performance management,” and career development dropped way down. Other substantial uses include high potential identification, direct link to performance measurement, succession planning, and direct link to pay.

But when asked whether the feedback is used “for decision making or just for development”, about 2/3 of the respondents indicated “development only” and only 1/3 for “decision making.” I believe these numbers understate the actual use of 360 for “decision making” (perhaps by a wide margin), though (as I will propose), it can depend on how we define what a “decision” is.

To "decide" is "to select as a course of action," according to Merriam-Webster (in this context). I would add to that definition that one course of action is to do nothing, i.e., don't change the status quo or don't let someone do something. It is impossible to know what goes on in a person's mind when he/she speaks of development, but it seems reasonable to suppose that it involves doing something beyond just leaving the person alone, i.e., maintaining the status quo. But doing nothing is a decision. So almost any developmental use involves making a decision as to what needs to be done and what personal (time) and organizational (money) resources are to be devoted to that person. Conversely, denying an employee access to developmental resources that another employee does get access to is a decision, with results that are clearly impactful but difficult to measure.

To further complicate the issue, it is one thing to say your process is for "development only," and another to know how it is actually used. Every time my clients have looked behind the curtain at the actual use of 360 data, they have unfailingly found that managers are using it for purposes that are not supported. For example, at one client of mine, anecdotal evidence repeatedly surfaced that the "development only" participants were often asked to bring their reports with them to internal interviews for new jobs within the organization. The bad news was that this was outside of policy; the good news was that leaders saw the data as useful in making decisions, though (back to bad news) they may not have been trained to correctly interpret the reports.

Which brings us to why this is an important issue. There are legitimate "development only" 360 processes where the participant has no accountability for using the results and, in fact, is often actively discouraged from sharing the results with anyone else. Since there are no consequences, there are few, if any, consequential actions or decisions required. But most 360 processes (despite the benchmark results suggesting otherwise) do result in some decisions being made, which might include doing nothing by denying an employee access to certain types of development.

The Appendix of The Handbook of Multisource Feedback is titled, "Guidelines for Multisource Feedback When Used for Decision Making." My sense is that many designers and implementers of 360 (multisource) processes feel that these Guidelines don't apply because their system isn't used for decision making. Most of them are wrong about that. Their systems are being used for decision making, and, even if not, why would we design an invalid process? And any system that involves the manager of the participant (which it should) creates the expectation that direct or indirect decision making will result.

So Dale's question to me (remember Dale?) is how I would suggest wording a question in his new benchmarking study that would satisfy my curiosity regarding the use of 360 results. I proposed this wording:

“If we define a personnel decision as something that affects an employee’s access to development, training, jobs, promotions or rewards, is your 360 process used for personnel decisions?” 

Dale hasn’t committed to using this question in his study. What do you think?

©2012 David W. Bracken

What does “beneficial” mean?

My friend, Joan Glaman, dropped me a note after my last blog (http://dwbracken.wordpress.com/2011/08/30/thats-why-we-have-amendments/) with this suggestion:

“I think your closing question below would be a great next topic for general discussion: ‘Under what conditions and for whom is multisource feedback likely to be beneficial?’”

To refresh (or create) your memory, the question Joan cites is from the Smither, London and Reilly (2005) meta-analysis. The article abstract states:

“…improvement is most likely to occur when feedback indicates that change is necessary, recipients have a positive feedback orientation, perceive a need to change their behavior, react positively to the feedback, believe change is feasible, set appropriate goals to regulate their behavior, and take actions that lead to skill and performance improvement.”

Before we answer Joan’s question, we should have a firm grasp on what we mean by “beneficial.” I don’t think we all would agree on that in this context.  Clearly, Smither et al. define it as “improvement,” i.e., positive behavior change. That is the criterion (outcome) measure that they use in their aggregation of 360 studies. I am in total agreement that behavior change is the primary use for 360 feedback, and we (Bracken, Timmreck, Fleenor and Summers, 2001) defined a valid 360 process as one that creates sustainable behavior change in behaviors valued by the organization.

Not everyone will agree that behavior change is the primary goal of a 360 process. Some practitioners seem to believe that creating awareness alone is a sufficient outcome, since they do not support any activity or accountability, proposing that simply giving the report to the leader goes far enough and, in fact, discouraging the sharing of results with anyone else.

If you will permit a digression, I will bring to your attention a recent blog by Sandra Mashihi (http://results.envisialearning.com/5-criteria-a-360-degree-feedback-must-meet-to-be-valid-and-reliable/) where one of her lists of "musts" (arrrgh!) is criterion-related validity, which she defines as "…does the customized instrument actually predict anything meaningful like performance?" Evidently she would define "beneficial" not as behavior change but as being able to measure performance in order to make decisions about people. This testing mentality just doesn't work for me since 360's are not tests (http://dwbracken.wordpress.com/2010/08/31/this-is-not-a-test/) and it is not realistic to expect them to predict behavior, especially if we hope to actually change behavior.

Let's get back to Joan's question (finally). I want to make a couple of comments and then hopefully others will weigh in. The list of characteristics that Smither et al provide in the abstract is indeed an accumulation of individual and organizational factors. This is not an "and" list that says a "beneficial" process will have all these things. It is an "or" list where each characteristic can have benefits. The last two (set goals and take actions) can be built into the process as requirements regardless of whether the individual reacts positively and/or perceives the need to change. Research shows that follow-up and taking action are powerful predictors of behavior change, and I don't believe it matters whether the leader wants to change or not. What if he/she doesn't want to change? Do they get a pass? Some practitioners would probably say yes, and point to this study as an indication that it is not worth the effort to try to get them to change.

I suggest that these factors that lead to behavior change are not independent of each other. In our profession, we speak of "covariates," i.e., things that are likely to occur together across a population. A simple example is gender and weight, where men are, on average, heavier than women. But we don't conclude that men as a gender manage their weight less well than women; the difference is largely due to men being taller (and other factors, like bone structure).

My daughter, Anne, mentioned in passing an article she read about people who don’t brush their teeth twice a day having a shorter life expectancy than those who do.  So the obvious conclusion is that brushing teeth more often will make us live longer.  There is certainly some benefit to regularly brushing teeth, but it’s more likely that there are covariates of behavior for people that don’t have good dental hygiene that have a more direct impact on health.  While I don’t have data to support it, it seems likely that people who don’t brush regularly also don’t go to the dentist regularly for starters.  It seems reasonable to surmise that, on average, those same people don’t go to their doctor for a regular checkup.

My hypothesis is that 360 participants who aren’t open to feedback, don’t perceive a need to change, don’t feel that they can change, etc., are also the people who are less likely to set goals and take action (follow up) if given the option to not do those things.  In other words, it’s not necessarily their attitudes that “cause” lack of behavior change, but the lower likelihood that they will do what is necessary, i.e., set goals and follow through, in order to be perceived as changing their behavior. Those “behaviors” can be modified/changed while their attitudes are likely to be less modifiable, at least until they have had a positive experience with change and its benefits.

One last point of view about "beneficial." Another definition could be change that helps the entire organization. That is the focus of the recent publication by Dale Rose and me, where (in answer to Joan's question) we state:

"…four characteristics of a 360 process that are required to successfully create organization change, (1) relevant content, (2) credible data, (3) accountability, and (4) census participation…"

We go on to cite the existing research that supports that position, along with a wish list for future research. One way of looking at this view of what is "beneficial" is to extrapolate what works for the individual and apply it across the organization (which is where the census, i.e., whole population, part comes into play).

I will stop there, and then also post this on LinkedIn to see if we can get some other perspectives.

Thanks, Joan!

©2011 David W. Bracken

Built to Fail/Don’t Let Me Fail

This is a "two-sided" blog entry, like those old 45 rpm records that had hit songs on both sides (think "We Can Work It Out"/"Day Tripper" by the Beatles), though my popularity may not be quite at their level. This is precipitated by a recent blog (and LinkedIn discussion entry) coming from the Envisia people. The blog entry is called "Does 360-degree feedback even work?" by Sandra Mashihi and can be found at http://results.envisialearning.com/. It would be helpful if you read it first, but not necessary.

Sandra begins by citing some useful research regarding the effectiveness of 360 processes. And she concludes that sometimes 360’s “work” and sometimes not.  Her quote is, “Obviously, the research demonstrates varied results in terms of its effectiveness.”

What is frustrating for some of us are blanket statements about failures (and the use of terms like "obvious") that do not acknowledge that many 360's are "built to fail." This is the main thesis of the article Dale Rose and I just published in the Journal of Business and Psychology. http://www.springerlink.com/content/85tp6nt57ru7x522/

http://www.ioatwork.com/ioatwork/2011/06/practical-advice-for-designing-a-360-degree-feedback-process-.html

Dale and I propose four features that a 360 process needs if it is to have a good chance of creating sustainable behavior change:

1) Reliable measurement: Professionally developed, custom designed instruments

2) Credible data: Collecting input from trained, motivated raters with knowledge of ratees

3) Accountability: Methods to motivate raters and ratees to fulfill their obligations

4) Census participation: Requiring all leaders in an organizational unit to get feedback

We go on to cite research that demonstrates how the failure to build these features into a 360 can, in some cases, almost guarantee failure and/or undermine the ability to detect behavior change when it does occur. One such feature, for example, is whether the ratee follows up with raters (which I have mentioned in multiple prior blogs). If/when a 360 (or a collection of 360's, such as in a meta-analysis) is deemed a "failure," I always want to know things such as whether raters were trained and whether follow-up was required, for starters.

We are learning more and more about the facets that increase the probability that behavior change will occur as a result of 360 Feedback. Yet all too often these features are not built into many processes, and practitioners are surprised ("shocked, I'm shocked") when the process doesn't produce the desired results.

Sandra then goes on to state: "I have found 360-degree feedback worked best when the person being rated was open to the process, when the company communicated its purpose clearly, and used it for development purposes." I assume that she means "development only" since all 360's are developmental. I definitely disagree with that last feature. 360's for "development (only) purposes" usually violate one or more of the four features Dale and I propose, particularly accountability. They often do not generate credible data because too few raters are used, not even following the best practice of including all direct reports.

The part about "being open to the process" is where I get to the flip side of my record, i.e., don't hurt my feelings. In one (and only one) way, this makes sense. If the ratee doesn't want to be in a development-only process, then by all means don't force them. It is a waste of time and money. On the other hand, in my opinion, development-only processes are a waste of money for most people. (And, by the way, development only is very rare if that means that no decisions are being made as a result.)

But if we ARE expecting to get some ROI (such as sustained behavior change) from our 360's, then letting some people opt out so their feelings aren't hurt is totally contrary to helping the organization manage its leadership cadre. Intuitively, we should expect that those who opt out are the leaders who need it the most, who know that they are not effective and/or are afraid to be "discovered" as the bullies, jerks, and downright psychopaths that we know exist out there.

I worry that this fear of telling leaders that they are less than perfect stems from a troubling trend in our culture where everyone has to succeed. I think that the whole "strengths" movement is a sign of that.

Over the last couple of weeks, I have seen a few things that further sensitized me to this phenomenon. One big one is this article in The Atlantic: http://www.theatlantic.com/magazine/archive/2011/07/how-to-land-your-kid-in-therapy/8555/1/.  Protecting our children from failure is not working. Protecting our leaders from failure is also dooming your organization.

I swear I never watch America's Funniest Videos, but during a rain delay of a baseball game recently, I did stumble upon it and succumbed. AFV is all about failure, and I'm not so sure that people always learn from these failures. But one video I enjoyed showed a 2-year-old boy trying to pour apple juice from a BIG bottle into a cup. He put the cup on the floor and totally missed the first two times (with the corresponding huge mess). As a parent and grandparent, I was quite amazed that the person behind the camera just let it happen. But on the third try, the task was accomplished successfully, followed by applause and smiles! There was a huge amount of learning that occurred in just a minute or two because the adults allowed it to happen, with a bit of a mess to clean up.

How many of us would have just poured the juice for him? His learning isn’t over; he will make more mistakes and miss the cup occasionally. But don’t we all.

As a parting note, Dale and I support census participation for a number of reasons, one of which is the point I have already made about otherwise missing the leaders that need it most. We also see 360’s as a powerful tool for organizational change, and changing some leaders and not others does not support that objective. Having all leaders participate is tangible evidence that the process has organization support and is valued. Finally, it creates a level playing field for all leaders for both evaluation and development, communicating to ALL employees what the organization expects from its leaders.

©2011 David W. Bracken

On the Road… and Web and Print

I have a few events coming up in the next 3 weeks or so that I would like to bring to your collective attention in case you have some interest.  One is free, two are not (though I receive no remuneration). I also have an article out that I co-authored on 360 feedback.

In chronological order, on May 25 Allan Church, VP Global Talent Development at PepsiCo, and I will lead a seminar titled, “Integrating 360 & Upward Feedback into Performance and Rewards Systems” at the 2011 World at Work Conference in San Diego (www.worldatwork.org/sandiego2011).  I will be offering some general observations on the appropriateness, challenges, and potential benefits of using 360 Feedback for decision making, such as performance management. The audience will be very interested in Allan’s descriptions of his experiences with past and current processes that have used 360 and Upward Feedback for both developmental and decision making purposes.

On June 8, I am looking forward to conducting a half day workshop for the Personnel Testing Council of Metropolitan Washington (PTCMW) in Arlington, VA, titled “360-Degree Assessments: Make the Right Decisions and Create Sustainable Change” (contact Training.PTCMW@GMAIL.COM or go to WWW.PTCMW.ORG). This workshop is open to the public and costs $50.  I will be building from the workshop Carol Jenkins and I conducted at The Society for Industrial and Organizational Psychology. That said, the word “assessments” in the title is a foreshadowing of a greater emphasis on the use of 360 Feedback in a decision making context and an audience that is expected to have great interest in the questions of validity and measurement.

On the following day, June 9 (at 3:30 PM EDT), I will be part of an online virtual conference organized by the Institute of Human Resources and hr.com on performance management. My webinar is titled, “Using 360 Feedback in Performance Management: The Debate and Decisions,” where the “decisions” part has multiple meanings. Given the earlier two sessions I described, it should be clear that I am a proponent of using 360/Upward Feedback for decision making under the right conditions. The other take on “decisions” is the multitude of decisions that are required to create those “right conditions” in the design and implementation of a multisource process.

On that note, I am proud to say that Dale Rose and I have a new article in the Journal of Business and Psychology (June) titled, “When does 360-degree feedback create behavior change? And how would we know it when it does?” Our effort is largely an attempt to identify the critical design factors in creating 360 processes and the associated research needs.

This article is part of a special research issue (http://springerlink.com/content/w44772764751/) of JBP and you will have to pay for a copy unless you have a subscription. As a tease, here is the abstract:

360-degree feedback has great promise as a method for creating both behavior change and organization change, yet research demonstrating results to this effect has been mixed. The mixed results are, at least in part, because of the high degree of variation in design features across 360 processes. We identify four characteristics of a 360 process that are required to successfully create organization change, (1) relevant content, (2) credible data, (3) accountability, and (4) census participation, and cite the important research issues in each of those areas relative to design decisions. In addition, when behavior change is created, the data must be sufficiently reliable to detect it, and we highlight current and needed research in the measurement domain, using response scale research as a prime example.

Hope something here catches your eye/ear!

©2011 David W. Bracken

What is the ROI for 360’s?

Tracy Maylett recently started a LinkedIn discussion in the 360 Feedback Surveys group by asking, “Can you show ROI on 360-degree feedback processes?” To date, no one has offered up any examples, and this causes me to reflect on this topic. It will also be part of our (Carol Jenkins and myself) discussion at the Society for Industrial and Organizational Psychology (SIOP) Pre-Conference Workshop on 360 Feedback (April 13 in Chicago; see www.siop.org).

Here are some thoughts on the challenges in demonstrating ROI with 360 processes:

1) It is almost impossible to assess the value of behavior change. Whether we use actual measurements (e.g., test-retest) or just observer estimations of ratee change, assigning a dollar value is extremely difficult. My experience is that, no matter what methodology you use, the results are often large and cause consumers (e.g., senior management) to question and discount the findings.

2) The targets for change are limited, by design. A commonly accepted best practice for 360's is to guide participants in using the data to focus on 2-3 behaviors/competencies. If some overall measure of behavior change is used (e.g., the average of all items in the model/questionnaire), then we should expect negligible results since the vast majority of behaviors have not been addressed in the action planning (development) process. (See the sketch after this list for a worked example of this dilution.)

3) The diversity of behaviors/competencies will mean that they have differential ease of change (e.g., short vs. long term change) and different value to the organization. For example, what might be the ROI for significant change (positive or negative) in ethical behavior compared to communication? Each is very important but with very different implications for measuring ROI.

4) Measurable change is dependent on design characteristics of each 360 process. I have suggested in earlier blogs that there are design decisions that are potentially so powerful as to promote or negate behavior change. One source for that statement is the article by Goldsmith and Morgan called, "Leadership is a contact sport," which can be found on www.marshallgoldsmith.com. In this article (that I have also mentioned before), they share results from hundreds of global companies and thousands of leaders that strongly support the conclusion that follow up with raters may be the single best predictor of observed behavior change.
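
To make the dilution point in item 2 concrete, here is a minimal sketch with made-up numbers (a hypothetical 40-item questionnaire and a half-point gain on three targeted items; none of these figures come from the studies discussed here):

```python
# Hypothetical illustration of why an overall average masks targeted change.
# Assumptions (invented for this example): a 40-item questionnaire on a
# 5-point scale, with real improvement on only 3 targeted items.
items = 40
targeted_items = 3
gain_per_targeted_item = 0.5  # rating-point improvement on each targeted item

overall_average_change = targeted_items * gain_per_targeted_item / items
print(f"Change averaged across all {items} items: {overall_average_change:.4f}")
# -> roughly 0.04 of a rating point, easily lost in rater noise, even though
#    the targeted behaviors changed meaningfully.
```

In other words, an "all items" average is a poor criterion for ROI when the design deliberately narrows the development targets.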

Dale Rose and I have an article in press with the Journal of Business and Psychology titled, "When does 360-degree feedback create behavior change? And how would we know it when it does?" One of our major objectives in that article is to challenge blanket statements about the effectiveness of 360 processes since there are so many factors that will directly impact the power of the system to create the desired outcomes. The article covers some of those design factors and the research (or lack thereof) associated with them.

If anyone says, for example, that a 360 process (or a cluster, such as in a meta-analysis) shows minimal or no impact, my first question would be, "Were the participants required to follow up with their raters?" I would also ask about things like the reliability of the instrument, training of raters, and accountability as a starter list of factors that can undermine the ability to cause and/or measure behavior change.

Tracy’s question regarding ROI is an excellent one, and we should be held accountable for producing results. That said, we should not be held accountable for ROI when the process has fatal flaws in design that almost certainly will result in failure and even negative ROI.

©2011 David W. Bracken

When Computers Go Too Far

In my last blog entry, I highlighted a recent article by Dale Rose and colleagues that largely focuses on the use of technology in supporting 360 Feedback processes; I hope you had a chance to read it. Here is the link again: http://www.siop.org/tip/jan11/04rose.aspx

One of the points the article made was how technology can be used to basically bypass some important  steps in a 360 process. One that blows my mind is offering raters a list of prefab write-in comments. Give me a break.

Rose et al speak to data overkill that can result in overly complex and lengthy reports. The inverse of that is a capability they don't mention in the article, namely creating an extremely short report (a page or two) that is supposed to "tell" the participant what the bottom line is, i.e., what the report says is most important to work on. In other words, users want to bypass reading the report altogether and just "give me the answer." Now, we do need to help people use the report to distill the few messages that are relevant for their situation and that will ideally lead to some development plans. But, in the end, no computer (or rater or coach, for that matter) can do that for them. (As I noted in an earlier blog, I believe it is misguided to ask raters or coaches to identify the most important development needs; that responsibility lies with the people who best know the ratee's situation, which is usually the ratee and their manager.)

Another example Rose et al did mention, and the one that motivated me to write this blog entry, was how technology can be used to create development plans. I recall being in a sales meeting some time back at a consulting firm that will be left nameless. A senior manager said, "Hey, wouldn't it be great to create the capability for the computer to analyze the report and tell the leader what actions to take? People would love it!" I think my response was something like, "Are you nuts?" Maybe that's one reason why I'm not there anymore.

From a sales perspective, in my defense we were trying to create a coaching practice to help leaders to do just that, i.e., properly read and use their feedback, partnering with the participant’s manager (boss) to create accountability and sustainable behavior change. Speaking of accountability and sustainability, how committed will a person be to a computer generated suggestion? It might as well be a horoscope or fortune cookie.

Speaking of computers, GPS devices have become very popular. They can be very helpful and I use mine often.  They also have some problems, one of which is that they are sometimes wrong. There are a couple of entertaining commercials on TV right now that use that theme to show the consequences of a GPS gone awry. Part of what is amusing about these commercials is how the drivers are totally reliant on the “data” coming from the computer and ignore reality.

If commercials are funny because drivers follow a GPS mindlessly, why isn’t it funny when we do the same with  computer-generated 360 reports that tell us what to do?  If anyone is laughing, it is the firms that are selling that stuff. (Let me be clear that I am not dissing development guides that list resources to choose from. I am criticizing being told, either by a computer or even a coach, which of those resources are the “silver bullet.”)

One capability that my GPS has is to display both the current (supposed) speed limit and my actual speed. Again, sometimes it is just plain wrong, maybe out of date or not able to know about construction zones, for example. In a 360 analogy, a speed limit is a kind of norm, that is, a comparison number. I am of the opinion that most (if not all) external 360 norms are likely to be irrelevant to the target organization and, therefore, each employee as well. In other words, they are often an inaccurate comparison. An internal norm, on the other hand, is a much more reliable comparison number that takes into consideration the uniqueness of the environment (terrain, weather, traffic?) and special situations (e.g., construction zones, speed traps).

My GPS actually goes beyond just displaying the two numbers (speed limit and actual speed) and makes a value judgment by flashing red when I exceed the limit by more than 3 mph. REALLY? Three miles per hour? And sometimes against an erroneous number to start with? There is a not-so-fine line between providing people with data to consider in regard to changing behavior versus telling them what they have to do without considering data accuracy and context.

I have always maintained that 360’s should not be about the numbers. There ALWAYS needs to be a good dose of judgment used in interpreting and using the results. Yes, I am a proponent of using 360’s to help us make decisions. “Help” is the operative word; not to “make” decisions but to “help” us make better decisions along with other data, observations, history, values, and common sense.

No computer should do those things for us. And if/when we do ask computers to do the work for us, it is a sign of lack of commitment to maximizing the usefulness (read “validity”) of the 360 process.

©2011 David W. Bracken

Making Mistakes Faster

The primary purpose of this brief blog entry is to bring to your awareness a new article by Dale Rose, Andrew English, and Christine Thomas in The Industrial/Organizational Psychologist (TIP). I assume that a minority of readers of this blog receive TIP, or, if they do, have not had a chance to read this article. (The title would not immediately draw attention to the fact that the majority of the content is about 360 Feedback for starters.)

The article can be accessed at http://www.siop.org/tip/jan11/04rose.aspx.

As you will see, Dale and colleagues focus primarily on how technology has affected 360 Feedback processes, for good and bad. This should be required reading for practitioners in this field.

They reference a discussion Dale and I had on this blog about the "silly" rating format where raters can rate multiple ratees at the same time using a kind of spreadsheet layout. They are correct that there is no research that we are aware of that studies the effects of rating formats like this on the quality of ratings and user reactions (including response rates, for example). We won't rehash the debate here, but suffice it to say that it is one area where Dale and I are in disagreement.

Other than that, I endorse his viewpoints about the pitfalls of technology. I recall when computers first became available to us to support our research. As we all struggled to use technology effectively, I remember saying that computers allow us to make mistakes even faster.

I will use my next blog to talk about, “When Computers Go Too Far,” which builds on some of Dale’s observations. Hope you will tune in!

©2011 David W. Bracken

Silly Survey Formats?

My recent webinar, "Make Your 360 Matter," led to a blog entry called "Snakes in Suits" that was primarily about 360 processes being true to their objectives. Dale Rose, a highly experienced consultant and good friend (and collaborator), was motivated to submit a comment, part of which included this thought:

This also raises one of the problems with using that silly survey format where you can list all the ratees together while answering the survey. If raters are comparing across people while rating, then they are not thinking closely about what is going on specific to that person because a bunch of their attention is focused on comparing them to someone else. What happens when the context changes and I’m rating them compared to two different people? At best, if ratees have a professional helping to interpret the data they may actually think about the implications and draw reasonable conclusions. At worst, the shift in context messes with the data so much that no one knows what the differences mean.

In communicating with Dale, I learned that he had been unable to listen in to the webinar, which had included a brief discussion of the rating format that he references. To bring everyone up to speed, the multiratee format we are addressing is (or can be) a spreadsheet with names of ratees on one axis and the competencies on the other. The cells are where ratings are entered; the version I shared had a drop-down list of response alternatives (e.g., strongly agree to strongly disagree). The instructions have the rater work across the ratees, which encourages comparisons. Some users do not like the idea of comparisons, and that is one of a number of reasons (besides Dale's) that it might not make sense to implement.
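
For readers who have not seen this format, here is a minimal sketch of the layout being described (the ratee names, competencies, and 5-point scale are illustrative assumptions, not anyone's actual instrument):

```python
# Sketch of a multiratee rating grid: competencies down the rows, ratees
# across the columns, one rating per cell chosen from a fixed response scale.
import pandas as pd

SCALE = {1: "Strongly Disagree", 2: "Disagree", 3: "Neither",
         4: "Agree", 5: "Strongly Agree"}

competencies = ["Communicates clearly", "Develops others", "Acts with integrity"]
ratees = ["Ratee A", "Ratee B", "Ratee C"]

# Empty grid the rater fills in, working across all ratees for each competency
ratings = pd.DataFrame(index=competencies, columns=ratees, dtype="float")

# Example: the rater considers all three ratees on one competency at a time,
# which is what invites the comparisons discussed above.
ratings.loc["Communicates clearly"] = [4, 3, 5]

print(ratings)
```

The point of the layout is simply that each row is completed across ratees, rather than completing a separate full questionnaire for each ratee.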

I have successfully used this format on a few occasions. One was with a group of anesthesiologists who wanted to give feedback to each other, and also get feedback from nurses they regularly worked with. This format worked very well since the ratees were all of relatively equal status, and there was a large number of them (19).  I have used it with other groups where raters have had to give multiple ratings.

Part of my original motivation for trying this format came from comments from raters who had to complete many forms. I remember one manager who told me he took his forms home, spread them out on his deck, and tried to consider all of the ratees at the same time. Another manager told me how she had wanted to go back and redo some of her ratings when she got to the 8th or 10th and realized that her own internal calibration had changed as she completed the ratings. In other words, she was saying that she was a different person (rater) when she did the first one compared to when she had more experience and perspective in doing later questionnaires.

Another way that raters become “different” as they fill out forms is simple fatigue which undoubtedly affects both the quality and quantity (i.e., response rate) of feedback. This becomes an issue of fairness where, by luck of the draw, the ratees later in the queue are penalized in terms of the feedback they receive.

If (and I emphasize “if”) your process supports comparisons, this multiratee format seems to solve many problems. Some users have commented on the potential problem of having the list of ratees not being comparable in position, level, etc., and indeed there should be care to include ratees that have similar levels of responsibility.

Now let's consider Dale's view that this whole notion is "silly." Let me start by saying that Dale is very experienced, and his opinions carry a lot of weight with me and others. He and I have collaborated often and we agree more often than not, but not always. This topic is one where we don't agree, and where there is no "right" answer but more a perspective on how to treat raters and what we can/should expect of them.

His main point seems to be that raters should be considering the context of the ratee when providing feedback (i.e., giving a rating).  This suggests that the rater should muse over the ratee’s situation (however that is defined) before making each evaluation. I would assume and hope that raters are explicitly instructed to consider this context factor so that there is some semblance of consistency in communicating our expectations for the role of rater. But then it promotes inconsistency by asking raters to consider a complex situational variable and probably apply it in unpredictable ways.

In contrast, I am an advocate of making the rater’s task as simple and straightforward as possible. In past blogs, I have positioned that thought as attempting to minimize the individual differences in raters that can create rater error (or inconsistency). Adding a “context” instruction can only make the ratings that much more complex to both give and interpret.

My position is that the "context" discussion should happen after the data is in, not during its collection. I absolutely believe (and it appears Dale agrees) that 360 results need to be couched in the ratee's situation, whether that is by the ratee's manager and/or coach, and especially by any other users (e.g., human resources).

In the final tabulation, I believe that this “silly” rating format has many more benefits than problems. It can be an effective solution to the rater overload issue that some consultants try to solve by making instruments shorter and shorter at the expense of providing quality information to the ratee. It also solves some of the problems that occur when raters are asked to complete multiple ratings that penalize the ratees at the end of the queue.

I am quite sure that we will be hearing from Dale.

©2010 David W. Bracken
