Strategic 360s

Making feedback matter


No Fighting in The War Room!



My apologies (or sympathies) to those of you who have not seen the black satire "Dr. Strangelove or: How I Learned to Stop Worrying and Love the Bomb," which contains the line, "No fighting in the War Room!" I was reminded of this purposefully humorous contradiction while reading an otherwise very insightful summary of the state of feedback tools by Josh Bersin that I hope you can access via LinkedIn here: https://www.linkedin.com/pulse/employee-feedback-killer-app-new-market-emerges-josh-bersin.

Mr. Bersin seems quite supportive of the "ditch the ratings" bandwagon that is rolling through the popular business literature, and his article is a relatively comprehensive survey of the emerging technologies supporting various versions of the largely qualitative feedback market. But right in the middle he made my head spin in Kubrick-like fashion when he started talking about the need for ways to "let employees rate their managers," as if this is a) something new, and b) something that can be done without using ratings. Instead of "No fighting in the War Room!", we get "No rating in the evaluation system!" I'm curious: Is an evaluation not a "rating" because it doesn't have a number? Won't someone attach a number to the evaluation, either explicitly or implicitly? And wouldn't it be better if there were some agreement as to what number is attached to that evaluation?

What I think is most useful in Bersin’s article is his categorization and differentiation of the types of feedback processes and tools that seem to be evolving in our field, using his labels:

  • Next Generation Pulse Survey and Management Feedback Tools
  • “Open Suggestion Box” and Anonymous Social Network Tools
  • Culture Assessment and Management Tools
  • Social Recognition Tools

I want to focus on Culture Assessment and Management Tools in the context of this discussion of ratings and performance management, and, in doing so, I will reference some points I have made in the past. If you look at Mr. Bersin's "Simply Irresistible Organization" (in the article), it contains quite a few classic HR terms like "trust," "coaching," "transparency," "support," "humanistic," "inspiration," "empowered," and so on, that he probably defines somewhere but that nonetheless cry out for behavioral descriptors to tell us what we will see happening when they are being done well, if at all. Ultimately it is those behaviors and the support for those behaviors that define the culture. Furthermore, we can observe and measure those behaviors, and then hold employees accountable for acting in ways consistent with the organization's needs.
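To make that concrete, here is a minimal sketch of what anchoring a fuzzy culture term to observable, ratable behaviors might look like. The terms come from Bersin's model, but the behavioral items are hypothetical examples of my own, not anyone's validated instrument:

```python
# Hypothetical example: anchoring culture terms to observable behaviors.
# The items below are illustrative only; a real 360 instrument would derive
# them from the organization's own competency model.
culture_behaviors = {
    "trust": [
        "Shares information openly rather than on a need-to-know basis",
        "Follows through on commitments made to direct reports",
    ],
    "coaching": [
        "Holds regular one-on-one conversations focused on development",
        "Gives specific, behavioral feedback rather than general praise or criticism",
    ],
}

# Each behavior can then be observed and rated, which is what makes the
# culture measurable and the employee accountable.
for value, behaviors in culture_behaviors.items():
    print(value)
    for item in behaviors:
        print("  -", item)
```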

To quote from Booz & Co in 2013:

"On the informal side, there must be tangible behaviors that demonstrate what the culture looks like, and they must be granular enough that all levels of the organization can exhibit the behaviors."

"On the formal side — and where HR can help out — the performance management and rewards systems must reward people for displaying the right behaviors that exemplify the culture. Too often, changes to the culture are not reflected in the formal elements, such as the performance-management process. This results in a relapse to the old ways of working, and a culture that never truly evolves."

Of course, all of that requires measurement, which requires ratings, which in turn begs for 360 Feedback if we agree that supervisory ratings by themselves are inadequate. My experience is that management demands ratings. My prediction is that unchecked qualitative feedback will also run its course and be rejected as serving little purpose in supporting either evaluation or development.

There may be a place for the kind of open, largely uncontrolled feedback that social networks provide in the form of spontaneous recognition. But I totally disagree with Mr. Bersin when he states that any feedback is better than no feedback. I have counseled, and still do counsel, against survey comment sections that are totally open and beg for "please whine here" types of comments that are often neither constructive nor actionable.

Mr. Bersin also brings up the concept of feedback as a "gift," a notion I recently addressed as running counter to the idea that feedback providers need to be accountable for their feedback and to see it as an investment, not a gift, especially not a thoughtless gift (https://dwbracken.wordpress.com/2015/04/06/feedback-is-not-a-gift-its-an-investment/).

There is a very basic, important problem with how the field of feedback is trending: more quantity, less quality, too many white elephants (thoughtless gifts). We need more 401(k)s (investments).

©2015 David W. Bracken


Frequency: Too Often



I delivered a webinar last week on using 360 Feedback in Performance Management Processes (PMP), partially built upon a recent article that Allan Church and I published in HRPS’s People & Strategy journal on that topic (let me know if you want a copy).  In the webinar, I spent a little time talking about the challenges of creating reliable/valid measurement when we are relying on input not from the target person but from observers of his/her behavior. 

One of the many elements that come into play when asking employees to rate something (a person, an organization) is the rating scale that is being used.  Note also that the rating scale’s effectiveness is likely to be directly affected by the quality of rater training, which is often neglected beyond the most basic of written instructions. 

In the webinar, I shared a list of a dozen or so rating scales that I have encountered over the years, all in a 5-point format. We also see in The 3D Group's recent benchmark study of over 200 organizations that use 360 Feedback that, by far, the 5-point scale and the Likert Agree/Disagree format are used more often than any other scale type. I'm not going too far out on a limb to propose that the use of the 5-point Likert scale is a carryover from employee surveys. While there is something to be said for familiarity, I also propose that this practice is a form of laziness on the part of 360 designers who haven't reflected long or hard enough to consider scales that work better when the target is a specific person, not some nebulous entity like the organization that is the focus of an engagement survey.
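For readers who have not designed one of these instruments, here is a minimal sketch of the two formats most often reported in that study. The anchor labels are typical illustrative choices of my own, not wording taken from the 3D Group benchmark:

```python
# Illustrative only: two common 5-point response formats.
likert_agreement = {
    1: "Strongly Disagree",
    2: "Disagree",
    3: "Neither Agree nor Disagree",
    4: "Agree",
    5: "Strongly Agree",
}

effectiveness = {
    1: "Not Effective",
    2: "Slightly Effective",
    3: "Moderately Effective",
    4: "Very Effective",
    5: "Extremely Effective",
}

# The same item reads very differently under each format:
item = "Gives direct reports timely, specific feedback"
print(item, "->", likert_agreement[4])   # agreement about the statement
print(item, "->", effectiveness[4])      # evaluation of how well it is done
```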

I have advocated for the need for the scale to match the purpose in an earlier blog post (https://dwbracken.wordpress.com/2010/09/02/put-your-scale-where-your-money-is-or-isnt/), so I will move on to another pet peeve.

In the last few weeks, I pulled together a group of colleagues to submit a proposal for a SIOP symposium on helping managers become better coaches. This process is always fun when you see the research others are conducting in an area where you have special interests (kind of like buying a box set of CDs by a favorite artist and discovering some lesser-known gems). One of the research papers demonstrates once again the inadequacy of frequency scales (typically 5-point scales that ask how often the person does something, ranging from Never to Always).

Frequency scales continue to be widely used. The aforementioned 3D Group study indicates that 23% of the reporting organizations use this scale type, third behind Agreement (49%) and Effectiveness (31%). (Those percentages add up to more than 100%; it may be that companies were allowed to report on more than one 360 process in their organization.) Frankly, the 23% is shockingly high. Very recently (https://dwbracken.wordpress.com/2013/08/11/what-is-a-coach-redux/), I cited a study that presents a newly developed questionnaire about manager behaviors in the context of performance management that, to my chagrin, uses a frequency scale.
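As a quick check of that arithmetic (the multiple-process interpretation is my inference, as noted above):

```python
# Percentages of reporting organizations using each scale type,
# as cited from the 3D Group benchmark study.
scale_usage = {"Agreement": 49, "Effectiveness": 31, "Frequency": 23}

total = sum(scale_usage.values())
print(total)  # 103 -- more than 100%, so organizations apparently could
              # report more than one scale type (or more than one 360 process)
```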

For starters, a frequency scale is conceptually flawed. People can't do everything "Always" (or even "Almost Always," as some scales use). And just because they do something "Always" doesn't mean they do it well; conversely, just because they do something "Rarely" or "Never" doesn't mean they are bad at it.

Just as importantly, every time I have seen them scrutinized in research, frequency scales come out poorly in comparison to other formats in terms of reliability and validity. This is the 20th anniversary of a paper that Karen Paul (now at 3M) and I presented at SIOP, which indicated that frequency scales severely penalize supervisors who do some things infrequently but are otherwise perceived to be effective.
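Here is a toy illustration of that penalty, using made-up ratings rather than data from the SIOP paper: a manager who rarely needs to exhibit certain behaviors looks mediocre on a frequency scale even when observers rate every behavior as highly effective.

```python
# Hypothetical ratings for one manager on four behaviors, both on 1-5 scales.
# Frequency: 1 = Never ... 5 = Always.  Effectiveness: 1 = Not Effective ... 5 = Extremely Effective.
behaviors = {
    "Gives recognition":       {"frequency": 5, "effectiveness": 5},
    "Coaches direct reports":  {"frequency": 4, "effectiveness": 5},
    "Delivers tough messages": {"frequency": 2, "effectiveness": 5},  # rarely needed, done well
    "Escalates conflicts":     {"frequency": 1, "effectiveness": 5},  # rarely needed, done well
}

freq_mean = sum(b["frequency"] for b in behaviors.values()) / len(behaviors)
eff_mean = sum(b["effectiveness"] for b in behaviors.values()) / len(behaviors)

print(f"Mean frequency score:     {freq_mean:.2f}")  # 3.00 -- looks mediocre
print(f"Mean effectiveness score: {eff_mean:.2f}")   # 5.00 -- actually excellent
```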

In a (frankly) more rigorous piece of research, Kaiser and Kaplan (2006), which you can access here: http://kaplandevries.com/thought-leadership/list/C44, also demonstrate that frequency scales are, by far, less satisfactory than Evaluative and "Do More/Do Less" scales.

Frequency scales are used far too frequently.  They should be used Never.

 

Kaiser, R.B., & Kaplan, R.E. (2006, April). Are all scales created equal? Response format and the validity of managerial ratings. Paper in B.C. Hayes (Chair), The Four “Rs” of 360º Feedback: Second Generation Research on Determinants of Its Effectiveness, symposium presented at the 21st Annual Conference of the Society for Industrial and Organizational Psychology, Dallas, TX.

 

©2013 David W. Bracken

Written by David Bracken

September 25, 2013 at 1:09 pm