Strategic 360s

Making feedback matter


Crowd-sourced: Drive-thru Feedback?



(co-authored with Dale Rose)

There were a couple of interesting webinars in the last two weeks on the topic of performance management trends. One was hosted by Aon (Levi Segal and Seymour Adler) and the other by Talent Quarterly (Dave Ulrich, hosted by Marc Effron).

I (Dave) am particularly interested in this topic at this moment because I will be hosting my own discussion/debate on this topic at SHRM Florida on August 31 in Orlando. There I will be joined by Keith Lykins (Lykins International) and Joann Gamicchia (Orange County Clerk of Courts) to share our perspectives and engage the audience in an exchange.

As a result, I recently became aware of work by Gerry Ledford regarding trends in the field of performance management (http://goo.gl/lpv8OZ).  He writes about “cutting-edge performance management,” which is characterized by three things: Ongoing feedback, ratingless reviews, and crowd-sourced feedback.

While there has been a lot of banter recently about how to create ongoing discussions between managers and their direct reports, what really caught my attention was this statement about crowd-sourced feedback (CSF):

There is very little written about and almost no research on this growing area, but we think it may replace traditional 360 feedback over time. It uses a technology (social media) that most employees know, it is delivered in real time rather than annually, and the feedback is free form and therefore less artificial than a 360 rating form.

It is interesting to hear a well-respected author suggest that a feedback method with a fairly sizable research base might be replaced by another method because the new method is 1) familiar, 2) faster, and 3) easier to do. This sounds a little like replacing a healthy, nutritious meal with fast food. It’s not that fast food is without any merit – certainly we’ve all traveled enough to know that sometimes you just need something quick and easy. But let’s not jump too quickly into assuming that fast-food feedback will serve the same needs as 360° Feedback. Gerry is certainly correct that crowd-sourced feedback does not qualify as 360° Feedback, especially if you compare it to the definition that we (Bracken, Rose & Church, in press) have proposed:

360° Feedback is a process for collecting, quantifying, and reporting co-worker observations about an individual (i.e., a ratee) that facilitates/enables three specific data-driven/based outcomes: (a) the collection of rater perceptions of the degree to which specific behaviors are exhibited; (b) the analysis of meaningful comparisons of rater perceptions across multiple ratees, between specific groups of raters for an individual ratee, and changes over time; and (c) the creation of sustainable individual, group, and/or organizational change in behaviors valued by the organization.

At this point, it is difficult to make generalizations and comparisons with 360° Feedback because CSF comes in many different forms. Josh Bersin’s review of the emerging feedback market has no clear category for the type of feedback system Ledford describes. Just from what we have read in various articles, we see that CSF might be:

  • “Pull” feedback (ratees asking for feedback)
  • “Push” feedback (raters providing feedback unsolicited, at their own initiative)
  • “Event” oriented (e.g., how did I do in a presentation?), though this is not really “ongoing”
  • Totally unstructured (open-ended comments on whatever topic occurs to the rater)
  • Open-ended, but requiring comments to be attached to rating dimensions
  • Monitored by the organization or unfettered
  • Only for ratee or shared with/used by the organization (manager, HR, other decision makers)

We see potential value in many of these types of feedback, but they clearly do not provide the same benefits to a leader or organization that 360° Feedback can provide.

If we can make some comparisons between true 360° Feedback and CSF, we see these significant differences:

  1. Open-ended feedback (which CSF relies on) is highly skewed to a narrow set of content areas (Rose et al., 2004)
  2. Self-selection in crowd sourcing causes sampling bias
  3. CSF makes no allowance for “opportunity to observe” error/bias, i.e., the competence and motivation of the source (rater)
  4. CSF has no method to track individual or group change over time
  5. By using standardized survey content, 360s allow strategically-aligned behavior change across the system
  6. Use of feedback to create real change is greater with 360s (until proven otherwise)
  7. Well done 360s have safeguards against retaliation and misuse
  8. Normative comparisons to other company leaders are an option with 360s
  9. 360s can be aggregated to view company-wide or system-wide trends that can be compared over time (crowd sourcing cannot)
  10. Unlike CSF, 360s allow for census participation: all leaders can be directed to participate in a standardized process, allowing leaders to create organization-wide shifts in behavior and culture.

In most cases, CSF is equivalent to a one- or two-item 360, with the rater providing feedback on a very narrow set of behaviors (which may or may not be specified, and may or may not be actionable). CSF is narrowly focused on content that may or may not be aligned with organizational competencies and/or values. It may be more timely than regularly scheduled 360s, but not necessarily so (CSF may not be timely, and 360s do not have to be just annual events). The opportunity for timeliness may be an illusion: an opportunity offered but not always fulfilled.

Dr. Ledford’s call for more research needs to be answered.  Here are some things we would like to know:

  • What are the various contexts in which CSF is collected? (We certainly should not combine different methods when examining the effectiveness of the feedback, though we could compare methods.)
  • Do ratees actually use the feedback (i.e., change their behavior, let alone pay attention to it)?
  • Does the novelty wear off over time?
  • What types of individuals avoid CSF vs. those who use it frequently? (Are high-performing, early-career employees more likely to use CSF than veterans with a long track record of success?)
  • What is the differential effect due to type of CSF?
  • What are the opinions of CSF among users, nonusers, and other stakeholders (e.g., HR, management)?

While we are certainly encouraged that there is so much interest in finding ways to improve employee feedback, it’s worth recognizing that 360° Feedback has a long history of success in helping leaders learn from their environment. Further, there is a fair amount of research and consensus around best practice in 360° Feedback. Hopefully researchers and practitioners will take a careful look at new feedback methods like CSF. Until we have a longer track record and much more experience with CSF, it may be a bit premature to assume that CSF will fully serve an organization’s need for valid feedback that is useful for guiding a wide range of talent decisions.

This is not necessarily an either/or choice between using 360° Feedback and CSF. But we don’t think it should ever be a “CSF only” choice.

 

Bersin, J. (2015). Feedback is the killer app: A new market and management model emerges. Forbes, August 26. Retrieved from http://www.forbes.com/sites/joshbersin/2015/08/26/employee-feedback-is-the-killer-app-a-new-market-emerges/#41bf71036626

Bracken, D. W., Rose, D. S., & Church, A. H. (in press). The evolution and devolution of 360° feedback. Industrial and Organizational Psychology: Perspectives on Science and Practice.

Rose, D. S., Farrell, T., & Robinson, G. N. (2004). Are narrative comments in 360-degree feedback useful or useless? Technical Report #8253. Berkeley, CA: Data Driven Decisions, Inc.
