Strategic 360s

Making feedback matter

When “Feedback” Is Not Feedback

A couple of my colleagues and I are working on a definition of “360 Degree Feedback,” and we have been discussing whether a 360, or feedback in general for that matter, is indeed “feedback” if it is not used (i.e., if it does not create behavior change).

I found this definition of feedback in a business dictionary:

Feedback is the information sent to an entity (individual or a group) about its prior behavior so that the entity may adjust its current and future behavior to achieve the desired result. 

http://www.businessdictionary.com/definition/feedback.html#ixzz3lk0oFWjj

I REALLY like that it focuses on “behavior” that is defined by its relevance (i.e., the “desired result,” assuming the desired result is what is important to the organization and not some whim). But there are certain aspects of this definition that don’t quite fit my idea of what “feedback” is in practice. I believe it is not what is sent but what is received (i.e., what is heard AND interpreted correctly) that matters, and that the “may” part is ambiguous, especially if it implies that adjustment is optional. So I propose this version:

Feedback is the information received by an entity (individual or a group) about its prior behavior so that the entity adjusts (or continues) its current and future behavior to achieve the desired result. 

Returning to the discussion with my colleagues, we have come to an agreement that feedback must be used in order to be called “feedback.” If it is not used, then it is merely information, or information that is judged to be irrelevant and not worth using. At that point it is no longer “feedback,” and the sender should be made aware of that (if the sender is human).

[Cartoon: “Feedback”]

Sometimes the problem is that the message is not received, as in our cartoon. Whose fault is that? I had an ex-girlfriend in college who was one of three passengers on a long car trip we took. At our destination, she said to me, “You know a lot of words to songs!” It wasn’t until some time (too much) later that I realized it was feedback about my singing to the radio. Maybe that is also part of the “ex” part of our relationship?

But the part that we are debating more vociferously is the “use” part (the “adjusts or continues” phrase). This has direct implications for supposed “feedback” processes (such as 360s, which are clearly labeled as “feedback”). We assert that even if the target receives the information as intended, if he/she does not consciously act (adjust or purposively continue), then the information remains only information and is not “feedback.”

Some of you would assert that “feedback” need only create awareness. But why send feedback if we feel that awareness alone is sufficient? Why provide feedback if no change (or use) is expected? Of course, awareness alone is not sufficient; it must be followed by acceptance. But even that is insufficient. (Non-use is itself information, and potential feedback, to the sender, who may adjust his/her behavior as well, including just giving up.)

Some 360 processes hold that awareness is sufficient and the leader need not actually use the feedback. We propose that such processes should not be called “360 Feedback” because there is no real feedback, just information. Feedback requires using the information.

Is a chair a “chair” if it is never sat in? I would say no, it is something else. Maybe it is a closet and has ceased being a “chair.”  (And for some of us, a perfectly fine closet.)

[Image: NotAChair]

Is your process providing “feedback” or is it a closet?  If it’s not producing behavior change, you can call it anything you want except “feedback.”

WAINT (Why Am I NOT Talking)

[Image: Welch]

In a response to my last post (https://goo.gl/HW1lzl), Jason Read (@JasonReadPHD) correctly notes, “If only they practiced this ratio…”

It’s easy to blame the leader, and then the organization (as creators of ability and culture), for not acting as a “coach” by stopping talking and asking, WAIT (Why Am I Talking). Well, guess what: there are two parties in that exchange, and the “other” person (employee, customer, child) should be thinking, WAINT (Why Am I NOT Talking).

There are a number of plausible reasons why the “other” doesn’t ask WAINT more often.

  • Both managers AND the employee (or “other,” whoever it is) have “always done it this way,” i.e., it has become the accepted MO for management. I talk, you listen. Then you do it. See ya.
  • Some people like being told what to do. They don’t expect to be asked, so they either don’t prepare or don’t want to put in the effort.
  • They don’t have the opportunity to talk. Often not enough time is allotted for a real exchange of ideas, which ties back to the first point about how the exchange is expected to occur (if “exchange” is even the right word; maybe it is more of a lecture).
  • Some people have self-doubts, and these become a self-limiting obstacle to personal contribution. This has many causes, including past experiences and past contributions not being acknowledged, tried, and/or rewarded. It can go WAY back in a person’s upbringing and can be difficult to change, but it is often an assumption the person is making about outcomes.

I feel myself drifting into clinical psychology (where I don’t want to be and am not qualified to be), so this behaviorist will return to the REAL reason for this post, which is to propose that WAINT is fixable, regardless of the history. The first requisite of change is to increase awareness, so we need to make people (all the “others” in the world) first realize they are not talking and that, at times, that needs to change.

When we are the “other,” we have a responsibility to contribute. And we, as change agents (consultants, HR professionals, trainers, leaders who want change), need to create an environment (culture) that encourages the “others” to get involved and to be supported.

It starts with creating the awareness that the status quo is not working and that both managers and “others” need to change. The organization is losing a major resource in the minds and abilities of its employees when they aren’t heard, supported, and recognized.

In a prior blog (https://goo.gl/6w57Fd) I proposed a taxonomy of manager/other interactions, four types of discussions that are used in different situations. I propose that it is insufficient for managers to go off to training and learn this approach to being a better manager and coach. It is equally important to create awareness among the “others” that these conversations are all important and that each type has its time and place. Part of the message is that Activator exchanges need to be happening more often, and this is where the 10:1 ratio of listening to talking comes into play.

I also propose that part of this orientation for both managers and the “others” is to create a language that sets expectations about what kind of exchange is about to happen, as in the manager saying, “Let’s have a check-in,” so that both parties have a vision of what their roles will be. Or the employee might say, “I’m having a problem and we need to have an Activator chat.” When they enter that talk, they should be expecting that the 10:1 ratio will be used, versus maybe a 1:10 ratio when a Director discussion is happening. And, if the expectation is that the employee will have the opportunity to talk for 90% of the conversation, he/she had better be prepared to do just that.

Yes, the manager has the WAIT question to wrestle with. But the “other” has a WAINT to be aware of as well. It won’t do any good for the manager to create air time if, as they say on the radio, there is only dead air.

Written by David Bracken

February 14, 2016 at 12:33 pm

No Fighting in The War Room!

My apologies (or sympathies) to those of you who have not seen the black satire, “Dr. Strangelove: or How I Learned to Stop Worrying and Love the Bomb,” which contains the line, “No fighting in the War Room!”  I was reminded of this purposively humorous contradiction in reading an otherwise very insightful summary of the state of feedback tools by Josh Bersin that I hope you can access via LinkedIn here:  https://www.linkedin.com/pulse/employee-feedback-killer-app-new-market-emerges-josh-bersin.

Mr. Bersin seems quite supportive of the “ditch the ratings” bandwagon that is rolling through the popular business literature, and his article is a relatively comprehensive survey of the emerging technologies that are supporting various versions of the largely qualitative feedback market. But right in the middle he made my head spin in Kubrick-like fashion when he started talking about the need for ways to “let employees rate their managers,” as if this a) is something new and b) can be done without using ratings. Instead of “No fighting in the War Room!”, there is “No rating in the evaluation system!” I’m curious: Is an evaluation not a “rating” because it doesn’t have a number? Won’t someone attach a number to the evaluation? Either explicitly or implicitly? And wouldn’t it be better if there were some agreement as to what number is attached to that evaluation?

What I think is most useful in Bersin’s article is his categorization and differentiation of the types of feedback processes and tools that seem to be evolving in our field, using his labels:

  • Next Generation Pulse Survey and Management Feedback Tools
  • “Open Suggestion Box” and Anonymous Social Network Tools
  • Culture Assessment and Management Tools
  • Social Recognition Tools

I want to focus on Culture Assessment and Management Tools in the context of this discussion of ratings and performance management, and, in doing so, reference some points I have made in the past. If you look at Mr. Bersin’s “Simply Irresistible Organization” (in the article), it contains quite a few classic HR terms like “trust,” “coaching,” “transparency,” “support,” “humanistic,” “inspiration,” “empowered,” and so on, that he probably defines somewhere but that nonetheless cry out for behavioral descriptors to tell us what we will see happening when they are being done well, if at all. Ultimately it is those behaviors and the support for those behaviors that define the culture. Furthermore, we can observe and measure those behaviors, and then hold employees accountable for acting in ways consistent with the organization’s needs.

To quote from Booz & Co in 2013:

“On the informal side, there must be tangible behaviors that demonstrate what the culture looks like, and they must be granular enough that all levels of the organization can exhibit the behaviors.”

“On the formal side — and where HR can help out — the performance management and rewards systems must reward people for displaying the right behaviors that exemplify the culture. Too often, changes to the culture are not reflected in the formal elements, such as the performance-management process. This results in a relapse to the old ways of working, and a culture that never truly evolves.”

Of course, all that requires measurement, which requires ratings. Which, in turn, begs for 360 Feedback, if we agree that supervisory ratings by themselves are inadequate. My experience is that management demands ratings. My prediction is that unchecked qualitative feedback will also run its course and be rejected as serving little purpose in supporting either evaluation or development.

There may be a place for the kind of open, basically uncontrolled feedback that social networks provide in offering spontaneous recognition. But I totally disagree with Mr. Bersin when he states that any feedback is better than no feedback. I have counseled, and still do counsel, against survey comment sections that are totally open and beg for “please whine here” types of comments that are often neither constructive nor actionable.

Mr. Bersin brings up the concept of feedback as a “gift,” which I recently addressed as going against the notion that feedback providers need to have accountability for their feedback and see it as an investment, not a gift, especially not a thoughtless gift (https://dwbracken.wordpress.com/2015/04/06/feedback-is-not-a-gift-its-an-investment/).

There is a very basic, important shift in how the field of feedback is trending: more quantity, less quality, too many white elephants. We need more 401(k)s.

©2015 David W. Bracken

Just Shut Up and Listen


I still get the Sunday New York Times in “hard copy” (in addition to the electronic version the other days), partly because my wife and I are addicted to the crosswords. Let me add that I am one of those people who mourn the fadeout of the newspaper; browsing the physical paper often exposes me to pieces of information that I would otherwise miss in the electronic version (whatever form your “browsing” takes, if at all). (I believe, for what it’s worth, that a similar phenomenon is happening in the music world, where the ease of downloading single songs probably means less “browsing” of albums, where some other gems are often lurking.)

Back on topic, the Sunday NYT also has a feature in the Business section called “Corner Office” where a business leader is interviewed.  This week it was Francesca Zambello, general and artistic director of the Glimmerglass Festival and artistic director of the Washington National Opera. When asked about leadership lessons she has learned, she says:

When you’re in your 20s and have that leadership gene, the bad thing is that you don’t know when to shut up. You think you know all the answers, but you don’t. What you learn later is when to just listen to everybody else. I’m finding that all those adages about being humble and listening are truer and truer as I get older. Creativity cannot explode if you do not have the ability to step back, take in what everybody else says and then fuse it with your own ideas.

In the parallel universe of my personal life, my daughter Ali sent along an edition of the ABA Journal that references a study of the happiest and unhappiest workers in the US (http://www.abajournal.com/news/article/why_a_career_website_deems_associate_attorney_the_unhappiest_job_in_america/), which cites associate attorney as the unhappiest profession (which by coincidence is her husband’s job). If you don’t want to go to the link, the five unhappiest jobs are:

1) Associate attorney

2) Customer service associate

3) Clerk

4) Registered nurse

5) Teacher

The five happiest are:

1) Real estate agent

2) Senior quality assurance engineer

3) Senior sales representative

4) Construction superintendent

5) Senior applications designer

Looking at the unhappiest list for possible themes/commonalities among these jobs, one is lack of empowerment and a probably similar lack of influence in their work and work environment. (The job of teacher may be less so, and its inclusion on this list is certainly troubling and, I am sure, complicated.) But I suspect that the first four jobs have a common denominator in the way they are managed that ties back to Ms. Zambello’s reflections on her early management style, i.e., having all the answers and not taking advantage of the knowledge and creativity of the staff. It also causes me to remember the anecdote of the GM retiree who mused, “They paid me for my body. They could have had my mind for free.”

This is certainly not an epiphany for most of us, but more serendipity that two publications this week once again tangentially converged on this topic. I will once again recommend Marshall Goldsmith’s book, “What Got You Here Won’t Get You There,” a compendium of mistakes that leaders make in their careers, including behaviors that might have served them well when starting their careers but lose their effectiveness as they move up the organization. The classic case is the subject matter expert who gets promoted and assumes that being the “expert” is always the road to success. In Marshall’s book there are 20 of these ineffective, limiting behaviors (some might call them “derailers”), and when we think of the prototypical leader who wants to be the “expert” and doesn’t listen, it potentially touches on multiple behaviors in the list of 20, including:

2. Adding too much value

6. Telling the world how smart we are

10. Failing to give proper recognition

11. Claiming credit we don’t deserve

13. Clinging to the past

16. Not listening

Considering this list as possible motivators for the umbrella behavior of “not listening,” we can see how it might be very challenging to change this behavior if the leader believes (consciously or unconsciously) that one or more of these factors are important to maintain, or (as Marshall also notes) are “just the way I am” and not changeable.

We behaviorists believe that any behavior is changeable, whether a person wants to change or not. What is required is first awareness, i.e., recognizing that there is a gap between the person’s behavior and the desired/required behavior, followed by motivation to change, which may come from within the person but more often requires external motivation, usually in the form of accountability. Awareness and accountability are critical features of a valid 360 feedback process designed to create sustainable behavior change.

Let me add that the “shut up and listen” mantra is a core behavior for coaches as well. This consultant believes that the challenge most organizations have in morphing managers into effective coaches is also rooted in this core belief that the role of the coach is to solve problems for subordinates, versus listening to fully understand the issue and then helping the subordinate “discover” the solution that best works for them and the situation.

This is a serious problem with two major downsides. For one, it is likely, at least in some major way, a root cause of the “unhappy” job incumbents, which in turn leads to multiple negative outcomes for the organization. The other major downside is a version of our GM retiree’s lament: the organization is losing out on capitalizing on a significant resource in the form of the individual and collective contributions of its workforce.

There may be no time in our history when involving our young workers has been more critical, which includes listening to their input and empowering them to act. Consider the many reasons that this might be so:

  • The pace of change, internally and externally, requires that we have processes that allow us to recognize and react in ways that will most likely diverge from past practices.
  • Younger workers bring perspectives on the environment, technology, and knowledge that are often hidden from the older generations (who are, by the way, retiring).
  • As the baby boomers do retire en masse, we need to be developing the next generation of leaders. As another aside, this means allowing them to fail, which is another leadership lesson that Ms. Zambello mentions (remember her?).

Listening is actually a very complex behavior to change, but it begins with increasing awareness of ineffectiveness, and then creating motivation to change by educating leaders on its negative consequences and lost opportunities.

©2013 David W. Bracken

I Need an Exorcism


Being the 360 Feedback nerd I am, I love it when some new folks get active on the LinkedIn 360 discussion group. One discussion emerged recently that caught my eye, and I have been watching it with interest, mulling over the perspectives and knowing I had to get my two cents in at some point.

Here is the question:

How many raters are too many raters?

We normally recommend 20 as a soft limit. With too many, we find the feedback gets diluted and you have too many people that don’t work closely enough with you to provide good feedback. I’d be curious if there are any suggestions for exceptions.

This is an important decision amongst the dozens that need to be made in the course of designing and implementing 360 processes. The question motivated me to pull out The Handbook of Multisource Feedback and find the excellent chapter on this topic by James Farr and Daniel Newman (2001), which reminded me of the complexity of this decision. Let me also reiterate that this is another decision that has different implications for “N=1” 360 processes (i.e., feedback for a single leader on an ad hoc basis) versus “N>1” systems (i.e., feedback for a group of participants); this blog and discussion are focused on the latter.

Usually people argue that too many surveys will cause disruption in the organization and unnecessary “soft costs” (i.e., time). The author of this question poses a different argument for limiting the rater population, which he calls “dilution” due to inviting unknowledgeable raters. For me, one of the givens of any 360 system is that the raters must have sufficient experience with the ratee to give reliable feedback. One operationalization of that concept is to require that an employee must have worked with/for the ratee for some minimum amount of time (e.g., 6 months or even 1 year), even if he/she is a direct report. Having the ratee select the raters (with manager approval) is another practice designed to help get quality raters, which then also facilitates the ratee’s acceptance of the feedback. So “dilution” due to unfamiliarity can be combated with that requirement, at least to some extent.

One respondent to this question offers this perspective:

The number of raters depends on the number of people that deal with this individual through important business interactions and can pass valuable feedback based on real experience. There is no one set answer.

I agree with that statement. Though, while there is no one set answer, some answers are better than others (see below).

In contrast, someone else states:

We have found effective to use minimum 3 and maximum 5 for any one rater category.

The minimum of 3 is standard practice these days as a “necessary but not sufficient” answer to the number of raters. As for the maximum of 5, this is also not uncommon but seems to ignore the science that supports larger numbers. When clients seek my advice on this question of the number of raters, I am swayed by the research published by Greguras and Robie (1998), who studied the reliability of the various rater sources (i.e., subordinates, peers, and managers). They came to the conclusion that different rater groups provide differing levels of reliable feedback, probably because of the number of “agendas” lurking within the various types of raters. The least reliable are the subordinates, followed by the peers, and then the managers, the most reliable rater group.

One way to address rater unreliability is to increase the size of the group (another might be rater training, for example). Usually there is only one manager and best practice is to invite all direct reports (who meet the tenure guidelines), so the main question is the number of peers. This research suggests that 7-9 is where we need to aim, noting also that that is the number of returns needed, so inviting more is probably a good idea if you expect less than a 100% response rate.
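
To make that arithmetic concrete, here is a minimal sketch (my own illustration, not from the research) of the invitation math, assuming a target of roughly 8 peer returns and an assumed 80% response rate:

import math

def invites_needed(target_returns, expected_response_rate):
    # Round up, since you cannot invite a fraction of a rater.
    return math.ceil(target_returns / expected_response_rate)

# Aiming for ~8 peer returns at an assumed 80% response rate means inviting 10 peers.
print(invites_needed(8, 0.80))   # 10

The only point is that the invitation list needs to be padded relative to the target number of returns.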

Another potential rater group is external customers. Recently I was invited to participate in a forum convened by the American Board of Internal Medicine (ABIM) to discuss the use of multisource feedback in physician recertification processes. ABIM is one of 24 member Boards of the American Board of Medical Specialties (ABMS), which has directed that some sort of multisource (or 360) feedback be integrated into recertification.

The participants in this forum included many knowledgeable, interesting researchers on the use of 360 in the context of medicine (a whole new world for me, which was very energizing). I was invited to represent the industry (“outside”) perspective. One of the presenters spoke to the challenge of collecting input from their customers (i.e., patients), a requirement for them. She offered up 25 as the number of patients needed to create a reliable result, using a rationale very similar to that of Greguras and Robie regarding the many individual agendas of raters.

Back to LinkedIn, there was then this opinion:

I agree that having too many raters in any one rater group does dilute the feedback and make it much harder to see subtleties. There is also a risk that too many raters may ‘drown out’ key feedback.

This is when my head started spinning like Linda Blair in The Exorcist.  This perspective is SO contrary to my 25 years of experience in this field that I had to prevent myself from discounting it as my head continued to rotate.  I have often said that a good day for me includes times when I have said, “Gee, I have never thought of (insert topic) in that way.” I really do like hearing new and different views, but it’s difficult when they challenge some foundational belief.

For me, maybe THE most central tenet of 360 Feedback is the reliance on rater anonymity in the expectation (or hope) that it will promote honesty. This goes back to the first book on 360 Feedback, by Edwards and Ewen (1996), where 360s were designed with this need for anonymity in the forefront. That is why we use the artificial form of communication that is the anonymous questionnaire and usually don’t report on groups of fewer than 3. We know that violations of the anonymity promise result in less honesty and reduced response rates, with the grapevine (and/or social media) spreading the violated trust throughout the organization.

The notion that too many raters will “drown out key feedback” seems to me to be a total reversal of this philosophy of protecting anonymity. It also seems to place an incredible amount of emphasis on the report itself where the numbers become the sole source of insight. Other blog entries of mine have proposed that the report is just the conversation starter, and that true insight is achieved in the post-survey discussions with raters and manager.

I recall that in past articles (see Bracken, Timmreck, Fleenor, and Summers, 2001) we made the point that every decision requires what should be a conscious value judgment as to who the most important “customer” is for that decision, whether it be the rater, the ratee, or the organization. For example, limiting the number of raters to a small number (e.g., 5 per group, or not all direct reports) indicates that the raters and the organization are more important than the ratee, that is, that we believe it is more important to minimize the time required of raters than it is to provide reliable feedback for the ratee. In most cases, my values cause me to lobby on behalf of the ratee as the most important customer in design decisions. The time that I will rally to the defense of the rater as the most important customer in a decision is when anonymity (again, real or perceived) is threatened. And I see these arguments for creating more “insight” by keeping rater groups small or subdivided as misguided IF these practitioners share the common belief that anonymity is critical.

Finally (yes, it’s time to wrap this up), Larry Cipolla, an extremely experienced and respected practitioner in this field, offers some sage advice in his comments, including a warning against increasing rater group size by combining rater groups. As he says, that is pure folly. But I do take issue with one of his practices:

We recommend including all 10 raters (or whatever the n-count is) and have the participant create two groups–Direct Reports A and Direct Reports B.

This seems to me to be a variation on the theme of breaking out groups and reducing group size, with the risk of creating suspicions and problems with perceived (or real) anonymity. Larry, you need to show that this kind of subdividing creates higher reliability in a statistical sense, enough to overcome the threats to reliability created by using smaller Ns.

Someone please stop my head from spinning. Do I just need to get over this fixation with anonymity in 360 processes?

References

Bracken, D.W., Timmreck, C.W., and Church, A.H. (2001). The Handbook of Multisource Feedback. San Francisco: Jossey-Bass.

Bracken, D.W., Timmreck, C.W., Fleenor, J.W., and Summers, L. (2001). 360 feedback from another angle. Human Resource Management, 1, 3-20.

Edwards, M. R., and Ewen, A.J.  (1996). 360° Feedback: The powerful new model for employee assessment and performance improvement. New York: AMACOM.

Farr, J.L., and Newman, D.A. (2001). Rater selection: Sources of feedback. In Bracken, D.W., Timmreck, C.W., and Church, A.H. (eds.), The Handbook of Multisource Feedback. San Francisco: Jossey-Bass.

Greguras, G.J., and Robie, C. (1998).  A new look at within-source interrater reliability of 360-degree feedback ratings. Journal of Applied Psychology, 83, 960-968.

©2012 David W. Bracken

Is Your Mirror Foggy?


As an alumnus of Dartmouth College, I receive the Alumni Magazine, whose current issue contains an interview with the new(ish) president, Jim Kim (see http://dartmouthalumnimagazine.com/the-dam-interview/ if you can’t control yourself). A couple of things in the interview caught my attention, including this statement:

The folks in leadership studies at Tuck have said the one thing that is critical for the development of better leaders is self-awareness, the so-called 360-degree analysis. The challenge for us is to structure the kind of education that will lead to the graduation of young people with a clearer sense of what it will take for them to be effective human beings.

Of course, the “360” part is interesting in itself, though I’m not sure what the “so-called” part is all about.

Is self-awareness the most critical of all leadership qualities? My model of leadership behavior change includes awareness, followed by acceptance, as the “keystones” to creating sustainable change. Organizations are also in constant flux and in need of change, and they need some way to create awareness. Dashboards are one way that organizations become aware of the areas in which they are succeeding and failing, and therefore drive change. For the individual leader, the 360 feedback process may be the most powerful dashboard if done correctly, at least on the “how” side of the performance equation (versus the “what”).

Another argument for the importance of awareness came to my attention during the current Republican primary contest. One pundit, in comparing the field of contenders, offered the observation that, except for Rick Santorum, the other players seem to be lacking this sense of self that Dr. Kim alludes to in his quote. One symptom of that lack of a sense of self is the constant and repeated use of “Reagan Republican” by almost all the candidates to describe themselves. I even saw a parody of a contest among the candidates as to who could say the name “Reagan” the most times in 10 seconds.

While I’m at it, there was one other quote from the interview that is worth sharing:

It’s fairly well known now that I have a leadership coach, Marshall Goldsmith, who was recently ranked one of the world’s top-10 thought leaders and who also teaches at Tuck. He took me on as a pro bono case. In Marshall’s book, What Got You Here Won’t Get You There, he lists the 20 most common mistakes that CEOs make. Probably the biggest mistake is adding too much value. I didn’t understand that in the beginning, but I sure do now.

You may know that I am a follower of Marshall’s, and the book is one I have reviewed and passed along to others (including my family members). Dr. Kim offers up another important leadership characteristic or, in this case, flaw that plagues leaders as they move up the organization. I have found Marshall’s list of 20 pitfalls to be similar in spirit to the derailers described many years ago by Morgan McCall and associates at the Center for Creative Leadership, though the specific content is different. But both can also be useful content for 360 feedback processes.

Is it time to go defog your mirror and test your self-awareness? Oh, and remember that what is in your mirror may be closer than it appears.

©2012 David W. Bracken

A Dangerous Place


The world is a dangerous place to live; not because of the people who are evil, but because of the people who don’t do anything about it.

Albert Einstein

I hadn’t heard this quote before this weekend. It happened to be on a sign carried by a lone protester outside the entrance to the Penn State football game, standing by the Joe Paterno statue. Needless to say, his presence and message weren’t appreciated by some of the PSU faithful, though he stated that he was once “one of them” but now had a different perspective as a family man in the wake of these recent events.

Another article in today’s (Nov. 12) NY Times also caught my eye with what I felt was a related message, titled “For Disabled Care Complaints, Vow of Anonymity Was False” (http://www.nytimes.com/2011/11/12/nyregion/ombudsmen-gave-whistle-blowers-names-to-state-agency.html?src=me&ref=nyregion). A spokesman for the agency, Travis Proulx, said in an interview in August that “there is no confidentiality for any employee who is reporting abuse or neglect, even to the ombudsman.” Is it any wonder that people are afraid to step forward?

Organizations, including universities, are in many ways closed systems with their own methods for defining and living values (http://www.nytimes.com/2011/11/12/us/on-college-campuses-athletes-often-get-off-easy.html?ref=ncaafootball). See also the recent news story about the Texas judge whose own brand of values inside his “organization,” i.e., his family, has been exposed via YouTube (http://www.nytimes.com/2011/11/13/us/ruling-against-judge-seen-beating-daughter.html?scp=1&sq=texas%20judge%20belt&st=cse). Without getting into legalities and regulation and such, let us focus on the fact that organizations (of any kind) need some sort of internal processes, formal and/or informal, to define proper behavior and to rectify instances of wrongdoing.

Whatever the unit of analysis, the definition of “evil” is a very subjective process. In an earlier blog, I pointed to some research suggesting that some questionable practices are more acceptable in some industries than in others (https://dwbracken.wordpress.com/2011/03/15/what-is-normal/). And I do believe that organizations have the right to define their values and to hold employees accountable for behaving in ways consistent with those values. Some actions are so egregious that they are universally rejected (at least within certain cultures), including those exhibited by psychopaths as described in the book Snakes in Suits.

One of the many benefits of doing a system-wide (e.g., company, department) 360 feedback process is the opportunity it creates for miscreants to be identified through anonymous input from coworkers. If system-wide, it can hopefully also detect psychopaths and the like, who are very skilled at escaping detection. Unlike other “whistle blowing,” 360s rely on a consensus from feedback providers that theoretically protects both the raters and the ratees. The data generated by 360s are reported in aggregate form, usually requiring a minimum of three respondents to create a mean score. Assuming the organization has access to these scores, they can be analyzed to detect particularly low mean scores indicating that the leader in question is being cited by multiple coworkers as out of synch with the rest of the organization.
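
To illustrate the aggregation and screening step just described (a minimal sketch of my own, not a prescription; the 1-5 scale, the flag threshold, and the data layout are all hypothetical), the logic might look like this, with the minimum-of-three rule suppressing any rater group too small to protect anonymity:

from statistics import mean

MIN_RESPONDENTS = 3   # never report a mean for fewer than three raters
FLAG_THRESHOLD = 2.0  # hypothetical cutoff on a 1-5 rating scale

def flag_low_scores(ratings):
    """ratings: list of (leader_id, score) tuples from anonymous raters."""
    by_leader = {}
    for leader_id, score in ratings:
        by_leader.setdefault(leader_id, []).append(score)

    flagged = {}
    for leader_id, scores in by_leader.items():
        if len(scores) < MIN_RESPONDENTS:
            continue  # suppress the result to protect rater anonymity
        avg = mean(scores)
        if avg <= FLAG_THRESHOLD:
            flagged[leader_id] = round(avg, 2)
    return flagged

# Example: leader "B" is rated low by four coworkers and gets flagged;
# leader "C" has only two raters, so no score is reported at all.
sample = [("A", 4), ("A", 5), ("A", 4),
          ("B", 1), ("B", 2), ("B", 2), ("B", 1),
          ("C", 1), ("C", 2)]
print(flag_low_scores(sample))   # {'B': 1.5}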

So what do we need to do to make our 360 processes useful for detecting misbehavior and protecting both the raters and the ratees alike? Some suggestions include:

  • Be clear as to the purpose of the process
  • Require participation by all organizational leaders
  • Give the organization (including HR and management) access to results
  • Strictly adhere to minimum group size requirements for reporting results (e.g., a minimum of 3)
  • Use a well-designed behavioral model to form the basis for the content
  • Include write-in comments
  • Train users (managers, HR) on the proper interpretation and use of results
  • Administer on a regular (annual) basis
  • Immediately address instances of leaders seeking retribution against raters (real or inferred)

Any other suggestions?

©2011 David W. Bracken