Strategic 360s

360s for more than just development

Posts Tagged ‘accountability’

A Matter of Trust


“Apologizing does not always mean that you are wrong and the other person is right. It just means you value the relationship more than your ego.”

I saw that anonymous quote on LinkedIn recently, and it drew me back to a small note in Training & Development magazine dated February 24 on this topic. (http://goo.gl/8X6yRe) The text follows:

A recent survey of 954 global professionals by the Forum Corporation found that although 87 percent of managers say that they either always or often apologize for their mistakes at work, only 19 percent of employees say that their managers apologize most or all of the time.

Naturally, managers not owning up to their errors has a direct impact on employee trust levels. Another interesting insight from the survey is that while 91 percent of employees say it’s “extremely important” to have a manager they can trust, only 48 percent of managers agree that it’s extremely important for employees to trust their managers.

So we can only assume that it’s those managers who do not place a premium on trust who are committing the following worst management sins, as identified by survey participants:

  • lying
  • taking credit for others’ ideas or blaming
    employees unfairly
  • gossiping
  • poor communication
  • lack of clarity.

Managers may condone their mistakes because they are afraid of tarnishing their image. According to the survey, 51 percent of managers believe apologizing makes them appear incompetent, 18 percent believe it makes them look weak, and 18 percent shrug it off, saying that apologizing is unnecessary.

Unfortunately, the study also shows that a low regard for employees’ trust may result in low engagement levels.

This note caught my attention for a few reasons. First, this concept of trust is one that is central to the “manager as coach” work we have been doing in defining the foundation of a productive relationship that is required (in our opinion) if a manager is to be a successful coach for his/her team members.

Trust is also manifested in the perceptions of senior management, whether that group is perceived as individuals or in their aggregate actions. Either way, time after time we see that employee surveys indicate that “trust in senior leadership” is usually the primary driver of employee engagement, confirming the last sentence of the article.

Second, the basis for trust (or lack thereof), as listed in the bullets, is determined by behaviors. Behaviors are a choice; a person can choose to engage in them or not, and that choice can be influenced by consequences. Evidently, a majority of managers see more value in behaving badly. We can change that behavior by making them aware that they are behaving badly, and then attaching negative consequences for doing so, from top to bottom.

Third, there was the discrepancy between the importance of trust to employees versus to their managers. It is hard to believe that organizations do not preach honesty, integrity and so on, whether through Values statements that hang on the walls or by lip service. The gap does suggest that there is inadequate accountability.

This T+D blurb is another in a series of articles and blogs I have seen recently that bemoan bad leader behavior and the effect on an organization’s climate (see my recent blog http://dwbracken.wordpress.com/2014/02/05/nimble-and-sustainable/), but with no specific recommendation as to a solution.

I really hate whining without a proposed solution. I have suggested that a 360 process with accountability (i.e., consequences, good or bad) is a viable solution. I recently heard of a major organization that has introduced a new leadership behavior (competency) model, and, when I asked how leaders are to be measured against the model, the response was to fall back on single-source supervisor evaluation because “360’s haven’t worked here.” I felt like I was in a time warp, back 20 years to when we started talking seriously about the shortcomings of single-source (manager) performance evaluations (see Edwards and Ewen’s first 360 degree feedback book).

Behaviors can be shaped, starting with creating awareness that change is needed, aligning to the desired behavior, and usually requiring consequences (i.e., accountability). A few leaders will change without the carrot and stick, but those are usually not the ones who need fixing.

If you have leaders who are undermining trust, you have a problem. I think there is a solution.

 

Big Data and Multisource Feedback


Here’s another NYTimes Corner Office offering, featuring Laszlo Bock, SVP of People Operations at Google. (http://www.nytimes.com/2013/06/20/business/in-head-hunting-big-data-may-not-be-such-a-big-deal.html?pagewanted=1).  The first half is about hiring with some interesting observations (especially if you have responsibilities in that area).  The second half describes their Upward Feedback process, along with other HR systems. And, no, they are not a client.

I offer these observations for your consideration:

  • Big Data is the new fad, but many of us have been using large databases to understand the impact of our change processes for a long time, whether at the organizational level (employee surveys) or the individual level (360 Feedback).
  • Your organization is not using “Big Data” (at least in the way Laszlo is describing) if you are using external norms.  Note that Google is using internal norms very aggressively, tracking progress in moving the norm over time AND giving percentile rankings for each leader.
  • The challenges he describes regarding hiring practices are very interesting, and it appears they are making some progress in implementing processes that are more predictive and more consistent. That said, hiring is always a challenge, and emphasizes the importance of using processes such as multisource (360) feedback to identify and either improve or weed out poor managers.
  • He speaks to the importance of consistency in leaders.  360 Feedback promotes consistency in a number of ways.  First, it defines the behaviors that describe successful leaders, a form of alignment. One of the behaviors can relate to consistency itself, i.e., providing feedback to the leader about whether he/she is consistent.  In addition, an organization-wide 360 process that is administered and used in a consistent manner can only help in reinforcing the views of employees that decisions are being made on a fair basis. Organization-wide implementation is the key to success in creating change, acceptance and sustainability.
  • Back to the percentile rankings.  I have found organizations strangely averse to this practice of letting the leader know where he/she ranks against peers.  As Laszlo notes, the challenge is to give the leader a realistic view of how he/she is perceived, and to create some motivation to change.  By the way, these rankings are one “solution” to leniency trends, that is, saying to the leader, “You may think you are hot stuff because you got a 4.0 rating (out of 5)  on that behavior, but you are still lower than 80% of your peers.”  That scenario is common in areas such as Integrity where we expect high scores from our leaders.
  • I am a little surprised that he believes that managers can “self-motivate” in the way he describes. I am usually skeptical that leaders will change without accountability. I would like to know more about that. I have already noted the use of percentile rankings, which most organizations dismiss but which are seen as powerful motivators in this process. Laszlo also describes a dialog of sorts with the leader at the 8th percentile. Who is that conversation with? If it is with another person (boss, coach, HR manager), that alone creates a form of accountability and an implied consequence if improvement isn’t seen. If the conversation is just in the leader’s head, it speaks to the power of the information provided by the percentile score. Creating awareness is one thing. Awareness with context (e.g., comparison to others) is much more powerful. (Maybe like, “That’s a nice pair of pants! If it were the ’60s.”)
  • Lastly, Laszlo speaks to the uniqueness of his and other organizations regarding what the organization needs from its leaders and how an individual employee might fit in and contribute. This clearly speaks to the need for custom-designed content for hiring practices and then for internal assessments once an employee is onboard.
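The internal percentile ranking discussed in the bullets above can be sketched in a few lines. The definition below (percent of peers scoring strictly lower) and the sample ratings are my own assumptions for illustration, not Google's actual method:

```python
# A sketch of internal percentile ranking for 360 results: compare each
# leader's mean rating to the internal distribution of peers, rather than
# to an external norm. The data and 5-point scale are invented.

def percentile_rank(score, all_scores):
    """Percent of scores in all_scores that fall strictly below score."""
    below = sum(1 for s in all_scores if s < score)
    return 100.0 * below / len(all_scores)

# Leniency in action: a 4.0 on a 5-point scale looks strong in isolation...
peer_means = [4.0, 4.2, 4.3, 4.4, 4.5, 4.5, 4.6, 4.7, 4.8, 4.9]
print(percentile_rank(4.0, peer_means))  # 0.0 -- lowest among these peers
```

In a lenient distribution like this one, the raw score flatters while the percentile delivers the “you are still lower than your peers” message.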

Google is doing some very interesting research regarding leadership.  Go back and look at their work on leadership competencies that they publicized a couple years ago. http://www.nytimes.com/2011/03/13/business/13hire.html?pagewanted=all

Beyond the research, Google is actually using its Big Data to create a culture, define the leaders it requires, and put some teeth into the theory, with upward feedback at the forefront. Yet, at the end, he notes that all the measurement must be viewed through the lens of human insight. The context is deeper than just the organization; it is also moderated by the current version of strategy, the team requirements, the job requirements, and the personal situation, all of which are in a constant state of flux.

©2013 David W. Bracken

Aligning To Alignment


I have been citing the “Corner Office” (NY Times) a few times lately, but I can’t help but do it again. Recently the guest was Salesforce COO George Hu (http://www.nytimes.com/2013/04/19/business/salesforcecom-executive-on-seeking-out-challenges.html?src=recg).  When asked about leadership lessons, he turns to the importance of communication and alignment.  He says, “We use this process called V2MOM, which stands for vision, values, methods, obstacles and measures.”

In this model, the vision and values part is the alignment component: basically, what we are going to do and how we are going to do it (in my words, how we are going to treat each other and our customers). I know that “alignment” is one of those terms that has been overworked, but in this case maybe for a reason: it is important.

In some past blogs I have shared my ALAMO model of performance:

Performance = Alignment x (Ability x Motivation x Opportunity)

While all four variables in the model can deal a fatal blow by going to zero, Alignment is the only one that can also take a negative value, because it can actually draw resources away from the organization if the individual/team/organization is working on the wrong thing. “Working on the wrong thing” can be accidental (by misdiagnosis or misdirection) or even purposeful (such as sabotage, where a very motivated person can destroy value).
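The multiplicative logic of the ALAMO model can be sketched in a few lines of code. This is a minimal illustration, not part of the model itself; the function name and the rough scales are my own assumptions:

```python
# Illustrative sketch of ALAMO:
#   Performance = Alignment x (Ability x Motivation x Opportunity)
# Assumed scales: roughly -1 to 1 for alignment, 0 to 1 for the rest.

def performance(alignment, ability, motivation, opportunity):
    """Ability, motivation, and opportunity are non-negative;
    alignment alone may go negative (working against the organization)."""
    return alignment * (ability * motivation * opportunity)

# Any factor at zero deals the fatal blow: performance collapses to zero.
print(performance(1.0, 0.8, 0.0, 0.9))   # 0.0

# Negative alignment turns a capable, motivated person into destroyed value.
print(performance(-1.0, 0.9, 0.9, 0.9))  # negative: the motivated saboteur
```

Note that once alignment turns negative, more ability or motivation only makes the loss larger, which is exactly the sabotage case described above.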

Misalignment can happen to both the vision and the values part of his model, but I would like to focus on the Values part as it relates to the role that 360 Feedback can play in focusing the alignment of behaviors throughout the organization.

Many organizations have Values statements, often met with some well-deserved cynicism as a plaque on the wall. Stating a value (e.g., “Respect for the Individual”) must go much farther than just naming it. It must also be defined in behavioral terms, that is, what an employee is doing (or not doing) when he or she is exhibiting that value.

Some of the most spirited meetings I have been in or led have been about what a Value means in behavioral terms. Many, many organizations have some version of Respect for the Individual in their Values list. But what does “respect” mean for your organization? Treating everyone the same regardless of level? Saying “thank you”? Acknowledging the viewpoints of others? Creating work-life balance (i.e., acknowledging personal lives)? Creating diversity in practice? You have to pick; the answer isn’t “all of the above.” A Value isn’t effective if it is vaguely defined or too encompassing.

One benefit of creating behavioral definitions of a Value is that it becomes very tangible when described specifically. I am reminded of the story of the homeowner who decided he needed to fix his front sidewalk, spending all day Saturday breaking up the old one and replacing it with a nicely laid cement walkway. As the sun was setting, he looked out his window admiring his handiwork, only to see a dog run up and down the walk, leaving its footprints for posterity. The man got his gun (sorry) and shot the dog. When brought before the court, the judge looked down and asked, “Young man, just what were you thinking?” The man replied, “Your Honor, I really like dogs in the abstract, but not in the concrete.” Ba bump.

Values are very easy to like in the abstract, but much less so in the “concrete,” as in your actions. Just ask religious leaders about that.

Another value that might seem obvious to you but not to others is Integrity. One version of Integrity is the core notion of telling the truth, not lying, not cheating, etc. But more and more we see organizations that take telling the truth as a given and choose to use Integrity to communicate the more subtle message of “walking the talk,” as in doing what you say you will do, following through on commitments, and following the same rules/expectations that you set for others.

An organization-wide 360 feedback process built around an organization’s Values has many powerful benefits, including:

  • Reinforces the importance of the Values as part of the “how” side of performance
  • Requires the identification of the behaviors that uniquely define the Values for the organization
  • Is disseminated to all employees, usually requiring serious consideration as raters perform their duties as feedback providers
  • Creates accountability for follow-through, assuming development plans are integrated into performance management processes
  • Creates a method for trending individual and organizational progress toward “living the Values”
  • Can be used to identify leaders who do not comply with the Values

We would like to think that Values statements are enduring and wouldn’t require change very often. But if the organization finds that it needs to change its emphasis to support strategy (e.g., more customer focus, quality, innovation, accountability), the message can be quickly operationalized by inserting the behaviors (labeled as a dimension to further create alignment) in the 360 that is used by all segments of the enterprise.  This need to shift quickly is now called “Agility” in the vernacular, and organizations as well as individuals are being required to demonstrate it more than ever.

Alignment and Agility are intertwined, and communicate simultaneously focus and flexibility on both the Vision (“What”) and Values (“How”) that are uniquely defined by the organization.  I would argue that Alignment is one activity that cannot be overdone or overused, which is one message I take away from George Hu’s lessons of leadership.

Finally, one other message to take away from Mr. Hu’s V2MOM: Measurement.  Measurement reinforces Alignment, and you get what you measure. Measurement also creates accountability.  And a 360 Assessment, well-designed and delivered, does both. We largely know how to measure the “what;” show me a better way to measure the “how.”

©2013 David W. Bracken

Written by David Bracken

April 22, 2013 at 9:55 am

Just Shut Up and Listen


I still get the Sunday New York Times in “hard copy” (in addition to the electronic version the other days), partly because my wife and I are addicted to the crosswords. Let me add that I am one of those people who mourn the fadeout of the newspaper, and I find that browsing the physical paper often exposes me to pieces of information that I would otherwise miss in the electronic version (whatever form your “browsing” takes, if at all). (I believe, for what it’s worth, that a similar phenomenon is happening in the music world, where the ease of downloading single songs probably means less “browsing” of albums, where some other gems are often lurking.)

Back on topic, the Sunday NYT also has a feature in the Business section called “Corner Office” where a business leader is interviewed.  This week it was Francesca Zambello, general and artistic director of the Glimmerglass Festival and artistic director of the Washington National Opera. When asked about leadership lessons she has learned, she says:

When you’re in your 20s and have that leadership gene, the bad thing is that you don’t know when to shut up. You think you know all the answers, but you don’t. What you learn later is when to just listen to everybody else. I’m finding that all those adages about being humble and listening are truer and truer as I get older. Creativity cannot explode if you do not have the ability to step back, take in what everybody else says and then fuse it with your own ideas.

In the parallel universe of my personal life, my daughter Ali sent along an edition of the ABA Journal that references a study of the happiest and unhappiest workers in the US (http://www.abajournal.com/news/article/why_a_career_website_deems_associate_attorney_the_unhappiest_job_in_america/) that cites associate attorney as the unhappiest profession (which by coincidence is her husband’s job).  If you don’t want to go to the link, the five unhappiest jobs are:

1) Associate attorney

2) Customer service associate

3) Clerk

4) Registered nurse

5) Teacher

The five happiest are:

1) Real estate agent

2) Senior quality assurance engineer

3) Senior sales representative

4) Construction superintendent

5) Senior applications designer

Looking at the unhappiest list and possible themes/commonalities among these jobs, one is lack of empowerment and a probably similar lack of influence in their work and work environment. (The job of teacher may be less so, and its inclusion on this list is certainly troubling and, I am sure, complicated.) But I suspect that the first four jobs have a common denominator in the way they are managed that ties back to Ms. Zambello’s reflections on her early management style, i.e., having all the answers and not taking advantage of the knowledge and creativity of the staff. It also causes me to remember the anecdote of the GM retiree who mused, “They paid me for my body. They could have had my mind for free.”

This is certainly not an epiphany for most of us, but more serendipity that two publications this week once again tangentially converged on this topic. I will once again recommend Marshall Goldsmith’s book, “What Got You Here Won’t Get You There,” a compendium of mistakes that leaders make in their careers, including behaviors that might have served them well when starting out but lose their effectiveness as they move up the organization. The classic case is the subject matter expert who gets promoted and assumes that being the “expert” is always the road to success. In Marshall’s book there are 20 of these ineffective, limiting behaviors (some might call them “derailers”), and when we think of the prototypical leader who wants to be the “expert” and doesn’t listen, it potentially touches on multiple behaviors in the list of 20, including:

2. Adding too much value

6. Telling the world how smart we are

10. Failing to give proper recognition

11. Claiming credit we don’t deserve

13. Clinging to the past

16. Not listening

Considering this list as possible motivators for the umbrella behavior of “not listening,” we can see how it might be very challenging to change this behavior if the leader believes (consciously or unconsciously) that one or more of these factors are important to maintain, or (as Marshall also notes) are “just the way I am” and not changeable.

We behaviorists believe that any behavior is changeable, whether a person wants to change or not. What is required is first awareness, i.e., recognition that there is a gap between the current behavior and the desired/required behavior, followed by motivation to change, which may come from within the person but more often requires external motivation, usually in the form of accountability. Awareness and accountability are critical features of a valid 360 feedback process that is designed to create sustainable behavior change.

Let me add that the “shut up and listen” mantra is a core behavior for coaches as well. This consultant believes that the challenge that most organizations have in morphing managers into effective coaches is also rooted in this core belief that the role of coach is to solve problems for their subordinates, versus listening to fully understand the issue and then help the subordinate “discover” the solution that best works for them and the situation.

This is a serious problem that has two major downsides. For one, it is likely, at least in some major way, a root cause of creating the “unhappy” job incumbents that in turn leads to multiple negative outcomes for the organization. The other major downside is a version of our GM retiree’s lament: the organization is losing out on capitalizing on a significant resource in the form of the individual and collective contributions of its workforce.

There may be no time in our history where involving our young workers is more critical, which includes listening to their input and empowering them to act. Consider the many reasons that this might be so:

  • The pace of change, internally and externally, requires that we have processes that allow us to recognize and react in ways that most likely will diverge from past practices
  • Younger workers bring perspectives on the environment, technology and knowledge that are often hidden from the older generations (that are, by the way, retiring)
  • As the baby boomers do retire en masse, we need to be developing the next generation of leaders.  Another aside, this means allowing them to fail, which is another leadership lesson that Ms. Zambello mentions (remember her?).

Listening is actually a very complex behavior to change, but it begins with increasing awareness of ineffectiveness, and then creating motivation to change by educating leaders on its negative consequences and lost opportunities.

©2013 David W. Bracken

It’s Human Nature


One question that has been at the core of best practices in 360 Feedback since its inception relates to the conditions that are most likely to create sustained behavior change (at least for those of us who believe that behavior change is the ultimate goal). Many of us believe that behavior change is not a question of ability to change but primarily one of motivation. Motivation often begins with the creation of awareness that some change is necessary, followed by accepting the feedback, and then moving on to implementing the change.

One of the more interesting examples of creating behavior change began when seat belts were included as standard equipment in all passenger vehicles in 1964.  I am old enough to remember when that happened and started driving not long thereafter. So using a seat belt was part of the driver education routine since I began driving and has not been a big deal for me.

The reasons for noncompliance with seatbelt usage are as varied as human nature. Some people see it as a civil rights issue, as in, “No one is going to tell me what to do.” There is also the notion that it protects against a low probability event, as in “It won’t happen to me. I’m a careful driver.” Living in Nebraska for a while, I learned that people growing up on a farm don’t “have the time” to buckle and unbuckle seatbelts in their trucks when they are learning to drive, so they don’t get into that habit. (I also found, to my annoyance, that they also never learned how to use turn signals.)

I remember back in the ‘60’s reading about a woman who wrote a car manufacturer to ask that they make the seat belts thinner because they were uncomfortable to sit on.  Really.

Some people have internal motivation to comply, which can be due to multiple factors such as personality, demographics, training, norms (e.g., parental modeling), and so on. This is also true when we are trying to create behavior change in leaders, but we will see that these factors are not the primary determinants of compliance.

In thinking about seatbelt usage as a challenge in creating behavior change, I found a 2008 study by the Department of Transportation titled “How States Achieve High Seat Belt Use Rates” (DOT HS 810 962). (Note: This is a 170-page report with lots of tables and statistical analyses, and if any of you geeks want a copy, let me know.)

The major finding of this in-depth study states:

The statistical analyses suggest that the most important difference between the high and low seat belt use States is enforcement, not demographics or funds spent on media.

One chart in this report, “Seatbelt Usage in US,” seems to capture the message fairly well and supports their assertion. It plots seat belt usage by state, where we see a large spread ranging from just over 60 percent (Mississippi) to about 95 percent (Hawaii). It also shows whether each state has a primary seatbelt law (where failing to wear a seatbelt is a violation by itself) or a secondary law (where a seatbelt violation can only be enforced if the driver is stopped for another reason). Based on this chart alone, one might argue causality, but the study systematically shows that these data, along with others relating to law enforcement practices, are the best predictors of seatbelt usage.

One way of looking at this study is to view law enforcement as a form of external accountability, i.e., having consequences for your actions (or lack thereof). The primary versus secondary law factor largely shifts the probabilities of being caught, with the apparent desired effect on seatbelt usage.

So, back to 360 Feedback. I always have been, and continue to be, mystified as to how some implementers of 360 feedback processes believe that sustainable behavior change is going to occur in the vast majority of leaders without some form of external accountability. Processes that are supposedly “development only” (i.e., have no consequences) should not be expected to create change. In those processes, participants are often not required to, or even discouraged from, sharing their results with others, especially their manager. I have called these processes “parlor games” in the past because they are kind of fun, are all about “me,” and have no consequences.

How can we create external accountability in 360 processes?  I believe that the most constructive way to create both motivation and alignment (ensuring behavior change is in synch with organizational needs/values) is to integrate the 360 feedback into Human Resource processes, such as leadership development, succession planning, high potential programs, staffing decisions, and performance management.  All these uses involve some form of decision making that affects the individual (and the organization), which puts pressure on the 360 data to be reliable and valid. Note also that I include leadership development in this list as a form of decision making because it does affect the employee’s career as well as the investment (or not) of organization resources.

But external accountability can be created in other, more subtle ways as well. We all know from our kept and (more typically) unkept New Year’s resolutions about the power of going public with our commitments to change. Sharing your results and actions with your manager has many benefits, but it can cause real and perceived unfairness if some people are doing it and others are not. Discussing your results with your raters and engaging them in your development plans has multiple benefits.

Another source of accountability can (and should) come from your coach, if you are fortunate enough to have one. I have always believed that the finding in the Smither et al. (2005) meta-analysis that the presence of a coach is one determinant of whether behavior change is observed is due to the accountability that coaches create by requiring the coachee to state specifically what he or she is going to do and then checking back that the coachee has followed through on that commitment.

Over and over, we see evidence that, when human beings are not held accountable, more often than not they will stray from what is in their best interests and/or the interests of the group (organization, country, etc.).  Whether it’s irrational (ignoring facts) or overly rational (finding ways to “get around” the system), we should not expect that people will do what is needed, and we should not rely on our friends, neighbors, peers or leaders to always do what is right if there are no consequences for inaction or bad behavior.

©2012 David W. Bracken

What Is a “Decision”?


My good friend and collaborator, Dale Rose, dropped me a note regarding his plans to do another benchmarking study on 360 Feedback processes. His company, The 3D Group, has done a couple of these studies before, and Dale has been generous in sharing his results with me, which I have cited in some of my workshops and webinars. The studies are conducted by interviewing coordinators of active 360 systems. Given that the interviews are verbal, some of the results have appeared somewhat internally inconsistent and difficult to reconcile, though the general trends are useful and informative.

Many of the topics are useful for practitioners to gauge their program design, such as the type of instrument, number of items, rating scales, rater selection, and so on. For me, the most interesting data relates to the various uses of 360 results.

Respondents in the 2004 and 2009 studies report many uses. In both studies, “development” is the most frequent response, and that’s how it should be. In fact, I’m amazed that the responses weren’t 100 percent, since a 360 process should be about development. The fact that in 2004 only 72 percent of answers included development as a purpose is troubling, whether we take the answers at face value or assume the respondents didn’t understand the question. The issue at hand here is not whether 360s should be used for development; it is what else they should, can, and are used for in addition to “development.”

In 2004, the next most frequent use was “career development;” that makes sense. In 2009, the next most frequent was “performance management,” and career development dropped way down. Other substantial uses include high potential identification, direct link to performance measurement, succession planning, and direct link to pay.

But when asked whether the feedback is used “for decision making or just for development”, about 2/3 of the respondents indicated “development only” and only 1/3 for “decision making.” I believe these numbers understate the actual use of 360 for “decision making” (perhaps by a wide margin), though (as I will propose), it can depend on how we define what a “decision” is.

To “decide” is “to select as a course of action,” according to Merriam-Webster (in this context). I would add to that definition that one course of action is to do nothing, i.e., don’t change the status quo or don’t let someone do something. It is impossible to know what goes on in a person’s mind when he/she speaks of development, but it seems reasonable to suppose that it involves doing something beyond just leaving the person alone, i.e., maintaining the status quo. But doing nothing is a decision. So almost any developmental use involves making a decision as to what needs to be done and what personal (time) and organizational (money) resources are to be devoted to that person. Conversely, denying an employee access to developmental resources that another employee does get access to is a decision, with results that are clearly impactful but difficult to measure.

To further complicate the issues, it is one thing to say your process is for “development only,” and another to know how it is actually used.  Every time my clients have looked behind the curtain of actual use of 360 data, they unfailingly find that managers are using it for purposes that are not supported. For example, in one client of mine, anecdotal evidence repeatedly surfaced that the “development only” participants were often asked to bring their reports with them to internal interviews for new jobs within the organization. The bad news was that this was outside of policy; the good news was that leaders saw the data as useful in making decisions, though (back to bad news) they may have been untrained to correctly interpret the reports.

Which brings us to why this is an important issue. There are legitimate "development only" 360 processes where the participant has no accountability for using the results and, in fact, is often actively discouraged from sharing the results with anyone else. Since there are no consequences, there are few, if any, consequential actions or decisions required. But most 360 processes (despite the benchmark results suggesting otherwise) do result in some decisions being made, which might include doing nothing by denying an employee access to certain types of development.

The Appendix of The Handbook of Multisource Feedback is titled, "Guidelines for Multisource Feedback When Used for Decision Making."  My sense is that many designers and implementers of 360 (multisource) processes feel these Guidelines don't apply because their system isn't used for decision making. Most of them are wrong about that. Their systems are being used for decision making, and, even if not, why would we design an invalid process? And any system that involves the manager of the participant (which it should) creates the expectation that direct or indirect decisions will result.

So Dale’s question to me (remember Dale?) is how would I suggest wording a question in his new benchmarking study that would satisfy my curiosity regarding the use of 360 results. I proposed this wording:

“If we define a personnel decision as something that affects an employee’s access to development, training, jobs, promotions or rewards, is your 360 process used for personnel decisions?” 

Dale hasn’t committed to using this question in his study. What do you think?

©2012 David W. Bracken

What does “beneficial” mean?

with one comment

My friend, Joan Glaman, dropped me a note after my last blog (http://dwbracken.wordpress.com/2011/08/30/thats-why-we-have-amendments/ ) with this suggestion:

“I think your closing question below would be a great next topic for general discussion: ‘Under what conditions and for whom is multisource feedback likely to be beneficial?’”

To refresh (or create) your memory, the question Joan cites is from the Smither, London and Reilly (2005) meta-analysis. The article abstract states:

“…improvement is most likely to occur when feedback indicates that change is necessary, recipients have a positive feedback orientation, perceive a need to change their behavior, react positively to the feedback, believe change is feasible, set appropriate goals to regulate their behavior, and take actions that lead to skill and performance improvement.”

Before we answer Joan’s question, we should have a firm grasp on what we mean by “beneficial.” I don’t think we all would agree on that in this context.  Clearly, Smither et al. define it as “improvement,” i.e., positive behavior change. That is the criterion (outcome) measure that they use in their aggregation of 360 studies. I am in total agreement that behavior change is the primary use for 360 feedback, and we (Bracken, Timmreck, Fleenor and Summers, 2001) defined a valid 360 process as one that creates sustainable behavior change in behaviors valued by the organization.

Not everyone will agree that behavior change is the primary goal of a 360 process. Some practitioners seem to believe that creating awareness alone is a sufficient outcome since they do not support any activity or accountability, proposing that simply giving the report to the leader goes far enough, and in fact discouraging the sharing of results with anyone else.

If you will permit a digression, I will bring to your attention a recent blog by Sandra Mashihi (http://results.envisialearning.com/5-criteria-a-360-degree-feedback-must-meet-to-be-valid-and-reliable/) where one of her lists of "musts" (arrrgh!) is criterion-related validity, which she defines as, "…does the customized instrument actually predict anything meaningful like performance?" Evidently she would define "beneficial" not as behavior change but as the ability to measure performance to make decisions about people.  This testing mentality just doesn't work for me since 360's are not tests (http://dwbracken.wordpress.com/2010/08/31/this-is-not-a-test/) and it is not realistic to expect them to predict behavior, especially if we hope to actually change behavior.

Let's get back to Joan's question (finally). I want to make a couple of comments and then hopefully others will weigh in. The list of characteristics that Smither et al provide in the abstract is indeed an accumulation of individual and organizational factors. This is not an "and" list that says that a "beneficial" process will have all these things. It is an "or" list where each characteristic can have benefits.  The last two (set goals and take actions) can be built into the process as requirements regardless of whether the individual reacts positively and/or perceives the need to change. Research shows that follow up and taking action are powerful predictors of behavior change, and I don't believe that it is important (or matters) to know if the leader wants to change or not. What if he/she doesn't want to change? Do they get a pass? Some practitioners would probably say yes, and point to this study as an indication that it is not worth the effort to try to get them to change.

I suggest that these factors that lead to behavior change are not independent of each other. In our profession, we speak of "covariates," i.e., things that are likely to occur together across a population. A simple example is gender and weight, where men are, on average, heavier than women. But we don't conclude that men as a gender manage their weight less well than women; the difference is largely due to height (and other factors, like bone structure).

My daughter, Anne, mentioned in passing an article she read about people who don’t brush their teeth twice a day having a shorter life expectancy than those who do.  So the obvious conclusion is that brushing teeth more often will make us live longer.  There is certainly some benefit to regularly brushing teeth, but it’s more likely that there are covariates of behavior for people that don’t have good dental hygiene that have a more direct impact on health.  While I don’t have data to support it, it seems likely that people who don’t brush regularly also don’t go to the dentist regularly for starters.  It seems reasonable to surmise that, on average, those same people don’t go to their doctor for a regular checkup.

My hypothesis is that 360 participants who aren’t open to feedback, don’t perceive a need to change, don’t feel that they can change, etc., are also the people who are less likely to set goals and take action (follow up) if given the option to not do those things.  In other words, it’s not necessarily their attitudes that “cause” lack of behavior change, but the lower likelihood that they will do what is necessary, i.e., set goals and follow through, in order to be perceived as changing their behavior. Those “behaviors” can be modified/changed while their attitudes are likely to be less modifiable, at least until they have had a positive experience with change and its benefits.

One last point of view about “beneficial.” Another definition could be change that helps the entire organization. That is the focus of the recent publication by Dale Rose and myself, where (in answer to Joan’s question) we state:

"…four characteristics of a 360 process that are required to successfully create organization change, (1) relevant content, (2) credible data, (3) accountability, and (4) census participation…"

We go on to offer the existing research that supports that position, and the wish list for future research. One way of looking at this view of what is "beneficial" is to extrapolate what works for the individual and apply it across the organization, which is where the census (i.e., whole population) part comes into play.

I will stop there, and then also post this on LinkedIn to see if we can get some other perspectives.

Thanks, Joan!

©2011 David W. Bracken

Built to Fail/Don’t Let Me Fail

leave a comment »

This is a "two sided" blog entry, like those old 45 rpm records that had hit songs on both sides (think "We Can Work It Out"/"Daytripper" by the Beatles), though my popularity may not be quite at their level.  This is precipitated by a recent blog (and LinkedIn discussion entry) coming from the Envisia people. The blog entry is called, "Does 360-degree feedback even work?" by Sandra Mashihi and can be found at http://results.envisialearning.com/.   It would be helpful if you read it first, but not necessary.

Sandra begins by citing some useful research regarding the effectiveness of 360 processes. And she concludes that sometimes 360’s “work” and sometimes not.  Her quote is, “Obviously, the research demonstrates varied results in terms of its effectiveness.”

What is frustrating for some of us are the blanket statements about failures (and the use of terms like "obvious") without acknowledging that many 360's are "built to fail." This is the main thesis of the article Dale Rose and I just published in the Journal of Business and Psychology. http://www.springerlink.com/content/85tp6nt57ru7x522/

http://www.ioatwork.com/ioatwork/2011/06/practical-advice-for-designing-a-360-degree-feedback-process-.html

Dale and I propose four features a 360 process needs if it is to create sustainable behavior change:

1)      Reliable measurement: Professionally developed, custom designed instruments

2)      Credible data: Collecting input from trained, motivated raters with knowledge of ratees

3)      Accountability: Methods to motivate raters and ratees to fulfill their obligations

4)      Census participation: Requiring all leaders in an organizational unit to get feedback

We go on to cite research that demonstrates how the failure to build these features into 360 can, in some cases, almost guarantee failure and/or the inability to detect behavior change when it does occur. One such feature, for example, is whether the ratee follows up with raters (which I have mentioned in multiple prior blogs). If/when a 360 (or a collection of 360's, such as in a meta-analysis) is deemed a "failure," I always want to know things such as whether raters were trained and whether follow up was required, for starters.

We are learning more and more about the facets that increase the probability that behavior change will occur as a result of 360 feedback. Yet all too often these features are not built into many processes, and practitioners are surprised ("shocked, I'm shocked") when it doesn't produce desired results.

Sandra then goes on to state: "I have found 360-degree feedback worked best when the person being rated was open to the process, when the company communicated its purpose clearly, and used it for development purposes." I assume that she means "development only" since all 360's are developmental.  I definitely disagree with that feature. 360's for "development (only) purposes" usually violate one or more of the 4 features Dale and I propose, particularly the accountability one. They often do not generate credible data because too few raters are used, neglecting even the basic best practice of including all direct reports.

The part about “being open to the process” is where I get the flip side of my record, i.e., don’t hurt my feelings.  In one (and only one) way, this makes sense. If the ratee doesn’t want to be in a development-only process, then by all means don’t force them. It is a waste of time and money. On the other hand, all development only processes are a waste of money in my opinion for most people. (And, by the way, development only is very rare if that means that no decisions are being made as a result.)

But if we ARE expecting to get some ROI (such as sustained behavior change) from our 360's, then letting some people opt out so their feelings aren't hurt is totally contrary to helping the organization manage its leadership cadre. Intuitively, we should expect that those who opt out are the leaders who need it the most, who know that they are not effective and/or are afraid to be "discovered" as the bullies, jerks, and downright psychopaths that we know exist out there.

I have some fear that this reluctance to tell leaders that they are less than perfect stems from a troubling trend in our culture where everyone has to succeed. I think that the whole "strengths" movement is a sign of that.

Over the last couple of weeks, I have seen a few things that further sensitized me to this phenomenon. One big one is this article in The Atlantic: http://www.theatlantic.com/magazine/archive/2011/07/how-to-land-your-kid-in-therapy/8555/1/.  Protecting our children from failure is not working. Protecting our leaders from failure is also dooming your organization.

I swear I never watch America's Funniest Videos, but during a rain delay of a baseball game recently, I did stumble upon it and succumbed. AFV is all about failure, and I'm not so sure that people always learn from these failures. But one video I enjoyed showed a 2-year-old boy trying to pour apple juice from a BIG bottle into a cup. He put the cup on the floor and totally missed the first two times (with the corresponding huge mess). As a parent and grandparent, I was quite amazed that the person behind the camera just let it happen. But on the third try, the task was accomplished successfully, followed by applause and smiles! There was a huge amount of learning that occurred in just a minute or two because the adults allowed it to happen, with a bit of a mess to clean up.

How many of us would have just poured the juice for him? His learning isn’t over; he will make more mistakes and miss the cup occasionally. But don’t we all.

As a parting note, Dale and I support census participation for a number of reasons, one of which is the point I have already made about otherwise missing the leaders that need it most. We also see 360’s as a powerful tool for organizational change, and changing some leaders and not others does not support that objective. Having all leaders participate is tangible evidence that the process has organization support and is valued. Finally, it creates a level playing field for all leaders for both evaluation and development, communicating to ALL employees what the organization expects from its leaders.

©2011 David W. Bracken

What You See Is What You Get

leave a comment »

Every month or so I get an invitation/newsletter from Marshall Goldsmith and Patricia Wheeler. This month’s had a couple gems in it, and I have provided the link at the end of this article.  Marshall’s entry on life lessons is very much worth reading. But Patricia’s offering particularly struck me since I have been thinking a lot about leader behavior. As you will see it also relates directly to the hazards of misdiagnosis, another human flaw that is especially salient for those of us in consulting and coaching where we are prone to jumping to conclusions too quickly.

Several years ago my mother experienced stomach pains.  Her physician, one of the best specialists in the city, ordered the usual tests and treated her with medication.  The pains continued; she returned to his office and surgery was recommended, which she had.  After discharge the pains recurred, stronger than ever; she was rushed to the emergency room, where it was determined that her physician had initially misdiagnosed her. She had further surgery; unfortunately she was unable to withstand the stress of two surgeries, fell into a coma and died several days later.  Several days after her second surgery, her physician approached me, almost tearfully, with an apology.

“I apologize,” he said, “this is my responsibility.”  He should have done one additional test, he said, requiring sedation and an invasive procedure, but he did not want to impose the pain of that procedure on her, feeling at the time that his diagnosis was correct.  “I am truly sorry and I will never make that mistake again.”  What struck me at the time and continues to stay with me is that this doctor was willing to take the risk of telling the whole difficult truth, and that taking responsibility for the situation was more important to him than the very real possibility of a malpractice suit.  I forgave him, and I believe my mother would have as well.

Real apologies have a positive impact that, in most if not all cases, outweighs the risk factors.  Ask yourself, when does an apology feel heartfelt to you? When does it seem empty?  Think of a time when you heard a public or corporate figure apologize and it rang true, and think of a time when it didn't.  What made the difference? Here are a few guidelines:

Is it from the heart or the risk management office?  If your apology reads like corporate legalese, it won’t be effective.

Is it unequivocal?  Too many apologies begin with “I’m sorry, but you were at fault in this too.”  An attempt to provoke the other party into apologizing or accepting fault will fail.

Is it timely?  If you delay your apology, perhaps wishing that the issue would just go away (trust me, it won’t), its effect will diminish proportionately.

Does it acknowledge the injury and address the future?  In other words, now that you know your words or actions have caused injury, what will you do going forward?

While we can’t avoid all errors, missteps and blind spots, we can at least avoid compounding them with empty words, blaming and justification.

Patricia is focusing on a particular behavior, i.e., apologizing. This behavior, like all other behaviors, is modifiable if we are aware of the need to change and motivated to do so.  It may not be easy and you may not be comfortable doing it, but that is no excuse. And, by the way, people really don’t care what is going on inside your head to justify not changing (e.g., “they know that I’m sorry without me saying it”). Making an apology is often difficult, as Patricia points out, and maybe that’s why it can be so striking and memorable when someone does it well.

In his book, “What Got You Here Won’t Get You There,” Marshall makes a similar point about the simple behavior of saying “thank you,” which is a common shortcoming in even the most successful leaders.  Leaders find all sorts of excuses for avoiding even that seemingly easy behavior, including “that’s just not me.” The point is that what you do and what people see (i.e., behaviors) IS who you are.

The good news for us practitioners of 360 Feedback is that observing behaviors is what it is (or should be) all about. In a 360 process, the organization defines the behaviors it expects from its leaders, gives them feedback on how successful they are in doing so, and then (ideally) holds them accountable for changing.

This also means that we go to great lengths to ensure that the content of 360 instruments uses items that describe behaviors, hopefully in clear terms.  We need to ensure that we are asking raters to be observers and reporters of behavior, not mind readers or psychologists.  We need to be especially wary of items that include adjectives that ask the rater to peer inside the ratee's head, including asking what the ratee "knows" or "is aware of" or "believes" or even what the leader is "willing" to do.

As a behaviorist, in the end I only care what a leader does and not why (or if) he/she wants to do it. That’s the main reason why I have found personality assessments to be of little interest, with the exception of possibly providing insights into how the coaching relationship might be affected by things like openness to feedback or their preferred style for guidance and learning.

Another piece of good news for us behaviorists came out in a recent article in Personnel Psychology titled, “Trait and Behavioral Theories of Leadership: An Integration and Meta-Analytic Test of Their Relative Validity” (Derue, Nahrgang, Wellman and Humphrey, 2011).  To quote from the abstract, they report:

Leader behaviors tend to explain more variance in leadership effectiveness than leader traits, but results indicate that an integrative model where leader behaviors mediate the relationship between leader traits and effectiveness is warranted.

The last part about mediation suggests that, even when traits do a decent job (statistically) of predicting leader effectiveness, they are "filtered" through leader behaviors. For example, all the intelligence in the world doesn't do much good if you are still a jerk (or bully, or psychopath, etc.).

All of this reinforces the importance of reliably measuring leader behaviors, especially if we believe that the “how” of performance is at least as important as the “what.”

Link:  http://email.e-mailnetworks.com/hostedemail/email.htm?h=bdd4c78f38fd64341d6760533238799c&CID=4826566929&ch=487D1DD320A1A801E8ACD8949CEAC445

©2011 David W. Bracken

What I Learned at SIOP

with one comment

The annual conference of the Society for Industrial/Organizational Psychology (SIOP) was held in Chicago April 14-16 with record attendance. I had something of a “360 Feedback-intensive” experience by running two half-day continuing education workshops (with Carol Jenkins) on 360 feedback, participating on a panel discussion of the evolution of 360 in the last 10 years (with other contributors to The Handbook of Multisource Feedback), and being the discussant for a symposium regarding Implicit Leadership Theories that largely focused on cultural factors in 360 processes. Each forum gave me an opportunity to gauge some current perspectives on this field, and here are a few that I will share.

The “debate” continues but seems to be softening. The “debate” is, of course, how 360 feedback should be used: development only and/or for decision making. In our CE Workshop, we actually had participants stand up and stand in corners of the room to indicate their stance on this issue, and, judging from that exercise, there are still many strong proponents of each side of that stance. That said, one of the conclusions the panel seemed to agree upon is that there is some blurring of the distinction between uses and some acknowledgement that 360’s are successfully being used for decision making, and that 360’s are far less likely to create sustainable behavior change without accountability that comes with integration with HR systems.

We need to be sensitive to the demands we place on our leaders/participants. During our panel discussion, Janine Waclawski (who is currently an HR generalist at Pepsi) reminded us of how we typically inundate 360 participants with many data points, beginning with the number of items multiplied by the number of rater groups. (I don’t believe the solution to this problem is reducing the number of items, especially below some arbitrary number like 20 items.)  Later, I had the opportunity to offer commentary on four terrific research papers that had a major theme of how supervisors need to be aware of the perspectives of their raters that may well be caused by their cultural backgrounds.

As someone who is more on the practitioner end of the practitioner-scientist continuum, I tried to once again put myself in the seat of the feedback recipient (where I have been many times) and consider how this research might be put into practice. On one hand, organizations are using leadership competency models and values statements to create a unified message (and culture?) that spans all segments of the company. We can (and should) have debates about how useful and realistic this practice is, but I think most of us agree that the company has a right to define the behaviors that are expected of successful leaders. 360 processes can be a powerful way to define those expectations in behavioral terms, to help leaders become aware of their perceived performance of those behaviors, to help them get better, and to hold leaders accountable for change.

On the other hand, the symposium papers seem to suggest that leader behaviors should be molded from "the bottom up," i.e., by responding to the expectations of followers (raters) that may be attributed to their cultural backgrounds and their views of what an effective leader should be (which may differ from the leader's view and/or the organization's view of effective leadership).  By the way, this "bottom up" approach also applies to the use of importance ratings (which is not a cultural question).

My plea to the panel (perhaps to their dismay) was to at least consider the conundrum of the feedback recipient who is being given this potentially incredibly complex task of not only digesting the basic data that Janine was referring to, but then to fold in the huge amount of information created by having to consider the needs of all the feedback providers. Their research is very interesting and useful in raising our awareness of cultural differences that can affect the effectiveness of our 360 processes. But PLEASE acknowledge the implications for putting all of this to use.

The “test” mentality is being challenged.  I used the panel discussion to offer up one of my current pet peeves, namely to challenge the treatment of 360 Feedback as a “test.”  Both in the workshops and again at the panel, I suggested that applying practices such as randomizing items and using reverse wording to “trick” the raters is not constructive and most likely is contrary to our need to help the raters provide reliable data. I was gratified to receive a smattering of applause when I made that point during the panel.  I am looking forward to hopefully discussing (debating) this stance with the Personnel Testing Council of Metropolitan Washington in a workshop I am doing in June, where I suspect some of the traditional testing people will speak their mind on this topic.

This year's SIOP was well done, once again. I was especially glad to see an ongoing interest in the evolution of the field of 360 feedback, judging from the attendance at these sessions, let alone the fact that the workshop committee identified 360 as a topic worthy of inclusion after more than 10 years since the last one.  360 Feedback is such a complex process, and we are still struggling with the most basic questions, including purpose and use.

©2011 David W. Bracken
