Strategic 360s

Making feedback matter


Strategic 360 Forum



I am organizing a one-day event in New York City on July 24 for organizations that are using 360 Feedback processes for purposes beyond leadership development alone. There is no cost. Any organization wishing to be considered for attendance should contact me. The representative must be a senior leader with responsibility for both the implementation and the strategic use of the process in the company.

Strategic 360 Forum

July 24, 2013

Description

A one-day meeting, coordinated by David Bracken (OrgVitality), of organizations using 360 Assessments for strategic purposes, including support of human resource processes (e.g., talent management, staffing, performance management, succession planning, high-potential programs). Attendees will be senior leaders with responsibility for both process implementation and strategic applications. Larger organizations (5,000+ employees) will be given priority consideration for inclusion.

If there is sufficient interest and support from the participating companies, the Forum will continue to meet on a semi-annual basis.

Location and Date: July 24, 2013, at KPMG, 345 Park Avenue, New York, NY 10154-0102

Tentative Participant Organizations:  Guardian, Bank of America, GlaxoSmithKline, KPMG, Starwood, PepsiCo, Federal Reserve NY, JP Morgan Chase

Benefits for Participants

  • Learn best practices in the use of 360 Assessments in progressive organizations
  • Discover ways that 360 Assessments support human resource initiatives, including problems and solutions
  • Build professional networks for future collaboration
  • Create opportunities for future professional contributions, including the 2014 SIOP Conference

NOTE: The specific process and agenda will evolve as the organizers interact with the participants and learn their expectations and how they can best contribute to the event.

Cost

There is no cost for participants beyond their active contribution. Lunch is provided.

Content

The core content will consist of brief presentations by select attendees. For attendees interested in participating in a submission for the 2014 SIOP Conference, the format and content of their presentations will feed the proposal submission.

Presentations will be followed by a group discussion where questions can be asked of the presenter and alternative viewpoints shared.

Depending on the programs and interests of the participating organizations, we will explore selected themes of high relevance to the use of 360 Feedback. These topics may include:

  • Performance Management
  • Succession Planning
  • High Potential Identification and Development
  • Staffing/Promotions
  • Coaching Programs
  • Sustainability

The Forum meeting will also include a presentation by Bracken and Church based on their People & Strategy (HRPS) article on the use of 360 Assessments in support of (or replacing) performance management processes, followed by discussion.

Outputs

1) As noted above, the content will be the basis for a proposal for inclusion in the 2014 SIOP Conference.

2) The presentations and discussions will be organized and reported to participants.

Contact Information

Interested organizations should email me a brief description of the 360 process(es) they wish to highlight or share (purpose, size, longevity, innovations), along with the representative’s role and responsibilities.

David W. Bracken, Ph.D.

Vice President, Leadership Development and Assessment

OrgVitality, LLC

402-617-5152 (cell)

david.bracken@orgvitality.com

The Manager-Coach



A recent posting in the 360 Degree Feedback group on LinkedIn posed this question:

I have been interested in more fine-grained detail about what it is that manager-coaches actually do that leads to perceptions on the part of the coachee that their managers are effective and supportive coaches. I see a lot of speculation and ‘armchair’ theorizing, but I cannot find specific, rigorous empirical research. Have I overlooked some references?

I have been similarly interested in this topic, largely due to my bias that 360 Feedback is most effective when the manager (boss) is involved in the use of the results, contrary to some practitioners who advise against it.

To that end, the work of Dr. Brodie Gregory caught my eye, particularly the instrument she developed and researched as part of her doctoral dissertation under the direction of Dr. Paul Levy at the University of Akron. Brodie has made a major contribution in identifying four constructs that she believes define the effective manager-coach:

  • Genuineness of the Relationship
  • Effective Communication
  • Comfort with the Relationship
  • Facilitating Development

Dr. Gregory’s research, though, may not fully answer the LinkedIn questioner, since she does not yet have performance data linking managers’ effectiveness to these coaching constructs.

I am posting two publications by Dr. Gregory in order to provide easy access to those of you who are interested in this topic:

Employee coaching relationships: enhancing construct clarity and measurement

It’s Not Me, It’s You: A Multilevel Examination of Variables That Impact Employee Coaching Relationships

I have also developed a workshop called The ManagerCoach©, designed to be delivered for organizations that wish to make their managers better coaches. The workshop integrates a feedback instrument that includes, with Dr. Gregory’s permission (the instrument is copyrighted), the item content she has developed along with some other constructs.

For your information, I will be giving a webinar that describes the concept of The ManagerCoach and introduces the content of the workshop. The next session is September 6 at 12:30 EDT. Let me know if you would like to register (it is free), or visit http://www.orgvitality.com.

I consistently see, when organizations have the nerve to ask, that the lowest scores managers receive on both employee surveys and 360 Feedback relate to employee development and, more specifically, coaching ability. This is a fixable and measurable area of leadership development.

©2012 David W. Bracken

Can you change a culture?



We at OrgVitality have a view of the “vital” organization that includes the concepts of ambidexterity, agility, and resilience. These concepts can be operationalized to promote the creation of a culture that makes those characteristics a way of life in the organization.

I recently found an article (Lengnick-Hall, Beck, and Lengnick-Hall, 2010) titled “Developing a capacity for organizational resilience through strategic human resource management.” Their message, that culture can be created and sustained through human resource processes, is a powerful one.

These authors define resilience as:

“…a firm’s ability to effectively absorb, develop situation-specific responses to, and ultimately engage in transformative activities to capitalize on disruptive surprises that potentially threaten organization survival.”  They go on to propose that resilience should be created through both individual knowledge, skills, and abilities and organizational routines and processes.

This is good stuff, but I think they have missed an opportunity to talk about creating a culture through behavior change. Culture has many definitions, but a couple are consistent with this view of behavior as a key factor. I have been drawn to an observable and measurable definition of culture offered by Bossidy and Charan (2002) in their seminal book, Execution: The Discipline of Getting Things Done:

“The culture of a company is the behavior of its leaders. Leaders get the behavior they exhibit and tolerate.”

While many traditionalists will argue with such a “superficial” treatment of culture, it was foreshadowed by Kotter and Heskett (1992) who refined their definition of culture with this statement: “…culture represents the behavior patterns or style of an organization that new employees are automatically encouraged to follow by their fellow employees.” (p. 4)

This definition is too limiting, though, in that it does not directly acknowledge that the “fellow employees” with the most impact on creating the culture are the leaders of the organization.

Let’s return to the resilience article. I looked for statements of behaviors that might be useful for creating a culture of resilience, particularly defined in terms of leader behavior that could easily be fodder for a 360 or upward feedback process. Fortunately for me, there is a section called, “Behavioral elements of organizational resilience.” Their language is somewhat academic (e.g., “nonconforming strategic repertoires”), but here are some examples of behaviors that I propose support their conceptualization of resilience:

  • Encourages new solutions to problems
  • Finds new strategies that are different from the past and industry norms
  • Takes the initiative and moves quickly to overcome challenges
  • Ensures that new and creative solutions are consistent with organizational goals and values
  • Challenges the status quo
  • Encourages the discarding of obsolete information and practices
  • Recognizes and rewards behaviors that demonstrate flexibility and resourcefulness

They list a whole raft of HR policies, principles, and practices that can support the development of resilience, including after-action reviews, open architecture, broad job descriptions, employee suggestions, and cross-departmental task forces. They reference a need for performance reviews (“results-based appraisals”) that encourage the right activities.

But nowhere is 360 feedback mentioned as a potentially powerful tool to reinforce and create culture change. Here are a few ways that 360 processes can be integral parts of a culture change initiative:

  • Defines the construct (e.g., resilience) in behavioral terms
  • Communicates the construct as an organizational priority (i.e., is being measured)
  • Potentially communicates to all employees (raters, ratees) on a repeated basis
  • Creates a metric for tracking progress over time
  • Creates a metric for identifying individual, team, and organizational gaps in performance
  • Creates accountability for behavior consistent with organizational needs
  • Supports aligned HR practices when integrated with other HR systems (e.g., development, staffing, succession planning, performance reviews, high potential development)

This list makes some assumptions about the design and implementation of 360 processes that support culture change. That is such a large topic that it would require an entire book. Stay tuned for that.

I am amazed and disappointed that a major treatise on what is, in effect, culture change would not even mention 360 feedback as a supporting HR practice worth considering. It makes me wonder why that is.

References

Bossidy, L, and Charan, R. (2002). Execution: The Discipline of Getting Things Done. New York: Crown Business.

Kotter, J.P., and Heskett, J.L. (1992). Corporate Culture and Performance. New York: Free Press.

Lengnick-Hall, C.A., Beck, T.E., and Lengnick-Hall, M.L. (2010). Developing a capacity for organizational resilience through strategic human resource management. Human Resource Management Review, doi:10.1016/j.hrmr.2010.07.001.

©2011 David W. Bracken

Who is the customer? Take Two



In an earlier blog, I asked the question, “Who is the customer in a 360 process?”  The particular focus of that discussion was the length of 360 questionnaires.

I have recently been exploring the websites of various 360 Feedback providers to see what products and services are offered and how they are positioned. I was surprised by how many vendors offer processing services without consulting services, which I think relates back to the potential problems with computer-driven solutions covered a couple of posts ago.

What I would like to briefly address, stemming from this search exercise, is report formats and the design decisions behind them. One of the many decisions in designing a report is whether to show the actual frequency of responses. In my search of 360 websites, the report samples provided more often than not showed the mean score for each rater group (e.g., self, boss, direct reports, peers, customers) but not how the average was derived (i.e., the rating distribution).

From experience, I know where this decision comes from. Typically, the rationale is that showing the frequencies will draw attention to outliers (a single rating, usually at the low end of the scale) and cause problems if the ratee misuses that information. Misuse can come in the form of making assumptions about who gave the rating and/or exacting some form of retribution on the supposed outlier.

These things do happen. My question is whether the best solution to this problem is to deny this potentially useful data to the ratee and other consumers of the report, such as their manager and possibly a coach.

When discussing the question of survey length, I proposed that short surveys (fewer than 25 items, for example) can be a sign that the rater is a more important “customer” of a 360 process than the ratee. The abbreviated survey supposedly makes the task easier for the raters, but it denies the ratee useful data on a broader range of behaviors that may apply to his/her situation.

Similarly, not showing frequencies identifies the rater as more important than the ratee: withholding them supposedly protects the rater from potential abuse. On the other hand, providing that information to ratees can be extremely useful in understanding the degree of consensus among raters. (Some reports provide an index of rater agreement in lieu of rating distributions, but I have consistently found those to be almost useless and frequently misunderstood; see the sketch below.)
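For readers unfamiliar with what such an agreement index looks like, here is a minimal sketch of the most common one, rwg (James, Demaree, & Wolf, 1984), computed on made-up ratings. It compares the observed variance of the ratings to the variance expected if raters answered uniformly at random across the scale.

```python
def rwg(ratings: list[float], scale_points: int) -> float:
    """Within-group agreement index (James, Demaree, & Wolf, 1984) for one item."""
    n = len(ratings)
    mean = sum(ratings) / n
    observed_var = sum((x - mean) ** 2 for x in ratings) / (n - 1)
    expected_var = (scale_points ** 2 - 1) / 12  # uniform-response variance
    return 1 - observed_var / expected_var

# Hypothetical ratings on a 5-point scale:
print(round(rwg([4, 4, 5, 4, 1], 5), 2))  # -0.15: one low outlier
print(round(rwg([3, 3, 4, 3, 3], 5), 2))  # 0.90: high agreement
```

Note that the first value falls outside the index’s nominal 0-to-1 range: the number signals that something is off, but only the raw distribution would show that four raters agreed and one did not, which is exactly why such indices are so easy to misread.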

Distributions can also help the ratee and other users see how outliers can have a major impact on mean scores, especially when the N is small (which it often is with 360s). I have also found it useful to be able to see cases where there is one outlier on almost every question. When possible, I have asked the data processor to verify that the outlier was the same person (without identifying who it was), and I have informed the ratee and his/her manager that the scores were affected by one person who abused their role as a feedback provider. I have also provided counsel on how to use that information, including not making assumptions as to who the outlier is and not attempting to discover who it was.
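To make the arithmetic concrete, here is a minimal sketch with hypothetical numbers (not from any client data): first, how far a single low outlier moves an item mean when N is small; second, the kind of identity-blind check a data processor could run to verify that the low rating on every item comes from the same person. All names and values are illustrative.

```python
# One item, N = 5 raters on a 5-point scale, with one low outlier.
ratings = [5, 4, 5, 4, 1]
mean_with = sum(ratings) / len(ratings)                 # 3.8
mean_without = sum(ratings[:-1]) / (len(ratings) - 1)   # 4.5
print(f"mean with outlier: {mean_with:.2f}, without: {mean_without:.2f}")

# Processor-side check (identities never shown to the ratee):
# rows = raters, columns = items; is the same rater lowest on every item?
matrix = [
    [5, 4, 5, 5],   # rater A
    [4, 5, 4, 5],   # rater B
    [1, 2, 1, 1],   # rater C: consistently lowest
]
lowest_per_item = [
    min(range(len(matrix)), key=lambda rater: matrix[rater][item])
    for item in range(len(matrix[0]))
]
print("single consistent low rater:", len(set(lowest_per_item)) == 1)
```

On a 5-point scale, one outlier among five raters moved this item mean by 0.7, enough to change how the item reads in a report.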

With one client whose reports included rating distributions, one manager’s report showed exactly this pattern, with one outlier on every item. (I have told this story before, but in a different context.) Since he met with his raters (a best practice) and shared his report (another best practice), he felt compelled to mention that apparently someone had some problems with his management style, and that he would appreciate it if that person, whoever it was, would come talk with him sometime. He left it at that. Sure enough, a member of the team did come to see him and, with embarrassment, confessed that he had accidentally filled out the survey backwards (he had misread the scale). Think how this helped the manager avoid making assumptions about the cause of his feedback results. Without the frequency detail, he (and his boss/coach) would never have known how the averages had been artificially affected. It is also one more reason why 360 results should be used with judgment, not just treated as a “score.”

We do need to protect raters from being abused. We also need to help them feel safe from identification so that they will continue to be honest in their responses. One way to approach that is to ensure that managers and coaches reinforce to the ratees the proper ways to read, interpret and use the feedback. There should also be processes in place to identify ratees who do behave improperly and to hold them accountable for their actions.

Ratees should be the most important “customer” in 360 processes. They need to understand how their raters responded, and on a sufficient number of items to apply to their situation. Design and implementation decisions that treat raters as more important than ratees are misguided.

©2011 David W. Bracken

Making Mistakes Faster



The primary purpose of this brief blog entry is to bring to your attention a new article by Dale Rose, Andrew English, and Christine Thomas in The Industrial/Organizational Psychologist (TIP). I assume that only a minority of readers of this blog receive TIP, and that those who do may not have had a chance to read the article. (For starters, the title does not immediately signal that the majority of the content is about 360 Feedback.)

The article can be accessed at http://www.siop.org/tip/jan11/04rose.aspx.

As you will see, Dale and colleagues focus primarily on how technology has affected 360 Feedback processes, for good and bad. This should be required reading for practitioners in this field.

They reference a discussion Dale and I had on this blog about the “silly” rating format in which raters rate multiple ratees at the same time using a kind of spreadsheet layout. They are correct that there is no research we are aware of on the effects of rating formats like this on the quality of ratings and user reactions (including response rates, for example). We won’t rehash the debate here; suffice it to say that it is one area where Dale and I disagree.

Other than that, I endorse his viewpoints about the pitfalls of technology. I recall when computers first became available to support our research. As we all struggled to use the technology effectively, I remember saying that computers allow us to make mistakes even faster.

I will use my next blog to talk about, “When Computers Go Too Far,” which builds on some of Dale’s observations. Hope you will tune in!

©2011 David W. Bracken

The “You” in Useless?



I received an interesting comment on one of my blog entries via the OrgVitality LinkedIn discussion site:

David, I have been through the 360 process a few times and have always come away with a few ego-stroking facts and a few interesting but mostly unusable ones that point out need for improvement. The tools and methods were largely suited to quantitative research but the sample size too small and too varied for any kind of generalization.
So my general take-home from 360 was “interesting but useless”.

I am in total agreement that too many 360 processes are “interesting but useless.” I sometimes put those in the category of “parlor games,” which might be described the same way.

I do not personally know this author, but I do appreciate his perspectives. Interestingly, my most recent blog entry was about treating problems (such as 360s being “useless”) as opportunities for exploring solutions. In fact, this person offered his own solution in his comment, yet still feels that overall the process is useless.

So let’s consider some factors that might cause a 360 to be perceived as “useless,” and some possible solutions:

Problem: Feedback isn’t relevant

Solutions: Use a custom-designed instrument derived from organizational priorities (e.g., strategies, leadership competency model, values). Keep it short (no more than 50 items). Have follow-up mini-surveys that cover development priorities.

Problem: Feedback isn’t reliable/credible

Solutions: Have the ratee pick raters, with manager approval. Have a sufficient number of raters to create reliable data, including all direct reports. (The sketch after this list shows one way to think about “sufficient.”)

Problem: Feedback isn’t a priority

Solutions: Gain and communicate senior leadership support (and participation). Integrate feedback into HR processes (e.g., performance management, leadership development, succession planning). Conduct the process on a regular basis, like other HR processes. Hold participants and their supervisors accountable.
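On the question of how many raters are “sufficient,” one way to reason about it is the Spearman-Brown prophecy formula from classical test theory, which estimates how the reliability of an averaged rating grows with the number of (assumed parallel) raters. Here is a minimal sketch; the single-rater reliability of .30 is an assumed value for illustration, not a quoted research finding.

```python
def averaged_reliability(k: int, single_rater_r: float) -> float:
    """Spearman-Brown prophecy: reliability of the mean of k parallel raters."""
    return k * single_rater_r / (1 + (k - 1) * single_rater_r)

# Assumed single-rater reliability of .30, purely for illustration.
for k in (1, 3, 5, 10):
    print(f"{k:2d} raters -> reliability of the average: "
          f"{averaged_reliability(k, 0.30):.2f}")
# 1 -> 0.30, 3 -> 0.56, 5 -> 0.68, 10 -> 0.81
```

Under these assumptions, going from one rater to five roughly doubles the reliability of the averaged score, which is one argument for including all direct reports rather than a hand-picked pair.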

One of the more difficult problems to solve is when awareness isn’t followed by acceptance. For some people, feedback is just wasted effort. Some people don’t want feedback (see our friend, the “hoss”). Some people don’t want to change and don’t see a need. If the system tolerates that attitude, then 360 probably is useless. But whether the “you” in useless is yourself or the organization, there are ways to solve that problem.

Any other observations??

©2010 David W. Bracken

It’s wonderful, Dave, but…



This is one of my favorite cartoons (I hope I haven’t broken too many laws by using it here; I’m certainly not using it for profit!). I sometimes use it to ask whether people are more of an “every problem has a solution” type or an “every solution has a problem” type. Clearly, Tom’s assistant is the latter.

I thought of this cartoon again this past week during another fun (for me, at least) debate on LinkedIn about the purpose of 360s, primarily the old “decision making vs. development only” debate.

Now, I don’t believe that 360 is comparable to the invention of the light bulb (though there is a metaphor lurking in there somewhere), nor did I invent 360. But, as a leading proponent of using 360 for decision-making purposes (under the right conditions), I find that by far the most common retort is something along the lines of, “It’s (360) wonderful, Dave, but using it for decisions distorts the responses when raters know it might affect the ratee.”

Yes, there is some data suggesting that raters report their ratings would change if they knew the results might penalize the ratee in some way. And it does make intuitive sense to some degree. But I offer these counterpoints for your consideration:

  • I don’t believe I have ever read a study (including meta-analyses) that even considers, let alone studies, rater training effects, starting with whether rater training is included as part of the 360 system(s) in question. In my recent webinar (Make Your 360 Matter), I presented what I think is compelling data from a large sample of leaders on the effects of rater training and rating scale on 360 rating distributions. (We will discuss this data again at our SIOP Pre-Conference Workshop in April.) In the spirit of “every problem has a solution,” I propose that rater training has the potential to ameliorate leniency errors.
  • There is a flip side to believing that your ratings will affect the ratee in some way, which, of course, is believing that your feedback doesn’t matter. I am not aware of any studies that directly address that question, but there is anecdotal and indirect evidence that this belief also has negative outcomes. What would you do if you thought your efforts made no difference (including never being read)? Would you even bother to respond? Or take time to read the items? Or offer write-in comments? Where is the evidence that “development only” data is more “valid” than data used for other purposes? It may be different, but different does not always mean better.

The indirect data I have in mind are the studies published by Marshall Goldsmith and associates on the effect of follow-up on reported behavioral change. (One chapter is in The Handbook of MultiSource Feedback; another article is “Leadership is a Contact Sport,” which you can find at marshallgoldsmith.com.) The connection I am making here is that a lack of follow-up by the ratee can signal that the feedback does not matter, with the replicated finding that reported behavior change in such cases is typically zero or even negative. Conversely, when the feedback does matter (i.e., the ratee follows up with raters), behavior change is almost universally positive (and increases with the amount of follow-up reported).

It’s all too easy to be an “every solution has a problem” person. We all do it; I do it too often. But maybe it would help if we became a little more aware of when we are falling into that mode. It may sound naïve to propose that “every problem has a solution,” but it seems like a better place to start.

©2010 David W. Bracken