Strategic 360s

Making feedback matter

Posts Tagged ‘marshall goldsmith’

WAIT (Why Am I Talking)



I first came across the WAIT acronym in a Facebook discussion my daughter “liked,” from a blog about parenting. My daughter (and her husband) have two daughters, ages 7 and 5, so commiserating with parents of similar demographics can be useful when there is no instruction guide (other than grandparents, hah). WAIT stands for “Why Am I Talking,” and it was an interesting take on how to interact with young children when (like many/most managers) we want to “be in charge,” “be the expert,” and “have the last word.” And, in the process of doing all those things, of course we are not listening, let alone trying to understand.

I was reminded of WAIT recently when reading a LinkedIn posting by Ted Bauer that pointed me to a piece by Art Petty, who suggests a 10:1 ratio of listening to talking in order to be a more effective manager. A 10:1 ratio is pretty radical! I have more typically seen an 80/20 ratio in the context of good coaching. But why not aim high!

I’ll tell you why: because it is so antithetical to the mental model most of us have when we think of “coach” or even “parent.”  But let’s stop (i.e., stop talking) for a few minutes and think about all the possible benefits of WAIT.  A few of these relate to Marshall Goldsmith’s list of negative behaviors in his great book, What Got You Here Won’t Get You There.

  • We never learn anything when we are talking (LBJ and others…)
  • It diminishes the felt value of others when they are not heard
  • It also diminishes the real, actual value of others when their knowledge is not used. As a GM retiree is famously reported to have said, “For 30 years they paid me for my body. They could have had my mind for free.”
  • Our initial need to talk often causes us to state an opinion or make a decision we regret based on insufficient information or analysis. In Jerome Groopman’s book “How Doctors Think,” he reports that, on average, a physician will interrupt a patient describing her symptoms within eighteen seconds. His study of malpractice leads him to conclude, “Sometimes the key to success is uncertainty,” that is, don’t decide too quickly.
  • We may be angry or upset. We all know from experience that these are not good times to be talking without “counting to 10.”
  • It feeds our need to be “right,” and in our mind we are right and always will be if others don’t tell us we are wrong. We hate being wrong because we have been brought up to be “right” (e.g., get straight A’s). See this great TED talk.
  • It feeds our need to have the last word, regardless of how little value it adds.
  • We actually may not know what we are talking about.

Oh, yeah; and then listen.

WAIT!  (your turn)


Written by David Bracken

February 12, 2016 at 3:47 pm

Just Shut Up and Listen


[tweetmeme source=”anotherangle360″]

I still get the Sunday New York Times in “hard copy” (in addition to the electronic version the other days), partly because my wife and I are addicted to the crosswords. Let me add that I am one of those people who mourn the fadeout of the newspaper, and I often find that browsing the physical paper exposes me to pieces of information I would otherwise miss in the electronic version (whatever form your “browsing” takes, if at all). (I believe, for what it’s worth, that a similar phenomenon is happening in the music world, with the ease of downloading single songs leading to less “browsing” of albums, where other gems are often lurking.)

Back on topic, the Sunday NYT also has a feature in the Business section called “Corner Office” where a business leader is interviewed.  This week it was Francesca Zambello, general and artistic director of the Glimmerglass Festival and artistic director of the Washington National Opera. When asked about leadership lessons she has learned, she says:

When you’re in your 20s and have that leadership gene, the bad thing is that you don’t know when to shut up. You think you know all the answers, but you don’t. What you learn later is when to just listen to everybody else. I’m finding that all those adages about being humble and listening are truer and truer as I get older. Creativity cannot explode if you do not have the ability to step back, take in what everybody else says and then fuse it with your own ideas.

In the parallel universe of my personal life, my daughter Ali sent along an edition of the ABA Journal that references a study of the happiest and unhappiest workers in the US, which cites associate attorney as the unhappiest profession (which, by coincidence, is her husband’s job). The five unhappiest jobs are:

1) Associate attorney

2) Customer service associate

3) Clerk

4) Registered nurse

5) Teacher

The five happiest are:

1) Real estate agent

2) Senior quality assurance engineer

3) Senior sales representative

4) Construction superintendent

5) Senior applications designer

Looking at the unhappiest list for possible themes/commonalities, one is lack of empowerment and probably a similar lack of influence over their work and work environment. (The job of teacher may be less so, and its inclusion on this list is certainly troubling and complicated.) But I suspect that the first four jobs share a common denominator in the way they are managed, one that ties back to Ms. Zambello’s reflections on her early management style, i.e., having all the answers and not taking advantage of the knowledge and creativity of the staff. It also calls to mind the anecdote of the GM retiree who mused, “They paid me for my body. They could have had my mind for free.”

This is certainly not an epiphany for most of us, but more serendipity that two publications this week once again tangentially converged on this topic. I will once again recommend Marshall Goldsmith’s book, “What Got You Here Won’t Get You There,” a compendium of mistakes that leaders make in their careers, including behaviors that might have served them well when starting out but lose their effectiveness as they move up the organization. The classic case is the subject matter expert who gets promoted and assumes that being the “expert” is always the road to success. In Marshall’s book there are 20 of these ineffective, limiting behaviors (some might call them “derailers”), and when we think of the prototypical leader who wants to be the “expert” and doesn’t listen, that pattern potentially touches on multiple behaviors in the list of 20, including:

2. Adding too much value

6. Telling the world how smart we are

10. Failing to give proper recognition

11. Claiming credit we don’t deserve

13. Clinging to the past

16. Not listening

Considering this list as possible motivators for the umbrella behavior of “not listening,” we can see how it might be very challenging to change this behavior if the leader believes (consciously or unconsciously) that one or more of these factors are important to maintain, or (as Marshall also notes) are “just the way I am” and not changeable.

We behaviorists believe that any behavior is changeable, whether a person wants to change or not. What is required is, first, awareness, i.e., that there is a gap between one’s behavior and the desired/required behavior, followed by motivation to change, which may come from within the person but more often requires external motivation, usually in the form of accountability. Awareness and accountability are critical features of a valid 360 feedback process designed to create sustainable behavior change.

Let me add that the “shut up and listen” mantra is a core behavior for coaches as well. This consultant believes that the challenge that most organizations have in morphing managers into effective coaches is also rooted in this core belief that the role of coach is to solve problems for their subordinates, versus listening to fully understand the issue and then help the subordinate “discover” the solution that best works for them and the situation.

This is a serious problem with two major downsides. For one, it is likely a root cause of the “unhappy” job incumbents described above, which in turn leads to multiple negative outcomes for the organization. The other downside is a version of our GM retiree’s lament: the organization is failing to capitalize on a significant resource, namely the individual and collective contributions of its workforce.

There may be no time in our history when involving our young workers has been more critical, which includes listening to their input and empowering them to act. Consider the many reasons this might be so:

  • The pace of change, internally and externally, requires that we have processes that allow us to recognize and react in ways that most likely will diverge from past practices
  • Younger workers bring perspectives on the environment, technology and knowledge that are often hidden from the older generations (that are, by the way, retiring)
  • As the baby boomers do retire en masse, we need to be developing the next generation of leaders.  Another aside, this means allowing them to fail, which is another leadership lesson that Ms. Zambello mentions (remember her?).

Listening is actually a very complex behavior to change, but change begins with increasing awareness of ineffectiveness, and then creating motivation to change by educating leaders on its negative consequences and lost opportunities.

©2013 David W. Bracken

Is Your Mirror Foggy?


As an alumnus of Dartmouth College, I receive the Alumni Magazine, whose current issue contains an interview with the new(ish) president, Jim Kim. A couple of things in the interview caught my attention, including this statement:

The folks in leadership studies at Tuck have said the one thing that is critical for the development of better leaders is self-awareness, the so-called 360-degree analysis. The challenge for us is to structure the kind of education that will lead to the graduation of young people with a clearer sense of what it will take for them to be effective human beings.

Of course, the “360” part is interesting in itself, though I’m not sure what the “so-called” part is all about.

Is self-awareness the most critical of all leadership qualities? My model of leadership behavior change includes awareness, followed by acceptance, as the “keystones” to creating sustainable change. Organizations, too, are in constant flux and in need of change, and they need some way to create awareness. Dashboards are one way organizations become aware of areas in which they are succeeding and failing, and thereby drive change. For the individual leader, the 360 feedback process may be the most powerful dashboard if done correctly, at least on the “how” side of the performance equation (versus the “what”).

Another argument for the importance of awareness came to my attention during the current Republican primary contest. One pundit, in comparing the field of contenders, offered an observation that, except for Rick Santorum, the other players seem to be lacking this sense of self that Dr. Kim alludes to in his quote. One symptom of that lack of self is the constant and repeated use of “Reagan Republican” by almost all the candidates to describe themselves. I even saw a parody of a contest of the candidates as to who could say the name “Reagan” the most times in 10 seconds.

While I’m at it, there was one other quote from the interview that is worth sharing:

It’s fairly well known now that I have a leadership coach, Marshall Goldsmith, who was recently ranked one of the world’s top-10 thought leaders and who also teaches at Tuck. He took me on as a pro bono case. In Marshall’s book, What Got You Here Won’t Get You There, he lists the 20 most common mistakes that CEOs make. Probably the biggest mistake is adding too much value. I didn’t understand that in the beginning, but I sure do now.

You may know that I am a follower of Marshall’s, and the book is one I have reviewed and passed along to others (including my family members). Dr. Kim offers up another important leadership characteristic or, in this case, flaw that plagues leaders as they move up the organization. I consider Marshall’s list of 20 pitfalls to be similar in spirit to the derailers described many years ago by Morgan McCall and associates at the Center for Creative Leadership, though the specific content is different. Both can be useful content for 360 feedback processes.

Is it time to go defog your mirror and test your self-awareness? Oh, and remember that what is in your mirror may be closer than it appears.

©2012 David W. Bracken


What You See Is What You Get


Every month or so I get an invitation/newsletter from Marshall Goldsmith and Patricia Wheeler, and this month’s had a couple of gems in it. Marshall’s entry on life lessons is very much worth reading. But Patricia’s offering particularly struck me, since I have been thinking a lot about leader behavior. As you will see, it also relates directly to the hazards of misdiagnosis, another human flaw that is especially salient for those of us in consulting and coaching, where we are prone to jumping to conclusions too quickly.

Several years ago my mother experienced stomach pains.  Her physician, one of the best specialists in the city, ordered the usual tests and treated her with medication.  The pains continued; she returned to his office and surgery was recommended, which she had.  After discharge the pains recurred, stronger than ever; she was rushed to the emergency room, where it was determined that her physician had initially misdiagnosed her. She had further surgery; unfortunately she was unable to withstand the stress of two surgeries, fell into a coma and died several days later.  Several days after her second surgery, her physician approached me, almost tearfully, with an apology.

“I apologize,” he said, “this is my responsibility.”  He should have done one additional test, he said, requiring sedation and an invasive procedure, but he did not want to impose the pain of that procedure on her, feeling at the time that his diagnosis was correct.  “I am truly sorry and I will never make that mistake again.”  What struck me at the time and continues to stay with me is that this doctor was willing to take the risk of telling the whole difficult truth, and that taking responsibility for the situation was more important to him than the very real possibility of a malpractice suit.  I forgave him, and I believe my mother would have as well.

Real apologies have positive impact that, in most if not all cases, outweigh the risk factors.  Ask yourself, when does an apology feel heartfelt to you? When does it seem empty?  Think of a time when you heard a public or corporate figure apologize and it rang true and think of a time when it didn’t.  What made the difference? Here are a few guidelines:

Is it from the heart or the risk management office?  If your apology reads like corporate legalese, it won’t be effective.

Is it unequivocal?  Too many apologies begin with “I’m sorry, but you were at fault in this too.”  An attempt to provoke the other party into apologizing or accepting fault will fail.

Is it timely?  If you delay your apology, perhaps wishing that the issue would just go away (trust me, it won’t), its effect will diminish proportionately.

Does it acknowledge the injury and address the future?  In other words, now that you know your words or actions have caused injury, what will you do going forward?

While we can’t avoid all errors, missteps and blind spots, we can at least avoid compounding them with empty words, blaming and justification.

Patricia is focusing on a particular behavior, i.e., apologizing. This behavior, like all other behaviors, is modifiable if we are aware of the need to change and motivated to do so.  It may not be easy and you may not be comfortable doing it, but that is no excuse. And, by the way, people really don’t care what is going on inside your head to justify not changing (e.g., “they know that I’m sorry without me saying it”). Making an apology is often difficult, as Patricia points out, and maybe that’s why it can be so striking and memorable when someone does it well.

In his book, “What Got You Here Won’t Get You There,” Marshall makes a similar point about the simple behavior of saying “thank you,” which is a common shortcoming in even the most successful leaders.  Leaders find all sorts of excuses for avoiding even that seemingly easy behavior, including “that’s just not me.” The point is that what you do and what people see (i.e., behaviors) IS who you are.

The good news for us practitioners of 360 Feedback is that observing behaviors is what it is (or should be) all about. In a 360 process, the organization defines the behaviors it expects from its leaders, gives them feedback on how successful they are in doing so, and then (ideally) holds them accountable for changing.

This also means that we go to great lengths to ensure that the content of 360 instruments uses items that describe behaviors, hopefully in clear terms. We need to ensure that we are asking raters to be observers and reporters of behavior, not mind readers or psychologists. We need to be especially wary of items that include adjectives asking the rater to peer inside the ratee’s head, including asking what the ratee “knows” or “is aware of” or “believes,” or even what the leader is “willing” to do.

As a behaviorist, in the end I only care what a leader does and not why (or if) he/she wants to do it. That’s the main reason why I have found personality assessments to be of little interest, with the exception of possibly providing insights into how the coaching relationship might be affected by things like openness to feedback or their preferred style for guidance and learning.

Another piece of good news for us behaviorists came out in a recent article in Personnel Psychology titled, “Trait and Behavioral Theories of Leadership: An Integration and Meta-Analytic Test of Their Relative Validity” (Derue, Nahrgang, Wellman and Humphrey, 2011).  To quote from the abstract, they report:

Leader behaviors tend to explain more variance in leadership effectiveness than leader traits, but results indicate that an integrative model where leader behaviors mediate the relationship between leader traits and effectiveness is warranted.

The last part about mediation suggests that, even when traits do a decent job (statistically) of predicting leader effectiveness, they are “filtered” through leader behaviors. For example, all the intelligence in the world doesn’t do much good if you are still a jerk (or bully, or psychopath, etc.).
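For readers who like to see what mediation means in the numbers, here is a toy sketch with synthetic data (every coefficient below is invented for illustration, not taken from the Derue et al. study). The total trait effect on effectiveness decomposes into a direct effect plus an indirect effect that flows through behavior:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Synthetic data: a leader trait influences effectiveness mostly
# *through* observed behavior (the mediator). Coefficients are invented.
trait = rng.normal(size=n)
behavior = 0.6 * trait + rng.normal(scale=0.8, size=n)
effectiveness = 0.7 * behavior + 0.1 * trait + rng.normal(scale=0.8, size=n)

def slope(x, y):
    """OLS slope of y on a single predictor x (with intercept)."""
    X = np.column_stack([np.ones_like(x), x])
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

total = slope(trait, effectiveness)   # total effect (c)
a = slope(trait, behavior)            # trait -> behavior path (a)

# Direct effect (c'): effectiveness on trait, controlling for behavior
X = np.column_stack([np.ones(n), trait, behavior])
_, direct, b = np.linalg.lstsq(X, effectiveness, rcond=None)[0]

indirect = a * b                      # the mediated (indirect) effect
print(f"total={total:.2f}  direct={direct:.2f}  indirect={indirect:.2f}")
```

In ordinary least squares the identity total = direct + indirect holds exactly, so most of the trait's predictive power here shows up in the indirect (behavioral) path, which is the pattern the meta-analysis describes.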

All of this reinforces the importance of reliably measuring leader behaviors, especially if we believe that the “how” of performance is at least as important as the “what.”


©2011 David W. Bracken


What is the ROI for 360’s?


Tracy Maylett recently started a LinkedIn discussion in the 360 Feedback Surveys group by asking, “Can you show ROI on 360-degree feedback processes?” To date, no one has offered up any examples, and this causes me to reflect on the topic. It will also be part of the discussion Carol Jenkins and I will lead at the Society for Industrial and Organizational Psychology (SIOP) Pre-Conference Workshop on 360 Feedback (April 13 in Chicago).

Here are some thoughts on the challenges in demonstrating ROI with 360 processes:

1) It is almost impossible to assess the value of behavior change. Whether we use actual measurements (e.g., test-retest) or just observer estimations of ratee change, assigning a dollar value is extremely difficult. My experience is that, no matter what methodology you use, the results are often large and cause consumers (e.g., senior management) to question and discount the findings.

2) The targets for change are limited, by design. A commonly accepted best practice for 360’s is to guide participants in using the data to focus on 2-3 behaviors/competencies. If some overall measure of behavior change is used (e.g., the average of all items in the model/questionnaire), then we should expect negligible results, since the vast majority of behaviors have not been addressed in the action planning (development) process.

3) The diversity of behaviors/competencies means they differ in ease of change (e.g., short- vs. long-term change) and in value to the organization. For example, what might be the ROI for significant change (positive or negative) in ethical behavior compared to communication? Each is very important, but with very different implications for measuring ROI.

4) Measurable change depends on the design characteristics of each 360 process. I have suggested in earlier blogs that there are design decisions potentially so powerful as to promote or negate behavior change. One source for that statement is the article by Goldsmith and Morgan called “Leadership Is a Contact Sport.” In that article (which I have also mentioned before), they share results from hundreds of global companies and thousands of leaders that strongly support the conclusion that follow-up with raters may be the single best predictor of observed behavior change.

Dale Rose and I have an article in press with the Journal of Business and Psychology titled, “When does 360-degree Feedback create behavior change?  And would we know it when it does?” One of our major objectives in that article is to challenge blanket statements about the effectiveness of 360 processes since there are so many factors that will directly impact the power of the system to create the desired outcomes. The article covers some of those design factors and the research (or lack thereof) associated with them.

If anyone says, for example, that a 360 process (or a cluster of them, as in a meta-analysis) shows minimal or no impact, my first question would be, “Were the participants required to follow up with their raters?” I would also ask about things like the reliability of the instrument, training of raters, and accountability as a starter list of factors that can result in a failure to create and/or measure behavior change.

Tracy’s question regarding ROI is an excellent one, and we should be held accountable for producing results. That said, we should not be held accountable for ROI when the process has fatal flaws in design that almost certainly will result in failure and even negative ROI.

©2011 David W. Bracken


It’s wonderful, Dave, but…


This is one of my favorite cartoons (I hope I haven’t broken too many laws by using it here; I’m certainly not using it for profit!).  I sometimes use it to ask whether people are more “every problem has a solution” or “every solution has a problem” types. Clearly, Tom’s assistant is the latter.

I thought of this cartoon again this past week during another fun (for me, at least) debate on LinkedIn about the purpose of 360’s, primarily about the old decision making vs. development only debate.

Now, I don’t believe that 360 is comparable to the invention of the light bulb (though there is a metaphor lurking in there somewhere), nor did I invent 360. But, as a leading proponent of using 360 for decision making purposes (under the right conditions), by far the most common retort is something along the lines of, “It’s (360) wonderful, Dave, but using it for decisions distorts the responses when raters know it might affect the ratee.”

Yes, there is some data that suggests that raters report their ratings would be affected if they knew they would penalize the ratee in some way.  And it does make intuitive sense to some degree. But I offer up these counterpoints for your consideration:

  • I don’t believe I have ever read a study (including meta analyses) that even considers, let alone studies, rater training effects, starting with whether it is included as part of the 360 system(s) in question. In my recent webinar (Make Your 360 Matter), I presented what I think is some compelling data from a large sample of leaders on the effects of rater training and scale on 360 rating distributions. (We will discuss this data again at our SIOP Pre-Conference Workshop in April.) In the spirit of “every problem has a solution,” I propose that rater training has the potential to ameliorate leniency errors.
  • There is a flip side to believing that your ratings will affect the ratee in some way, which, of course, is believing that your feedback doesn’t matter. I am not aware of any studies that directly address that question, but there is anecdotal and indirect evidence that this also has negative outcomes. What would you do if you thought your efforts made no difference (including not being read)? Would you even bother to respond? Or take time to read the items? Or offer write-in comments? Where is the evidence that “development only” data is more “valid” than data used for other purposes? It may be different, but different does not always mean better.

The indirect data I have in mind are the studies published by Marshall Goldsmith and associates on the effect of follow-up on reported behavioral change. (One chapter is in The Handbook of Multisource Feedback; another is the article “Leadership Is a Contact Sport.”) The connection I am making here is that a lack of follow-up by the ratee can signal that the feedback does not matter, with the replicated finding that reported behavior change is typically zero or even negative. Conversely, when the feedback does matter (i.e., the ratee follows up with raters), behavior change is almost universally positive (and increases with the amount of follow-up reported).

It’s all too easy to be an “every solution has a problem” person. We all do it. I do it too often. But maybe it would help if we became a little more aware of when we are falling into that mode.  It may sound naïve to propose that “every problem has a solution,” but it seems like a better place to start.

©2010 David W. Bracken


I see you rolling your eyes


This is the second of a series of blog entries that I am using to respond to some questions that were submitted during my recent webinar, “Make Your 360 Matter.”  Some of these questions were ones I got to during the webinar, but would like to expand on my answer and at the same time share the thoughts with others who might not have attended.

I am going to combine two questions to address one important topic:

You went over this Dave, but if you could pick 1-3 things that most 360s do poorly–and could easily do better

Rater training seems so important, but it’s hard enough to get ANY training out there.  Are there streamlined ways to get the key points conveyed to raters?

For ten years or more, I (and some coauthors) have contended that rater training might well be the most important and neglected practice in 360 systems. My suspicions are that at least part of the reason for this lies in the tone of the second question, i.e., the “rolling of the eyes” every time the topic of rater training comes up due to preconceptions of what it involves.

During the webinar, I presented data on two 360 practices that appear to have major effects on the ability to create and measure behavior change. One of those design features is follow-up with raters, with data from the article “Leadership Is a Contact Sport.”

The second topic was actually a combination of practices, i.e., choice of rating scale combined with rater training. I presented some data that strongly indicates the potential power of those factors in affecting the distribution of ratings, especially in reducing leniency error.
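As a toy illustration of what leniency error looks like in a rating distribution (the numbers below are simulated, not the webinar sample), compare a hypothetical untrained rater group, whose scores pile up at the top of a 5-point scale, with a trained one:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 5-point 360 ratings: 40 raters x 10 items per group.
# The "untrained" group is simulated as lenient; these numbers are
# invented for illustration only.
untrained = np.clip(rng.normal(4.4, 0.5, size=(40, 10)), 1, 5)
trained = np.clip(rng.normal(3.6, 0.8, size=(40, 10)), 1, 5)

def leniency_profile(ratings):
    """Return the mean rating and the share of near-top-box responses."""
    return ratings.mean(), (ratings >= 4.5).mean()

for label, group in [("untrained", untrained), ("trained", trained)]:
    mean, top = leniency_profile(group)
    print(f"{label}: mean={mean:.2f}, top-box share={top:.0%}")
```

The point of a summary like this is simply that leniency shows up as a higher mean and a larger top-box share, both of which compress the scale and make real differences between leaders harder to see.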

If we think about it a little, every 360 process already (hopefully) has some sort of rater training, usually in the form of directions on how to complete the questionnaire. In its simplest form, that might consist only of basic directions on how to physically complete the survey (e.g., one answer per question, must answer every question). I find it is becoming increasingly common for instructions to go a little further by providing additional guidance to the rater, such as:

  • Think of the leader’s behavior during the past year
  • Do not give excessive weight to recent events or observations
  • Use the full rating scale as appropriate. No leader is so good as to deserve all “5’s” or so bad as to deserve all “1’s”

We also often give a form of “training” in regard to write in comments:

  • Be specific about what you observed and/or what you suggest
  • Be constructive
  • Limit your comments to job-related behaviors
  • Do not identify yourself, unless you intentionally desire to do so

In recent blogs and the webinar as well, I have expressed the view that 360 surveys are not “tests.” In the classic view of tests, we strive to identify individual differences that will help us differentiate subjects in order to predict future behavior, such as success on the job. In 360’s, we have no desire or need to measure individual differences in the raters since they are not the focus of our measurement efforts. Instead, we use training to minimize or remove individual differences in raters that we define as rater error.

So how do we deliver rater training? I have seen it come in many forms, including classroom training. But its most common design is a set of slides that the rater must, at a minimum, review prior to completing the first questionnaire. Once they have done that, they are “certified” and do not have to participate in any other training for that administration cycle.

Some of the typical content areas for rater training can include:

  • Purpose of the 360 process
  • How the feedback will be used
  • How anonymity is protected
  • Source of the behavioral items (e.g., values, competency model)
  • Rating scale format/content
  • When to respond “Not Observed” (Don’t Know)
  • Time frame (e.g., 1 year of behavior)
  • Types of rating errors and how to avoid (e.g., leniency, severity, recency, halo)
  • Case studies/examples (e.g., how to use rating scale)

There is a lot of variability in both amount of content and the pace that raters go through the slides, so the time required is hard to predict. Some processes have some sort of test at the end to ensure some level of attention.

Don’t discount using group training if the setting allows it. I once did a 360 at a hospital and was able to convene groups of nurses for 20-30 minute sessions, for example. Based on their questions, I know that it improved the quality of the feedback by correcting misconceptions and/or misinformation.

Another observation I made in the webinar is that rater training is a best practice in performance management/appraisal processes and a guard against legal challenge. Since 360’s closely resemble performance appraisals, this would seem applicable to them as well.

So please stop rolling your eyes and consider the potential benefit, with little cost, in implementing some form of rater training. It can make a difference.

Please share any experience you have had with rater training, good or bad!!

©2010 David W. Bracken