Strategic 360s

Making feedback matter

Author Archive

Is Your Online Class Broken?!?

I have been teaching in an online environment for four years, so the recent forced mass migration of traditional learning programs to online environments created an opportunity to help colleagues think through the resulting challenges and opportunities.  For example, colleagues from the Society for Industrial and Organizational Psychology (SIOP) created a resource document, the “Online Teaching Survival Guide,” as just that: a guide to transitioning to online teaching.

I participated in a Zoom session organized by SIOP with similar objectives and used the ALAMO performance model to organize my advice to instructors in transition. ALAMO is a diagnostic tool to guide the systematic analysis of suboptimal performance that can be used at the individual, team, and organizational levels; see some of my prior posts and a recent book chapter (Bracken & Rotolo, 2019) where it has been used.

ALAMO is an acronym where:

Performance = Alignment x (Ability x Motivation x Opportunity)

In the current application, let’s assume our target audience is university-level instructors, both undergraduate and graduate level. I am becoming painfully aware that this covers a very wide range of teaching environments, starting with class sizes that can vary from a handful (e.g., graduate seminars) to hundreds, with a corresponding array of individual differences in areas such as Ability and Motivation even in the traditional setting.  So the discussion and corresponding advice need to be broad enough to apply to a very diverse audience.

What is “Performance” in this application?  Let’s operationally define it as creating a learning opportunity of similar quality to that offered in the traditional environment.

Let’s use the model to build a starter list of factors that may be barriers to achieving that goal of optimal “Performance” in the transition from traditional to online teaching.

Alignment (Knowing What to Do, The “Rules”)

  • Are communications getting to the student?
  • Does the student understand expectations?
  • What are the changes to deadlines? Grading? Attendance?
  • How does the student reach you?

Ability

  • Does the student know how to access the online environment?
  • Does the student know how to use the learning software?

Motivation

  • How will the instructor maintain ties between learning and outcomes (grades)?
  • What will be the attendance requirements for synchronous learning events (meetings, lectures)?
  • How will accountability be established for asynchronous learning?
  • What will the policies be for handling individual circumstances?

Opportunity (Having the Resources, Confidence)

  • What if the student doesn’t have a personal computer?
  • What if there is no internet access?
  • How will the student notify the instructor if there are circumstances beyond their control that prevent participation?
  • What if the student is so stressed as to believe they cannot succeed?

There may be a bit of a “duh” reaction to the ALAMO model.  The objective of using an acronym is a) to make it easy to remember, and b) to ensure that all the possible causal factors are considered simultaneously.  That is why the elements of the model are multiplicative; any one of them is powerful enough to drag Performance to zero by itself.  In fact, Alignment can actually be negative and cause performance to draw resources away from the system.

It’s as simple as that. If things aren’t going as well as hoped (in any situation), we need to consider all the possible causes, and, in this case, ideally anticipate them before they do drag the system to zero (or worse).
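
To make the multiplicative logic concrete, here is a minimal sketch in Python. It is purely illustrative; the 0-to-1 scale for each factor and the function name are my own assumptions, not part of the ALAMO model itself.

# Minimal sketch of the ALAMO model described above.
# Assumption: each factor is scored 0 to 1, with Alignment allowed to go negative.
def alamo_performance(alignment, ability, motivation, opportunity):
    """Performance = Alignment x (Ability x Motivation x Opportunity)."""
    return alignment * (ability * motivation * opportunity)

# Any single factor at zero drags Performance to zero by itself.
print(alamo_performance(1.0, 1.0, 0.0, 1.0))   # 0.0: no Motivation, no Performance

# Negative Alignment yields negative Performance, i.e., effort that
# draws resources away from the system.
print(alamo_performance(-0.5, 0.9, 0.8, 0.7))  # approximately -0.252

The point is simply that the factors cannot compensate for one another: strong Ability, Motivation, and Opportunity cannot offset an Alignment that has gone to zero (or below).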

 

Bracken, D. W., & Rotolo, C. T. (2019). Can we improve rater performance? In A. H. Church, D. W. Bracken, J. W. Fleenor, & D. S. Rose (Eds.), Handbook of strategic 360 feedback. Oxford University Press.

©David W. Bracken 2020

 

Written by David Bracken

April 6, 2020 at 12:06 pm

The Feedback Reality

Please visit this site to read a response by my colleagues Dale Rose, John Fleenor, Allan Church, and me to the recent HBR article regarding negative feedback. Comments (feedback) of any ilk are always welcome!

Written by David Bracken

July 20, 2019 at 1:52 pm

Here is Some Unfavorable Feedback

My associates and I (Omeniho, Bracken, Mendelson, & Kuchinka, 2019) are presenting a poster at the Annual Conference of the Society for Industrial and Organizational Psychology (SIOP) on April 6 based upon Chioma’s recently completed dissertation.  At the same time, Buckingham and Goodall (2019) have published an article in Harvard Business Review (cover topic, no less) on the perils of negative/unfavorable feedback, with no real data to support their case.  Must be nice.

In contrast, our paper/poster reinforces the value of feedback, both Favorable AND Unfavorable, from BOTH supervisors and coworkers, using work engagement as the outcome measure.

I offer an addendum regarding another benefit of collecting balanced feedback on all organizational leaders: identifying those who are not behaving in concert with organizational values and are creating a toxic environment where inappropriate behaviors are tolerated and, thereby, reinforced.

Reference: Buckingham, M., & Goodall, A. (2019). The feedback fallacy. Harvard Business Review, March-April.

Omeniho et al 2019

What’s Your “Way”?

I had unexpected knee surgery recently and received a “Get Well” gift from my friends and colleagues, Laurie and John Slifka, in the form of the book, The Cubs Way (by Tom Verducci), which was especially meaningful coming from die hard St. Louis Cardinal fans.  The book traces the genesis of their 2016 Championship season, using the World Series as the backdrop.

If you are a baseball fan, you know who Theo Epstein is.  He became the youngest General Manager in baseball when he took that position with the Boston Red Sox at age 28, and took them to their first championship in 86 years, a drought exceeded only by the Cubs.  After winning another championship, he came to Chicago to try to bring the same magic to the North Side’s “Lovable Losers,” the Cubs.

Epstein brought with him some staff from Boston and kept some staff from the Cubs, and gracefully integrated them into a team by creating a spirit of collaboration and a focus on winning.  Listening to their input, he created a 259-page manual called “The Cubs Way.” It covered every aspect of behavior on and off the field, from the top of the hierarchy to the bat boys.

I am writing this blog piece on this particular topic because of several themes I have been pursuing in my writing and presentations, such as how to create a culture, the role that trust plays in that process, and how trust must be established at the level of the supervisor-subordinate relationship, where feedback is difficult and sparse.

Trust, Respect and Culture

I believe that trust and respect are created between people when honest feedback is given, both favorable and unfavorable.  This is in direct contrast to the “strengths only” movement that has gained much too much popularity.  I believe that if you respect a person, you are honest with them.

So I had to put down the book and take up the keyboard when Verducci relates parts of Epstein’s philosophy that became a key part of “The Cubs Way”:

“For years baseball teams rarely shared evaluations about players with the players themselves… It occurred to Epstein that the first time a team truly tells a player he’s not good enough is when it’s too late – when it releases him. It sounded absurd to him that a team wouldn’t tell a player about his strengths and weaknesses… It (a player development plan) does really create a great connection with the player and helps him develop himself… Epstein wanted a culture in which the players could trust the front office. And the way to help build that trust was to develop an open and honest personal connection.” (pp. 104-105).

For fun, they dug out an old scouting report on one of the coaches, and the report said that he was slow at turning double plays.  The coach was angry; “Why didn’t anyone tell me I needed to work on my turns?… I would have gotten to the big leagues so much quicker!”

Unfavorable Feedback is Better than None

I just completed chairing a dissertation that confirmed what most research says, i.e., that the most engaged employees are those who get both favorable and unfavorable feedback, and the least engaged are those who get neither.  Employees who get mostly unfavorable feedback are more engaged than those who get neither, and about as engaged as people who get mostly favorable feedback.

This philosophy is a core part of the culture the Cubs have built, “The Cubs Way.”  Your organization should have a “Way” as well.  When a Cubs employee does something exceptional, they yell out, “That Cub!!!”  And the example is set by the leaders; their behavior sets the culture.

What is “Your Way?”

Written by David Bracken

November 5, 2018 at 9:20 pm

Manager-Employee Feedback and Development: Why is it SO Hard?

The current climate surrounding performance appraisals leans toward abandoning the administrative exercise we have all come to despise and, instead, replacing it with a feedback culture of continuous exchanges between manager and direct report. The solution is not new, so why has it not been implemented in more organizations?

Access the article here:  Manager-Employee Feedback.

 

 

Strategic New Year!!

2018 will be a seminal year for Strategic 360 Degree Feedback for several reasons.  To refresh your collective memories, in a previous post (https://wordpress.com/post/dwbracken.wordpress.com/656) I defined it as having these characteristics:

  • The content must be derived from the organization’s strategy and values, which are unique to that organization. The values can be explicit (the ones that hang on the wall) or implicit (what some people call “culture”). To me, “strategic” and “off-the-shelf” is an oxymoron and the two words cannot be used in the same sentence (though I just did).
  • Participation must be inclusive, i.e., a census of the leaders/managers in the organizational unit (e.g., total company, division, location, function, level). I say “leaders/managers” because a true 360 requires that subordinates are a rater group. One reason for this requirement is that I (and many others) believe 360’s, under the right circumstances, can be used to make personnel decisions and that usually requires comparing individuals, which, in turn, requires that everyone have available the same data. This requirement also enables us to use Strategic 360’s to create organizational change, as in “large scale change occurs when a lot of people change just a little.”
  • The process must be designed and implemented in such a way that the results are sufficiently reliable (we have already established content validity in requirement #1) that we can use them to make decisions about the leaders (as in #4). This is not an easy goal to achieve, even though benchmark studies continue to indicate that 360’s are the most commonly used form of assessment in both public and private sectors.
  • The results of Strategic 360’s are integrated with important talent management and development processes, such as leadership development and training, performance management, staffing (internal movement), succession planning, and high potential processes. Research indicates that properly implemented 360 results are not only more reliable (in the statistical sense) than single-source ratings, but also fairer to minorities, women, and older workers. Integration into HR systems also brings with it accountability, whether driven by the process or internally (self) driven because the leader knows that the results matter.

For this past year, I have teamed with Allan Church, John Fleenor and Dale Rose to recruit an all-star roster of practitioners in our field to contribute chapters for an edited book, The Handbook of Strategic 360 Feedback (Oxford University Press). Though a continuation of many of the themes covered in The Handbook of Multisource Feedback (Bracken, Timmreck, & Church, 2001), this Handbook will have more of a practitioner focus with several case studies and new trends in this field.

The four of us will also host a panel discussion at the Annual Conference of the Society for Industrial and Organizational Psychology (SIOP) in Chicago on April 19 at noon. Joined by Michael Campion and Janine Waclawski (PepsiCo), we will present our learnings and observations from assembling the thirty-chapter volume.

The 3D Group and PepsiCo will also host another in our series of semi-annual meetings of the Strategic 360 Forum, a consortium of organizations that use 360 Feedback for strategic purposes and are interested in sharing best practices.  This full day meeting will be held in Chicago on April 17 with several Handbook contributors leading discussions on various topics. For more information, go to the 3D Group website (https://3dgroup.net/strategic-360-feedback-forum/).

Finally, Strategic 360 Feedback will continue to be the most powerful tool in our kit for reliably measuring leadership behaviors that form the basis for engagement, motivation, productivity and retention. Using 360’s, we can create culture change and develop leaders by defining, measuring, and holding leaders accountable for behaving consistently with organizational goals and values.

Have a Strategic New Year!

David Bracken

AI YI YI!

[Image: cartoon head exploding]

Artificial Intelligence is not only here to stay, it may well outlive and replace most of us.  During this rapidly evolving introduction of AI into our lives (sometimes without our knowledge and/or consent; see Amazon.com’s recent experience with lawsuits aimed at their Alexa division), we should be vigilant regarding its use.

I have been invited to participate in a conversation hour at the next SIOP Conference (in Chicago in April, 2018) on the implications of AI for our profession and organizations in general.  In our proposal writing process, I came upon this article about the use of AI in the recruiting and hiring process as used by Unilever (https://goo.gl/KH2LVW).

Frankly, it blows my mind.  Or should I say, blows up.

Almost every day, my favorite blog, The LowDown (thelowdownblog.com), seems to have a new article regarding AI, but I hadn’t thought enough about how it will affect our profession as IO Psychologists and our clients who look to us for expertise in helping them make better decisions about current and prospective employees.  My hunch is that we (again, as a profession) are lagging behind in anticipating the issues coming down the pike on the back of AI tools.

The Unilever case study is remarkable for many reasons. They claim great efficiencies that AI creates in terms of handling large numbers of potential applicants at significant cost savings.  As an IO Psychologist, I became curious about the accuracy (i.e., validity) of their screens and the evidence for job-relatedness.

At the risk of serving as free advertising, I want to draw your attention to the two vendors that Unilever uses in their hiring process, Pymetrics and HireVue. Pymetrics uses games to assess candidates and applies neuroscience to the decision to progress or not. Those who pass are funneled into the HireVue interview technology, though not a “live” interview. Applicants are evaluated for key words, body language, and tone.

Maybe you want to search their websites with me.  Here are two companies that are affecting the lives of thousands of people just with this one experience.

The Pymetrics website says (regarding validity), “The games have been validated through decades of use in neuroscience and cognitive psychology research settings to identify and evaluate people’s cognitive, emotional, and social traits. Several of the games have physical analogues dating back to the 19th century.” (https://goo.gl/iq5xgT)   Not a word about being job related or predicting actual job performance. They speak of reducing bias. I can do that too. Give me a coin to flip. That would be even faster (though I could still charge a lot for my flipping skill).

So who are these people?  HireVue’s founder has a Master’s in finance.  No evidence of science, but they look like they are having fun!  Pymetrics does have a neuroscientist on their senior team, and some other neuroscientists lurking.

Fast, fun and flexible. Is that our mantra for best practices in making decisions about people?  Maybe so. They seem to be doing quite well.  “They” being the vendors, maybe not so much the applicants.

©David W. Bracken, 2017

Written by David Bracken

August 8, 2017 at 11:00 pm