Strategic 360s

Making feedback matter

Archive for the ‘Decision Making’ Category

AI YI YI!



Artificial Intelligence is not only here to stay, it may well outlive and replace most of us.  During this rapidly evolving introduction of AI into our lives (sometimes without our knowledge and/or consent; see Amazon.com’s recent experience with lawsuits aimed at their Alexa division), we should be vigilant regarding its use.

I have been invited to participate in a conversation hour at the next SIOP Conference (in Chicago in April 2018) on the implications of AI for our profession and for organizations in general.  While writing our proposal, I came upon this article about Unilever’s use of AI in its recruiting and hiring process (https://goo.gl/KH2LVW).

Frankly, it blows my mind.  Or should I say, blows up.

Almost every day, my favorite blog, The LowDown (thelowdownblog.com), seems to have a new article regarding AI, but I hadn’t thought enough about how it will affect our profession as I/O Psychologists and our clients who look to us for expertise in helping them make better decisions about current and prospective employees.  My hunch is that we (again, as a profession) are lagging behind in anticipating the issues coming down the pike on the back of AI tools.

The Unilever case study is remarkable for many reasons. Unilever claims that AI creates great efficiencies, handling large numbers of potential applicants at significant cost savings.  As an I/O Psychologist, I became curious about the accuracy (i.e., validity) of their screens and the evidence for job-relatedness.

At the risk of serving as free advertising, I want to draw your attention to the two vendors that Unilever uses in its hiring process, Pymetrics and HireVue. Pymetrics uses games to assess candidates, applying neuroscience to the decision to progress them or not. Those who pass are funneled into the HireVue interview technology, though not a “live” interview; applicants are evaluated on key words, body language, and tone.

Maybe you want to search their websites with me.  Here are two companies that are affecting the lives of thousands of people just with this one experience.

The Pymetrics website says (regarding validity), “The games have been validated through decades of use in neuroscience and cognitive psychology research settings to identify and evaluate people’s cognitive, emotional, and social traits. Several of the games have physical analogues dating back to the 19th century.” (https://goo.gl/iq5xgT)  Not a word about being job-related or predicting actual job performance. They speak of reducing bias. I can do that too. Give me a coin to flip. That would be even faster (though I could still charge a lot for my flipping skill).
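
To make the coin-flip point concrete, here is a minimal simulation sketch (in Python, with entirely hypothetical applicants; nothing below comes from Pymetrics or HireVue): a purely random screen yields near-equal pass rates across groups, “reducing bias,” while telling us nothing about future job performance.

```python
import random

random.seed(42)

# Hypothetical applicant pool: two demographic groups, each applicant with a
# "true" future performance score we pretend to know, for illustration only.
applicants = [{"group": random.choice("AB"),
               "performance": random.gauss(50, 10)}
              for _ in range(10_000)]

# The coin-flip "screen": pass each applicant at random, half the time.
for a in applicants:
    a["passed"] = random.random() < 0.5

# Pass rate by group: roughly equal, i.e., no adverse impact ("bias reduced").
for g in "AB":
    group = [a for a in applicants if a["group"] == g]
    rate = sum(a["passed"] for a in group) / len(group)
    print(f"Group {g} pass rate: {rate:.3f}")

# Validity check: a job-related screen should pass better performers.
# The coin flip shows essentially no difference between the two pools.
passed = [a["performance"] for a in applicants if a["passed"]]
failed = [a["performance"] for a in applicants if not a["passed"]]
print(f"Mean performance, passed:       {sum(passed) / len(passed):.2f}")
print(f"Mean performance, screened out: {sum(failed) / len(failed):.2f}")
```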

So who are these people?  HireVue’s founder has a Master’s in finance.  No evidence of science, but they look like they are having fun!  Pymetrics does have a neuroscientist on their senior team, and some other neuroscientists lurking.

Fast, fun and flexible. Is that our mantra for best practices in making decisions about people?  Maybe so. They seem to be doing quite well.  “They” being the vendors, maybe not so much the applicants.


©David W. Bracken, 2017


Written by David Bracken

August 8, 2017 at 11:00 pm

Our Responsibility to Help Organizations Make Good Decisions


Here are two pieces on performance management that surfaced today that motivated and informed this blog entry:

https://www.linkedin.com/pulse/big-idea-2016-dont-ditch-performance-management-process-herena

https://www.linkedin.com/pulse/stop-whining-performance-ratings-glen-kallas

I was asked by a high school teacher to visit his class and talk to them about my profession, that is, just what does an I/O Psychologist do?  I find that a lot of us in this field struggle with a concise answer to that question, perhaps because we touch so many different parts of the interface between people and organizations.

For the purposes of this 30-minute session with the class of juniors, I landed on a common denominator for our trade: helping organizations make decisions about people. The obvious starting point is the major role we play in helping organizations decide which people to hire, though some of us do get involved in the employment life cycle even before that (e.g., during recruitment and advertising to draw applicants).

Moving on from employment decisions, we pass through all sorts of stages in an employee’s career where decisions are being made (and employees are making decisions as well), and wouldn’t it be nice if those decisions were based on criteria that are “valid” (to use our lingo), fair, and transparent.  And, I told them, that is a major contribution we as I/O Psychologists bring to the process: using science and experience, for the benefit of both the employee and the organization, to increase the probability that a decision leads to successful performance, more so than a random (e.g., flip of the coin, gut instinct, expeditious) choice would.

This little discussion was a few years ago, and it came to mind now as I read some more articles in the ongoing discussion/debate regarding Performance Appraisal/Performance Management.  Depending on what version of a Performance Management Process (PMP) makes up your mental model, a PMP can have direct consequences for an employee. In the current discussion and debate on this topic, people are fretting (and rightly so) about the mechanics of evaluating an employee.  They/we are also worrying about other facets of the PMP that should include higher quality (and more frequent) interactions between managers and their employees for both performance discussions and development conversations, with aspirations that such interactions happen more often than the once or twice a year that “formal” appraisal systems require.

One proposed solution for creating more frequent interactions between managers and employees is to get rid of the formal sessions, symbolically represented by the evil rating process.  One of the many problems this creates is that it removes a source of information that the organization needs to make decisions about people.  It is our responsibility to give decision makers (at all levels) methods that produce reliable data. If the current PMP at an organization is not doing that, it is fixable, as suggested by Glen Kallas in his blog piece.  Dismantling the system does not help unless that data can somehow be generated by whatever takes its place.  I don’t see that happening, at least in what I am reading.  If data are being created in the alternate processes that involve more frequent interactions between managers and employees, then we have the same responsibility to ensure that information is as good as or better than what it is replacing.

The Herena blog speaks to the many benefits of maintaining or even enhancing your PMP. Then she (and her CEO) go on to call for supplementing the PMP by making their managers into better “coaches,” which is fantastic! Especially when supported from the top.  She doesn’t speak to the benefits of PMPs in terms of the data they produce, though the alignment benefit is extremely important and potentially lost when the system goes away.

IF you agree that the organization needs reliable data to make decisions about people throughout their employment cycle, then no profession is better equipped than ours to provide it.  Arguing that the solution is to remove the data generator instead of fixing it seems irresponsible.

I was watching a documentary about George Harrison’s life, and they interviewed his second (and last) wife, Olivia.  They were married for 23 years until his death, and it was clear that their marriage, like many, had a lot of bumps (or whatever euphemism you want to use).  Her observation was that the secret to a long marriage is not getting divorced, which I took to mean not giving up when things are difficult.  Well, there are many reasons we should not be giving up (as Glen and Monique point out), and I hope I am adding one more reason to the mix.

We have a responsibility to help organizations make good decisions about people.  And there are decisions being made constantly, ranging from promotions to pay to job assignments, and even what developmental experiences you get or don’t get.  What I suggested to those students is that there should be some comfort in knowing that there are people like us who are trying to create a level playing field and good information, so that the decisions that affect them (many of which are life and/or job changing) are based on reliable information.  We need to consider that responsibility when we make or influence other types of decisions, including those decisions that reduce the quality of that data.  In other words, help organizations not to “divorce” their PMPs just because they might not be doing what we want them to do.

Written by David Bracken

January 29, 2016 at 6:50 pm

What are “Strategic 360’s”?



A colleague recently asked me, “Exactly what is ‘Strategic 360 Feedback’?”  Heck, it’s only the name of this blog and in the name of the consortium I have helped form, The Strategic 360 Forum (which is meeting for its 5th time in April).  The concepts are also laid out pretty well in the article Dale Rose and I published in 2011 in the Journal of Business and Psychology (“When Does 360-degree Feedback Create Behavior Change? And How Would We Know It When It Does?”).

In as succinct a way as I can muster, here are the four core requirements for “strategic” 360 feedback systems:

  1. The content must be derived from the organization’s strategy and values, which are unique to that organization. The values can be explicit (the ones that hang on the wall) or implicit (what some people call “culture”). To me, “strategic” and “off-the-shelf” is an oxymoron and the two words cannot be used in the same sentence (though I just did).
  2. Participation must be inclusive, i.e., a census of the leaders/managers in the organizational unit (e.g., total company, division, location, function, level). I say “leaders/managers” because a true 360 requires that subordinates are a rater group. One reason for this requirement is that I (and many others) believe 360’s, under the right circumstances, can be used to make personnel decisions, and that usually requires comparing individuals, which, in turn, requires that the same data be available for everyone. This requirement also enables us to use Strategic 360’s to create organizational change, as in “large scale change occurs when a lot of people change just a little.”
  3. The process must be designed and implemented in such a way that the results are sufficiently reliable (we have already established content validity in requirement #1) that we can use them to make decisions about the leaders (as in #4). This is not an easy goal to achieve, even though benchmark studies continue to indicate that 360’s are the most commonly used form of assessment in both public and private sectors.
  4. The results of Strategic 360’s are integrated with important talent management and development processes, such as leadership development and training, performance management, staffing (internal movement), succession planning, and high potential processes. Research indicates that properly implemented 360 results can be not only more reliable (in a statistical sense) than single-source ratings but also fairer to minorities, women, and older workers (a worked sketch of the reliability point follows this list). Integration into HR systems also brings with it accountability, whether driven by the process or internally (self) driven because the leader knows that the results matter.
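
On that reliability point, here is a minimal worked sketch (in Python; the single-rater reliability value is illustrative, not a figure from any study cited here): the classic Spearman-Brown projection shows why averaging several raters, as a census-based 360 does, can yield a more reliable score than any single source.

```python
def spearman_brown(single_rater_r: float, k: int) -> float:
    """Projected reliability of the average of k parallel raters,
    given the reliability (e.g., an ICC) of a single rater."""
    return k * single_rater_r / (1 + (k - 1) * single_rater_r)

# Suppose a single rater's reliability is a modest 0.35
# (a hypothetical value chosen only for illustration).
r1 = 0.35
for k in (1, 3, 5, 8, 12):   # e.g., one boss vs. a full 360 rater pool
    print(f"{k:2d} raters -> projected reliability {spearman_brown(r1, k):.2f}")
```

Under these hypothetical assumptions, eight raters project to roughly .81 versus .35 for a single rater; that gain from aggregation is the statistical heart of the claim.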

Let me hasten to say that a) all 360’s, strategic or not, should have a development focus, and b) none of this minimizes the value of 360 processes that are used in support of the development of leaders, one at a time. There is no question that innumerable leaders have benefitted from the awareness created by feedback, though often also supported by a coach who not only helps manage the use of the feedback, but also should be creating accountability for the constructive use of the feedback.

Strategic 360 processes and “development only” processes can successfully coexist in a single organization. But they have different purposes, and purpose should be the primary driver of all design and implementation decisions.

The Debate is Over



I have recently had the opportunity to read two large benchmarking reports that relate to talent management, leadership development and, specifically, how 360 Feedback is being used to support those disciplines.

The first is the U.S. Office of Personnel Management “Executive Development Best Practices Guide” (November 2012), which includes both a compilation of best practices across 17 major organizations and a survey of Federal Government members of the Senior Executive Service, which was in turn a follow-up to a similar survey in 2008.

The second report was created by The 3D Group as the third benchmark study specifically related to practices in 360 Degree Feedback. This year’s study differed from the past versions by being conducted online, which had the immediate benefit of expanding the sample to over 200 organizations. This change in methodology, sample and content makes interpretation of trend scores a little dicey, but the results are compelling nonetheless. Thank you to Dale Rose and his team at 3D Group for sharing the report with me once again.

These studies have many interesting results that relate to the practice of 360 Feedback, and I want to grab the low hanging fruit for the purposes of this blog entry.

As the title teases, the debate is over, with the “debate” being whether 360 Feedback can and should be used for decision making purposes.  Let me once again acknowledge that 1) all 360 Feedback should be used for leadership development, 2) some 360 processes are solely for leadership development, often one leader at a time, and 3) these development-only focused 360 processes should not be used for decision making.

But these studies demonstrate that 360 Feedback continues to be used for decision making, at a growing rate, and evidently successfully since their use is projected to increase (more on this later).  The 3D report goes to some length to try to pin down what “decision making” really means so that we can guide respondents in answering how their 360 data are used.  For example, is leadership development training a “decision?” I would say yes since some people get it and some don’t based on 360’s, and that affects both the individual’s career as well as how the organization uses its resources (e.g., people, time and dollars).

But let’s make it clearer and look at just a few of the reported uses of 360 results.  In the 3D Group report, one of the most striking numbers is the 47% of organizations that indicate they use 360’s for performance management (despite only 31% saying, in another question, that they use it for personnel decisions).  It may well be that “performance management” use means integrating 360 results into the development planning aspect of a PM process, which is a great way to create accountability without overdoing the measurement focus. This type of linkage of development to performance plans is also reinforced as a best practice in the highlights of the OPM study.

In the OPM study, 56% of the surveyed leaders report participating in a 360 process (up from 41% in 2008), though the purpose is not specified.  360’s are positioned as one of several assessment tools available to these leaders, and an integrated assessment strategy is encouraged in the report.

Two other messages that come out of both of these studies are 1) use of coaches (and/or managers as coaches) for post assessment follow up continues to gain momentum as a key factor in success, and 2) the 360 processes must be linked to organizational objectives, strategies and values in order to have impact and sustainability.

Finally, in the 3D study, 73% of the organizations report that their use of 360’s in the next year will either continue at the same level or increase.

These studies are extremely helpful in gauging the trends within the area of leadership development and assessment, and, to this observer, it appears that some of the research that has promoted certain best practices, such as follow up and coaching, is being considered in the design and implementation of 360 feedback processes.  But it is most heartening to see some indications that organizations are also realizing the value that 360 data can bring to talent management and the decisions about leaders that are inherent in managing that critical resource.

It is no longer useful (if it ever was) to debate whether 360 feedback can be used successfully to inform and improve personnel decisions. It has and it does. It’s not necessarily easy to do right, but the investment is worth the benefits.

©2013 David W. Bracken

It’s Human Nature



One question that has been at the core of best practices in 360 Feedback since its inception relates to the conditions that are most likely to create sustained behavior change (at least for those of us who believe that behavior change is the ultimate goal).  Many of us believe that behavior change is not a question of ability to change but primarily one of motivation. Motivation often begins with creating awareness that some change is necessary, then accepting the feedback, and then moving on to implementing the change.

One of the more interesting examples of creating behavior change began when seat belts were included as standard equipment in all passenger vehicles in 1964.  I am old enough to remember when that happened and started driving not long thereafter. Using a seat belt has been part of the driver education routine since I began driving, so it has never been a big deal for me.

The reasons for noncompliance with seatbelt usage are as varied as human nature. Some people see it as a civil rights issue, as in, “No one is going to tell me what to do.” There is also the notion that it protects against a low probability event, as in “It won’t happen to me. I’m a careful driver.” Living in Nebraska for a while, I learned that people growing up on a farm don’t “have the time” to buckle and unbuckle seatbelts in their trucks when they are learning to drive, so they don’t get into that habit. (I also found, to my annoyance, that they also never learned how to use turn signals.)

I remember back in the ‘60’s reading about a woman who wrote a car manufacturer to ask that they make the seat belts thinner because they were uncomfortable to sit on.  Really.

Some people have internal motivation to comply, which can also be due to multiple factors such as personality, demographics, training, norms (e.g., parental modeling), and so on. This is also true when we are trying to create behavior change in leaders, but we will see that these factors are not the primary determinants of compliance.

In thinking about seatbelt usage as a challenge in creating behavior change, I found a 2008 study by the Department of Transportation titled “How States Achieve High Seat Belt Use Rates” (DOT HS 810 962).  (Note: This is a 170-page report with lots of tables and statistical analyses; if any of you geeks want a copy, let me know.)

The major finding of this in-depth study states:

The statistical analyses suggest that the most important difference between the high and low seat belt use States is enforcement, not demographics or funds spent on media.

One chart, among the many in this report, plots seatbelt usage by state and seems to capture the message fairly well in support of their assertion.  It shows a large spread, ranging from just over 60% (Mississippi) to about 95% (Hawaii).  It also shows whether each state has a primary seatbelt law (where failure to wear a seatbelt is a violation by itself) or a secondary law (where seatbelt usage can only be enforced if the driver is stopped for another reason). Based on this chart alone, one might be tempted to infer causality; the study goes further, showing systematically that these data, along with other measures relating to law enforcement practices, are the best predictors of seatbelt usage.

One way of looking at this study is to view law enforcement as a form of external accountability, i.e., having consequences for your actions (or lack thereof). The primary versus secondary law factor largely shifts the probabilities of being caught, with the apparent desired effect on seatbelt usage.
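
As a toy illustration of the kind of comparison behind that finding (Python, with invented state-level numbers; nothing below is taken from DOT HS 810 962), one can group states by law type and compare mean usage:

```python
from statistics import mean

# Invented per-state seatbelt usage rates (%), grouped by whether the state
# has a primary or a secondary enforcement law. Illustration only.
usage = {
    "primary":   [95, 91, 88, 86, 84, 83, 81],
    "secondary": [78, 75, 73, 70, 68, 65, 62],
}

for law, rates in usage.items():
    print(f"{law}-law states: mean usage {mean(rates):.1f}% (n={len(rates)})")

gap = mean(usage["primary"]) - mean(usage["secondary"])
print(f"Primary-law advantage: {gap:.1f} percentage points")
# A gap like this is only an association; it is the report's multivariate
# analyses that support enforcement over demographics or media spending.
```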

So, back to 360 Feedback. I always have been, and continue to be, mystified as to how some implementers of 360 feedback processes believe that sustainable behavior change is going to occur in the vast majority of leaders without some form of external accountability. Processes that are supposedly “development only” (i.e., have no consequences) should not be expected to create change. In those processes, participants are often not required to, or even discouraged from, sharing their results with others, especially their manager. I have called these processes “parlor games” in the past because they are kind of fun, are all about “me,” and have no consequences.

How can we create external accountability in 360 processes?  I believe that the most constructive way to create both motivation and alignment (ensuring behavior change is in synch with organizational needs/values) is to integrate the 360 feedback into Human Resource processes, such as leadership development, succession planning, high potential programs, staffing decisions, and performance management.  All these uses involve some form of decision making that affects the individual (and the organization), which puts pressure on the 360 data to be reliable and valid. Note also that I include leadership development in this list as a form of decision making because it does affect the employee’s career as well as the investment (or not) of organization resources.

But external accountability can be created in other, more subtle ways as well. We all know, from our kept and (more typically) unkept New Year’s resolutions, the power of going public with our commitments to change. Sharing your results and actions with your manager has many benefits, but it can cause real and perceived unfairness if some people are doing it and others are not. Discussing your results with your raters and engaging them in your development plans has multiple benefits.

Another source of accountability can (and should) come from your coach, if you are fortunate enough to have one.  I have always believed that the Smither et al. (2005) meta-analysis finding that the presence of a coach is one determinant of whether behavior change is observed reflects the accountability coaches create: they require the coachee to state specifically what he or she is going to do, and they check back that the coachee has followed through on that commitment.

Over and over, we see evidence that, when human beings are not held accountable, more often than not they will stray from what is in their best interests and/or the interests of the group (organization, country, etc.).  Whether it’s irrational (ignoring facts) or overly rational (finding ways to “get around” the system), we should not expect that people will do what is needed, and we should not rely on our friends, neighbors, peers or leaders to always do what is right if there are no consequences for inaction or bad behavior.

©2012 David W. Bracken

What Is a “Decision”?



My good friend and collaborator, Dale Rose, dropped me a note regarding his plans to do another benchmarking study on 360 Feedback processes. His company, The 3D Group, has done a couple of these studies before, and Dale has been generous in sharing his results with me; I have cited them in some of my workshops and webinars. The studies are conducted by interviewing coordinators of active 360 systems.  Given that the interviews are verbal, some of the results have appeared somewhat internally inconsistent and difficult to reconcile, though the general trends are useful and informative.

Many of the topics are useful for practitioners to gauge their program design, such as the type of instrument, number of items, rating scales, rater selection, and so on. For me, the most interesting data relates to the various uses of 360 results.

Respondents in the 2004 and 2009 studies report many uses. In both studies, “development” is the most frequent response, and that’s how it should be.  In fact, I’m amazed that the responses weren’t 100%, since a 360 process should be about development. The fact that in 2004 only 72% of answers included development as a purpose is troubling, whether we take the answers as factual or assume the respondents didn’t understand the question. The issue at hand here is not whether 360’s should be used for development; it is what else they should, can, and are used for in addition to “development.”

In 2004, the next most frequent use was “career development;” that makes sense. In 2009, the next most frequent was “performance management,” and career development dropped way down. Other substantial uses include high potential identification, direct link to performance measurement, succession planning, and direct link to pay.

But when asked whether the feedback is used “for decision making or just for development”, about 2/3 of the respondents indicated “development only” and only 1/3 for “decision making.” I believe these numbers understate the actual use of 360 for “decision making” (perhaps by a wide margin), though (as I will propose), it can depend on how we define what a “decision” is.

To “decide” is “to select as a course of action,” according to Merriam-Webster (in this context). I would add to that definition that one course of action is to do nothing, i.e., don’t change the status quo or don’t let someone do something. It is impossible to know what goes on in a person’s mind when he/she speaks of development, but it seems reasonable to suppose that it involves doing something beyond just leaving the person alone, i.e., maintaining the status quo.  But doing nothing is a decision. So almost any developmental use involves deciding what needs to be done and what personal (time) and organizational (money) resources are to be devoted to that person. Conversely, denying an employee access to developmental resources that another employee does get access to is a decision, with results that are clearly impactful but difficult to measure.

To further complicate the issues, it is one thing to say your process is for “development only,” and another to know how it is actually used.  Every time my clients have looked behind the curtain of actual use of 360 data, they unfailingly find that managers are using it for purposes that are not supported. For example, in one client of mine, anecdotal evidence repeatedly surfaced that the “development only” participants were often asked to bring their reports with them to internal interviews for new jobs within the organization. The bad news was that this was outside of policy; the good news was that leaders saw the data as useful in making decisions, though (back to bad news) they may have been untrained to correctly interpret the reports.

Which brings us to why this is an important issue. There are legitimate “development only” 360 processes where the participant has no accountability for using the results and, in fact, is often actively discouraged from sharing the results with anyone else. Since there are no consequences, there are few, if any, consequential actions or decisions required. But most 360 processes (despite the benchmark results suggesting otherwise) do result in some decisions being made, which might include doing nothing by denying an employee access to certain types of development.

The Appendix of The Handbook of Multisource Feedback is titled “Guidelines for Multisource Feedback When Used for Decision Making.”  My sense is that many designers and implementers of 360 (multisource) processes feel these Guidelines don’t apply because their system isn’t used for decision making. Most of them are wrong about that. Their systems are being used for decision making, and, even if they are not, why would we design an invalid process? And any system that involves the participant’s manager (which it should) creates the expectation that direct or indirect decision making will result.

So Dale’s question to me (remember Dale?) is how would I suggest wording a question in his new benchmarking study that would satisfy my curiosity regarding the use of 360 results. I proposed this wording:

“If we define a personnel decision as something that affects an employee’s access to development, training, jobs, promotions or rewards, is your 360 process used for personnel decisions?” 

Dale hasn’t committed to using this question in his study. What do you think?

©2012 David W. Bracken

Full Stops, Neutrinos and Rocket Science



I don’t know why I feel compelled to respond to what I see are unreasonable positions (primarily in LinkedIn discussions). But I do, and this blog gives me a vehicle for doing so without taking up a disproportionate amount of air time on that forum.

So what got me going this time? A LinkedIn discussion (that I started on the topic of 360 validity) got diverted into the topic of “proper” use of 360 feedback (development vs decision making).  The particular comment that got me going was, “I believe these assessments should be used for development – full stop.”  (Virtually 100% of 360 processes are used for development, but the context indicates that he meant “development only.”) Having lived and worked in London for a while, I realized (or realised) that the “full stop” has the same meaning as “period,” implying end of sentence and, with emphasis, no more is worth saying.  By the way, I am using this person only as an example of the many, many individuals who have expressed similar dogmatic views on this topic.

There are probably a few things that are appropriate to put a “full stop” on. That would be an interesting blog for someone, e.g., would we include the Ten Commandments? “Thou Shalt Not Kill. Full stop.”  Hmmm… but then we have Christians who believe in capital punishment, so maybe it’s only a partial stop (or pause)?  Like I said, I will let someone else take that on.

Are the physical sciences a place for “full stops”?  Like, “The world is flat. Full stop.”  “The Sun revolves around the Earth. Full stop.” Just this last week, we were presented with the possibility that another supposedly immutable law is under attack, i.e., “Nothing can go faster than the speed of light. Full stop.”  Now we have European scientists who have observed neutrinos apparently traveling faster than the speed of light and are searching for ways to explain and confirm it. If found to be true, it would challenge many of the basics of physics, opening the door to time travel, for example.  The fact that some scientists are challenging the “full stop” nature of the Theory of Relativity is also fascinating, if only because they are open to exploring the supposedly impossible. And, by the way, they are begging for others to challenge and/or replicate their findings.

I firmly believe that the social sciences have no place for “full stops.”  To me, “full stop” means ceasing to explore and learn. It seems to indicate a lack of openness to considering new information or different perspectives.

I suspect there are many practitioners in the “hard” sciences who question whether what we do is a “science” at all. (I think I am running out of my quota of quotation marks.)  Perhaps they see our work with understanding human behavior as a quest with no hope of ever having answers. That’s what I like about psychology. We will never fully know how to explain human behavior, and that’s a good thing. If we could explain it, then we probably could control it. I think that is a scary thought. BUT we do try to improve our understanding and increase the probabilities of predicting what people will do. That is one of the basic goals of industrial/organizational psychology.

(I have been known to contend that what we do is harder than rocket science because there are no answers in our work, only probabilities.  The truth is that even the hard sciences have fewer “full stops” than they would like. I just finished reading a book about the Apollo space program, Rocket Men, and it is very interesting to learn how many supposed “hard stops” were bashed (e.g., “humans can’t live in weightlessness,” “the moon’s crust will collapse if we try to land on it”), how much uncertainty there was, and how amazing the accomplishment really was.  I also learned that one of the reasons the astronauts’ visors were mirrored was so that aliens couldn’t see their faces. Seriously.)

Increasing probabilities for predicting and influencing employee behavior requires that we also explore options.  I can’t see how it is productive to assert that we know the answer to anything, and that we shouldn’t consider options that help us serve our clients, i.e., organizations, more effectively.

On top of all that, the most recent 3D Group benchmark study indicates that about one third of organizations DO use 360 data for some sort of administrative purpose, and that almost certainly understates the real numbers. What do we tell those organizations? That they should cease doing so since our collective wisdom says that there is no way they can actually be succeeding? That we cannot (or should not) learn from what they are doing to help their organizations make better decisions about their leaders? That a few opinions should outweigh these experiences?

I don’t get it. No stop.

©2011 David W. Bracken