I see you rolling your eyes
This is the second in a series of blog entries responding to questions submitted during my recent webinar, “Make Your 360 Matter.” I answered some of these questions during the webinar, but would like to expand on my answers and share them with others who might not have attended.
I am going to combine two questions to address one important topic:
- “You went over this, Dave, but if you could pick 1-3 things that most 360s do poorly, and could easily do better?”
- “Rater training seems so important, but it’s hard enough to get ANY training out there. Are there streamlined ways to get the key points conveyed to raters?”
For ten years or more, I (and some coauthors) have contended that rater training may well be the most important and neglected practice in 360 systems. My suspicion is that at least part of the reason lies in the tone of the second question, i.e., the “rolling of the eyes” every time the topic of rater training comes up, due to preconceptions of what it involves.
During the webinar, I presented data on two 360 practices that appear to have major effects on the ability to create and measure behavior change. One of those design features is following up with raters, supported by data from the article “Leadership is a Contact Sport,” which can be found on marshallgoldsmith.com.
The second topic was actually a combination of practices, i.e., choice of rating scale combined with rater training. I presented some data that strongly indicates the potential power of those factors in affecting the distribution of ratings, especially in reducing leniency error.
If we think about it a little, every 360 process already (hopefully) has some sort of rater training, usually in the form of directions on how to complete the questionnaire. In its simplest form, that might consist only of basic directions on how to physically complete the survey (e.g., one answer per question, must answer every question). I find it is becoming increasingly common for instructions to go a little further by providing additional guidance to the rater, such as:
- Think of the leader’s behavior during the past year
- Do not give excessive weight to recent events or observations
- Use the full rating scale as appropriate. No leader is so good as to deserve all “5’s” or so bad as to deserve all “1’s”
We also often give a form of “training” regarding write-in comments:
- Be specific about what you observed and/or what you suggest
- Be constructive
- Limit your comments to job-related behaviors
- Do not identify yourself unless you intend to do so
In recent blogs, as well as in the webinar, I have expressed the view that 360 surveys are not “tests.” In the classic view of tests, we strive to identify individual differences that will help us differentiate subjects in order to predict future behavior, such as success on the job. In 360s, we have no desire or need to measure individual differences among the raters, since they are not the focus of our measurement efforts. Instead, we use training to minimize or remove individual differences in raters that we define as rater error.
So how do we deliver rater training? I have seen it come in many forms, including classroom training. But its most common design is a set of slides that the rater must, at a minimum, review before completing the first questionnaire. Once raters have done that, they are “certified” and do not have to participate in any other training for that administration cycle.
Some of the typical content areas for rater training can include:
- Purpose of the 360 process
- How the feedback will be used
- How anonymity is protected
- Source of the behavioral items (e.g., values, competency model)
- Rating scale format/content
- When to respond “Not Observed” (Don’t Know)
- Time frame (e.g., 1 year of behavior)
- Types of rating errors and how to avoid them (e.g., leniency, severity, recency, halo)
- Case studies/examples (e.g., how to use rating scale)
There is a lot of variability in both the amount of content and the pace at which raters go through the slides, so the time required is hard to predict. Some processes include a brief test at the end to ensure a minimum level of attention.
Don’t discount group training if the setting allows it. For example, I once did a 360 at a hospital and was able to convene groups of nurses for 20-30 minute sessions. Based on their questions, I know the sessions improved the quality of the feedback by correcting misconceptions and misinformation.
Another observation I made in the webinar is that rater training is a best practice in performance management/appraisal processes and a guard against legal challenge. Since 360s closely resemble performance appraisals, this would seem applicable to them as well.
So please stop rolling your eyes and consider the potential benefit, at little cost, of implementing some form of rater training. It can make a difference.
Please share any experience you have had with rater training, good or bad!!
©2010 David W. Bracken