The Importance of Importance: Chapter 1
I recently received a copy of One Page Talent Management (by Marc Effron and Miriam Ort) from Marc after our exchange in this blog regarding rating scales. Thank you, Marc! While I will certainly read the whole book, I naturally started with the 360 Feedback chapter.
I am quite certain that Marc and Miriam do not need any sort of endorsement from me to give them more credibility than they have already earned. But let me note early on that I am in agreement with most of their recommendations regarding the design and implementation of 360 processes. A couple of major topics where “we” may be in the minority in our beliefs include the support for use of results in decision making (“transparency” in their lingo) and the value of following up with feedback providers.
Using this blog to have an open discussion with Marc (and Miriam and/or others who desire) about various topics from OPTM is a practice that I hope is useful to readers. I don’t know if our previous mini-debate regarding scales worked, but I was gratified that he chose to clarify his views. I don’t propose that my views are better or more correct, but I hope they cause practitioners to pause as they make these important decisions, such as the format of the rating scale.
The question of “the importance of importance” is more than a play on words. The issue of how to determine the importance of behaviors and subsequent actions is perhaps the most significant question of all. “Importance” has multiple meanings. The organization has a need and a right to define what behaviors are most important for leaders to exhibit at a macro level. But there also needs to be flexibility to define which behavioral competencies have the highest priority for a leader given their position, level, function, and, perhaps most importantly, their stage of development in their career.
The issue of determining importance raises questions and requires decisions at multiple steps in a 360 process:
- What are the most “important” items to include in the survey?
- Should the survey instrument seek importance ratings/indications from raters?
- Should the report guide the users to the most “important” issues (e.g., top/bottom scores, strengths, blind spots)?
- How will the data be used? Who can/should guide the identification of the most “important” actions?
Each of these questions deserves a separate discussion, and I will endeavor to do so in subsequent blog entries. My goal will be to publish these in fairly rapid succession to create some thread of continuity.
For this blog entry, the issue du jour from OPTM is the first bullet as it relates to the length of the survey and the implications of prioritizing actions based on importance. On page 59, they write:
“First, the survey items tend to include every behavior in the competency model, rather than a narrower set of the most important behaviors… To ensure success, ask only the fifteen or twenty-five most important items.”
Who is the master decider as to what are the 15-25 most “important” items? It would seem logical to assume that a leadership competency model includes only things that are important. If some competency model content is not appropriate for 360 items (e.g., requirements that are not observable by others), then that is one thing. But arbitrarily saying some competencies are more important than others across all leadership positions seems just that, i.e., arbitrary. What criteria are used to make that decision? I suspect that Marc and Miriam have a method for identifying the “most important” behaviors but I would like to hear it. They anticipate that this “criticism” will be forthcoming, but seem to say, “get over it.”
The answer to what is a sufficient number of items in a 360 survey has no “right” answer. Who is to say that the “right” answer is 5, 10, 25, 50, 100, or 1000 items? I have attempted in my practice to be systematic by suggesting a) that every dimension be represented, and b) that there be a sufficient number of items in each dimension to satisfy some standard of reliable measurement (i.e., 3 items per dimension). This is another piece of science that the OPTM folks might address, although it is not clear whether OPTM endorses the concept of dimensions. Also, by covering each dimension in the model, we hopefully help communicate to the users (raters and ratees alike) what the organization’s definition of “effective leadership” is.
In my experience, the leadership competency model typically has between 8 and 12 dimensions (factors). If we have a minimum of 3 items per dimension, that gives us a survey of reasonable length. I also believe that having the same number of items in each dimension helps to communicate that all dimensions are of equal importance; that design decision is often compromised for good reasons. And, as noted in earlier blogs, the instrument should be clearly organized by dimension.
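The arithmetic behind that sizing rule can be made explicit. Here is a minimal, purely illustrative sketch (in Python); the function name and numbers are mine, not from OPTM, and the rule assumed is simply the one described above: every dimension represented, with an equal minimum number of items per dimension.

```python
def survey_length(num_dimensions: int, items_per_dimension: int = 3) -> int:
    """Total item count when every dimension gets the same number of items.

    Assumes the design rule sketched above: each dimension in the
    competency model is represented, with a minimum of 3 items per
    dimension as a standard for reliable measurement.
    """
    if num_dimensions < 1 or items_per_dimension < 1:
        raise ValueError("dimensions and items per dimension must be positive")
    return num_dimensions * items_per_dimension

# A typical model of 8 to 12 dimensions at the 3-item minimum:
shortest = survey_length(8)    # 24 items
longest = survey_length(12)    # 36 items
print(shortest, longest)
```

Note that even at the 3-item minimum, a typical 8-to-12-dimension model yields 24 to 36 items, already above the 15 to 25 items OPTM recommends.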
In a recent blog, “Who is the customer in 360?”, I proposed that shortening 360 content can be an indication that the raters are more important than the ratees (participants), and that we are more worried about inconveniencing raters (making it “onerous” as Marc says) than in providing quality information to the leader.
With the OPTM solution, the participant and other stakeholders are already limited in the possible list of developmental needs by the shortened list of competencies and the rater preferences. I would much rather see the broader range of competencies and resulting feedback made available to the participant and other users (boss, coach, organization) and let them decide what is most important for the given person at that given time.
But that might not fit on one page.
Our next blog will explore the wisdom of asking raters to select the most important issues for the participant.
©2010 David W. Bracken