As a follow-on to my previous post on trends in US job satisfaction and morale/engagement, I was fascinated to come across the latest data from WatsonWyatt and World at Work (let's call them WatsonWyatt from now on), which indicated that engagement was down over 9% in the most recent period. Here is exactly what they said:
The 2009/2010 U.S. Strategic Rewards Study found that employee engagement levels for all workers at the companies surveyed have dropped by nearly 10 percent since last year
I had argued previously that two apparently conflicting databases (from the Conference Board and Gallup) on job satisfaction and morale/engagement had to do with what was being measured, but this new data compels me to share something else very important with you.
Here are the facts:
Jennifer Robison wrote in the Gallup Management Journal (as reprinted in The Free Library) on January 14th, 2010 that:
Gallup has tracked the engagement levels of the U.S. working population for the past decade. Its most recent employee engagement research shows that 28% of American workers are engaged, 54% are not engaged, and 18% are actively disengaged… In addition, from July 2008 to March 2009 — during the heart of the recession — Gallup tracked a large sample of employees and found only slight (1%) changes in overall engagement. In July 2008, 31% of employees were engaged, 51% were not engaged, and 17% were actively disengaged. In March 2009, these percentages had changed very minimally: 30% were engaged, 52% were not engaged, and 18% were actively disengaged.
For several years Gallup has reported that the share of US workers who are “engaged at work” has hovered between 26% and 30%. Now the WatsonWyatt data tell us that there has been a significant drop since their last survey in 2008. Gallup data show that engagement is either up 1% or down 1% in that time period, depending on which month the measurement was made, but this is nothing like the 9% drop from the WatsonWyatt group. Unlike my previous post, both these benchmarks purport to be about engagement, both cover the same time period, and both cover the same general population. None of my previous arguments are valid here, and so we are faced with something I consider to be a significant weakness in the morale/engagement consulting field: the validity of these external benchmark databases.

This is so important that I covered it in my book, at the risk of infuriating some who work in this field. I also had my own data in the book, data which had never been shared with anyone outside the group that commissioned the original study, a group of large utility companies. That data showed that external benchmark databases of employee morale can differ significantly from each other for the same industry, in the same country, on exactly the same question, and for the same time period. I was so troubled by the finding that I gave up using such benchmarks in my own consulting practice, even though I had spent years building them. The reason was that I no longer had confidence in them, and I told my clients exactly why. I moved entirely to internal benchmarking and built my analysis software around that function.

So my question is: if such differences can be found in benchmarks for a specific industry, how much will they differ when measured across many industries? This recent data gives us a window into this issue, and an answer to that question: they differ, and significantly so.
In my previous post I pointed out that methodological differences or errors were unlikely to be the cause of differences between job satisfaction and engagement data; plenty of reasons for the differences could be found in the factors being measured, not the methodology. I also said that I had confidence in both Gallup and the Conference Board; that is true of WatsonWyatt as well. (I do not know enough about World at Work, but a cursory look at its website indicates it has been around a long time, is an organization of 30,000 members which certifies professionals in compensation, benefits, work-life skills, etc., and I have absolutely no reason to doubt its competence.) WatsonWyatt is now TowersWatson; some years back, the Towers part of that organization bought the well-known survey research house ISR, giving them a huge amount of data and survey research firepower. I will use “WatsonWyatt” here because the study was first published under that name. All these organizations are therefore large, well run, and entirely competent.
So let's look at how this happens and draw some conclusions from it. In my opinion, there are several reasons for the differences between these benchmark databases:
–worker engagement is not like water. Water is H2O. Engagement is, well, a number of things. Each dictionary would define it differently, as might each consultant. Who can say that one is right and one is wrong? I defined it as best I could in my book and compared it to morale; I repeated some of this in a recent post. Based on the definition you choose, you develop a questionnaire whose items measure your engagement elements. You then refine that questionnaire, as Gallup has done in getting down to only 12 questions, and you test and test and test. WatsonWyatt and its partner in this latest study no doubt have their own specific methodology, their own definition of engagement, their own questionnaire.
–consulting in worker morale and engagement is very fragmented. There are stand-alone research houses like Sirota Survey Research; there are global giants like Gallup, TowersWatson, Mercer and HayGroup (where I used to work), and these firms do many things in addition; there are thousands upon thousands of smaller groups or individuals who work in this field, some of whom have their own benchmark “norms”. No one firm has a lock on all the companies in one industry, or on all the best-performing companies, etc. If I am Apple and want Dell, HP, Sony, Nokia and a list of other competitors in my benchmark, I cannot find all that in one place. A “national norm” or benchmark has to come either from firms collaborating with each other and sharing client data (it has happened, but I imagine it raises anti-trust issues, and in any case some clients might be wary of sharing their strategically valuable employee opinion data) or from sampling employees of non-client organizations. This latter method creates its own problem: how do you get non-client employee opinion data? With great difficulty.
–sampling the national engagement level from 1,000 people at work is very tricky. Look what happens with opinion polls for political elections: they are not always right! This and other reasons were why I always told clients to poll 100% of the workforce and do everything possible to make sure that 90%+ would respond, which they usually did. That way I knew that the data we had really did represent the whole, and that managers deep in the organization would have enough data to work with, which they often did not with a sample.
–as a result of the issues above, with the definition of engagement, the resulting questionnaires and the possible sampling, the companies covered by WatsonWyatt and Gallup might have been very different from each other, in different industries, and they certainly received different questionnaires. All these things affect the data. WatsonWyatt talks about the data representing “the surveyed companies” but also implies that this represents a national sample. These might be quite different; it's very hard to tell.
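The sampling worry in the points above can be made concrete with a back-of-the-envelope calculation. As a rough sketch (the 30% engagement figure and the 1,000-person sample are illustrative assumptions, not numbers from either study), the standard 95% margin of error for a proportion shows how much room for disagreement a poll of that size leaves, even before any definitional differences enter:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 workers estimating ~30% engagement:
moe = margin_of_error(0.30, 1000)
print(f"±{moe * 100:.1f} percentage points")  # → ±2.8 percentage points
```

In other words, two honest samples of this size could report engagement figures several points apart purely by chance; and real workplace polls face non-response and coverage problems on top of that, which is part of why polling 100% of the workforce is so much more reliable.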
Perhaps the clearest thing that a consultant who uses client data for national or industry-specific benchmarks could say to a client about a benchmark would therefore be this:
This benchmark is based on how we define engagement, which might be different from how others define it; it is based on our proprietary questionnaire, which is based on our definition of engagement; it is also based on a sample of our client data, and others might have different types of organizations in their databases which will affect their data and cause it to be different than ours. As a result we cannot guarantee that our engagement benchmark will match any other, even for the same geographic location, the same worker demographics and the same time period.
(A firm not using client data for its benchmark would have to add even more, about possible sampling errors and response rates.) In any case, don't hold your breath waiting for such a disclosure!
Perhaps clients of all the firms which provide external engagement and morale benchmarks are quite happy with the benchmarks they are given. Maybe they look only casually at such things and pay less attention to them than to internal benchmarks? Maybe they just use a group of top performers as a comparison and find that useful? Whatever the case, I think it behooves us as consultants to make clear that we might not have “THE answer” as to what an external trend in worker engagement is. We have ONE answer, our own. That answer might differ from another firm's by a factor of nine (a 1% change versus a 9% change), as we have seen in the two firms' data shown here. For me that was not good enough, and so I focused on telling clients how they had shifted internally over time, how groups inside the organization differed from each other, and what their scores were on an absolute basis. They all found that so valuable that they did not miss the one thing which some believe is a “must have” in this business…the external benchmark.
Tell me what you think….