by Bryon Allen, Partner/COO at WPA Opinion Research
So Ted Cruz won. That was unexpected to a lot of people, including otherwise very good pollsters like Ann Selzer (who conducts the Des Moines Register poll) and Douglas Schwartz (who heads the Quinnipiac poll).
The cat is pretty much out of the bag now, though, that it wasn’t unexpected to us, and I’m betting it wasn’t a surprise to a lot of the other campaigns’ pollsters.
So what was the difference and what does it tell us about polling Iowa?
1. Sampling the Iowa caucuses is really hard.
Most of the public polls used a methodology based on calling all Iowans or all registered voters and then letting them self-select as likely to caucus.
This is a good methodology for some elections. It’s a lot like what we do in a general election setting. But it’s not a good way to poll a caucus.
The problem for these polls in the caucus is that they wind up including a large number of non-voters in their samples. There is a lot of research showing that people overstate their likelihood to vote, especially people who don’t have a history of voting already. Given the high effort required to participate, overstatement in a caucus is likely even higher.
In many cases these problems don’t matter—if the unlikely voters screening into the survey have the same opinions as likely voters, the results will still be consistent with reality. But this year in Iowa was different.
Trump was increasingly rejected by traditional caucus attendees, especially after his decision to skip the final debate. But his numbers in the polls were buoyed by a group of voters who do not typically vote in the caucuses but were strongly attached to Trump.
In a case like this, the bias of the public polls toward including too many non-caucus-goers in their samples became impossible to overcome, leading them to substantially overstate support for Trump.
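To illustrate the arithmetic behind this bias with purely hypothetical numbers (these are not figures from any actual poll): if the non-caucus-goers who slip past a self-selection screen favor a candidate much more strongly than actual caucus-goers do, even a modest share of them in the sample inflates the topline number.

```python
# Hypothetical illustration of how non-voters can inflate a poll's topline.
# All percentages and shares below are invented for the example.

def blended_support(support_likely, support_unlikely, share_unlikely):
    """Topline support when a sample mixes true caucus-goers with
    self-selected non-goers who passed the likely-voter screen."""
    return (1 - share_unlikely) * support_likely + share_unlikely * support_unlikely

# Suppose actual caucus-goers support the candidate at 24%, but non-goers
# who claim they will caucus support him at 40%, and they make up 30% of
# the self-selected sample.
topline = blended_support(0.24, 0.40, share_unlikely=0.30)
print(f"Reported topline: {topline:.1%}")  # 28.8%, versus a true 24.0%
```

The gap between the reported 28.8% and the true 24% is exactly the kind of error a tighter sampling frame, one built from actual caucus history rather than self-reported intent, is designed to avoid.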
2. Field period mattered more in Iowa this year.
The Des Moines Register poll, Quinnipiac, and others conducted their interviews over a period that spanned much of the last week before the caucus. I said yesterday that this might be a problem, and it turned out to be one.
In most years, the arguments have all been made well in advance of the last week. While some things can change and some voters will change their minds, it’s rare for a major shake-up of the race to happen in the few days before the caucus.
But this year, once again, was different. The last week saw Trump skip the final debate, a move that cost him substantially with traditional caucus goers. It also saw the first real opportunity for voters to see the non-Trump candidates debate the issues rather than participating in a circus.
Both of these things moved the numbers, and most of the public polls missed this effect by releasing data based substantially on pre-debate interviews. If they had polled post-debate and into the weekend, they would have seen what we saw and drawn very different conclusions about the state of the race.
3. No, polling is not dead or on life support or whatever.
Sometimes I worry that I am part of the most hated profession in politics (which would be the most hated profession in a hated field…grim). The only thing that’s going to keep pollsters from being first against the wall when the revolution comes is that there are still lawyers in the world.
Inevitably, we’re already seeing a raft of “polling is dead” stories. But polling is not dead, and the fact that some pollsters got Iowa wrong for a couple of completely explicable reasons is not an indictment of the entire field.
What last night does suggest is that the same methodologies can’t be applied across all elections. The public pollsters need to do what most campaign pollsters did years ago and develop different models, sampling protocols, and methodologies for different types of races. It’s not a one-size-fits-all world out there and things like special elections and caucuses require a more sophisticated approach than do high-turnout general elections.