But beyond the topline numbers, these polls, conducted by some of the smartest and most innovative researchers in the business, find wide variation in the views of critical subsets of voters. Pollsters, though, say such differences aren’t cause for alarm.
Public pollsters often release polling around major events, but rarely is there a convergence as there was in the week leading up to the convention. Nearly every major media outlet with a polling operation released numbers, as did two prominent independent pollsters. That timing allows for a more direct comparison of what these polls are finding about the state of the race.
And when you explore the crosstabs, it doesn’t all look the same.
Biden appears to lead among women by double-digits, but is his margin 12 points? Sixteen points? Twenty-one points? Maybe 29 points? All were findings of the major polls released in the week leading up to the Democratic National Convention’s start on Monday.
Among men, Trump could have a 16-point advantage, or perhaps it’s an even split. One poll showed Biden ahead by eight points with men.
Voters under age 30 surely break for Biden, but his support among that group ranged from 49% in one poll all the way up to 67% in another. And voters age 65 or older broke for Biden by 17 points in one poll, 10 points in another and split evenly between Biden and Trump in a third.
The picture gets even messier when the polls move from measuring age and gender to partisan leanings. Polls using different question wording to identify a respondent’s partisan attachments can yield different results, but in looking at political independents, the polls seem to disagree about where they lean. In some polls, Biden’s lead among that group stands around 20 points or higher. In others, Trump holds a narrow edge.
Poll watchers sometimes obsess over small differences in topline numbers, even those within a poll’s reported margin of sampling error.
But those differences pale in comparison to the size of the differences noted above. For anyone who hangs on every 1- or 2-point shift in the polls, the range of results is jarring.
To pollsters, though, the variation isn’t all that unexpected, or alarming.
“It’s a bathroom scale, not a kitchen scale,” says Cliff Zukin, a professor emeritus at Rutgers University who wrote a guide to variance in polls for the American Association for Public Opinion Research ahead of the 2016 election. “It’s pretty good at measuring pounds, but it’s not good at ounces.”
Subgroup estimates like the ones highlighted here have larger margins of sampling error than the overall results do, and that can mean that what look like huge differences aren’t really all that huge. The estimate of Biden’s support at 49% among voters under age 30 carried a reported error margin of plus or minus 8 percentage points, meaning there is reasonable confidence that the true value for Biden’s support among that group lies somewhere between 41% and 57%.
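That ±8-point figure is roughly what the textbook simple-random-sampling formula produces for a subgroup of about 150 respondents. The sketch below shows the arithmetic; the sample size of 150 is an illustrative assumption, not any poll’s actual subgroup count, and real polls adjust this calculation for weighting and design effects.

```python
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95% margin of sampling error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical subgroup: ~150 respondents, 49% support.
# The margin works out to about 8 points, spanning 41% to 57%.
p, n = 0.49, 150
moe = margin_of_error(p, n)
print(f"±{moe:.3f}  (interval: {p - moe:.2f} to {p + moe:.2f})")
```

Note how quickly the margin grows as the subgroup shrinks: a full sample of 1,000 carries roughly a 3-point margin, but a subgroup a sixth that size more than doubles it.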
Analysis of polling often highlights that margin of error – a calculation that quantifies the uncertainty introduced by sampling when the researcher knows the probability that any individual in the target population could have been selected for the poll – but rarely addresses other possible sources of error, mostly because they can’t easily be translated into a number.
There are the obvious sources of variance — being in the field on different dates or using different question wording — but there are lots of other choices pollsters make in designing their surveys which can lead to differences in the results.
“There are a number of subgroup estimates that bounce around from poll to poll,” said Courtney Kennedy, director of survey research for the Pew Research Center, who led the AAPOR task force assessing 2016 election polling. “As a result, the results are sometimes incoherent when looking across polls. This variation across polls is testament to the tremendous variation in the methods used to conduct polling these days.”
The steps each researcher takes to get their sample can introduce variation which may or may not be meaningful. Of the eight major polls released in the run-up to the convention — from CNN, ABC News and the Washington Post, NBC News and the Wall Street Journal, CBS News and YouGov, Fox News, NPR and PBS NewsHour with Marist College, Monmouth University, and the Pew Research Center — six were conducted by phone and two online. The percentage of cellphones included in the base sample for those phone polls ranged from roughly 60% to 75%. Some began from a base of randomly selected phone numbers to reach a representative sample of adults; others dialed from a list of registered voters. The online surveys included one conducted using a panel of people who have signed up to take surveys, with respondents selected in a targeted way on the basis of certain traits, and one drawn from a sample recruited via telephone or mail. And those differences are all before accounting for weighting practices, the added variability that likely voter models introduce, and question wording or order effects.
Looking over the poll results, Zukin said, “I see reasonable variation and a lot of reasons for reasonable variation.”
For many outside the field of survey research, that could mean a reset of expectations.
“I think poll watchers would do well to accept that the field is in an era of transition, and one consequence of that is a frustrating amount of noise in subgroup estimates,” Kennedy said.
Zukin’s advice is to turn to the averages and focus on trends within individual polls, to ensure consistency across methods. “We’re less sure that the number is 33%. It could be 35% or it could be 31%, but if we do it the same way every time, we should know if it goes up or down.”
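Zukin’s point can be sketched with made-up numbers: two pollsters whose methods put the level several points apart can still register the same movement within their own series, which is why within-poll trends are more informative than gaps between polls.

```python
# Hypothetical series: two pollsters with different methods disagree
# on the level but each repeats its own method wave after wave.
pollster_a = [33, 34, 35, 37]  # successive waves, same method each time
pollster_b = [29, 30, 31, 33]

def trend(series):
    """Net change from the first wave to the most recent wave."""
    return series[-1] - series[0]

print(trend(pollster_a))  # both series show the same 4-point rise
print(trend(pollster_b))
# Comparing A's latest (37) to B's latest (33) mostly measures
# methodological differences, not actual movement in the race.
```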
In other words, on this roller coaster, watch for change that makes your stomach drop.