How pollsters are trying to get it right in the Trump-Harris race



Illustration by Courtney Jones; photos by Alex Brandon and Matt York, Associated Press, and Adobe Stock

Pollsters say they are employing a range of methodologies to ensure that they are not underestimating Donald Trump amid survey after survey that shows a dead heat in various swing states.

If the polls are accurate, the battle between the former president and Vice President Harris will truly go down to the wire.

But the numbers are being viewed with skepticism in some quarters given how pollsters missed how strong Trump’s support was in the 2016 and 2020 races. The close numbers have some Democrats hitting the panic button, as they fear Harris needs to lead by a few points for them to feel safe.

“We’ve done everything we know how to do,” said Charles Franklin, the director of the Marquette Law School Poll, based in Wisconsin. “We’ve worried about it a lot. We’ve made some changes to try to address it, but we’ll only truly know in November when we get the vote count back.”

Polling has been the standard way of measuring where a presidential election stands for decades, going back to the 1930s.

But the industry was rocked by the two most recent presidential elections in 2016, when Trump scored an upset victory over Hillary Clinton to be elected president, and 2020, when President Biden prevailed but by a much closer margin than some had anticipated.

Although national polling heading into the 2016 election gave Clinton a lead of a couple of points, similar to her eventual popular vote margin, Trump outperformed expectations in the key states and won the election. Polling averages in the main battlegrounds were even further off four years later, even though they pointed to the right winner.

But pollsters have emphasized that each election cycle is different, and observers should not just presume that Trump is performing better than polling currently shows because of the past.

John Cluverius, the assistant director of the Center for Public Opinion at the University of Massachusetts Lowell, said pollsters are always tweaking their methodologies from cycle to cycle. They must respond to external factors, he said, such as the fact that polling is becoming increasingly expensive, especially if a pollster wants to be accurate. At the same time, the country has become even more closely divided and polarized. 

Pollsters have made some specific adjustments in response to the past two presidential election results. 

Cluverius said many pollsters in 2016 did not weight their results by educational background to be representative of the population. That was not a Trump-specific issue, he said, but it was relevant that year because the GOP candidate performed strongly among voters without a college degree. 
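
As a rough illustration of that kind of education weighting, the sketch below reweights a toy sample so its education mix matches an assumed population. The category shares, responses and candidate labels are invented for the example, not drawn from any real survey.

```python
# A toy sketch of post-stratification weighting by education.
# All shares and responses below are invented for illustration.
from collections import Counter

sample = [
    {"education": "college", "candidate": "A"},
    {"education": "college", "candidate": "B"},
    {"education": "college", "candidate": "A"},
    {"education": "no_college", "candidate": "B"},
]

# Assumed share of each education group in the target population.
population_share = {"college": 0.40, "no_college": 0.60}

# Share of each group actually present in the (unbalanced) sample.
counts = Counter(r["education"] for r in sample)
sample_share = {group: n / len(sample) for group, n in counts.items()}

# Weight = population share / sample share, so underrepresented groups count more.
weights = {group: population_share[group] / sample_share[group] for group in population_share}
for r in sample:
    r["weight"] = weights[r["education"]]

total_weight = sum(r["weight"] for r in sample)
weighted_a = sum(r["weight"] for r in sample if r["candidate"] == "A") / total_weight
unweighted_a = sum(1 for r in sample if r["candidate"] == "A") / len(sample)
print(f"Unweighted support for A: {unweighted_a:.2f}")
print(f"Weighted support for A:   {weighted_a:.2f}")
```

In this toy case the non-college group is scarce in the sample, so each of those respondents counts for more once weighted, and the headline number shifts accordingly.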

He said that issue was a simpler fix, but 2020 demonstrated how pollsters have struggled with non-response among certain voters, many of whom supported Trump that year. 

“The clearest sort of story that you can tell about 2020 is that there were specific Trump voters, not all Trump voters, the sort of very specific Trump voters who were less likely to pick up the phone in 2020, and there wasn’t an obvious way to counter that,” Cluverius said. 

Franklin said the largest change in polling methodology since 2016 has been a shift from all interviews happening by phone to hybrid sampling as people have become increasingly less likely to pick up the phone. 

“By 2022, we would dial 100 numbers in order to get one person to pick up,” he said. “That was just driving costs way, way up. And it wasn’t people refusing to do interviews. It was not picking up so that we couldn’t get the foot in the door.” 

Marquette now works off a list of registered voters, which lets the pollsters know that the people they contact are registered and allows them to match each voter to a phone number and email address so they can text or email that person. As a result, 80 percent of interviews are now conducted online and only 20 percent by phone, Franklin said. 

Regarding the specific circumstances of 2016 and 2020, he said pollsters are making an “extra effort” to reach Trump-leaning areas that were underrepresented in past years. 

“That’s hard to do in the sense that I can call the right people, or I can email them, but I can’t make them respond, and so that remains the challenge to the whole industry,” Franklin said. 

But he added that he believes the sampling has improved and that Marquette is doing a better job of reaching a representative group of respondents. 

Cluverius noted other methods pollsters have implemented to address the issue with non-response, like specifically increasing the number of Republicans polled or using a concept called “weighting on recall vote.” 

This method works by asking respondents how they voted in the last election and then weighting the poll so the sample matches the actual margin of that result. But it, along with other tactics, has some flaws. 
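
As a rough illustration, here is how recall-vote weighting can be applied to a toy sample. The 2020 target shares, the four respondents and the candidate labels are invented for the example and are not any pollster’s actual numbers.

```python
# A toy sketch of "weighting on recall vote": the sample is reweighted so that
# respondents' self-reported 2020 votes match the actual 2020 result.
# The target shares and the four-person sample are invented for illustration.
from collections import Counter

# Assumed 2020 two-party result to weight toward.
target_recall_share = {"Biden": 0.52, "Trump": 0.48}

respondents = [
    {"recall_2020": "Biden", "vote_2024": "Harris"},
    {"recall_2020": "Biden", "vote_2024": "Harris"},
    {"recall_2020": "Biden", "vote_2024": "Trump"},
    {"recall_2020": "Trump", "vote_2024": "Trump"},
]

recall_counts = Counter(r["recall_2020"] for r in respondents)
recall_share = {k: n / len(respondents) for k, n in recall_counts.items()}

# Each respondent's weight pulls the recalled-vote mix toward the target margin.
for r in respondents:
    r["weight"] = target_recall_share[r["recall_2020"]] / recall_share[r["recall_2020"]]

total = sum(r["weight"] for r in respondents)
for candidate in ("Harris", "Trump"):
    share = sum(r["weight"] for r in respondents if r["vote_2024"] == candidate) / total
    print(f"{candidate}: {share:.2f}")
```

The weakness is visible in the sketch itself: the weights are only as good as the recalled votes, so if respondents misremember how they voted in 2020, the adjustment pushes the sample in the wrong direction.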

New York Times chief political analyst Nate Cohn explained last weekend why the Times/Siena College poll doesn’t employ this method, including that voters may not remember, or may misremember, how they voted, so the approach may overestimate support for the party that lost the last election. 

But Cohn said more pollsters are using this method as they believe it could be more accurate now in a more politically engaged environment. 

Cluverius said he believes if weighted properly, this method can be “fairly reliable.”

He said UMass Lowell weights its registered voter sample by respondents’ self-reported 2020 votes, but that it also uses a likely voter model based on how respondents answer questions about their intention to vote, how closely they have been following news about the election and how often they say they vote in presidential elections. 
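
To make that concrete, here is a minimal sketch of a likely-voter screen along those lines. The questions, the 0-to-2 answer coding and the cutoff are assumptions made for this example, not the details of UMass Lowell’s actual model.

```python
# A toy sketch of a likely-voter screen built from self-reported answers.
# The questions, the 0-2 coding and the cutoff are assumptions for illustration.

def likely_voter_score(intends_to_vote: int, follows_news: int, past_vote_frequency: int) -> int:
    """Each argument is a self-reported answer coded 0-2; higher totals mean more likely to vote."""
    return intends_to_vote + follows_news + past_vote_frequency

respondents = [
    {"id": "R1", "intends_to_vote": 2, "follows_news": 2, "past_vote_frequency": 2},
    {"id": "R2", "intends_to_vote": 1, "follows_news": 0, "past_vote_frequency": 1},
    {"id": "R3", "intends_to_vote": 2, "follows_news": 1, "past_vote_frequency": 2},
]

CUTOFF = 4  # assumed threshold for counting someone as a likely voter

likely_voters = [
    r["id"] for r in respondents
    if likely_voter_score(r["intends_to_vote"], r["follows_news"], r["past_vote_frequency"]) >= CUTOFF
]
print(likely_voters)  # R1 and R3 clear the assumed cutoff; R2 does not
```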

“In the end, it really boils down to trying to be as transparent as possible, trying to treat a poll as the start of a conversation about the state of the race, rather than the end of a conversation,” he said. 

Some pollsters maintain that the best approach is to stay the course, especially those whose surveys showed smaller errors in 2016 and 2020. 

Jim Lee, the president and CEO of Pennsylvania-based Susquehanna Polling & Research, said his firm has made “little, if any” change to its methodology because it was one of the firms that did not significantly undercount Trump voters in the lead-up to the 2016 and 2020 elections. The firm was only polling Pennsylvania in 2016 but had begun conducting national polls as well by 2020. 

An analysis from RealClearPolitics found that Susquehanna’s polling had, on average, the second-smallest error both across the 2014 to 2022 cycles and in the 2020 election. 

Lee argued that the idea of Trump voters being underrepresented was more legitimate in the past when more people were uncomfortable telling pollsters that they supported him, but he believes that was a “unique phenomenon.” 

He said weighting the polls to increase the weight of Trump voters could be a “Trojan horse” showing Trump to be in a better position than he actually is because pollsters need to make “a lot of assumptions” to use that method. 

“That obviously then puts Trump in a much better position on the head-to-head question,” Lee said. “But you have other firms like me that haven’t really had first-hand experience with that problem, and so we haven’t felt like our system was broken.” 

Cluverius said he hopes regardless of the results that this year will be an opportunity to have a “vigorous conversation” about polling and methodologies. 

“We think about democracy. It’s the worst form of government except for all the others,” he said. “Polling is the worst form of measuring weight and public opinion in the population except for all the others. We learn a lot more from high quality polling than we do from focus groups or vibes or early vote data or qualitative interviews.”


