NATIONWIDE — On the morning of Wednesday, Nov. 9, 2016, many in the U.S. woke up wondering how the pre-election polls got it so wrong when they predicted Democrat Hillary Rodham Clinton would win the race for the White House. Many blamed the media.
As a result, many voters today are asking: Why should we trust the polls in 2020 when they led us so astray four years ago?
Josh Clinton, a political science professor at Vanderbilt University and the co-director of the Center for the Study of Democratic Institutions, said before voters put their faith in polls, they first need to understand how polls work. To help voters, both pollsters and the media who disseminate their polls have a responsibility to explain their methodology.
“Polling is both an art and a science,” said Clinton, who worked with the American Association for Public Opinion Research, or AAPOR, to dissect the 2016 scenario and is keeping a keen eye on what’s happening across the country ahead of Nov. 3. (Josh Clinton is not related to Hillary Clinton.)
“There are a lot of misperceptions about what polls can and cannot do,” he said. “It’s really super important for our democracy in general for people to understand and critically assess what they are being shown.”
Here is a user-friendly guide to watching the polls in 2020.
What went wrong in 2016?
Analysts and scientists with the American Association for Public Opinion Research, a national, nonpartisan research organization, found in an extensive evaluation that several major issues arose in the 2016 polling process, some of them unforeseeable.
First of all, the national polls didn’t get it wrong, the evaluation said. The polls predicted Hillary Clinton would win the popular vote with about a 3-point lead. That was pretty close: Clinton won the popular vote by 2 percentage points over Republican Donald Trump. She lost because she did not win enough Electoral College votes.
The state-level polls, however, were problematic, and for some unforeseeable reasons.
One reason is that many voters in the three critical battleground states of Wisconsin, Florida, and Pennsylvania made their voting decision at the last minute. A large majority of those late deciders voted for Trump, and polls taken before they made up their minds could not reflect that.
State-level polls were also off because they did not reflect the disparity between voters with a college or post-graduate education and voters with a high school education or less.
“Part of the art of polling is trying to figure out what you think the electorate is going to look like,” Clinton said.
That proved to be a challenge in 2016.
Pollsters look at previous voting trends, Census reports, and other demographic data to help determine their methodology. Pollsters in 2016 looked at voting demographics and statistics from 2012 and 2008, when the polls predicted the presidential race’s outcome fairly accurately.
In previous elections, college- and post-graduate-educated voters tended to vote much the same way as those with a high school education or less. Both groups tended to vote Democratic in 2012 and 2008. So, pollsters who surveyed these two groups of voters did not see a need to adjust their methodology or to weight respondents by education level.
They more or less lumped both groups together as typical Democratic voters, and the results overestimated support for Clinton.
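To make the weighting issue concrete, here is a minimal sketch in Python with invented numbers (they are not drawn from any real 2016 poll): if college graduates make up a larger share of a poll’s respondents than of the actual electorate, and the two groups vote differently, the raw result overstates the candidate the graduates favor, while re-weighting each group to its real-world share corrects for that.

```python
# Hypothetical illustration of why weighting by education matters.
# All figures are invented for the sketch, not taken from any actual poll.

# Suppose 60% of the poll's respondents are college graduates, but only
# 40% of the likely electorate is. The two groups also support the
# candidate at different rates.
respondents = {
    "college":    {"share_of_sample": 0.60, "support": 0.55},
    "no_college": {"share_of_sample": 0.40, "support": 0.40},
}

# Target shares a pollster might take from Census or turnout benchmarks.
electorate_shares = {"college": 0.40, "no_college": 0.60}

# Unweighted estimate: average respondents as they happened to fall into the sample.
unweighted = sum(g["share_of_sample"] * g["support"] for g in respondents.values())

# Weighted estimate: re-balance each group to its share of the electorate.
weighted = sum(electorate_shares[name] * g["support"] for name, g in respondents.items())

print(f"Unweighted support estimate: {unweighted:.1%}")   # 49.0%
print(f"Education-weighted estimate: {weighted:.1%}")     # 46.0%
```

The arithmetic is simple; the “art” Clinton describes is choosing the target shares, because pollsters have to guess what the electorate will look like before anyone votes.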
Pollsters also could not have known that there would be a significant increase in turnout in traditionally rural Republican areas compared with 2012. A large percentage of these voters supported Trump, affecting the outcome of the election in ways the polls did not show, according to the AAPOR report.
So what are the pollsters doing differently this year?
In the wake of the 2016 polling debacle, AAPOR has promoted its Transparency Initiative, which encourages polling and public opinion research organizations to sign on as members and pledge to disclose their methodology, giving the public an accurate picture of how their polls were conducted.
The idea is that the more consumers know, the better they can understand what the polls reflect.
There will always be unpredictable variables, such as last-minute changes in a race or in the electorate, Clinton said.
The media also made mistakes in 2016. How are they doing this year when it comes to reporting pre-election polls?
The day after the 2016 election, many major news outlets took a hard look at what they got wrong. Newsrooms examined AAPOR’s detailed assessment of the 2016 polls and used it to evaluate how their own coverage contributed to the public’s perception of the predicted outcome of the presidential election.
Today, more often than not, when major news outlets like CNN or The New York Times report poll results, they present them with a caveat acknowledging that the polls they reported in 2016 did not reflect the final results.
“The media are a little more cognizant this time around when talking about the polls and careful to note what happened in 2016 in the state-level polls and the fact that these results aren’t necessarily what happens on Election Day,” Clinton said.
Some media have benchmarked how polls are doing this year compared with 2016, which is good, Clinton said.
“The margin of error is just one part of what goes into the poll. There is this expertise that pollsters use, and making voters aware of those decisions that are being made and the limitations of polls is important,” he said.
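For readers who want to see what the margin of error does and does not cover, here is a small illustrative calculation (the sample size and support level are made up): the commonly reported figure only measures random sampling error for a simple random sample, not the weighting and likely-voter judgment calls Clinton describes.

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """Approximate 95% margin of error for a proportion p
    estimated from a simple random sample of size n."""
    return z * math.sqrt(p * (1 - p) / n)

# Hypothetical poll: 1,000 respondents, candidate at 50% support.
print(f"±{margin_of_error(0.5, 1000):.1%}")  # about ±3.1 percentage points
```

Everything beyond that figure, such as who counts as a likely voter and how responses are weighted, adds uncertainty the reported margin does not capture.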
The big question: Can voters trust the polls?
In 2018, two years after Trump was elected, a survey conducted by The Hill newspaper and HarrisX showed that 52 percent of registered voters did not believe polls were accurate. Nineteen percent of respondents said polls were “almost never” accurate.
“You can trust them in the sense that most of the people who are doing them, especially the ones who are members of the AAPOR Transparency Initiative, are doing their level best to give you a sense of what the electorate looks like right now,” Clinton said.
“But that said, the polls can be … there can be error involved there,” he said.
Polling organizations aren’t trying to be misleading, but things can change at the last minute, putting accurate results out of reach, as happened in 2016 when a considerable number of voters in swing states decided late.
Polls only give a snapshot of what was true when they were conducted, not a guarantee of what is going to happen in the future, Clinton said.
“At the end of the day, polls are interesting, but they shouldn’t persuade someone on how to cast their vote or to vote or not,” Clinton said.
“The only poll that really, really matters is the one that happens on Nov. 3,” Clinton said.