What do all the different polls mean for NC voters?

Campaign signs crowd street corners across North Carolina. (Ann Doss Helms / WFAE)

In North Carolina, new polls come often. 

In early September, Vice President Kamala Harris was up one point over former President Donald Trump among registered North Carolina voters, according to Elon University polling. A week later, a Meredith College poll had both presidential candidates winning 48% of likely voters in the swing state.

In late September, East Carolina University Center for Survey Research polled about 1,000 likely voters, 49% of whom favored Trump. Harris polled at 47%. This week, surveys conducted by Elon University and SurveyUSA, in conjunction with WRAL-TV, found a dead heat, with both candidates polling at 47% among likely voters. 

While these polling fluctuations may appear to mean something, they don’t. 

“Harris +1, Trump +2. That means 50/50,” Social Science Research Solutions chief methodologist Cameron McPhee said. “That just means it’s a tie.”

The reason is the margin of error, a number that “reflects the random uncertainty around a sample of whatever size the researcher conducted,” according to Chase Harrison, associate director of the Harvard Program on Survey Research. 

“If you imagine that you're doing different samples of 1,000 North Carolinians again and again and again and again and again, you're going to get slightly different groups of people,” he said. “Overall, on average, the people you have will be representative of the state, but any given group will not be 100% perfectly representative with zero error.”

The margin of error for the aforementioned polls ranges from 3 to 3.74 percentage points. That means the actual opinion of the population at any given time could be lower or higher than the reported polling percentage by as much as the margin of error.

For example, East Carolina University’s polling, which had a 3% margin of error, would technically be correct if Trump earned between 46% and 52% of the actual vote, and Harris got 44% to 50%. That leaves room for both candidates to win. 
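To make the arithmetic concrete, here is a minimal sketch of the standard 95%-confidence margin-of-error formula for a sample proportion, applied to the ECU poll's rough numbers (Trump 49%, Harris 47%, about 1,000 respondents). This is textbook statistics, not any particular pollster's method:

```python
import math

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    """95% margin of error for a sample proportion p from n respondents."""
    return z * math.sqrt(p * (1 - p) / n)

n = 1000                      # roughly the ECU poll's sample size
trump, harris = 0.49, 0.47    # the reported topline results

moe = margin_of_error(0.5, n)  # pollsters usually report the worst case, p = 0.5
print(f"Margin of error: ±{moe:.1%}")                        # ±3.1%
print(f"Trump:  {trump - moe:.0%} to {trump + moe:.0%}")     # 46% to 52%
print(f"Harris: {harris - moe:.0%} to {harris + moe:.0%}")   # 44% to 50%

# The two ranges overlap, so the poll cannot separate the candidates.
print("Statistical tie:", trump - moe <= harris + moe)       # True
```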

“I wish people would just read within the margin of error as ‘tie,’” Western Carolina University political science professor Chris Cooper said. “That's less satisfying. I understand. It feels less secure, but it is more accurate. And that is my big read of the polls this cycle — is we just don't know who's going to win.”

Margin of error is just one component of polling, particularly in a battleground state like North Carolina. Carolina Public Press spoke to half a dozen polling or political science experts about how pollsters conduct polls, when to pay attention to them and how to know when they can be trusted.

What is election polling? 

General public opinion polls are designed to educate the public and policymakers about how the populace feels about a certain topic or issue. 

But election polling is unique. 

While public opinion polls reveal what a known population thinks right now, said Patrick Murray, director of Monmouth University’s Polling Institute, election polls try to estimate what an unknown population will do in the future. 

Pollsters can guess which people are likely to vote based on past voter behavior or statements from voters themselves, but they don’t have crystal balls. Their estimates of turnout demographics could be wrong. People also might change their minds between the time they respond to a poll and when they cast their ballots.

“We have to accept … that election polls are going to have more error than a typical public opinion poll to begin with,” Murray said. 

The different types of election polling

Election polls come in several varieties. Universities and academic institutions conduct many of them. Others are paid for by campaigns themselves or by think tanks that may have partisan leanings. Exit polls try to measure how people are voting as they leave the polling place.

Peter Francia, director of the Center for Survey Research at East Carolina University, said the academic pollsters have only one agenda: being correct.

“It's not to try and influence voters to vote a certain way,” he said. “Pollsters’ entire reputations rest on getting it right.”

Campaign internal polls, on the other hand, deserve suspicion, Whitney Ross Manzo, assistant director of the Meredith Poll, said. Campaigns may selectively release polling that makes them look good while withholding negative results.

Think tank pollsters may be less reliable than academic polls, since they may have partisan interests. For example, the polls conducted by the right-leaning Carolina Journal, an offshoot of the John Locke Foundation, don’t appear in the top 500 of 538’s pollster rankings. Meanwhile, the East Carolina University Center for Survey Research is ranked 25th in the nation, Elon University is ranked 38th and Meredith College is ranked 147th.

“First thing for readers is, look at who conducted the poll,” Cooper said. “Ask yourself, did they have a vested interest in the outcome? If the answer is yes, just quit reading.” 

News organizations sometimes conduct polls, too. The New York Times/Siena College poll and the ABC News/Washington Post poll are ranked as the top two pollsters by transparency, error and bias, per 538’s rankings. CNN’s polls aren’t far behind. Of course, these rankings themselves may be considered subjective and vulnerable to bias; ABC owns 538.

RealClearPolitics’ Pollster Scorecard ranks Reuters/Ipsos polls, ABC News/Washington Post polls, FOX News polls and NBC News/Wall Street Journal polls the highest among national pollsters. Nate Silver’s Silver Bulletin is another resource that aggregates polls and ranks them on how statistically biased they are toward one political party or the other. 

These polling averages vary somewhat depending on which polls they include and how they weight various polls based on relative quality or perception of bias.

Looking at an aggregate of polls instead of a single poll is better anyway, Cooper said.

“If you get a series of polls that all give you roughly the same answer, the odds that it's driven by sampling error are lower,” he said. 
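A small simulation illustrates Cooper's point. Assume, purely for illustration, a population where a candidate's true support is exactly 50%: single polls of 1,000 people scatter noticeably more than averages of five such polls.

```python
import random
import statistics

random.seed(42)
TRUE_SUPPORT, N = 0.50, 1000   # hypothetical population, poll size

def one_poll() -> float:
    """Simulate one poll: share of N random voters backing the candidate."""
    return sum(random.random() < TRUE_SUPPORT for _ in range(N)) / N

single_polls = [one_poll() for _ in range(2000)]
five_poll_avgs = [statistics.mean(one_poll() for _ in range(5)) for _ in range(2000)]

print(f"Spread of single polls:    ±{statistics.stdev(single_polls):.1%}")    # about ±1.6%
print(f"Spread of 5-poll averages: ±{statistics.stdev(five_poll_avgs):.1%}")  # about ±0.7%
```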

Harrison recommended leaning on rankings like these to determine how reliable a particular poll may be. But for readers who don’t want to take a shortcut, looking at the methodology is a good next step. Transparency is crucial, he said.

“Are they giving me enough information to say, this was how we sampled, we used telephone interviewers, we sampled from a voter list, or we sampled from a list of all phone numbers?” Harrison said. “Are they willing to tell you how they weighted the data, how they've adjusted the data? Are they willing to show you all of the questions that they asked?” 

Pollsters can also join the American Association for Public Opinion Research’s Transparency Initiative, a voluntary commitment to methodological transparency.

“The best polls will give you as much information as possible on how they do it,” Murray said.

As a final check, readers should see if the results are within the margin of error. If they are, it’s a statistical tie, and they shouldn’t draw any conclusions from it. 

What about exit polls? Will they be reliable? 

They can be, Cooper said, but the issue is that early voting is increasingly common, so to conduct an accurate exit poll, pollsters have to work harder. 

McPhee said exit polls are mostly useful for news media outlets trying to determine whether they can start calling races for one candidate or another. 

“I don't look to the exit polls to tell me anything real, concrete,” she said. “I think they're one tool in the arsenal of Election Day news.” 

Election polling is useful for more than the top-line results, Murray said. In fact, it’s typically more reliably accurate at identifying what’s happening “underneath the surface,” he said.

It answers questions like, “Why is the election so close?” “What issues are important to the electorate?” and “What demographic groups are moving one way or the other in support of a certain candidate?”

For example, while the most recent Elon University poll was tied, 71% of respondents ranked the economy as one of their top three issues this election cycle, while 41% named immigration and 34% named health care as one of their top issues.

Among the registered voters surveyed, 46% said they would vote on Election Day, while 41% said they would vote early and 11% would cast an absentee ballot. Republicans were more likely to vote in person on Election Day, while independents were more likely to take advantage of early voting. Democrats split between early voting and Election Day voting. 

How are election polls conducted?

When a group decides to conduct an election poll, it has to consider several factors. 

First, what population do they want to study? Registered voters, likely voters or all eligible adults? 

There’s a general sense that pollsters studying registered voters want a sample size between 800 and 1,000, with perhaps slightly more for likely voters, McPhee said. The more people who are surveyed, the smaller the margin of error tends to be. When thinking about sample size, pollsters also need to consider how “heterogeneous,” or diverse, the population is.

“We want to make sure we have enough voters in all of those different groups, so that not only are we understanding sort of broadly what we think the likely electorate is going to be and who they're going to vote for, but what are the younger voters likely to do? What are the Black voters likely to do?” McPhee said. 
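Inverting the margin-of-error formula from earlier shows why samples cluster around 800 to 1,000 respondents; shrinking the margin of error further gets expensive fast. Again, this is a textbook-arithmetic sketch, not any pollster's actual tool:

```python
import math

def sample_size_needed(target_moe: float, z: float = 1.96) -> int:
    """Respondents needed for a given 95% margin of error (worst case, p = 0.5)."""
    p = 0.5
    return math.ceil(z ** 2 * p * (1 - p) / target_moe ** 2)

for moe in (0.05, 0.03, 0.02, 0.01):
    print(f"±{moe:.0%} margin of error -> {sample_size_needed(moe):,} respondents")
# ±5% -> 385, ±3% -> 1,068, ±2% -> 2,401, ±1% -> 9,604:
# halving the margin of error roughly quadruples the sample needed.
```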

Second, a pollster must figure out how to access voters. Would they rather randomly sample from the entire population, then screen to see who is a registered or likely voter? Or would they rather sample randomly from public voter registration databases?

Third, pollsters have to contact the randomly selected group of people they’ve chosen. One option is live telephone interviews — although nobody answers their phone anymore, Manzo said. 

Other options are texting, emailing, mailing or using opt-in online surveys. 

McPhee expressed doubt about opt-in online panels. 

“These are online sources of data where people can go online and click a button and it says, ‘Take a poll now,’ and these are places where individuals sort of choose the survey, rather than the survey choosing the individuals,” she said. “And there's increasingly known data quality issues, including fraud and bots associated with that type of polling and research.” 

Manzo said that while the Meredith Poll uses opt-in online panels provided by an external vendor, its large sample size brings the results close to those of random sampling.

There is the potential for systematic bias, because not every type of person in the electorate would answer a survey for points or rewards, “but that's going to happen in literally any poll that you have, and so that's what the margin of error is accounting for,” Manzo said. 

Some pollsters mix methods to reach as many people as possible. Even then, response rates have become a serious problem, Francia said. 

“I think a typical response rate is somewhere between 1% and 3%, and in some cases it can even dip below 1%,” he said. “That complicates things, because then you worry that the people who do respond, are they systematically different from the people who don't?”

It may be even harder to get a response in North Carolina, Murray said. 

“The thing is that if you're in a swing state, you're getting more contacts from both campaigns and from pollsters, and what we know is that the more contact people get, the less likely they are to pick up the phone,” he said. 

Finally, after collecting all their responses, pollsters weight their data.

“That is to say, you collect demographic characteristics and then adjust the responses that you have to match population statistics,” Harrison said. 

If a pollster couldn’t get enough Hispanic male voters between the ages of 18 and 35 to respond, they might weight their individual responses higher than those of a demographic that had a higher response rate. 
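In its simplest form, that weighting multiplies each response by the ratio of a group's population share to its sample share. The shares below are invented for illustration; real pollsters typically weight on many variables at once:

```python
# Hypothetical shares, invented for illustration only.
sample_share = {"Hispanic men 18-35": 0.02, "everyone else": 0.98}      # in the poll
population_share = {"Hispanic men 18-35": 0.05, "everyone else": 0.95}  # in the electorate

weights = {g: population_share[g] / sample_share[g] for g in sample_share}
print(weights)
# {'Hispanic men 18-35': 2.5, 'everyone else': 0.969...}
# Each under-sampled respondent now counts as 2.5 people in the weighted totals.
```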

Each pollster’s weighting is their “secret sauce,” Murray said. 

North Carolina pollsters have to consider the population of unaffiliated voters in the state — a larger percentage of registered voters than Democrats or Republicans — when weighting, Cooper said. 

Unique weighting methods are a major reason why polls conducted at similar times among similar populations might differ. 

For example, in 2016, The New York Times conducted an experiment. It shared its polling data with four reputable pollsters and asked them to apply their own likelihood models, based on who they expected to turn out to vote. Each pollster arrived at a different answer, from Trump +1 to Clinton +4.
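A toy version of that experiment is easy to sketch: run the same raw responses through two different turnout models and the topline shifts. Both the responses and the turnout probabilities below are invented for illustration:

```python
# Invented responses: (candidate preference, modeled chance of voting).
responses = [("Trump", 0.9), ("Clinton", 0.5), ("Trump", 0.8),
             ("Clinton", 0.4), ("Clinton", 0.9), ("Trump", 0.3)]

def topline(turnout_weight) -> dict:
    """Weight each respondent by their modeled probability of voting."""
    totals: dict[str, float] = {}
    for candidate, p_vote in responses:
        totals[candidate] = totals.get(candidate, 0.0) + turnout_weight(p_vote)
    grand = sum(totals.values())
    return {c: round(100 * v / grand, 1) for c, v in totals.items()}

print(topline(lambda p: 1.0))  # Model A, everyone votes: Trump 50.0, Clinton 50.0
print(topline(lambda p: p))    # Model B, turnout tracks the model: Trump 52.6, Clinton 47.4
```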

Communication methods, recent news events, question wording and the order of answers are other reasons polls might offer slightly different results, Manzo said. 

“There is a bias where people will pick the A response. It's a slight bias, but it is there,” she said. “...When people don't have a quick response off the top of their head, or they don't know the answer, then they'll default to A.” 

Can I trust pollsters after 2016? 

Election polls have gotten a bad rap recently, and undeservedly so, some pollsters say.

In 2016, Hillary Clinton was up by an average of three points over Trump across national polls.

 “And everybody went, ‘Wow, well, that was wrong. Donald Trump won, and so the polls really got it wrong,’” Francia said. “Well, those are national polls and Hillary Clinton won the national popular vote by two points…By any reasonable measurement or test, the polls in 2016, at least at the national level, were really good. They were off by one point.”

Cooper said the nation is increasingly facing situations where the candidate who wins the popular vote and the candidate who wins the Electoral College may be different. Such was the case in 2016, and it may be the case in 2024. 

But there were some issues in the battleground states in 2016, Cooper said. 

“We didn't see the so-called diploma divide developing,” he said. “Cleavages used to be more about income, and now they're more about education.”

Not all pollsters included education levels, or education levels by race, as demographic groups in their samples that election cycle. Before then, the focus was primarily on age, race, sex and region, McPhee said.

Pollsters have adjusted since, she said, by adding education and also making sure to poll different types of Republicans, from moderate Republicans to Trump supporters. She feels optimistic about 2024 polling after the 2022 midterm polling was largely successful. 

However, she can’t be entirely certain whether polls were more accurate because Trump wasn’t on the ballot or because high-turnout voters both vote in midterms and are more likely to answer polls.

Either way, Francia would like to put the narrative that polls are untrustworthy in the past. 

“I don't think polling is in a place where people should mistrust it at all,” he said. “I think that the record has been pretty good with a few exceptions.” 

When should you pay attention to polling? 

All things being equal, polls conducted closer to an election tend to be more accurate.

“In terms of election polls, you should pay the most attention, like right now, within the week of the election because before then, there's too many people that haven't made up their minds,” Manzo said. “I don't believe a poll that's done before August, because that's too far out from the election for people to even really give a true response.”

However, Manzo gave one caveat. She said she wouldn’t trust any polls conducted in North Carolina since Tropical Storm Helene hit Western North Carolina in late September. 

“You're not going to have an accurate, representative sample,” Manzo said. “I would say there's a whole swath of the state that you're not going to have access to people, and if you do have access to people, the ones who can respond to you are going to obviously be the ones in more privileged positions who got their water and electricity back faster.” 

Polls can only offer so much. 

They’re unable to measure which candidate has the better get-out-the-vote operation, but that could be the difference between victory and loss in North Carolina, Harrison said. And they can’t distinguish between candidates whose support falls within the margin of error.

The polls are doing their job at the moment, Murray said, “which is telling us that the national election is extremely close and could go either way.” 

“But if you need to predict who's going to win by one point or the other, the polls can't tell you that.” 

This article first appeared on Carolina Public Press and is republished here under a Creative Commons license.
