
Change Is Afoot in the World of Election Polling

1 October 2014

With the U.S. mid-term elections to be held next month, I invited Scott Keeter, director of survey research for the Pew Research Center in Washington, DC, to write a column about election polling. Keeter is a past president of the American Association for Public Opinion Research (AAPOR) and has been an election night analyst of exit polls for NBC News since 1980. His published work includes books and articles about public opinion, political participation and civic engagement, religion and politics, American elections, and survey methodology. A native of North Carolina, he earned a PhD in political science from The University of North Carolina at Chapel Hill. ~Nathaniel Schenker

Scott Keeter

Election polls provide one of the most visible applications of statistics at work. Polls—especially those in presidential elections—have been quite accurate over the past several election cycles, thus providing an important public confirmation of the validity of sampling and statistical methods in general.

But this is a time of great challenges to polling. It is also a time of great opportunities. While traditional telephone polls are becoming much more difficult to do, new approaches to conducting surveys may help solve some of their current problems and even improve the quality of data. However, these new approaches are highly controversial within the polling profession, and much of the controversy revolves around statistical theory and practice.

Traditional polling has relied on probability sampling for the past 60 years or so. Most polls today are still conducted by telephone using either random digit dialing or random samples of voters from registration lists. Perhaps the biggest challenge faced by political polls, as well as nearly all surveys, is growing nonresponse. While large, well-funded, face-to-face surveys can achieve high response rates, most political and opinion polls—even those that make multiple efforts to contact respondents—have response rates in the single digits or the low to mid teens. Battling nonresponse increases the costs and time required to complete a survey.

Low response rates also raise issues of credibility. Even though there is good evidence that polls still produce representative samples of the public, we pollsters are often asked how data from such surveys can be valid if the assumptions underlying probability sampling are violated so badly. The answer is twofold. First, the performance of the polls in previous elections tells us the nonrespondents are largely missing at random with respect to the most important variables of interest, such as candidate choice, partisanship, and ideology. Second, weighting adjustments continue to be effective at correcting biases in demographic characteristics related to these central variables of interest—in part because the demographic biases in obtained samples are not particularly severe on most variables.
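
To make the weighting idea concrete, here is a minimal post-stratification sketch in Python. Everything in it (the sample records, the age categories, and the population shares) is hypothetical and purely illustrative; in practice, pollsters typically rake on several demographic margins at once rather than adjusting a single variable as done here.

```python
# Minimal post-stratification sketch: reweight an obtained sample so that its
# age-group distribution matches known population shares. All records and
# shares below are hypothetical and purely illustrative.

sample = [
    {"age_group": "18-29", "candidate": "A"},
    {"age_group": "18-29", "candidate": "B"},
    {"age_group": "30-64", "candidate": "A"},
    {"age_group": "30-64", "candidate": "B"},
    {"age_group": "30-64", "candidate": "B"},
    {"age_group": "65+", "candidate": "A"},
]

# Assumed population shares for each age group (hypothetical benchmarks).
population_share = {"18-29": 0.21, "30-64": 0.60, "65+": 0.19}

# Observed shares in the sample.
n = len(sample)
sample_share = {
    group: sum(r["age_group"] == group for r in sample) / n
    for group in population_share
}

# Post-stratification weight: population share divided by sample share.
for r in sample:
    group = r["age_group"]
    r["weight"] = population_share[group] / sample_share[group]

# Weighted estimate of support for candidate A.
support_a = sum(r["weight"] for r in sample if r["candidate"] == "A") / sum(
    r["weight"] for r in sample
)
print(f"Weighted share for candidate A: {support_a:.2f}")
```

The same logic extends to raking (iterative proportional fitting) when several demographic margins must be matched simultaneously.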

Another challenge that has confronted pollsters using telephone surveys over the past decade is coverage of the population of interest. According to data from the National Health Interview Survey, 39% of adults had only a cellphone during the latter half of 2013, up from about 5% in 2004. Adults who have only a cellphone are younger, poorer, and more likely to be renters, to live with unrelated adults, and to be Hispanic than those who also have a landline phone. Samples based only on landlines thus have significant coverage biases. For example, only about 5% of adults in our landline samples are under age 30, compared to the population parameter of 21% for this age group.
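
To get a feel for how much such undercoverage can matter, here is a small illustrative calculation in Python. The 21% population share and the 5% landline-sample share for adults under 30 come from the paragraph above; the candidate-preference figures are hypothetical assumptions chosen only to show the direction and rough size of the resulting bias.

```python
# Illustrative coverage-bias calculation. The 21% and 5% shares come from the
# text; the candidate-preference figures are hypothetical assumptions.

pop_under30 = 0.21       # share of adults under 30 in the population
sample_under30 = 0.05    # share of adults under 30 in a landline-only sample

support_under30 = 0.60   # hypothetical support for candidate A among under-30s
support_30plus = 0.45    # hypothetical support among adults 30 and older

true_support = pop_under30 * support_under30 + (1 - pop_under30) * support_30plus
landline_support = (sample_under30 * support_under30
                    + (1 - sample_under30) * support_30plus)

print(f"Population support:     {true_support:.3f}")       # roughly 0.48
print(f"Landline-only estimate: {landline_support:.3f}")   # roughly 0.46
print(f"Coverage bias:          {landline_support - true_support:+.3f}")
```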

On this challenge, there is actually some good news. Pollsters have found that people will take surveys on cellphones, and most major survey organizations now use dual-frame samples (e.g., Pew Research now interviews 60% of its respondents on cellphones and 40% on landlines in most surveys). It is more expensive to call cellphones than landlines because of federal regulations that require cellphones to be dialed manually. Yet the rise of cellphones has actually reduced overall coverage error: Because cellphones have enabled lower-income adults to afford telephone service, dual-frame samples now cover about 98% of the U.S. population, which is higher than landline coverage during the heyday of landline telephone surveys.
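
The sketch below shows, in simplified form, how a dual-frame blend can avoid double-counting people who are reachable on both a landline and a cellphone. The tiny samples and the flat 0.5 compositing factor for the overlap group are illustrative assumptions, not Pew's actual procedure; production dual-frame weighting also accounts for each frame's selection probabilities and sample sizes.

```python
# Minimal dual-frame compositing sketch (illustrative only). Respondents with
# both a landline and a cellphone can be reached through either frame, so they
# receive a compositing factor of 0.5 in each sample to avoid double-counting
# the overlap group.

landline_sample = [
    {"phone_status": "landline_only", "candidate": "A"},
    {"phone_status": "dual", "candidate": "B"},
    {"phone_status": "dual", "candidate": "A"},
]
cell_sample = [
    {"phone_status": "cell_only", "candidate": "B"},
    {"phone_status": "cell_only", "candidate": "A"},
    {"phone_status": "dual", "candidate": "B"},
]

def composite_weight(respondent):
    # Split the weight of overlap ("dual") respondents across the two frames.
    return 0.5 if respondent["phone_status"] == "dual" else 1.0

combined = landline_sample + cell_sample
for r in combined:
    r["weight"] = composite_weight(r)

support_a = sum(r["weight"] for r in combined if r["candidate"] == "A") / sum(
    r["weight"] for r in combined
)
print(f"Blended share for candidate A: {support_a:.2f}")
```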

Despite the success pollsters have had in maintaining a high level of accuracy in the face of these challenges, most agree that this situation probably won’t continue indefinitely. And there may be better ways to survey the public. Online polls based on opt-in, nonprobability panels of respondents are growing in popularity and already dominate the market research world. Self-administered surveys, in general, have been shown to have advantages in terms of data quality for certain kinds of questions, including those on sensitive topics. Online surveys, in particular, are more convenient for respondents than interviewer-administered surveys. They also make it possible to incorporate pictures, graphs, and videos in the interview. But perhaps the biggest comparative advantage of online polls based on nonprobability samples is that they are far less expensive than telephone surveys.

Some online nonprobability polls have compiled a good record of predicting election outcomes, actually outperforming traditional polls in some elections. But skeptics point to the absence of a theory, comparable to the one underlying probability sampling, that would provide a basis for expecting such polls to work well under varying conditions. The memory of the failure of the Literary Digest’s straw poll in 1936—after successfully predicting the presidential vote in several elections—looms large for pollsters. There is no practical sampling frame of email addresses, nor any other way to draw a sample of online adults in which every adult has a known probability of selection.

Online polls have one other obvious limitation: Not everyone is online. However, this problem is steadily shrinking. According to recent Pew Research Center data, 89% of adults now use the Internet, though it’s important to remember that the people who are not online are very different from those who are.

As a consequence of these and other concerns about online panels and related methods, many pollsters and news organizations have drawn a bright line between probability and nonprobability sampling and refused to cross it. For decades, most major news organizations refused to publish polls based on nonprobability samples. This changed in a big way in July when The New York Times and CBS News announced that they would begin using online survey panels from YouGov as part of their election coverage. The Times also took down its polling standards document, which limited the publication of data based on nonprobability samples, and replaced it with a statement that its polling standards are under review.

The action by the Times and CBS sparked a vigorous debate in the industry. The leadership of AAPOR criticized the move and then found themselves the target of criticism from members. One pair of critics wrote, “We worry that this traditionalism is holding back our understanding of public opinion, not only endangering our ability to innovate but putting the industry and research at risk of being unprepared for the end of landline phones and other changes to existing ‘standards.’”

The AAPOR leadership listened to the critics and decided to take a big step in the direction of reconciling the competing perspectives. They formed a task force to reassess the current state of survey methods and provide guidance to practitioners and end users. I am a co-chair of this group, along with Mike Brick and Reg Baker, who were co-chairs of the AAPOR Task Force on Nonprobability Sampling. The new task force includes several researchers who are also members of the ASA.

It’s important to remember that election polls are not just about predicting elections. I believe that, at their best, polls help us understand the meaning of elections—and of public opinion more generally. Critically, polling is an important channel for giving a voice to people who otherwise might not be heard. Political scientist Sidney Verba put it eloquently: “Surveys produce just what democracy is supposed to produce—equal representation of all citizens.”

The debate over probability vs. nonprobability samples is about representation. Acquiring a representative sample is more important than ever in an era of growing inequality in income, wealth, and voice in the political process. This debate is no mere spat over methodology; the stakes are much, much bigger. If polling loses credibility, either because its methods become outdated or because new approaches lead to serious mistakes, our democracy will be worse off.

