American pollsters’ anxieties

The great embarrassment of American pollsters in failing to predict the election win of Donald Trump in 2016 was not fully erased by their success in predicting the victory of Joseph Biden in 2020. This was my main takeaway from the panel of speakers on “Polling and Democracy: the Road Ahead,” organized by the Roper Center for Public Opinion Research (ropercenter.cornell.edu) last Jan. 21, 2021.

The Roper Center, established in 1947 and now based at Cornell University, is the world’s oldest, and I believe largest, opinion poll archive. Its data, contributed by pioneers in the private survey biz, go back to the 1930s, and even include American wartime opinions about whether to liberate the Philippines first or to leapfrog it and invade Japan as soon as possible.

Roper’s executive director, Peter K. Enns, was panel moderator. Other speakers were the director of polling at CNN, the senior survey advisor of the Pew Research Center, and professors specializing in opinion research from Duke University, the University of Delaware, and Stanford University.

Trump’s win in 2016 was on the fringe, not in the center, of pollsters’ expectations. During his presidency, he pretended to be popular, but in truth he never was (“The unpopular Donald Trump,” Opinion, 2/22/20). He must have seen that he was the underdog in the 2020 campaign (“Trump’s looming defeat,” 10/24/20), but he couldn’t accept it and just called it “fake news.” Sadly, he is still believed by a significant minority of Americans.

For ordinary people all over the world, the ability to predict elections has been the litmus test of the quality of opinion polling. The positive argument: if you believe the elections, then you should believe the polls. The negative side: Trump loyalists who stop believing in elections likewise stop believing in polls. Republican politicians undermining the last election are likewise undermining opinion polls. This will erode the general public’s cooperation and most likely make future polls harder to do.

An opinion poll about voting is based on a sample of the voters, while the election is based on the full population of voters. It is the full vote count—assuming, of course, an honest count—that judges the accuracy of the sample result, not the other way around.

A sample scientifically represents the population when everyone in the population has a known, nonzero chance of being selected; in simple random sampling, everyone’s chance is equal. The Roper panel confirmed that one should rely only on polls using probability sampling. Otherwise, stay away, regardless of how large the sample is supposed to be.
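To see what equal-probability selection means in practice, here is a minimal sketch (my illustration, not from the column or from SWS practice), using a made-up voter roll and Python’s standard library:

```python
import random

# Toy voter roll standing in for a real sampling frame (hypothetical names).
population = [f"voter_{i:04d}" for i in range(10_000)]

rng = random.Random(42)                 # fixed seed so the draw is reproducible
sample = rng.sample(population, k=300)  # draws without replacement; every voter
                                        # has the same 300-in-10,000 chance

print(len(sample))  # 300 distinct respondents
```

An opt-in poll breaks exactly this property: respondents select themselves, so selection probabilities are unknown and unequal, and no sample size can repair that.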

Don’t bother with opt-in polls, those whose respondents volunteer to be part of the sample. Be aware, too, of professional survey respondents who fill out questionnaires for a living. Such respondents-for-pay are openly recruited everywhere, including in the Philippines.

In probability sampling, bigger does mean better, in the sense of a lower expected error for estimates from the full sample. However, it does not mean better for each demographic slice of the sample. For instance, to make findings about black voters as accurate as those for white voters, the two groups’ sample sizes should be equal in absolute number; this entails “over-sampling” whichever group is the minority in the study area. (Thus, SWS usually takes probability samples of the same size—300 respondents each—in Metro Manila, Balance Luzon, Visayas, and Mindanao, so that the area results can be read with equal confidence. In effect, this over-samples Metro Manila and under-samples the rest of Luzon.)
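The trade-off can be made concrete with the standard textbook margin-of-error formula for a proportion. The numbers below are my illustration of that formula, not figures from SWS or the Roper panel:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from a simple
    random sample of size n (p=0.5 is the conservative worst case)."""
    return z * math.sqrt(p * (1 - p) / n)

# An area sample of 300, versus the combined sample of 4 x 300 = 1,200.
for n in (300, 1200):
    print(f"n={n}: +/-{margin_of_error(n):.1%}")
```

Each 300-respondent area reads to within roughly six percentage points, while the pooled 1,200-respondent national sample reads to within about three: quadrupling the sample only halves the error, which is why equal per-area samples, not one big national draw, give equal per-area confidence.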

The social issues under contention, and the resources for learning the people’s opinions about them, differ from one country to another. But the scientific principles of opinion research are the same everywhere.

——————

Contact mahar.mangahas@sws.org.ph.
