Taking Polls Apart

Human Complexity Foils Electoral Predictions

Is opinion polling today a science or merely naturalism applied to politics?

The Pew polling group admits it was stumped by last November's U.S. presidential election. The results "came as a surprise to nearly everyone who had been following the national and state election polling." Most pollsters put Hillary Clinton's chances of defeating Donald Trump at 70 to 99 percent.1

Few will care if fashion critics call the hemlines wrong this season. But election pollsters consider their work both important and scientific: "Polling is an art, but it's largely a scientific endeavor," says Michael Link, president and chief executive of the Abt SRBI polling firm in New York City and former president of the American Association for Public Opinion Research.2 That perception may help explain preeminent science journal Nature's account of scientists being "stunned" and reacting to the results with "fear and disbelief."3

But the scientists' response raises a question: Was the badly missed prediction a failure of the scientific method, or is opinion polling just not a science anyway? Or does the answer lie in the many shades of nuance in between?

Persistent Problems & Unforced Errors

Some problems are new to the polling industry worldwide. First, there are many more pollsters in the field today because entry costs have collapsed; all one needs to get into the game is a novel and defensible measurement idea. Second, the massive switch from landline phones to cell phones means that many people simply do not answer survey calls from unrecognized numbers.

Nature's Ramin Skibba quotes Cliff Zukin, a political scientist at Rutgers: "The paradigm we've used since the 1960s has broken down and we're evolving a new one to replace it—but we're not there yet."4 Zukin's statement is, on reflection, puzzling. Cell phones have been around for quite a while now. Why were there no rewards for capturing a more accurate sample—especially now, when polling is big business and aggregators such as FiveThirtyEight, RealClearPolitics, and the Huffington Post share data for better forecasts?

Polling in the United States faces persistent problems of its own: U.S. elections turn out only 40–55 percent of the electorate, a low figure for a Western country. As a result, "likely voter" is a commonly cited but fuzzy category, one that rival pollsters treat as a proprietary secret of their hoarded lists. That metric failed, for example, in both 2014 and 2016.

Even so, some pollsters' errors appear unforced. One is reliance on unrepresentative groups, such as university students or veterans (convenience sampling), especially when current media buzz matters more than accuracy; the local "hamburger poll" is probably just as useful, and it entertains us for free over lunch. Polls' accuracy improves sharply, of course, when an election looms and more sophisticated players commit more resources to the field.5

One problem dogged the 2016 U.S. election in particular: both leading candidates, Hillary Clinton and Donald Trump, had very low (stated) approval ratings. We were warned not to trust the presidential polls because traditional methods prove iffy in such cases.6

Pew's response the morning after the election is worth pondering. "A likely culprit," we were informed, is nonresponse bias: "We know that some groups—including the less educated voters who were a key demographic for Trump on Election Day—are consistently hard for pollsters to reach."7 Again one wonders, if pollsters knew that, why did they not make a greater effort to reach these people? Were there no rewards for getting it right?
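
The standard remedy for hard-to-reach groups is post-stratification weighting: counting under-reached respondents more heavily so the sample matches known population shares. A minimal sketch in Python, with invented numbers throughout (the education shares and the five-person "sample" are placeholders, not Pew's data):

```python
from collections import Counter

# Hypothetical mini-sample: (education level, stated vote). Real samples run
# to the hundreds or thousands; these five respondents are for illustration.
sample = [
    ("college", "Clinton"), ("college", "Clinton"), ("college", "Trump"),
    ("no_college", "Trump"), ("no_college", "Clinton"),
]

# Hypothetical population education shares (in practice, census figures).
population = {"college": 0.35, "no_college": 0.65}

# Weight each respondent by (population share) / (sample share) of their
# group, so under-reached groups count for more in the final tally.
group_counts = Counter(edu for edu, _ in sample)
weights = {edu: share / (group_counts[edu] / len(sample))
           for edu, share in population.items()}

tally = Counter()
for edu, vote in sample:
    tally[vote] += weights[edu]

total = sum(tally.values())
for candidate, weighted in sorted(tally.items()):
    print(f"{candidate}: {weighted / total:.1%}")
```

The catch, of course, is that weighting only helps if the few hard-to-reach people a pollster does contact resemble the many it does not.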

The Ones Who Got It Right

So did any pollster get the outcome right? And if so, was its success an accidental alignment of prediction and numbers? Or can we learn something from it?

As Jessica McBride points out at Heavy,8 some pollsters did get it right, and some of them were roundly criticized by the industry for their methods. Did different methods provide useful information? Or was it just coincidence?

Helmut Norpoth, a Stony Brook University political science professor, predicted the outcome correctly months beforehand, using primary results and the election cycle. Allan J. Lichtman, an American University historian, predicted a Trump win in September, using thirteen true-or-false statements about the incumbent party; he averred that, based on his track record, six or more "falses" predicted a change of party in the White House.
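
Lichtman's method boils down to a threshold count. A minimal sketch of that rule, with the key names paraphrased from press accounts and the True/False answers filled in as placeholders rather than Lichtman's actual 2016 calls:

```python
# A Lichtman-style "keys" tally. Statements are paraphrased; answers below
# are illustrative placeholders, not Lichtman's published 2016 judgments.
keys = {
    "incumbent_party_gained_midterm_seats": False,
    "no_serious_primary_contest": True,
    "sitting_president_is_running": False,
    "no_significant_third_party": False,
    "strong_short_term_economy": True,
    "strong_long_term_economy": False,
    "major_policy_change_achieved": True,
    "no_sustained_social_unrest": True,
    "no_major_scandal": True,
    "no_major_foreign_or_military_failure": True,
    "major_foreign_or_military_success": False,
    "charismatic_incumbent_party_candidate": False,
    "uncharismatic_challenger": False,
}

false_count = sum(1 for held in keys.values() if not held)
verdict = "incumbent party loses" if false_count >= 6 else "incumbent party wins"
print(f"{false_count} keys false -> {verdict}")
```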

Yale economist Ray C. Fair got it right based on macroeconomic modeling (analyzing the economy mathematically). USC economist Arie Kapteyn succeeded as well, though he was "strongly criticized" for his approach: repeatedly interviewing the same people instead of seeking new participants. IBD/TIPP—which came closest to the final results—predicted on the basis of greater Republican turnout.

At Vox, political scientists Jacob Montgomery of Washington University and Florian Hollenbach of Texas A&M got the presidential result right at first, but then changed their prediction at 4:00 a.m. on election morning. They applied a "Trump Tax" rather than going with "what their own numbers were saying." Emory University political scientist Alan Abramowitz's Time for Change model forecast the outcome accurately using mid-year incumbent approval levels, second-quarter real GDP growth, and the number of terms the party in power had held the White House. But then he, too, backed away from his numbers, saying that a non-traditional candidate "would lose Republican support."
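
For the technically minded, a Time-for-Change-style forecast is essentially a small linear model of the incumbent party's two-party vote share. A sketch under stated assumptions (the coefficients below are invented placeholders, not Abramowitz's published estimates):

```python
# Time-for-Change-style forecast: a three-variable linear model.
# Coefficients are illustrative placeholders, not the published values.
def incumbent_two_party_share(net_approval: float,
                              q2_gdp_growth: float,
                              first_term_incumbent_running: bool) -> float:
    base = 47.0                               # placeholder intercept
    approval_effect = 0.10 * net_approval     # mid-year net approval, points
    economy_effect = 0.60 * q2_gdp_growth     # Q2 real GDP growth, percent
    term_bonus = 4.0 if first_term_incumbent_running else 0.0
    return base + approval_effect + economy_effect + term_bonus

# 2016-like inputs: modest approval, slow growth, no first-term incumbent
# on the ballot -- the model leans against the party holding the White House.
print(incumbent_two_party_share(5.0, 1.1, False))  # ~48.2, i.e., under 50
```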

Possibly, the economy was more important to voters than to pollsters. Could a growing economic gap mean that voters dare not risk honesty with the pollster when choosing one unpopular candidate over another? The industry has considered that possibility; it is called the "shy Trump effect."9

Robert Cahaly at Trafalgar Group, a low-ranking, Republican-leaning pollster ("a 'quirky' firm of just seven"), decided to test this thesis by adding a "neighbor" question: Whom do you think your neighbors will vote for? The firm reasoned that voters would not feel "judged" merely for reporting what they thought others might think. Cahaly's firm was among those that called it right.
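
How might the two questions be combined? One simple possibility, offered here only as an assumption for illustration (Trafalgar's actual weighting is proprietary), is a blend of the direct answer with the answer projected onto the neighbors:

```python
# One naive way to mix the direct question with the "neighbor" question.
# The 50/50 blend is an assumption, not Trafalgar's disclosed method.
def blended_share(direct_share: float, neighbor_share: float,
                  w: float = 0.5) -> float:
    """Blend 'Whom will you vote for?' with the share implied by
    'Whom do you think your neighbors will vote for?'"""
    return w * direct_share + (1.0 - w) * neighbor_share

# Example: 42% admit support directly, but 48% project it onto neighbors.
print(blended_share(0.42, 0.48))  # 0.45
```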

People Aren't Particles

Polling firms have recently vowed to become much more online-savvy and Big Data-oriented in order to produce more accurate forecasts in the future by "creating a much richer picture."10 But if things continue as they are, chances are that the picture will not be "much richer" but merely "data heavier." The underlying assumption of most pollsters is that the human subject is like a particle—under the right pressure, the subject must yield secrets to the investigator.

But a particle has no mind or will, or awareness of or interest in its situation. Interviewing a human being can, by contrast, be a duel of minds. The human subject can evade measurement not only in the unconscious way that a quantum particle does but also in the fully conscious way that only a human being can: by intentionally withholding sensitive information.

Pew unintentionally underlined the problem that afflicts polling today by dismissing the possibility that Trafalgar raised: "If this were the case, we would expect to see Trump perform systematically better in online surveys, as research has found that people are less likely to report socially undesirable behavior when they are talking to a live interviewer."

But wait! Pew is assuming that the respondent himself thinks the behavior undesirable. What if he doesn't? What if he merely wishes to conceal it from those who do think it so? Intelligent minds may originate their own thoughts, unbidden by natural processes, and not necessarily even accessible to them. It gets complex.

Pew is planning to report back on the polling debacle in May 2017.11 A safe prediction: if the firm's consultants do not accept that other human beings have independent minds, capable of subtle, intelligent responses, their findings will shed little light on whatever happens in 2020. •

Denyse O'Leary is a Canadian journalist, author, and blogger. She blogs at Blazing Cat Fur, Evolution News & Views, MercatorNet, Salvo, and Uncommon Descent.

This article originally appeared in Salvo, Issue #40, Spring 2017: https://salvomag.com/article/salvo40/taking-polls-apart

