Wednesday, July 18, 2018
Column: Deprogram —
Topic: Civilization —
Taking Polls Apart
Human Complexity Foils Electoral Predictions
by Denyse O'Leary
Is opinion polling today a science or merely naturalism applied to politics?
The Pew polling group admits it was stumped by the November 2016 U.S. presidential election. The results "came as a surprise to nearly everyone who had been following the national and state election polling." Most pollsters put Hillary Clinton's chances of defeating Donald Trump at 70 to 99 percent.1
Few will care if fashion critics call the hemlines wrong this season. But election pollsters consider their work both important and scientific: "Polling is an art, but it's largely a scientific endeavor," says Michael Link, president and chief executive of the Abt SRBI polling firm in New York City and former president of the American Association for Public Opinion Research.2 That perception may help explain preeminent science journal Nature's account of scientists being "stunned" and reacting to the results with "fear and disbelief."3
But the scientists' response raises a question: Was the badly missed prediction a failure of the scientific method, or is opinion polling just not a science anyway? Or does the answer lie in the many shades of nuance in between?
Persistent Problems & Unforced Errors
Some problems are new to the polling industry worldwide. First, there are many more pollsters in the field today, because entry costs have collapsed: all one needs to get in the game is a novel and defensible measurement idea. Second, the massive switch from landline phones to cell phones means that many people simply do not answer survey calls from unrecognized numbers.
Nature's Ramin Skibba quotes Cliff Zukin, a political scientist at Rutgers: "The paradigm we've used since the 1960s has broken down and we're evolving a new one to replace it—but we're not there yet."4 Zukin's statement is, on reflection, puzzling. Cell phones have been around for quite a while now. Why were there no rewards for capturing a more accurate sample—especially now, when polling is big business and aggregators like FiveThirtyEight, RealClearPolitics, and Huffington Post share data for better forecasts?
Polls in the United States in particular feature persistent problems: U.S. elections turn out only 40 to 55 percent of the electorate, low for a Western country. As a result, "likely voter" is a commonly cited but fuzzy category, with rival pollsters guarding their voter lists as proprietary secrets. That metric failed, for example, in 2014 and 2016.
Even so, some pollsters' errors appear unforced. One is that they often rely on unrepresentative groups, such as university students or veterans (convenience sampling), especially when current media buzz matters more than accuracy. The local "hamburger poll" is probably just as useful and entertains us for free over lunch. Polls' accuracy improves, of course, when an election looms and more sophisticated players commit more resources to the field.5
One problem dogged the 2016 U.S. election in particular: both leading candidates, Hillary Clinton and Donald Trump, had very low (stated) approval ratings. We were warned not to trust the presidential polls because traditional methods prove iffy in such cases.6
Pew's response the morning after the election is worth pondering. "A likely culprit," we were informed, is nonresponse bias: "We know that some groups—including the less educated voters who were a key demographic for Trump on Election Day—are consistently hard for pollsters to reach."7 Again one wonders, if pollsters knew that, why did they not make a greater effort to reach these people? Were there no rewards for getting it right?
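The nonresponse bias Pew describes is easy to demonstrate in a few lines. Below is a minimal simulation sketch; all numbers are hypothetical (a 50/50 electorate split by education, with the two groups favoring candidate A at 60 and 40 percent but answering the phone at different rates):

```python
import random

random.seed(1)

# Hypothetical electorate: half "college" voters favoring candidate A at 60%,
# half "non-college" voters favoring A at 40%, so true support for A is 50%.
def simulate_poll(n=100_000):
    responses = []
    for _ in range(n):
        college = random.random() < 0.5
        votes_a = random.random() < (0.60 if college else 0.40)
        # Hypothetical response rates: non-college voters answer half as often.
        answers = random.random() < (0.50 if college else 0.25)
        if answers:
            responses.append(votes_a)
    return sum(responses) / len(responses)

print(f"Polled support for A: {simulate_poll():.1%}  (true support: 50.0%)")
```

Even though true support is exactly 50 percent, the simulated poll overstates A by roughly three points, simply because A's supporters are easier to reach.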
The Ones Who Got It Right
So did any pollster get the outcome right? And if so, was its success an accidental alignment of prediction and numbers? Or can we learn something from it?
As Jessica McBride points out at Heavy,8 some pollsters did get it right, and some of them were roundly criticized by the industry for their methods. Did different methods provide useful information? Or was it just coincidence?
Helmut Norpoth, a Stony Brook University political science professor, predicted the outcome correctly months beforehand, using primary results and the election cycle. Allan J. Lichtman, an American University historian, predicted a Trump win in September, using thirteen true-or-false statements regarding the incumbent party; he averred that, based on his track record, six or more "falses" predicted a change.
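Lichtman's decision rule is simple enough to state in code. A sketch, with the thirteen keys abbreviated to hypothetical labels (the actual key wording comes from his published work):

```python
# Each key is a True/False judgment about the incumbent party's situation.
# Lichtman's rule: six or more False answers predict the White House changes hands.
def lichtman_forecast(keys: dict) -> str:
    falses = sum(1 for answer in keys.values() if not answer)
    return "challenger wins" if falses >= 6 else "incumbent party holds"

# Hypothetical illustration: 13 keys, 7 of them judged False.
sample_keys = {f"key_{i}": (i > 7) for i in range(1, 14)}
print(lichtman_forecast(sample_keys))  # prints "challenger wins"
```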
Yale economist Ray C. Fair got it right based on macroeconomic modeling (analyzing the economy mathematically). USC economist Arie Kapteyn succeeded as well, though he was "strongly criticized" for his approach: repeatedly interviewing the same people instead of seeking new participants. IBD/TIPP—which came closest to the final results—predicted on the basis of greater Republican turnout.
At Vox, political scientists Jacob Montgomery of Washington University and Florian Hollenbach of Texas A&M got the presidential result right at first, but then changed their prediction at 4:00 a.m. on election morning. They applied a "Trump Tax" rather than going with "what their own numbers were saying." Emory University political scientist Alan Abramowitz's Time for Change model forecast the outcome accurately using mid-year incumbent approval levels, second-quarter real GDP growth, and the number of terms the party in power had held the White House. But then he, too, backed away from his numbers, saying that a non-traditional candidate "would lose Republican support."
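Abramowitz's model is, at bottom, a linear regression on three inputs. A sketch of its structure with placeholder coefficients (the published estimates differ; these numbers are hypothetical, chosen only to show the shape of the model):

```python
def time_for_change(net_approval: float, gdp_growth: float,
                    terms_in_office: int) -> float:
    """Predict the incumbent party's share of the two-party vote (percent).

    Coefficients here are hypothetical placeholders; the real model
    re-estimates them from historical election results.
    """
    first_term_bonus = 4.0 if terms_in_office == 1 else 0.0
    return 47.0 + 0.1 * net_approval + 0.6 * gdp_growth + first_term_bonus

# Hypothetical 2016-style inputs: modest approval, modest growth, two terms.
print(round(time_for_change(net_approval=5, gdp_growth=1.2, terms_in_office=2), 1))  # prints 48.2
```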
Possibly, the economy was more important to voters than to pollsters. Could a growing economic gap mean that voters dare not risk honesty with the pollster when choosing one unpopular candidate over another? The industry has considered that possibility; it is called the "shy Trump effect."9
Robert Cahaly at Trafalgar Group, a low-ranking, Republican-leaning pollster ("a 'quirky' firm of just seven"), decided to test this thesis by adding a "neighbor" question: Whom do you think your neighbors will vote for? The pollster reasoned that the voter would not feel "judged" merely for reporting what he thought others might think. Cahaly's firm was among those that called it right.
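The logic of the "neighbor" question can be illustrated with a small simulation. This sketch assumes a hypothetical fraction of "shy" supporters who deflect the direct question but describe their neighborhood honestly:

```python
import random

random.seed(7)

def run_poll(n=100_000, true_support=0.48, shy_fraction=0.15):
    """Compare a direct vote question with the 'neighbor' question.

    A hypothetical shy_fraction of the candidate's supporters refuse to
    admit their own vote but will say their neighbors back the candidate.
    """
    direct = neighbor = 0
    for _ in range(n):
        supporter = random.random() < true_support
        shy = supporter and random.random() < shy_fraction
        if supporter and not shy:
            direct += 1      # admits own vote to the interviewer
        if supporter:
            neighbor += 1    # reports the neighborhood honestly
    return direct / n, neighbor / n

d, nb = run_poll()
print(f"direct question: {d:.1%}, neighbor question: {nb:.1%}, truth: 48.0%")
```

Under these assumptions the direct question understates the candidate by several points while the neighbor question lands near the truth, which is the effect Trafalgar was betting on.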
People Aren't Particles
Polling firms have recently vowed to become much more online savvy and Big Data-oriented in order to produce more accurate forecasts in the future by "creating a much richer picture."10 But if things continue as they are, chances are that the picture will not be "much richer" but merely "data heavier." The underlying assumption of most pollsters is that the human subject is like a particle—under the right pressure, the subject must yield secrets to the investigator.
But a particle has no mind or will, or awareness of or interest in its situation. Interviewing a human being can, by contrast, be a duel of minds. The human subject can evade measurement not only in the unconscious way that a quantum particle does but also in the fully conscious way that only a human being can: by intentionally withholding sensitive information.
Pew unintentionally underlined the problem that afflicts polling today by dismissing the possibility that Trafalgar raised: "If this were the case, we would expect to see Trump perform systematically better in online surveys, as research has found that people are less likely to report socially undesirable behavior when they are talking to a live interviewer."
But wait! Pew is assuming that the respondent himself thinks the behavior undesirable. What if he doesn't? What if he merely wishes to conceal it from those who do think it so? Intelligent minds may originate their own thoughts, unbidden by natural processes, and not necessarily even accessible to them. It gets complex.
Pew is planning to report back on the polling debacle in May 2017.11 Safe prediction: if the firm's consultants do not accept the reality of the independent minds of other human beings, resulting in subtle, intelligent responses, whatever happens in 2020 will not be much enlightened by their information. •
Notes
1. Andrew Mercer et al., "Why 2016 election polls missed their mark," Pew Research Center (Nov. 9, 2016): http://pewrsr.ch/2fTgkUH.
2. Ramin Skibba, "The polling crisis: How to tell what people really think," Nature (Oct. 19, 2016): nature.com/news/the-polling-crisis-how-to-tell-what-people-really-think-1.20815.
3. Jeff Tollefson et al., "Donald Trump's US election win stuns scientists," Nature (Nov. 9, 2016): http://go.nature.com/2hE6KBW.
4. Skibba, "The polling crisis," Note 2.
5. Meghana Ranganathan, "Where Are the Real Errors in Political Polls?" Scientific American (Nov. 4, 2014): http://bit.ly/2fRDpHk.
6. Justin Hawkins, "Why You Shouldn't Trust the Presidential Polls," Townhall (Nov. 8, 2016): http://bit.ly/2hQ0ASr.
7. Mercer et al., "Why 2016," Note 1.
8. Jessica McBride, "2016 Election Oracles: These People Predicted Trump Would Win," Heavy (Nov. 12, 2016): http://heavy.com/news/2016/11/2016-final-election-results-predictions-helmut-norpoth-abramowitz-michael-moore-nate-silver-vote-count-turn-out-electoral-college-maps-donald-trump-hillary-clinton-polls-forecasting-pennsylvania-michi.
9. Skibba, "The polling crisis," Note 2.
10. Ibid.
11. Mercer et al., "Why 2016," Note 1.
Denyse O'Leary is a Toronto-based author, editor, and blogger and the co-author of The Spiritual Brain: A Neuroscientist's Case for the Existence of the Soul.
All material © 2017. Salvo is published by The Fellowship of St. James.