We hear poll results reported to us on a daily basis.
“Most Americans believe this, or most Americans are in favor of that.”
“This politician is leading by so many points, or that politician is dropping in the polls.”
But, are these poll results worthy of even being reported anymore?
The Pew Research Center says, “Polling is not ‘broken.’”
Of course, what would we expect them to say?
That’s what they do…, they poll public opinion.
They’re not going to undermine their own industry, are they?
In an attempt to validate their own value, I believe they have done just that.
Let’s take a look at Pew’s defense of “survey methodology.”
“A comprehensive review of polling accuracy published in 2018 found that ‘relying on vote intention polls from more than 200 elections in 32 countries over a period of more than 70 years, there is no evidence that poll errors have increased over time….’”
What about the 2016 presidential election polls?
“In 2016, problems with polls in a few key Midwestern states led many people to underestimate the chances of a Donald Trump victory. As a consequence, the immediate post-election assessment was that there had been a complete polling meltdown.”
And rightly so.
“But that ‘insta-narrative’ turned out to be oversimplified. The 2016 election was not, in fact, an industry-wide failure for the polls.”
I beg to differ.
The 2016 election WAS, IN FACT, “an industry-wide failure for the polls.”
“Rigorous national surveys – designed to measure the popular vote rather than capture the effects of the Electoral College – were quite accurate by historical standards.”
Is that so?
Then where did all of the “Donald Trump has NO path to victory” talk come from?
“An average of the final, publicly released national polls suggested that Hillary Clinton would win the overall popular vote by 3 percentage points, and she ultimately won by 2 points.”
I do not recall any polls reporting that Donald Trump was anywhere within 2-3 points of catching Hillary Clinton.
This seems like a bit of revisionist history to me.
Based on the polls, it almost seemed that a Trump voter was wasting their time even bothering to vote.
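For context on the yardstick behind “accurate by historical standards,” the textbook margin of error for a poll can be computed directly. The sample size below is just a typical, assumed figure, not one taken from Pew’s report:

```python
import math

# Standard 95% margin of error for a simple random sample.
# n = 1000 is just a typical, assumed national poll size.
n = 1000
p = 0.5  # worst-case proportion
moe = 1.96 * math.sqrt(p * (1 - p) / n)

print(f"margin of error: +/- {moe * 100:.1f} points")  # +/- 3.1 points
```

By this yardstick, a poll of roughly 1,000 people carries about a three-point cushion either way, which is the statistical cover being claimed for the national numbers.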
“Front and center among these problems is the fact that many state pollsters didn’t adjust their 2016 polls to reflect that college graduates are more likely to take surveys than adults with less formal education.”
And what flawed survey did THAT information come from?
How would you “adjust” your poll in this case?
“This mattered more than in previous years, when there weren’t big partisan differences between the two groups. In 2016, however, college grads broke for Clinton while high school grads broke for Trump. State polls that didn’t adjust – or weight – their data by education were left with a biased sample.”
Sooooo, you have to be able to anticipate your poll results beforehand in order to correctly adjust them?
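For what it’s worth, the education “weighting” Pew describes is a standard post-stratification adjustment. Here is a minimal one-variable sketch, with entirely made-up numbers, of how a pollster would reweight a sample that has too many college grads in it:

```python
# A minimal, hypothetical sketch of the one-variable education weighting
# ("post-stratification") that Pew says many 2016 state polls skipped.
# Every number below is invented purely for illustration.

# Assumed population shares by education (e.g., from Census figures):
population_share = {"college": 0.35, "no_college": 0.65}

# A tiny fake sample: (education, supports candidate A)
sample = [
    ("college", True), ("college", True), ("college", False), ("college", True),
    ("college", True), ("college", False), ("college", True),
    ("no_college", False), ("no_college", True), ("no_college", False),
]

n = len(sample)
sample_share = {g: sum(1 for e, _ in sample if e == g) / n
                for g in population_share}

# Each respondent's weight = population share / sample share for their group.
weight = {g: population_share[g] / sample_share[g] for g in population_share}

unweighted = sum(v for _, v in sample) / n
weighted = (sum(weight[e] * v for e, v in sample)
            / sum(weight[e] for e, _ in sample))

print(f"unweighted support: {unweighted:.2f}")  # 0.60 (college grads overrepresented)
print(f"weighted support:   {weighted:.2f}")    # 0.47 (adjusted toward population mix)
```

The weights here come only from the assumed population education shares; all of the figures are invented for illustration.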
“Looking ahead to 2020, election junkies can expect to see some high-quality polling done at the national level and in many states.”
Why would we expect that?
Because that’s what you want us to believe for your own sake?
Survey says…, BINGO!
“The polling industry was founded using mail and face-to-face interviews before it adapted to the rise of telephone connectivity. It is in the midst of another metamorphosis, changing once again to meet the spread of internet access. This means we are in a period of great variety in survey methods. With that comes innovation, risk, creativity and challenges.”
“While evidence suggests that well-funded, telephone-based surveys still work [And what evidence would that be?], they have become much more difficult and expensive to conduct. Difficult because the swarm of robocalls Americans now receive, along with the development of call blocking technologies, means that lots of people don’t answer calls from unknown numbers.”
“Response rates have gone from 36% in 1997 to 6% today.”
And Pew doesn’t see this as a serious problem?
Or, just a problem they are willing to overlook?
“The good news is that Pew Research Center studies conducted in 1997, 2003, 2012 and 2016 found little relationship between response rates and survey accuracy, and other researchers have found similar results.”
“Little relationship,” huh?
That would be “good news,” if it were true!
“The bad news is that it’s impossible to predict whether this remains true if response rates go down to 4%, 2% or 1%, and there is no sign that this trend is going to turn around as peoples’ technology habits continue to evolve.”
“It’s impossible to predict,” huh?
Again, I beg to differ.
The “bad news” for Pew is that I think it’s completely reasonable to predict that such low response rates would definitely affect survey accuracy…, even more than they do now.
The question is, if the use of phone polls is fading out of these surveys, what is filling the response gap?
Pew Research Center says, “The internet.”
“As digital access became the norm, pollsters began to look for a way to reach respondents online. This method has a number of upsides [And a number of serious downsides.]. People can take the survey in private and at their convenience, pollsters don’t have to hire and manage roomfuls of live interviewers or pay phone bills, and survey methodologists have found that there are measurement advantages to self-administration. Market research surveys moved en masse to the web, and academics were drawn to the combination of low costs and ease of experimentation.”
“There is, however, one significant challenge. While there are ways to draw random samples of the U.S. population offline using master lists of people’s home addresses or phone numbers (thanks to the U.S. Postal Service and Federal Communications Commission, respectively), there is not yet a way to do this through the internet.”
NO WAY TO DRAW RANDOM SAMPLES THROUGH THE INTERNET!
Oh…, that’s the only “challenge?”
That’s a pretty significant challenge, I would say.
“Traditional survey research is aggressively based on the statistical theory of the random sample, where every member of the population has an identical (or at least known and nonzero) chance of being included. This produces surveys that reflect the country in all its racial, ethnic, religious and income diversity. Low response rates can erode the randomness of the sample.”
The methodology that Pew describes here blows the whole concept of “random sampling” out of the water…, making their surveys, and the surveys of others, virtually worthless.
It’s the complete randomness factor that lends any level of validity to any of these polls. Without that, we are really left with nothing worth reporting…, unless, of course, the poll seems to be in your favor.
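To make the “low response rates can erode the randomness” point concrete, here is a toy simulation with entirely invented numbers: if one side is even slightly more willing to pick up the phone, the responding sample stops looking like the population.

```python
import random

random.seed(42)  # reproducible toy run

# Toy population: exactly 50% support candidate A (invented numbers).
population = [True] * 50_000 + [False] * 50_000
random.shuffle(population)

# Nonresponse that correlates with opinion: supporters answer 8% of the
# time, non-supporters only 4% -- both "low response rate" territory.
def responds(supports_a):
    return random.random() < (0.08 if supports_a else 0.04)

respondents = [p for p in population if responds(p)]

true_support = sum(population) / len(population)  # exactly 0.50
observed = sum(respondents) / len(respondents)    # lands near 2/3

print(f"true support:     {true_support:.2f}")
print(f"observed support: {observed:.2f}")
```

The raw count of responses looks fine; it’s the hidden correlation between opinion and willingness to respond that quietly wrecks the sample.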
Thank you to The Pew Research Center, and these contributors to their report:
Claudia Deane, Vice President, Research, Courtney Kennedy, Director, Survey Research, Scott Keeter, Senior Survey Advisor, Arnold Lau, Research Analyst, Nick Hatley, Research Analyst, Andrew Mercer, Senior Research Methodologist, Rachel Weisel, Senior Communications Manager, Hannah Klein, Communications Manager, Calvin Jordan, Communications Associate, Andrew Grant, Communications Associate, and Travis Mitchell, Copy Editor.
If you’re not already “following” me and you liked my blog(s) today, please “click” on the comment icon just to the right of the date at the bottom of this article. From there you can let me know if you “like” my blog, leave a comment or click the white “FOLLOW” button at the bottom of that page, which will keep you up to date on all of my latest posts.
We’re all entitled to our opinions. I value yours and your feedback as well.
I’d love to hear from you!
Thank you, MrEricksonRules.