This post addresses a subject that I have wanted to bring to readers' attention for some time, so I will finally do so. It is on the topic of polling and why recent polls have so often failed to predict the correct outcomes. Polls have failed to predict the correct outcomes in national elections in Israel, the United Kingdom, and the United States. They also unanimously failed to predict the outcome of the Brexit vote. How can polls be so wrong so often? If my memory is accurate, polls were rarely wrong in previous decades; now they are often in error. What has gone wrong in polling, and could it be that pollsters' own biases are causing these erroneous forecasts?
The first, third and fourth links analyze the pollsters' errors in the 2016 US presidential and national elections. The first link details just how badly some pollsters and forecasters failed to predict Trump's win in the presidential race. In the aftermath of the election, Trump delighted in reminding his huge post-election rally audiences that The Washington Post had predicted that Trump had "no path" to an electoral college win and that a Clinton "blowout" win was possible (second link). In a stark example of just how wrong the pre-election polls and pundits were, The Washington Post column flatly stated that Trump had no chance of winning Michigan, Wisconsin and Pennsylvania (Trump won all three). The first link also reveals one reason polls can be wrong. Pollsters apply subjective "weightings" to the raw data from their surveys, and they likely never report the true raw data of any poll. They adjust the raw data by giving greater weight to whichever party or demographic group they think (or would like to think) will be more heavily represented among the actual voters in an election. For example, if they think that Democrats are more energized and likely to vote, they will intentionally skew the polling data to include more Democratic responses in the poll's result than the raw data would call for. You can see how easily a poll's results can be manipulated when its raw data is intentionally "weighted" or skewed to fit the pollsters' assumptions about the electorate in any given election.
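To make the weighting mechanism concrete, here is a minimal sketch (in Python, using entirely hypothetical numbers of my own invention) of how the very same raw responses can produce different headline results depending on the turnout assumptions a pollster bakes into the weights:

```python
# A minimal sketch of how post-survey "weighting" can shift a poll's
# headline number. All figures below are hypothetical, for illustration only.

def weighted_result(raw_responses, weights):
    """Compute a weighted share of support for a candidate.

    raw_responses: dict mapping group -> (respondents, candidate_supporters)
    weights: dict mapping group -> assumed share of the actual electorate
    """
    total = 0.0
    for group, (n, supporters) in raw_responses.items():
        support_rate = supporters / n           # raw support within the group
        total += weights[group] * support_rate  # scale by assumed turnout share
    return total

# Hypothetical raw sample: 500 Democrats (90% back Candidate A),
# 500 Republicans (10% back Candidate A).
raw = {"dem": (500, 450), "rep": (500, 50)}

# Assume an even electorate -> 50% support for Candidate A.
even_split = weighted_result(raw, {"dem": 0.50, "rep": 0.50})

# Assume Democrats are "more energized" (60/40 turnout) -> 58% support.
dem_heavy = weighted_result(raw, {"dem": 0.60, "rep": 0.40})

print(f"Even-turnout weighting: {even_split:.0%}")   # 50%
print(f"Dem-heavy weighting:    {dem_heavy:.0%}")    # 58%
```

The raw answers never changed; only the pollster's assumption about who will show up on election day changed, yet the reported number moved by eight points.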
I think the intensity factor was largely ignored in the 2016 election. Most polls must have assumed Democrats were more energized, so the polls were "adjusted" in that direction. However, Trump had no problem drawing 20,000-30,000 people to his election rallies, while Clinton drew only a fraction of that number. The pollsters should have seen these rally attendance figures as evidence that Trump supporters were far more energized. This should have caused them to adjust the raw data in Trump's favor, but they apparently went the other way and ended up with much egg on their faces. Why did so many pollsters get it so wrong? One reason is that almost all major polls are sponsored and paid for by Leftist media outlets. ABC, CBS, NBC, CNN, MSNBC, The Washington Post, The New York Times, etc. are all Leftist media outlets, and the polls they sponsor or choose to report will, predictably, skew their results to favor those outlets' own Leftist biases. This obviously happened in the 2016 US election, as several of the links indicate.
Another factor explained in these links is the difference among the populations that various pollsters sample. Many polls sample "all adults," and these are inherently subject to inaccuracies because they can include respondents who are illegal aliens, foreign students and visitors, and unregistered voters. Such groups cannot legally vote in an election, but they can answer pollsters' questions if they are adults. In my opinion, polls of "all adults" aren't worth reading. Another category is a poll of "registered voters." This is a better sampling group than "all adults," but it still has inherent problems because so many registered voters will not vote on election day. The best polls identify "likely voters" and confine their sampling group to respondents who vote regularly in elections. It is not difficult to skew a poll's results simply by avoiding the more scientific "likely voter" sample. The fourth link shows the results of many polls regarding Trump's approval rating in early 2017. Notice that only two national polls in that link were of "likely voters," and Trump's approval ratings were good in both. Notice how much worse his ratings were in the polls that surveyed "all adults" or "registered voters."
Given that so many polls are sponsored by very Leftist or globalist media outlets, I think one reason their polls have been so often wrong in recent years is that the pollsters are skewing their polls to try to shape public opinion rather than merely to report it. In doing so, they may be inadvertently helping conservative/nationalist candidates. Let’s consider the 2016 US election. Many liberals who read Leftist-sponsored polls were being strongly led to believe the “election was in the bag” for Hillary Clinton. Thinking that way could lead many to conclude: “There is no need for me to vote because the polls say she has it won already.” This may have depressed the liberal turnout on election day. In very close states, it could have made a difference in the result.
There is another scarcely discussed reason why polls can be inaccurate. The fifth link offers another analysis of polls' margins of error, but buried deep in the article is a remarkable admission. It states that as of 2012, pollsters were finding that 90% of the people they called refused to answer their questions! How many are refusing to answer pollsters' questions today? This raises the question: if polls are sampling the opinions of only about 10% of the adults or voters they contact, how accurate can these polls be? Given the many revelations of how every scrap of people's private data is being scooped up, compiled and sold by Facebook and other social media and internet companies, is it any wonder the vast majority of people refuse to answer pollsters' questions? People likely assume (accurately, I would surmise) that everything they tell pollsters about their political and social viewpoints is going into the "great database in The Cloud" where anyone can access it from then on. No wonder the vast majority of people don't want to volunteer any personal information to pollsters.
So where does Trump stand now in the polls? The sixth link is a daily tracking poll by Rasmussen Reports, and it shows that throughout April, Trump's popularity has hovered just below or just above a 50% approval rating (see the two columns on the right side of the screen). I suggest you follow this link daily if you want a reasonably accurate sampling of Trump's popularity rating. If poll methodologies have remained the same as in previous years, tracking polls will call the same people on a regular basis to see if they have retained or changed their views on candidates or issues (since the poll in the sixth link is a daily tracking poll, it may use a new sample of people each day). I'll share a secret: about two decades ago, I was one of the members of the public that a national political poll called in its tracking polls during a presidential election. I'd receive a call about every two weeks to see if I had changed my candidate preferences or my level of approval/disapproval of various candidates' positions on the issues. At the time, I thought it was neat to be one of the Americans called in a national tracking poll during a presidential election, but I would be very reluctant to participate in any poll today given how invasive data-collection activities have become.
I trust that readers do not mind my occasionally including a non-biblical topic in my blog. Polling is such a ubiquitous part of the political landscape of every major democracy today that I thought readers in all nations might be interested in this discussion of some of the weaknesses in polling techniques and why modern polls seem to be growing increasingly untrustworthy. This truism remains: the only poll that matters is the one taken on election day when people vote. In a close election, even exit polls taken on election day can be wrong; as one of the cited links reports, a number of people participating in exit polls will not tell exit pollsters the truth about how they voted. When it comes to modern poll results, the wisdom of the old adage "caveat emptor" is definitely worth remembering.