Polling is getting a lot of attention today. Bad attention.
This is the second election cycle in a row where the polls were pointing towards a solid win for the Democratic Presidential candidate, but the actual vote went the other way.
This issue is called sampling bias: the people who participated in the survey don’t accurately reflect the population the poll is supposed to be studying.
Now, the (former) gold standard of polling forecasts, Nate Silver’s 538, will tell you that all the results are within the polls’ margins of error. And while the results were super close, I don’t agree with him.
Take a look at this PDF of 538’s final forecast compared with the current results (as of the afternoon of November 4th). If it were merely sampling error, we would see the error go both ways. It is not doing that. The error is consistently under-projecting support for President Trump. Sometimes by a lot.
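The “errors should go both ways” argument can be made precise with a simple sign test. If poll misses were pure sampling noise, each state’s error would be about as likely to favor one candidate as the other, so the number of same-direction misses would follow a Binomial(n, 0.5) distribution. A minimal sketch, using hypothetical counts (the 45-of-50 figure below is illustrative, not taken from the actual 2020 results):

```python
from math import comb

def one_sided_tail(k, n, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the chance that k or more
    of n independent poll errors all land on the same side, if a
    miss in either direction were equally likely."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# Hypothetical example: suppose polls under-projected the same
# candidate in 45 of 50 states. Under symmetric sampling error
# we'd expect roughly 25 of 50.
print(f"{one_sided_tail(45, 50):.1e}")  # on the order of 1e-9
```

A tail probability that small is essentially impossible under symmetric noise, which is why a consistent one-directional miss points to systematic bias rather than bad luck.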
This means that the polls have a serious methodological problem.
As a survey researcher, I find this deeply troubling – it means whatever weighting “fixes” pollsters and forecasters like 538 applied to mitigate the 2016 errors simply did not work. It is also troubling because it further erodes confidence in all survey research findings. Now, whatever the results end up being in this exceptionally tight election, the real loser is polling science.