Today’s New York Times Upshot column is almost always worth reading, but today’s was exceptionally good: “We Gave Four Good Pollsters the Same Raw Data. They Had Four Different Results.”
The gist of the article is this: Opinion polls rely on assumptions such as how to determine who is a likely voter and what the ethnic composition of the electorate will be.
The raw results from the poll are weighted using these assumptions in order to arrive at the final conclusion.
For example, if your survey has 66% White respondents, but you think the electorate will be 68% White, then you will give the answers from White respondents a little more weight, since they are under-represented in your survey.
But if you think the electorate will be 65% White, then you down-weight the answers from White respondents.
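The effect of that choice can be sketched in a few lines of Python. The weighting rule is simple: each group's responses get a weight equal to its assumed share of the electorate divided by its share of the sample, and the final estimate is the weighted average. The support numbers below (45% among White respondents, 55% among everyone else) are hypothetical, made up only to show how the turnout assumption alone moves the headline figure.

```python
def weighted_estimate(groups):
    """Poll estimate after reweighting each group to its assumed share
    of the electorate.

    groups: list of (sample_share, assumed_share, support) tuples.
    Each respondent in a group gets weight assumed_share / sample_share,
    so the weighted mean reduces to
    sum(assumed * support) / sum(assumed)."""
    num = sum(assumed * support for _, assumed, support in groups)
    den = sum(assumed for _, assumed, _ in groups)
    return num / den

# Sample: 66% White, 34% non-White. Hypothetical support: 45% / 55%.
# Pollster A assumes a 68% White electorate; Pollster B assumes 65%.
pollster_a = weighted_estimate([(0.66, 0.68, 0.45), (0.34, 0.32, 0.55)])
pollster_b = weighted_estimate([(0.66, 0.65, 0.45), (0.34, 0.35, 0.55)])

print(f"Pollster A: {pollster_a:.1%}")  # 48.2%
print(f"Pollster B: {pollster_b:.1%}")  # 48.5%
```

Same raw responses, two defensible turnout assumptions, and the estimates already differ by a few tenths of a point; stack up several such choices (likely-voter screens, education weighting, and so on) and the gaps grow to the size the column describes.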
As the title of the column indicates, even with the same raw data, four reputable pollsters came up with different results.
In fact, the differences between them are large enough to swamp the “margin of error” typically reported with poll results.
The “margin of error” reflects only the uncertainty that comes from drawing a random sample. As the article points out, there are other sources of error, such as the weighting assumptions described above, that don’t get reported along with the results.
The column is a quick read and definitely worth the time.