Monday was an orderly day in the polls.
National polls showed a modestly favorable trend for President Obama, allowing him to gain slightly in our forecast. (Mr. Obama's chances of winning the Electoral College are now 66.0 percent, according to the FiveThirtyEight model, up from 63.4 percent on Sunday.) But the movement toward him was not anything extraordinary, serving only to offset some of the decline he experienced in the polls late last week, and to bring the national polls more in line with state-by-state surveys.
The state polls themselves were decent for Mitt Romney. But there weren't all that many of them, and the trend that they showed - a four-point gain for Mr. Romney, on average, since the Denver debate on Oct. 3 - was in line with our previous understanding about the magnitude of his gains.
None of the polls published on Monday really ought to fit the definition of an outlier. Some were slightly more favorable or slightly less so for the respective candidates, but in a way that is consistent with unavoidable statistical variation and the methodological differences between polling firms.
Let me first show you the trend in national polls; there were 10 of them published on Monday.
Mr. Obama made gains in 5 of the 10 polls as compared with the previous version of the survey, which in most cases postdated the Denver debate. Mr. Romney gained in one poll, although by less than a full percentage point. The others were exactly unchanged.
On average, Mr. Obama gained slightly less than a percentage point, going from about half a point behind in the previous version of the polls to half a point ahead instead.
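The averaging described above is simple enough to sketch in a few lines of code. The margins below are hypothetical stand-ins chosen to mimic the pattern in the text (gains in 5 of 10 polls, a sub-point Romney gain in one, the rest unchanged), not the actual Monday figures:

```python
# Hypothetical Obama-minus-Romney margins (percentage points) for the
# same ten polling firms, before and after the latest round of surveys.
# These are illustrative values, not the actual Monday data.
previous = [-2.0, -1.0, 0.0, 1.0, -1.0, 0.0, -2.0, 1.0, 0.0, -1.0]
current  = [ 0.0,  1.0, 2.0, 3.0,  1.5, -0.5, -2.0, 1.0, 0.0, -1.0]

avg_prev = sum(previous) / len(previous)
avg_curr = sum(current) / len(current)
shift = avg_curr - avg_prev

# Count which candidate gained, poll by poll.
deltas = [c - p for p, c in zip(previous, current)]
obama_gains = sum(1 for d in deltas if d > 0)
romney_gains = sum(1 for d in deltas if d < 0)

print(f"previous average: {avg_prev:+.2f}")  # about half a point behind
print(f"current average:  {avg_curr:+.2f}")  # about half a point ahead
print(f"average shift:    {shift:+.2f}")
print(f"Obama gained in {obama_gains} polls, Romney in {romney_gains}")
```

The point of averaging across all ten surveys is that a roughly one-point shift, invisible in any single poll's margin of error, becomes detectable in the aggregate.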
That is a modest difference, to be sure, but it is getting late enough in the campaign - and the election is so close - that these modest differences can potentially matter when averaged across a number of surveys.
A half-point advantage for Mr. Obama in the national polls is also a little easier to reconcile with the state polls than a half-point deficit. The estimate of the national popular vote from our "now-cast," which uses both state polls and national polls, shows Mr. Obama up by one percentage point.
Mr. Obama's position in the "now-cast" has improved by 0.7 percentage points since Friday. That modest overall gain matches the small gain that he made in the national polls on Monday, suggesting that the national polls may be coming into line with the state polls rather than the other way around.
Unlike on other recent days, the state polls did not constitute a moving target. As I mentioned, Mr. Romney gained about four points on average from the predebate baselines in the same surveys. That is very consistent with where the "now-cast" pegs Mr. Romney's debate bounce.
Note that all of the state polls were from swing states; the argument that Mr. Obama would somehow be immune from seeing his swing-state numbers decline was pretty well discredited by late last week, at least in my view. On the other hand, there is equally little evidence that Mr. Obama's decline has been especially large in the swing states.
(One can make a straight-faced argument that Mr. Obama's decline has been slightly larger than average in Florida and slightly smaller than average in Ohio, but even those differences could easily be caused by statistical noise.)
In general, it shouldn't be surprising when you see new polls showing a decline for Mr. Obama from his predebate averages. For instance, the Muhlenberg College poll in Pennsylvania, which showed Mr. Obama's lead declining to four points from seven before the debate, got quite a lot of attention on Monday afternoon. But that was pretty much exactly what you would expect based on the way everything else has trended.
At this point, the more useful question may be how Mr. Obama and Mr. Romney are polling relative to their postdebate numbers. Is Mr. Obama losing further ground still? Are there signs that Mr. Romney's standing has peaked?
Still, even a fairly calm day in the polling can give people opportunities to see what they want to see in the data.
The most egregious form of this is if you cherry-pick the three or four polling results that you like best for your candidate. Every now and then, a candidate's polls are so abysmal that even this exercise will fail to yield satisfying results (Friday was such a day for Mr. Obama, for example). But the vast majority of the time, you can find a couple of results that you like.
If you looked at only the three best national polls for Mr. Obama on Monday, you would conclude that he was three points ahead in the national race. If you looked at only Mr. Romney's three best polls, you would say that he was ahead by two points instead.
Most people avoid this sort of mistake, however. It's just too flagrant a case of cherry-picking, when there are 20 polls published in a day and you're discussing only two or three of them.
There is a more subtle form of bias, however, that a lot more of us are prone to. (I'm sure I'd be prone to it myself, which is why I like having a computer program that looks at all the polls and has consistent rules by which it does so.) That bias is to look at all the data - except for the two or three data points that you like least, which you dismiss as being "outliers."
If you're a Democrat, for example, and throw out Mr. Romney's three most favorable polls from the 10 national surveys published on Monday, you'll claim that Mr. Obama is ahead in the race by 1.3 percentage points. If you're a Republican and do the same thing, dropping Mr. Obama's three best polls, you'll have Mr. Romney ahead by one point instead.
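The arithmetic behind both flavors of selective reading can be made concrete with a short sketch. The margins below are hypothetical stand-ins for ten national polls, not the actual Monday numbers:

```python
# Hypothetical Obama-minus-Romney margins (percentage points) for ten
# national polls; illustrative values, not the actual Monday data.
margins = [3.0, 2.5, 2.0, 1.5, 1.0, 0.5, 0.0, -1.0, -1.5, -3.0]

def average(xs):
    return sum(xs) / len(xs)

ranked = sorted(margins, reverse=True)  # best for Obama first

# Flagrant cherry-picking: keep only each side's three best polls.
obama_cherry = average(ranked[:3])     # three most Obama-friendly
romney_cherry = average(ranked[-3:])   # three most Romney-friendly

# Subtler bias: keep everything EXCEPT the other side's three best.
obama_trimmed = average(ranked[:-3])   # drop Romney's three best
romney_trimmed = average(ranked[3:])   # drop Obama's three best

print(f"honest average:        {average(margins):+.1f}")
print(f"cherry-picked:         {obama_cherry:+.1f} vs {romney_cherry:+.1f}")
print(f"outliers 'discarded':  {obama_trimmed:+.1f} vs {romney_trimmed:+.1f}")
```

With these numbers, an honest average of +0.5 for Mr. Obama becomes +2.5 or -1.8 when each side keeps only its three favorite polls, and +1.5 or -0.4 when each side merely drops the other's three best - halfway to the cherry-picked answer, just as described above.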
That is not quite as biased as cherry-picking the best results - but it gets you halfway there, and it is a heck of a lot easier to rationalize. There is something that can be critiqued about almost every poll: the methodology, or the demographics, or the sample size, or the pollster's history, or something else.
Often, these critiques have a grain of truth in them: I'm not a relativist who says that Gravis Marketing polls are as good as what Gallup or The Washington Post puts out.
But what people often do is come up with reasons (valid or otherwise) to avoid looking at the polls they don't like - while giving a pass to those they do.
I think claims about polls "oversampling" Democrats or Republicans are deeply misguided, for the most part. But if you're going to do it, you ought to do so consistently. If you're critiquing the partisan split in Monday's Washington Post poll, for example, you probably ought to have done the same thing for last week's Pew poll, which also had a partisan split that was different from the consensus.
If this sort of error can be hard to avoid, however, there is a different type that is much less forgivable. That is in making too much of demographic or geographic subsamples within a poll.
For example, Monday's Washington Post poll had Mr. Obama performing better in what it termed swing states than in the country as a whole; the Gallup poll showed just the opposite.
This data is largely useless. A typical national poll might interview 1,000 people, of whom perhaps 250 or 300 will live in swing states, depending on exactly how it defines them.
The margin of error on a 250- or 300-person subsample is enormous: about plus or minus six percentage points. (The swing state sample from the Gallup poll was somewhat larger, but still small as compared to the 3,000 or so voters that it interviews for each instance of its national tracking poll.)
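The roughly six-point figure follows from the standard 95 percent margin of error for a proportion, 1.96 x sqrt(p(1-p)/n), evaluated at the worst case p = 0.5. This is a textbook sketch for a simple random sample, not FiveThirtyEight's internal methodology:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, for one candidate's
    share in a simple random sample of size n at proportion p."""
    return 100 * z * math.sqrt(p * (1 - p) / n)

for n in (275, 1000, 3000):
    print(f"n={n:5d}: +/-{margin_of_error(n):.1f} points")
```

For a 275-person subsample this gives about plus or minus 5.9 points, versus about 1.8 points for Gallup's 3,000-voter tracking sample. And note that this is the error on one candidate's share; the gap between two candidates moves roughly twice as much, which is why subsample comparisons are so noisy.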
In contrast, in the state polls, there are often thousands of people interviewed in the battleground states on a given day. (There were about 2,500 on Monday, for example, despite its being a relatively low-volume day for state polling.)
There is just no reason at all to care about what 250 or 300 people say when you can look at what 2,500 or 3,000 do instead. If you're going to indulge this habit, then look at the Rasmussen Reports tracking poll of swing states, which at least has a decent sample size. (And which, not coincidentally, generally shows results similar to Rasmussen's overall national figures.)
Even this can be problematic, however, because there is not a clear delineation between what is a swing state and what isn't. If you include Republican-leaning states like Arizona and Missouri on the list of swing states, while excluding Democratic-leaning states like Minnesota - or if you do just the opposite - you would expect to see some persistent differences.
A related point is that some of the swing states are a lot more important than others. Ohio alone, for instance, is likely to be the pivotal state in the election more than 40 percent of the time, according to our tipping-point analysis: about as much as the next four or five states combined.
Nevada, despite having a much smaller population, actually ranks higher on our tipping-point list than Florida, and yet Florida (because of its larger population) will have a much greater influence on a battleground state subsample.
The same problem occurs when people focus too much on demographic subsamples within the polls. You'll often see whole news articles focused around themes like: Did you see how bad Mr. Obama's numbers were among Latinos in the Pew poll? How good they were among working-class women in the Quinnipiac survey of Ohio?
It is bad enough to focus on one poll when 20 of them are published on a given day. It is much worse to focus on one of the 20 demographic subsamples within one of the 20 polls, which gives you 400 options to pick from.
These stories have essentially no value to news consumers. But it is often the outliers that make for better arguments, and better headlines.