I was surprised by how many people seemed stunned by last
Tuesday’s presidential election outcome. People like Beth Cox, for example, who was described on the front page of today's Washington Post.
The thousands of polls taken over the past several months, or more likely the spin of those polls by pundits, apparently did an outstanding job of convincing everyone of exactly whatever it was they wanted to hear.
Newt Gingrich predicted a landslide win for Romney.
Republican operative Dick Morris explained
on Fox News that the polls showing Obama ahead erred because they seriously oversampled
Democrats (wrong) and Obama’s core constituency wasn’t enthusiastic (wrong
again). So rather than being behind by a few points, Morris thought Romney was
actually ahead by 5 or 6 points.
Rush Limbaugh predicted a Romney win and threatened to move
to Canada if he lost. He had also threatened to move to Costa Rica if the Affordable Care Act passed into law. Of
course, he didn’t.
(Why does Rush always threaten to emigrate to countries with
universal health care? But, I digress . . .)
Meanwhile, over on MSNBC, progressive host Chris Matthews
said every night, “It’s razor thin out there!”
It wasn’t.
And a pair of professors from the University of Colorado
predicted a huge Romney win based on economic analyses, only to have to later
apologize for their dismal performance.
Along the way, Nate Silver was widely
abused by conservatives because his FiveThirtyEight blog’s
model predicted a big Obama win. Silver had correctly called 49 of 50 states in the 2008
presidential election. After Politico’s Dylan
Byers called Silver a “One-Term Celebrity” and MSNBC’s Joe Scarborough of “Morning Joe”
told Nate that he was looking at the election all wrong, Silver went on to correctly
call every state in the 2012 election and had the closest prediction
of all.
Sam Wang uses similar statistical techniques at the Princeton Election Consortium, and his
predicted probabilities for an Obama win were much higher than Nate Silver’s,
but Wang’s lower profile didn’t draw as much conservative ire. Wang had Obama
with a 99% probability of winning weeks before the election (higher than
Silver). His prediction of electoral votes was actually closer than Silver’s.
Prediction markets predicted Obama, too. Online services
like Intrade have a pretty good prediction record. They allow people to bet
actual money on the outcome and had Obama holding a 70% chance of winning for
several weeks before the election. In the final few days, Intrade probabilities
for an Obama win went up fairly dramatically to match those of Silver and Wang.
Slate has a nifty
graphic showing prediction accuracy for the 2012 elections. Hover over the
darts for the data.
So, what should you consider four years from now to understand
how the election is actually going?
First, realize that there are several groups of players in this game. There are pollsters, like Gallup and Quinnipiac, who collect data from relatively small groups of potential voters and sell it. But individual polls are like pixels in a broader picture.
There are pundits, like Chris Matthews and Bill O'Reilly, who interpret poll results for their media audiences. Their primary objective is to influence voters, so their interpretations of the poll results can be quite biased.
Then there are statisticians like Nate Silver and Sam Wang. They don't poll potential voters themselves; they use statistics and probability to put together the larger picture I mentioned. Their goals, apparently, are simply to prove that they're smarter than everyone else and make money doing that. They have more incentive to be correct than to favor a party.
And, there are prediction markets. They take bets on who will win and their goal is to make money managing the market. They win no matter which party gets elected, so they have less reason to be biased, too.
Ignore Pundits
The most important thing you can do is to ignore political
pundits on all sides. Whether they are conservative, progressive, or otherwise,
they have terrible prediction skills. To paraphrase Warren Buffett, political
pundits were created to make fortunetellers look good. (Economists are even worse forecasters.)
Even if they have some skill at prediction, they can’t tell
you what they actually believe. They are paid to convince their followers that
their candidate is winning but still needs your vote.
After the election, Chris Matthews apologized to his
audience and said he and the other pundits were wrong and Nate Silver had had
it right all along. A lot of people should apologize to Nate.
Look at the charts below from FiveThirtyEight and compare them to the story pundits were spinning on both sides. Romney momentum? It was gone before the second debate started. Christie and Sandy? Hardly caused a blip.
The pundits had it all wrong. They usually do.
Ignore Individual Polls
Hardly a day passed in the past few months when I didn’t
hear Chris Matthews say something like, “A new poll out of Quinnipiac shows
that President Obama is losing support among left-handed upland bird hunters.”
This is pure noise. It’s like watching a ticker tape of
stock prices fly by and trying to figure out if the market is moving up or down
as a whole. Ignore it.
Silver and Wang combine all the polls in a way that improves
their predictive power. Their predictions will include the intentions of both
left- and right-handed upland bird hunters, and most everyone else.
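To see why aggregation helps, here is a minimal sketch of sample-size-weighted poll averaging, which is the basic intuition behind combining polls. It is not Silver's or Wang's actual model (their methods also adjust for pollster house effects, timing, and state-level structure), and every poll number in it is invented for illustration.

```python
import math

# A minimal sketch of why aggregating polls beats reading them one at a time.
# This is NOT Silver's or Wang's model -- just a sample-size-weighted average
# with a rough margin of error. All poll numbers below are made up.

polls = [
    # (candidate's share of the two-way vote, sample size) -- hypothetical
    (0.52, 800),
    (0.49, 600),
    (0.51, 1200),
    (0.53, 900),
    (0.50, 1000),
]

total_n = sum(n for _, n in polls)

# Weight each poll by its sample size: bigger polls move the average more.
weighted_share = sum(share * n for share, n in polls) / total_n

# Treating the pooled polls as one big sample, the standard error shrinks
# roughly with 1 / sqrt(total sample size), which is why the aggregate is
# steadier than any single poll.
std_err = math.sqrt(weighted_share * (1 - weighted_share) / total_n)

print(f"Aggregated share: {weighted_share:.3f} "
      f"+/- {1.96 * std_err:.3f} (approximate 95% interval)")
```

Any one of those polls, read alone, bounces around by a few points; pooled together they pin the number down much more tightly.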
Perhaps the best-known poll, the Gallup Poll, performed exceptionally
poorly in the 2012 election.
Forgive Silver and
Wang for the Dems Having Won
If you read Nate Silver’s writing, you will conclude that he
personally leans slightly left, but that he isn’t really all that
political. Still, many of my progressive
friends love Silver because he consistently picked Democrats to win the past
two elections.
But he did that because he was right, not because he is
“left”.
The problem for my friends is that he will also pick the
next Republican winner correctly, after which Democrats will love him less but
he will still be just as accurate.
In his recent book, The
Signal and the Noise, Silver suggests that he will probably sell his model
to someone after the 2012 election and move on, having now conquered baseball and
politics. The FiveThirtyEight model
should still work with Silver gone, and presumably Sam Wang will still be
around and using a similar process.
Ignore Strange Models
Kenneth Bickers and Michael Berry, professors at the
University of Colorado, used an economics
model to predict the 2012 presidential election and missed by 124 electoral
votes. They predicted a huge Romney victory right to the bitter end. Only Karl Rove hung on longer.
They claimed that their model “would have predicted” the
last eight elections. Good models would have predicted the present from the
past, but they also need to predict the future.
You can find millions of patterns that “would have
predicted” the past
— like Redskin victories, hemlines, Super Bowl victories and
the University of Colorado economics model
— that don’t predict the future.
Turns out the future is harder to predict than the past.
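A quick toy simulation makes the point. This is not the Colorado professors' model or any real data; it just shows that, with enough arbitrary indicators, a few of them will "predict" eight past elections perfectly by luck alone.

```python
import random

# Toy illustration of how easy it is to find a pattern that "would have
# predicted" the past purely by chance. Nothing here models real elections.
random.seed(1)

# Pretend outcomes of the last eight elections: 1 = incumbent party wins.
past_outcomes = [random.randint(0, 1) for _ in range(8)]

# Generate 1,000 meaningless "indicators" (hemlines, football results, ...),
# each just a random 0/1 series over the same eight elections.
indicators = [[random.randint(0, 1) for _ in range(8)] for _ in range(1000)]

perfect = sum(1 for ind in indicators if ind == past_outcomes)

print(f"{perfect} of 1000 random indicators 'predicted' all 8 past elections")
# Each random indicator matches all 8 outcomes with probability (1/2)^8,
# so we expect roughly 4 perfect "predictors" out of 1,000 -- and none of
# them tells us anything about the next election.
```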
It’s hard enough to predict an election outcome by asking
people how they will vote, as polls do. Inferring how they will vote from how they might
react to local economic conditions is bound to be even more difficult and unreliable.
Putting Their Money
Where Their Mouth Is
Then there are the prediction markets. At places like Intrade, people vote with their wallets
instead of their hearts. Studies have shown that the prediction markets are
pretty good. Some experts believe that Intrade actually outperformed
Silver’s FiveThirtyEight blog in 2008.
Intrade showed Obama with a 70% probability of winning in 2012 for several
weeks before the election.
But, 70% isn’t certainty. It often rains when there is only a
30% chance of precipitation, and a 70% probability means that if the election
were held 10 times, Romney would be expected to win about three of them.
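A tiny simulation, using Intrade's rough 70% figure as its only input (nothing here models the actual race), shows what that kind of probability means: the less likely outcome still happens a lot.

```python
import random

# What a 70% win probability means in practice: simulate the "election" many
# times. The 0.70 stands in for Intrade's rough pre-election number; it is
# not a model of anything real.
random.seed(0)

TRIALS = 10_000
WIN_PROBABILITY = 0.70

obama_wins = sum(random.random() < WIN_PROBABILITY for _ in range(TRIALS))
romney_wins = TRIALS - obama_wins

print(f"Obama wins {obama_wins} of {TRIALS} simulated elections "
      f"({obama_wins / TRIALS:.1%}), Romney wins {romney_wins} "
      f"({romney_wins / TRIALS:.1%}).")
```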
Prediction markets don’t have the political pressures that
pundits do to spin their findings and they have a fairly large sample size.
When people “vote” on Intrade, they lose money if they’re wrong, so you tend to
get honest opinions.
The Popular Vote
Isn’t the Scoreboard
You could track both teams’ field goal percentages during a
basketball game and often have a pretty good idea of who’s winning, especially
if there is an unusually large disparity, but that isn’t the scoreboard, and the
scoreboard decides who wins.
In presidential elections, the popular vote doesn’t
determine the outcome; the Electoral College does. Al Gore won the popular vote
and lost the presidency in the Electoral College. Nate Silver determined after
the election that, given initial conditions in the states for this election,
Romney could have won the popular vote by 2% and still lost the Electoral
College.
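The mechanics are easy to see with a made-up three-state example (the state names and vote totals below are entirely hypothetical): votes piled up in lopsided states count toward the popular vote but win no extra electors.

```python
# Toy example with three imaginary states and made-up vote counts, showing how
# a candidate can win the nationwide popular vote yet lose the Electoral
# College under winner-take-all rules.

states = {
    # state name: (votes for A, votes for B, electoral votes)
    "Bigstate":   (9_000_000, 6_000_000, 30),  # A runs up a huge margin here
    "Swingstate": (2_450_000, 2_550_000, 25),  # B wins narrowly
    "Closestate": (2_400_000, 2_500_000, 24),  # B wins narrowly again
}

popular_a = sum(a for a, b, ev in states.values())
popular_b = sum(b for a, b, ev in states.values())
electoral_a = sum(ev for a, b, ev in states.values() if a > b)
electoral_b = sum(ev for a, b, ev in states.values() if b > a)

print(f"Popular vote:      A {popular_a:,} vs B {popular_b:,}")
print(f"Electoral College: A {electoral_a} vs B {electoral_b}")
# A wins the popular vote by almost three million yet loses the Electoral
# College 30 to 49, because A's votes are concentrated in one lopsided state.
```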
So, when Chris Matthews was shouting, “It’s razor thin! It’s
razor thin!” because the popular vote appeared close, he was unknowingly saying
that Obama was ahead, because razor thin in this particular election would have been all Obama needed.
Though the polls made it appear thin once you accounted for their margins of
error, the popular vote wasn’t actually terribly close.
Obama won it by three million votes after all the results were in. “W” beat
John Kerry by the same margin in 2004. Popular votes can be within the margin
of error and still have large victories for one side.
What to Do Next Time
My suggestion would be to ignore the pundits next time and
understand they are acting out of self-interest (keeping their jobs). I would
bet on Silver and Wang’s interpretations, but if you have convinced yourself that they
are left-leaning because they correctly called two presidential elections for
Democrats, then go with the prediction markets.
Go with state polls and the Electoral College, which have
proven to have more predictive power than national popular vote polls. The
popular vote, if it is reasonably close, doesn’t tell you who will win in every
election.
Look at a few prediction markets, including Intrade, and
remember that a 70% chance of winning isn’t a certain chance of winning. On the
other hand, if your favorite news network is predicting a landslide for your
candidate and Intrade gives him a 20% chance, you might want to factor that into your thinking.
If the poll aggregators like Wang and Silver combined with the prediction markets like Intrade show some sort of consensus, they're probably onto something.
With Wang putting the probability of an Obama win at nearly 100%, Silver
putting it over 90%, and Intrade putting it over 90% days before the election, I
wasn’t surprised by the outcome, regardless of what Newt said he thought. I
thought a Romney win was possible, but I would have been surprised.
The information you need is out there in black and white, but you have to filter out the red and blue.