The election results are in and … **the geeks won**!

What does this victory mean? That mathematical models can no longer be derided by "gut-feeling" pundits. That Silver's contention -- TV pundits are generally no more accurate than a coin toss -- must now be given wider credence.

The great thing about a model like Silver's (and that of similarly winning math nerds, such as Sam Wang of the Princeton Election Consortium) is that it takes all that myopic human bias out of the equation. The ever-present temptation to cherry-pick polls is subverted.

You set your parameters at the start, deciding how much weight and accuracy you’re going to give to each poll based purely on their historical accuracy. You feed in whatever other conditions you think will matter to the result. Then, you sit back and let the algorithm do the work.

Silver may be a registered Democrat, but he learned back when he was doing baseball analysis that he'd never get anywhere if his models weren't absolutely neutral, straight down the line between feuding teams.

At least I hope this is the real takeaway … time will tell.

We all have the tendency to put a finger over information we find unpalatable, be it an inconvenient poll result or bad news on iPad mini sales prospects. Any aggregation that's agnostic about which polls it includes will automatically remove these biases.

The science here was simply weighting the polls correctly. The date of the sample, the size of the sample, the lean of the pollster, and the overall quality of the pollster all factor in. So long as the formula accounts for all this (and other factors I'm not thinking of off the top of my head), it's going to give a reasonably accurate estimate, with the proviso that polls are slightly backward-looking and thus won't pick up last-minute shifts.
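As an illustration only, a weighted aggregation along these lines might look like the sketch below. The specific factors, decay curve, and numbers are my assumptions for the sake of the example, not Silver's actual formula.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Poll:
    pollster: str
    end_date: date
    sample_size: int
    margin: float        # candidate A minus candidate B, in points
    house_lean: float    # pollster's historical lean, in points (assumed known)
    quality: float       # 0..1 historical-accuracy score (assumed known)

def weighted_margin(polls, today, half_life_days=7.0):
    """Combine polls into one margin estimate.

    Weight = recency decay * sqrt(sample size) * quality score.
    Each poll's margin is first corrected for its house lean.
    """
    num = den = 0.0
    for p in polls:
        age = (today - p.end_date).days
        recency = 0.5 ** (age / half_life_days)      # older polls count less
        weight = recency * (p.sample_size ** 0.5) * p.quality
        num += weight * (p.margin - p.house_lean)    # de-bias, then weight
        den += weight
    return num / den

# Hypothetical polls, purely for demonstration
polls = [
    Poll("Pollster A", date(2012, 11, 4), 1000, 2.0,  1.0, 0.9),
    Poll("Pollster B", date(2012, 11, 1),  800, 0.0, -1.0, 0.8),
]
print(round(weighted_margin(polls, date(2012, 11, 6)), 2))  # → 1.0
```

Note that both hypothetical polls, once corrected for house lean, point to the same +1 margin, which is the whole idea: individually biased inputs can still yield a sensible aggregate.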

The potential flaw was systematic bias in the polling itself -- a garbage-in/garbage-out situation. If the polls were pretty much all off, then clearly a system that aggregates their results would miss the call. Pollsters all had to make assumptions about the demographics of the actual electorate, so it was certainly possible that they systematically overweighted or underweighted certain groups, although that wasn't enormously likely, given that the pollsters didn't use uniform assumptions. In fact, that's the whole point of aggregating and weighting.

But hey, it could still have spit out bad results for some unforeseen reason. And that wouldn't have "proven" the model dysfunctional, any more than the fact that it got this one right proves it's the perfect prediction tool. It's not; it's just another way of taking the same data everyone is using and combining it into one simple result. Process trumps outcome, and the process seemed sound. The algorithm came before the data that filled it this go-round.

So, what can we learn? The easy answer is more likely correct than the convoluted one. One individual poll can make all sorts of mistakes, but an aggregation has a better chance of correcting those mistakes. The model will likely improve with each iteration: Silver now has an entire cycle, a bazillion polls, and the final results to work with. We'll likely also see lots of math geeks taking this idea, tweaking it, and creating their own models … perhaps better ones.

Net-net, it's a GOOD thing. More Math! Fewer pundits pulling predictions out of their craws.

Oh, and note to self. Set up an Intrade account before the next election. The long-Obama-on-Intrade versus long-Romney-anywhere-else spread got better and better as the election closed in.

*Disclaimer: The views represented on this blog are those of the individual authors only, and do not necessarily represent the views of Schaeffer's Investment Research.*
