What is the point of FiveThirtyEight?
Forecasts and predictions are a growing part of our lives, but are we missing the point of them?
“This is not an election where, like, the model is going to tell you anything that interesting. Biden is way ahead in the polls, and yet there is some chance - not particularly high but some non-trivial chance - that Trump could win. And if you were betting on the outcome, wagering on it, then whether Trump’s chances are, say, 5% or 25% are pretty relevant, that’s a big spread; but as a practical participant in the outcome I’m not sure that’s necessarily a terribly useful distinction.”
Nate Silver, speaking on the FiveThirtyEight podcast before the election.
Silver’s caution didn’t stop people from completely losing their minds on election night. Twitter has two modes, certainty or panic, and Florida falling into the Trump column sent people into total meltdown, raging against pollsters, aggregators, forecasters, and basically anyone with a calculator. One tweet I saw called for ‘all data journalists’ to be fired.
FiveThirtyEight’s model suggested that Biden would survive a Clinton-level polling miss, and a narrow win was well within the fat part of its probability distribution, but somehow that message was lost on much of its readership. The forecast couldn’t provide the certainty its audience of compulsive refreshers craved, so they manufactured it themselves, taking the reported 89% figure and translating it in their own minds into a certain landslide.
Which raises an important question. If a model can’t tell you “anything that interesting” when a candidate has a double-digit national polling lead, and if its audience will interpret the output however they want regardless of your cautions and caveats, then what exactly is it for?
Predictions are suddenly everywhere in our lives. Data-driven models were a niche pursuit until relatively recently, for the simple reason that nobody outside certain specialist fields had any data to speak of. Then data fell upon our world like Noah’s flood, saturating every aspect of our lives - sport, politics, culture, health, even love - and where data flowed, some kind of forecasting inevitably followed, from the machine learning algorithms of Big Tech to the ‘wetware’ approach of the superforecasters.
All these efforts are driven by the same belief, rarely questioned: that rational, data-driven forecasts help us make better choices that improve our lives or businesses. If I’m a farmer, then it’s useful to know of an imminent drought. If I’m a market trader, then better predictions about stocks and commodities will help me make more money. If I’m a streaming service, then the more accurately I can predict which content users will like, the more they’ll engage with - and spend on - my product.
This data-driven world sounds great until it isn’t. Quant funds seemed like a good place to put your money until the ‘quant winter’ arrived in 2018. Content recommendation systems worked brilliantly until they turned Uncle Derek into a paranoid QAnon obsessive. You don’t have to look far to find people, businesses, even whole governments, who were data-driven right off a cliff.
The models are innocent in all this, of course. Models never hurt anyone - they’re just imperfect evaluations of probabilities based on limited data. It’s the people who interpret models, and try to use those results within some kind of decision-making framework or system, who cause the big screw-ups.
The first problem is interpretation. Probabilities are famously counter-intuitive. The models themselves are often opaque, and presented in unhelpful or confusing ways that draw people to the wrong conclusions. As Natalie Jackson wrote in a recent evaluation of election models: “when probability outputs are broadcast with great fanfare to the general public, most people will misunderstand them, or will make incorrect assumptions about what they mean, or will filter them through their own biases.” [https://centerforpolitics.org/crystalball/articles/poll-based-election-forecasts-will-always-struggle-with-uncertainty/]
I would love to see user testing on this, but I suspect that most people associate a higher percentage on FiveThirtyEight’s forecast with a larger win for that candidate. They would expect a candidate with an 89% probability of winning to get significantly more electoral college votes than one sitting at 70%, perhaps even to have the contest wrapped up on election night, and they would be disappointed with the model if that didn’t happen. That’s not at all how the model works - the electoral college could be very close, yet a candidate with a solid edge in the tipping-point state would still win it nearly every time - but explaining all this with a single number is a tall order. ’89%’ has no meaning in the real world, so audiences are inclined to make up their own.
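To make that concrete, here is a toy simulation - emphatically not FiveThirtyEight’s model, and every number in it is invented - of a candidate three points ahead in the tipping-point state with a typical-sized polling error. The win probability comes out near 90%, yet the expected margin is still only three points:

```python
import numpy as np

# Illustrative toy model only: assumed lead and assumed polling error,
# nothing here is taken from FiveThirtyEight's actual methodology.
rng = np.random.default_rng(538)
n_sims = 100_000

lead = 3.0           # assumed polling lead in the tipping-point state, in points
polling_error = 2.5  # assumed standard deviation of the polling error

# Simulate many plausible election nights for that one decisive state.
margins = rng.normal(lead, polling_error, n_sims)

win_prob = (margins > 0).mean()
print(f"Win probability: {win_prob:.0%}")                    # roughly 88%
print(f"Median margin:   {np.median(margins):.1f} points")   # still only ~3 points
```

A near-90% headline number and a nail-biter of a margin can describe exactly the same race, which is precisely the nuance that gets lost.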
The second problem is that even if the models are good and people can understand them, their predictions don’t always have a practical use. How much does it matter whether a candidate has a 25% chance or a 75% chance of winning? If you’re a foreign diplomat, or a business, or a voter, or a lobbyist, the answer is ‘surprisingly little’. You’re faced with a binary situation in which either side could lose, and whatever strategy you have for the next few years needs to take this into account; otherwise your strategy is fragile, over-optimised and liable to fail.
This is true of most things in life and business. Good strategy isn’t about having a perfect plan based on superior foresight. The elaborate masterplans of fictional heroes and villains are rarely works of genius; they’re fragile things that succeed only because of the divine interventions of their writers, ensuring each piece falls conveniently into place. Successful strategy in real life starts with the acceptance that much is unknown, then seeks to mitigate that by managing risks and refining knowledge over time.
So if you want to go for a run tomorrow, your plan shouldn’t depend on whether it rains or not - the important thing is to be prepared whatever the weather, to manage the risk. If you’re launching a new product, you can try to predict which marketing strategy will work best and go all in, but you’re better off trying lots of things, refining your understanding and doubling down on the approaches that work.
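For what it’s worth, that ‘try lots of things and double down’ idea can be sketched in a few lines of code. This is only an illustration with invented conversion rates, not a recipe: three hypothetical marketing channels, with each unit of budget allocated by Thompson sampling rather than by an up-front forecast of the winner.

```python
import numpy as np

# Toy explore-and-refine sketch: three hypothetical channels with unknown
# conversion rates. The "true" rates below are invented for the illustration
# and would be unknown in real life.
rng = np.random.default_rng(0)

true_rates = [0.02, 0.035, 0.05]
successes = np.zeros(3)
failures = np.zeros(3)

for _ in range(5_000):
    # Sample a plausible conversion rate for each channel from its posterior...
    sampled = rng.beta(successes + 1, failures + 1)
    # ...and spend the next unit of budget on the most promising one.
    choice = int(np.argmax(sampled))
    converted = rng.random() < true_rates[choice]
    successes[choice] += converted
    failures[choice] += not converted

print("Trials per channel:", (successes + failures).astype(int))
# Most of the spend ends up on the channel that actually converts best,
# without ever needing a perfect forecast up front.
```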
That doesn’t mean forecasts aren’t useful; but they should be treated as part of an ongoing process, not gospel from on high. The act of building and refining a model, and truly appreciating its imperfections, is more important than whatever the output happens to be at any given moment. Which makes it ironic that this is exactly the opposite of how forecasts are usually presented to us. Hell, in many cases we’re lucky to even see that output: often, the results are fed into some automated decision-making process that guides our lives - what we watch, who we date, what we buy - in ways that are as obscure to us as a laser pointer is to a cat.
The greatest value of FiveThirtyEight’s model isn’t the number it spits out before an election; it’s the impact that the process of modelling has on their thinking throughout the election campaign. The act of recording their predictions, being publicly accountable for them, of revisiting and revising them, is in and of itself incalculably valuable. It helps Silver and his writers improve their own understanding, stay honest, and produce some of the most robust analysis of US politics.
The beauty of this is that anyone can make predictions, and so anyone can benefit from this approach. You don’t need fancy models, although by all means use them if you like: you just need the ability to write a few predictions down in a list, publish them somewhere, and check in on them from time to time. They can be about anything that matters in your life, from who gets the first promotion on your team to how long your friend’s new relationship is going to last. Don’t just be a passive consumer of other people’s forecasts: start making some of your own, and see how much sharper it makes your thinking over time.
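If you want to keep yourself honest, scoring those predictions takes only a few lines of code. Here is a minimal sketch - the entries are invented - using the Brier score, where 0 is perfect foresight and 0.25 is what you’d get by always saying 50%:

```python
# A hypothetical personal prediction log: each entry records a claim,
# the probability you gave it, and (once resolved) what actually happened.
predictions = [
    {"claim": "Sam gets the first promotion on the team", "p": 0.70, "came_true": True},
    {"claim": "The new product ships before June",        "p": 0.60, "came_true": False},
    {"claim": "Friend's new relationship lasts a year",   "p": 0.25, "came_true": True},
]

# Brier score: the mean squared gap between your probability and the outcome.
resolved = [p for p in predictions if p["came_true"] is not None]
brier = sum((p["p"] - p["came_true"]) ** 2 for p in resolved) / len(resolved)
print(f"Brier score over {len(resolved)} predictions: {brier:.2f}")
```

Watching that number drift down (or stubbornly refuse to) over a year of predictions tells you more about your own judgement than any single forecast ever will.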