I think people need to stop thinking that “most likely outcome” = prediction. They gave Trump a 1/4 chance of winning in 2016, which is far from impossible and better than most were saying. Their latest trackers have really emphasized the probability aspect of things, rather than the expected vote share.
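Just to put the “1/4 chance” in perspective, here’s a quick sketch (hypothetical numbers, not 538’s actual model) of what a 25% probability means in practice:

```python
import random

# A "25% chance" forecast is not a prediction that the event won't happen.
# Simulate many elections where the underdog genuinely has a 1-in-4 chance.
random.seed(0)
trials = 100_000
underdog_wins = sum(random.random() < 0.25 for _ in range(trials))

print(f"Underdog won {underdog_wins / trials:.1%} of simulated elections")
# Roughly 25% of the time -- about the odds of flipping two heads in a row,
# which nobody would call impossible.
```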
They gave him a 1/4, with a bunch of caveats like “If we see these Midwest states start trending red, that’s a good sign for Trump.” And then Hillary lost Pennsylvania, and 538 basically called it for Trump on the spot.
But polling in 2016 was generally stronger, because we had more professional pollsters and fewer partisan polling operations. Modern polling is increasingly polluted by unreliable narrators, push polls, and polling-as-propaganda for partisan news sites. The problem with 538, structurally speaking, was that it got people to stop doing their own polls and fixate on aggregates to the exclusion of internal research. This, combined with the ongoing consolidation of domestic media markets, means we have fewer and fewer people doing professional polling research.
So the data that firms like 538 use has degraded. Interest in their results has faded as a consequence. And the trend towards eye-popping click-bait headlines has resulted in pollsters being defunded in favor of automated screen scrapers and headline-generator scripts.
538 has been unreliable for several election cycles, though…
They actually did a project about this. Here’s how close they were with US House predictions: https://projects.fivethirtyeight.com/checking-our-work/us-house-elections/ (you can look up other elections, but since there are so many races to work with here, I thought it was a good place to start)
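If anyone’s wondering what “checking their work” means concretely, the core idea is calibration: bucket forecasts by stated probability and see how often those outcomes actually happened. Rough sketch below; the numbers are made up and this isn’t 538’s actual data format:

```python
from collections import defaultdict

# Hypothetical (forecast probability for the favorite, did the favorite win) pairs.
forecasts = [
    (0.55, 0), (0.62, 1), (0.71, 1), (0.80, 1),
    (0.88, 1), (0.93, 1), (0.97, 1), (0.99, 1),
]

# Bucket races by the stated probability and compare to the observed win rate.
buckets = defaultdict(list)
for prob, won in forecasts:
    buckets[round(prob, 1)].append(won)

for prob in sorted(buckets):
    outcomes = buckets[prob]
    print(f"forecast ~{prob:.0%}: favorite won {sum(outcomes) / len(outcomes):.0%} "
          f"of {len(outcomes)} races")

# A well-calibrated model's "70% favorites" should win about 70% of the time;
# the linked project runs that kind of check across thousands of races.
```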
I mean, they were only actually reliable in 2008, and that’s looking more and more like a fluke.
True.
Are they more accurate than other analyses, though? What is the magnitude of the error?
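For what it’s worth, the standard way to put a number on the error of probabilistic forecasts is the Brier score (which, if I remember right, is what their checking-our-work project reports as a skill score). Minimal sketch with made-up numbers:

```python
# Brier score: mean squared difference between forecast probability and outcome.
# 0 is perfect; always saying 50/50 scores 0.25.
forecasts = [0.75, 0.60, 0.90, 0.30]   # hypothetical stated win probabilities
outcomes = [1, 0, 1, 0]                # 1 if that candidate actually won

brier = sum((p - o) ** 2 for p, o in zip(forecasts, outcomes)) / len(forecasts)
print(f"Brier score: {brier:.3f}")

# Comparing this across models (538 vs. other aggregators vs. a naive 50/50
# baseline) is one way to answer "more accurate, and by how much?"
```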