In the dark hours of December 2016, we noticed something out of the ordinary. In one of the first elections following the previous month’s political apocalypse, a Democrat running for state Senate in Iowa ran up a margin 31 percentage points better than Hillary Clinton had in that same district.
A month later, we saw a 34-point overperformance in another Iowa district, this time for the state House.
We sat up and took notice.
And it wasn’t just a bounce-back from a Democratic downturn in Iowa, which had swung hard toward Donald Trump. These two Democrats also outran Barack Obama’s 2012 margin by double digits. This was not something we’d often seen in special elections over the previous several years.
Two weeks later, there was yet another massive overperformance, this time in Minnesota. Immediately, we swung into action. Which means, of course, we started a spreadsheet. I mean, we are Daily Kos Elections!
After a few months of tracking results, we started to wonder what they meant. We dove deep into the elections archives—all the way back to 1989—and were able to show for the first time that special elections, for both the U.S. House and state legislatures, are an important signal of the political environment: Their results are correlated with the House popular vote. Thanks to our innovation, you can now regularly find coverage of special elections and their meaning in national news outlets.
But three cycles later, how is this relationship holding up? Quite well, thank you. Special election results do, indeed, open a window into the future.
Uphill both ways, in the snow
Before we dive into the details: This entire analysis was possible in the first place only thanks to the heroic efforts of Jeff Singer and the Daily Kos Elections team, who calculated the presidential election results in nearly every legislative district in the country. Perhaps not exactly uphill both ways in the snow, but certainly from PDFs of faxes of spreadsheets.
This work would also have been impossible without the meticulous collection of historic election results found at Our Campaigns, for which we are very grateful. We’d also like to give a shout-out to the sources we’ve used for most of the 2020 presidential results for state legislative districts: Dave’s Redistricting App and the team at VEST, which provided DRA with election data.
In our day-to-day analyses, we focus on the averages of special election results compared with presidential numbers. In our spreadsheets, we include all elections with one Democrat running against one Republican. (Races where the Democrat and Republican combine for less than 90% of the vote remain in the spreadsheet but are excluded from calculations.) Spreadsheets for all cycles can be found at dailykosdata.com; the 2023-24 cycle is here.
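The inclusion rule above can be sketched in a few lines of Python. The rows here are made-up examples, not actual race data from our spreadsheets:

```python
# Hypothetical one-Democrat-vs.-one-Republican races (vote percentages).
races = [
    {"dem": 55.0, "rep": 43.0},  # two-party share 98% -> included
    {"dem": 40.0, "rep": 35.0},  # two-party share 75% -> excluded from averages
]

# Keep only races where Democrat + Republican reach at least 90% combined.
included = [r for r in races if r["dem"] + r["rep"] >= 90.0]

# Democratic margins for the races that count toward our calculations.
margins = [r["dem"] - r["rep"] for r in included]
print(margins)  # [12.0]
```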
When we want to compare cycles, we need to compare special elections with a neutral baseline. For example, in the 2020 cycle, Democrats running in special elections performed 4.8 points better than Clinton’s margin in 2016 and 1.1 points better than Obama’s margin in 2012. We can normalize these numbers by adding in the presidential popular vote: Since Clinton won nationally by 1.8 points and Obama by 3.9 points, these numbers transform to a political environment of D+6.6 and D+5.0, respectively; averaged together, the calculation for the 2020 cycle is an environment of D+5.8.
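The normalization is simple arithmetic. Here is a sketch in Python using the 2020-cycle figures from the paragraph above (the function name is ours, purely for illustration):

```python
def environment(overperformance, pres_popular_vote_margin):
    """Convert a raw overperformance vs. a presidential baseline into a
    neutral political-environment number by adding back that baseline's
    national popular-vote margin (positive = Democratic)."""
    return overperformance + pres_popular_vote_margin

# Democrats ran 4.8 points ahead of Clinton's 2016 district margins;
# Clinton won the national popular vote by 1.8 points.
env_vs_2016 = environment(4.8, 1.8)  # D+6.6

# Democrats ran 1.1 points ahead of Obama's 2012 district margins;
# Obama won the national popular vote by 3.9 points.
env_vs_2012 = environment(1.1, 3.9)  # D+5.0

# Average the two baselines for the cycle's overall number.
overall = (env_vs_2016 + env_vs_2012) / 2
print(f"2020-cycle environment: D+{overall:.1f}")  # D+5.8
```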
This figure representing the overall political environment is plotted below for the past five cycles. For each cycle, special election margins are compared with prior presidential cycles, going back to 2012. (With one exception: For 2024, we finally decided to drop the 2012 results, given how long ago that race was.)
Over the past five cycles, these calculations correlate very well with the House popular vote. In four of the five cycles, the calculated environment was slightly higher than the actual result, but a predictable deviation is a useful deviation.
We can easily see this by plotting a simple trend line through the data. While there are only five points, the conclusion (as you can see just below) is: so far, so good.
But what about the longer track record? We have to shift to a different approach.
The original-flavor correlation
Back in 2018, when we were trying to figure out whether special elections were, in fact, important, we designed a different measure of the political environment: the Special Elections Index. Instead of comparing special election results with presidential election results (which were available at the district level going back only a few years), the SEI compares results with previous legislative elections held in the same district. This has several significant drawbacks (many legislative seats go uncontested, for example), but it does mean we can peer decades into the past. You can read about how the index was developed here.
After updating the SEI data, this is what we get (also seen at the top of this post):
The first thing to notice is that overall, there’s still a long-term relationship between the SEI (the green circles with a dotted line) and the House popular vote (the thicker purple line). However, over the last several cycles, you’ll spot a gap: The SEI has, most recently, almost always been above the purple line of the popular vote.
Some of that gap simply comes down to the available data. The widest spread came in 2022, when one extremely unusual special election was a major outlier yet contributed heavily to the index; conversely, noncompetitive districts were underrepresented in the index, a limitation inherent to its design but one that produced additional error in 2022 in particular. Fortunately, neither problem affected the 2022 calculations based on presidential results above. (Further discussion of 2022 can be found in a forthcoming post.)
Because the SEI compares special elections only with other elections held in the exact same district, the system essentially resets every time redistricting takes place. So let’s take a look at the correlation between SEI and popular vote during different decades—that is, during the lifetime of a given set of districts.
Some pretty clear differences seem to exist across decades. (Note that 1998 is excluded from this analysis; see the discussion here.) But what about 2024? Will new districts bring a new trend? We will have to see—though the recent past offers some clues.
One possible explanation for the differing trend lines in the graph above is the change in voting behavior of certain demographic groups over time.
Here’s the age gap, from (admittedly imperfect) exit polling data: the difference in Democratic margin between the oldest and youngest voters.
We can see a huge jump as millennials joined the electorate under President George W. Bush and their grandparents red-pilled themselves on Fox News. This gap has been more or less sustained at high levels for the past handful of cycles.
Meanwhile, more recently, an education gap opened up in the Trump era:
If younger people and those with lower levels of educational attainment are less likely than average to vote in lower-turnout elections such as special elections, we have a plausible mechanism for the differences between the SEI and the popular vote seen above. That is to say, as Democrats began to depend more on younger voters in the first decade of the century, special election results became more Republican than the environment would otherwise indicate. But conversely, as the Republican coalition shed educated voters, special elections swung the other way and became more Democratic than they theoretically ought to be.
A rough calculation—assuming that in special elections, college-educated presidential voters (a group already skewed toward higher educational attainment than the population as a whole) turn out at twice the rate of those without a degree—yields a nationwide Democratic overperformance of around 3 points. This is at least consistent with what we observe. (Counterintuitively, the effect would be smaller in the most educated districts, something that may partially explain why the smallest overperformances in the 2018 cycle came in the districts with the least change in presidential numbers from 2012 to 2016.)
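Here is a sketch of that back-of-the-envelope calculation. Every input below is an illustrative assumption (a 40% college-educated share of the presidential electorate, a 17.5-point education gap in Democratic margin), chosen only to show how a 2x turnout ratio translates into an overperformance of roughly 3 points; these are not exit-poll figures:

```python
# Illustrative assumptions, not real exit-poll data.
college_share = 0.40      # share of presidential electorate with a degree (assumed)
margin_college = 10.0     # Dem margin among college grads (assumed)
margin_noncollege = -7.5  # Dem margin among non-grads (assumed)
turnout_ratio = 2.0       # college vs. non-college turnout in specials (assumed)

# Democratic margin with the full presidential electorate (the baseline).
pres_margin = (college_share * margin_college
               + (1 - college_share) * margin_noncollege)

# Reweight the electorate for a special election, where college-educated
# voters turn out at twice the rate of everyone else.
w = college_share * turnout_ratio
w = w / (w + (1 - college_share))
special_margin = w * margin_college + (1 - w) * margin_noncollege

# The difference is the Democratic overperformance in special elections.
print(f"Democratic overperformance: {special_margin - pres_margin:+.1f} points")
# prints: Democratic overperformance: +3.0 points
```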
If this is the case, what does it mean going forward? Given the outright hostility to learning espoused by conservatives, the education gap may very well continue to grow. This would mean the gap between numbers calculated from special elections and the popular vote may increase as well. However, as-yet unforeseen changes in the parties’ coalitions could always counter this effect.
What about a late change in the political environment?
Using special election results to determine the November election environment depends on a relatively constant political environment over the course of an election cycle. But what happens if there’s a big change? Didn’t we see just that in 2022?
Yes, we did. Special elections showed us a massive shift after the Supreme Court axed abortion rights! And yet the average overperformance across the entire cycle was still able to predict the eventual November results reasonably well. Exploring that contradiction is the topic of a forthcoming piece.
The big picture
So six years on, special elections have indeed proven to be useful in analyzing the election environment. There is still a good long-term correlation between the results of special elections and November elections. We can also say that comparing special elections with presidential results appears to work slightly better than comparing them with the results of other legislative elections in the same district (the Special Elections Index). And we can even manage to pluck additional insights out of the data if we’re careful.