Forecast methodology

From human judgment to probabilities to seat counts.

Our forecast starts with expert ratings — not regression outputs — then uses mathematics to answer the question every campaign asks: what are the odds?
Rating tiers
9
From Safe D through Safe R
Simulations
10,000
Correlated Monte Carlo runs per forecast
Offices covered
Senate · Gov · House
Each with its own wave parameter
Step 1

The rating system

Every race gets a rating on a nine-tier scale. The tiers are not arbitrary — each one maps to a specific probability of the Democrat winning.
Rating tiers → win probability

Nine tiers from Safe D to Safe R, spanning a scale from 100% D win through 50/50 to 100% R win. For example:

Tilt D
60% Dem win · 40% Rep win

Barely in favor of the Democrat. Either candidate could win in a normal environment.
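The tier-to-probability mapping can be sketched as a simple lookup table. Only the Tilt D value (60%) appears in the text above; every other probability below is a hypothetical placeholder, not the published mapping.

```python
# Hypothetical tier -> P(Democrat wins) mapping.  Only Tilt D = 0.60 is
# stated in the text; the remaining values are illustrative placeholders.
TIER_PROBS = {
    "Safe D": 0.98, "Likely D": 0.85, "Lean D": 0.75, "Tilt D": 0.60,
    "Toss-up": 0.50,
    "Tilt R": 0.40, "Lean R": 0.25, "Likely R": 0.15, "Safe R": 0.02,
}

def dem_win_prob(rating: str) -> float:
    """Look up the Democratic win probability for a nine-tier rating."""
    return TIER_PROBS[rating]
```

Note that the placeholder values are mirror-symmetric: a Lean R race is exactly as favorable to Republicans as a Lean D race is to Democrats.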

Step 2

Ratings become probabilities

Why probabilities, not predictions
A binary prediction (Democrat wins / Republican wins) loses information. A Tilt D race is genuinely closer to 60/40 than to 95/5. We preserve that nuance by storing probabilities and reasoning in terms of odds, not certainty.
Reading the 60% number
If you ran a hundred cycles under identical conditions, the Democrat would win roughly sixty of them. A 60% favorite can still lose — and often does, exactly when the environment shifts against them.
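The frequency reading of a 60% favorite can be checked with a quick simulation: repeat the same race many times at a fixed 0.6 win probability and count the wins. A minimal sketch:

```python
import random

# Frequency reading of "60%": rerun the same race many times with a
# fixed 0.6 Democratic win probability and count how often the
# Democrat wins.  The seed just makes the run reproducible.
random.seed(0)
n_cycles = 100_000
wins = sum(random.random() < 0.60 for _ in range(n_cycles))
# wins / n_cycles lands very close to 0.60
```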
Step 3

Probabilities become seat distributions

Once each race has a probability, we can calculate the odds of every possible total — Democrats winning 48 Senate seats, 49, 50, and so on.
Exact math, not simulation
We compute the full probability distribution using dynamic programming — essentially adding up every combination of race outcomes. For 435 House races, this finishes in milliseconds.
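The dynamic-programming pass described above can be sketched as follows. This is an illustrative Poisson-binomial computation under an independence assumption, not our production code:

```python
def seat_distribution(probs):
    """Exact P(Democrats win k seats) for independent races, via DP.

    dist[k] holds the probability that Democrats win exactly k of the
    races processed so far; each new race updates the whole array in
    O(n) time, for O(n^2) total -- trivial even for 435 House races.
    """
    dist = [1.0]  # zero races processed: certainly zero seats
    for p in probs:
        new = [0.0] * (len(dist) + 1)
        for k, mass in enumerate(dist):
            new[k] += mass * (1 - p)   # Democrat loses this race
            new[k + 1] += mass * p     # Democrat wins this race
        dist = new
    return dist
```

For two 50/50 races, for example, `seat_distribution([0.5, 0.5])` returns the familiar `[0.25, 0.5, 0.25]`.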
The 80% confidence interval
We report an 80% range: the seat counts covering 80% of the probability mass. 'D 213–235' means 80% of outcomes land in that range. The other 20% are the tails — upsets, waves, unexpected events.
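One simple way to extract a central 80% range from a seat-count distribution is to take the 10th- and 90th-percentile seat counts, leaving roughly 10% of the probability mass in each tail. A sketch under that convention:

```python
def interval_80(dist):
    """Central 80% seat range (lo, hi) from a seat-count pmf.

    dist[k] is P(Democrats win exactly k seats).  lo is the first seat
    count where cumulative mass reaches 10%, hi the first where it
    reaches 90%, so roughly 10% of outcomes sit in each tail.
    """
    lo = hi = None
    cum = 0.0
    for k, p in enumerate(dist):
        cum += p
        if lo is None and cum >= 0.10:
            lo = k
        if hi is None and cum >= 0.90:
            hi = k
    return lo, hi
```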
Step 4

The national wave

Races are not independent. When one race shifts, correlated races shift with it. This is where the forecast earns its keep.
Why correlation matters
If ten tilt races had independent 60% D probabilities, the odds of all ten going D would be tiny. In reality, they move together. If the environment breaks toward Democrats, many tilt races flip at once. Our model captures this.
How we do it
We run 10,000 simulations. Each one draws a national wave from a distribution, shifts every race's probability in the same direction, then samples outcomes. The result is a distribution with realistic fat tails — genuine blowout and wave-year possibilities.
National wave simulator

Drag the slider to shift every race by a correlated national wave. Watch the whole seat distribution move. This is an illustration built on a synthetic 435-district baseline, not our live forecast.

[Interactive chart: a wave slider running from Strong R wave through Even to Strong D wave. At a wave of 0.00 (even national environment), the synthetic baseline shows an expected 206 D seats, a 0% chance of a D majority, and an 80% range of 202–211.]

Illustrative only. Our actual forecast uses 10,000 correlated simulations — not a fixed wave setting.

Probability of each D seat count under the current wave.
Step 5

Why different offices behave differently

Each chamber gets its own wave calibration — but the direction, not the exact parameters, is what matters.

Senate races are the most volatile per race. There are only 33–35 in a given cycle, each one a statewide contest, and a single flip moves the chamber margin by a full seat. Senate waves are calibrated to reflect that higher per-race variance.

Governor races sit between Senate and House. They are statewide, but voters judge governors on state-specific records more than national mood. Our model assigns them a moderate wave — correlated with national partisanship, but less tightly than the Senate.

House races are 435 smaller contests. Individual upsets are common; the aggregate distribution is relatively tight. The House wave is calibrated to reflect the tighter aggregate behavior.

We do not publish the exact wave parameters. The conceptual ordering — Senate > Governor > House in per-race volatility — is what drives the forecast.
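Since only the ordering is published, the per-office calibration can be expressed as a hypothetical configuration. The numbers below are placeholders; the Senate > Governor > House ordering is the only meaningful part.

```python
# Hypothetical per-office wave spreads (standard deviation of the shared
# national shift).  The real parameters are not published; only the
# ordering Senate > Governor > House is meaningful.
WAVE_SD = {"senate": 0.08, "governor": 0.06, "house": 0.04}

def office_wave_sd(office: str) -> float:
    """Return the illustrative wave spread for a chamber."""
    return WAVE_SD[office]
```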

What we don't do

Our edge is not a regression

We do not package up polling averages, fundamentals, and incumbency into an algorithmic prediction. That space is crowded.
No polling-average-driven model
Polls inform ratings as one input among many. We do not mechanically weight polls by house effects or decay them by age the way a polling-average-driven forecast does.
No fundamentals regression
Fundamentals models extrapolate from historical patterns — economy, incumbency, GDP growth. Those patterns break in realignments. We rely on ratings that can respond to this cycle's conditions.
Our edge
165 years of granular election data lets us see patterns others miss. Expert ratings anchored in that history. Probability simulation that respects correlation. The combination is the product.
We show our work
Every forecast links back to ratings, and every rating is tied to the data supporting it. Transparency is the point.
Limitations

Where we're honest

Forecasts are not predictions. They are structured uncertainty.

Ratings reflect human judgment. Good judgment most of the time, but not all the time. Races get miscategorized. We publish rating changes, not quiet edits.

The wave parameters are calibrated on history. A cycle that behaves unlike any recent cycle — a realignment, an unprecedented event — will stretch the model. Our 80% range is not a ceiling.

Individual races can defy the aggregate pattern. A strong candidate in a hostile environment, or a weak incumbent in a safe seat, will beat or break the tier. Grades, polling, and race-specific context still matter.