Most sales forecasts are wrong by the second week of the quarter. The methodology is either too simple to capture reality or too complex for anyone to actually trust.
This post walks through five forecasting methods that real B2B sales orgs use. For each, you will see when to use it, the formula, the accuracy you should expect, and a worked example with numbers.
## Why forecasting methodology matters
Forecasting is not a math problem. It is a decision-making problem.
The number you commit to drives hiring, capacity planning, board updates, and bonus calculations. A forecast that swings 30 percent week over week is not a forecast. It is a guess wearing a spreadsheet.
The right method depends on your data quality, your sales motion, your deal cycle length, and how much variance you can tolerate. There is no single best method. Most mature orgs use two or three together and triangulate.
## Method 1: Historical forecasting
The simplest method. Take last quarter, last year, or a multi-quarter average and project it forward.
When to use it. Stable, mature businesses with low churn and predictable seasonality. Consumer goods. Established product lines with several years of clean data.
Formula.
Forecast = Prior period revenue x (1 + growth rate)
Worked example. Last Q3 you closed $4.2M. Annual growth has been 18 percent. Q3 forecast is $4.2M x 1.18 = $4.96M.
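In code, the calculation is a one-liner. A minimal sketch using the figures from the worked example (the function name is illustrative):

```python
# Historical forecasting: project the prior period forward at the trend growth rate.

def historical_forecast(prior_revenue: float, growth_rate: float) -> float:
    """Forecast = prior period revenue x (1 + growth rate)."""
    return prior_revenue * (1 + growth_rate)

q3_forecast = historical_forecast(4_200_000, 0.18)
print(f"Q3 forecast: ${q3_forecast:,.0f}")  # $4,956,000
```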
Accuracy. High in stable businesses. Useless if your motion, market, or product changes meaningfully. Historical forecasting cannot see a new competitor, a pricing change, or a category shift coming.
Failure mode. Anchoring. The number feels safe because it came from history, but the world has moved.
## Method 2: Weighted pipeline forecasting
Each open opportunity gets a probability weight based on its stage. Multiply, add, done.
When to use it. B2B SaaS. Mid-market and enterprise. When you have at least 12 months of stage-to-close conversion data to set the weights honestly.
Formula.
Forecast = Sum of (deal value x stage probability) for all open opportunities
Worked example. Suppose your stages have these historical close rates:
| Stage | Win rate |
|---|---|
| Discovery | 10% |
| Qualified | 25% |
| Proposal | 50% |
| Negotiation | 75% |
| Verbal | 90% |
You have these open deals for the quarter:
- Acme Corp, $200K, Negotiation = $200K x 0.75 = $150K
- Globex, $400K, Proposal = $400K x 0.50 = $200K
- Initech, $150K, Qualified = $150K x 0.25 = $37.5K
- Umbrella, $300K, Verbal = $300K x 0.90 = $270K
Weighted forecast = $657.5K.
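The same roll-up can be sketched in a few lines of Python, using the stage weights and deals from the tables above (the structure is illustrative, not a CRM schema):

```python
# Weighted pipeline: sum of deal value x stage win rate over all open deals.

STAGE_WIN_RATES = {
    "Discovery": 0.10,
    "Qualified": 0.25,
    "Proposal": 0.50,
    "Negotiation": 0.75,
    "Verbal": 0.90,
}

open_deals = [
    # (account, deal value, stage)
    ("Acme Corp", 200_000, "Negotiation"),
    ("Globex", 400_000, "Proposal"),
    ("Initech", 150_000, "Qualified"),
    ("Umbrella", 300_000, "Verbal"),
]

forecast = sum(value * STAGE_WIN_RATES[stage] for _, value, stage in open_deals)
print(f"Weighted forecast: ${forecast:,.0f}")  # $657,500
```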
Accuracy. Decent if your stage definitions are tight and reps update them honestly. Falls apart when stages become opinion fields ("we feel good about this one") instead of evidence-based gates.
Failure mode. Stage inflation. Reps push deals to higher stages to look better in pipeline reviews. The weighted forecast then overstates reality.
## Method 3: Opportunity stage forecasting (commit, best case, pipeline)
Reps categorize each open deal into three buckets that reflect confidence, not stage.
- Commit. I will close this. Bet my number on it.
- Best case. It could close. Reasonable upside.
- Pipeline. Open but not in this quarter's call.
When to use it. When stage data is unreliable but rep judgment is calibrated. Common in enterprise sales with longer cycles and where each deal needs to be discussed individually.
Formula.
Low forecast = Commit
Likely forecast = Commit + (Best case x historical best case win rate)
High forecast = Commit + Best case
Worked example. Reps submit:
- Commit = $1.8M
- Best case = $1.4M
- Historical best case win rate = 40%
Low forecast = $1.8M
Likely forecast = $1.8M + ($1.4M x 0.4) = $2.36M
High forecast = $1.8M + $1.4M = $3.2M
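A minimal sketch of the three-point calculation, using the rep submissions above:

```python
# Commit / best case / pipeline: a low, likely, and high forecast.

def three_point_forecast(commit: float, best_case: float, best_case_win_rate: float):
    low = commit
    likely = commit + best_case * best_case_win_rate
    high = commit + best_case
    return low, likely, high

low, likely, high = three_point_forecast(1_800_000, 1_400_000, 0.40)
print(f"Low ${low:,.0f} | Likely ${likely:,.0f} | High ${high:,.0f}")
```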
Accuracy. Strong when the sales leader actively challenges commit and best case calls in pipeline reviews. Weak if reps sandbag commit or stuff best case to look optimistic.
Failure mode. No discipline at the review. Reps move deals between buckets based on mood instead of evidence.
## Method 4: AI and machine learning forecasting
A model ingests historical CRM data, activity data, deal characteristics, and external signals. It predicts close probability and dollar amount per deal.
When to use it. High-volume sales motions where you have thousands of historical opportunities to train on. SMB and mid-market are typical fits. Enterprise is harder because deal counts are lower and each deal is more bespoke.
Formula. Whatever the model says. The vendor or your data team owns the math. Common inputs:
- Stage, age, last activity date
- Engagement signals (emails, meetings, multi-threading)
- Account characteristics (industry, size, geography)
- Deal characteristics (product mix, ACV, contract length)
- Rep level patterns
Worked example. Conceptual rather than numeric. The model assigns each open deal a probability between 0 and 1. Sum the probability weighted ACVs for the forecast. The advantage over Method 2 is that the probability is not just stage based. It pulls in signal from activity and history.
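The roll-up step itself is simple once the model has scored each deal. A minimal sketch, with invented per-deal probabilities standing in for real model output:

```python
# Expected-value roll-up from per-deal model probabilities.
# The deals and probabilities below are hypothetical; in practice each
# probability comes from the trained model, not from stage alone.

scored_deals = [
    # (deal, ACV, model close probability)
    ("Deal A", 120_000, 0.62),
    ("Deal B", 80_000, 0.18),
    ("Deal C", 45_000, 0.87),
]

forecast = sum(acv * p for _, acv, p in scored_deals)
print(f"Model forecast: ${forecast:,.0f}")  # $127,950
```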
Accuracy. Often better than weighted pipeline by 10 to 20 percent in mid-market motions. Often worse than human judgment in enterprise, where deals turn on factors the model cannot see.
Failure mode. Black box trust. Sales leaders need to understand why a deal is at 30 percent or 80 percent. If the model cannot explain itself, reps will ignore it and the forecast becomes a parallel exercise that nobody acts on.
## Method 5: Multivariable forecasting
Combine multiple methods and weight them. Add macro variables. Run scenarios.
When to use it. Mature revenue ops teams. Public companies and pre-IPO companies that need to defend the number to a board or to investors.
Formula.
Forecast = (w1 x Historical) + (w2 x Weighted pipeline) + (w3 x Rep commit) + adjustments
The weights are tuned based on which method has been most accurate in recent quarters.
Worked example. Last 4 quarters of forecast accuracy by method:
| Method | Average error |
|---|---|
| Historical | 12% |
| Weighted pipeline | 8% |
| Rep commit | 15% |
| AI model | 6% |
You weight inversely to error. The AI model gets the highest weight, then weighted pipeline, then historical, then rep commit. Add a macro adjustment if the quarter has a known headwind (a major customer pause, a regulatory event, a competitive pricing move).
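One simple, illustrative scheme for "weighting inversely to error" is to take the reciprocal of each method's average error and normalize. This is an assumption, not a prescribed formula; it lands near hand-rounded weights like the ones used in this example:

```python
# Derive blend weights inversely proportional to each method's recent
# average forecast error, normalized to sum to 1.

errors = {
    "historical": 0.12,
    "weighted_pipeline": 0.08,
    "rep_commit": 0.15,
    "ai_model": 0.06,
}

inverse = {method: 1 / err for method, err in errors.items()}
total = sum(inverse.values())
weights = {method: v / total for method, v in inverse.items()}

for method, w in sorted(weights.items(), key=lambda kv: -kv[1]):
    print(f"{method}: {w:.2f}")
```

The AI model (lowest error) gets the largest weight and rep commit (highest error) the smallest, matching the ordering described above.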
This quarter:
- Historical = $9.0M (weight 0.20)
- Weighted pipeline = $9.4M (weight 0.30)
- Rep commit = $8.6M (weight 0.10)
- AI model = $9.7M (weight 0.40)
- Macro adjustment = -$200K (key account delayed renewal)
Forecast = (0.20 x 9.0) + (0.30 x 9.4) + (0.10 x 8.6) + (0.40 x 9.7) - 0.2 = 1.8 + 2.82 + 0.86 + 3.88 - 0.2 = $9.16M
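The blend itself is a weighted sum plus an adjustment. A minimal sketch using the inputs from the worked example (dollar figures in $M):

```python
# Blended multivariable forecast: weighted sum of methods plus a macro adjustment.

inputs = {
    # method: (forecast in $M, weight)
    "historical": (9.0, 0.20),
    "weighted_pipeline": (9.4, 0.30),
    "rep_commit": (8.6, 0.10),
    "ai_model": (9.7, 0.40),
}
macro_adjustment = -0.2  # key account delayed renewal

forecast = sum(value * weight for value, weight in inputs.values()) + macro_adjustment
print(f"Forecast: ${forecast:.2f}M")  # $9.16M
```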
Accuracy. Highest of any method when calibrated. Demands disciplined post mortems each quarter to keep weights honest.
Failure mode. Process overhead. If updating the model takes longer than running the quarter, nobody will keep it current.
## How to choose a method
Three questions.
How much historical data do you have? If less than four clean quarters, lean on weighted pipeline and rep commit. Historical and AI methods need more data to be useful.
How long are your deals? Short cycles (under 30 days) favor historical and AI. Long cycles favor weighted pipeline and rep commit because each deal carries enough weight to discuss individually.
How calibrated are your reps and stages? If stage data is clean, weighted pipeline is your fastest win. If stage data is opinion fields, fix that first or lean on commit categories.
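The three questions can be roughed out as a decision helper. The function and thresholds below are illustrative encodings of the guidance above, not a standard:

```python
# A rough encoding of the three selection questions as a helper.
# Thresholds (4 quarters, 30 days) come from the guidance above.

def suggest_methods(clean_quarters: int, avg_cycle_days: int, stages_reliable: bool) -> list[str]:
    if clean_quarters < 4:
        base = ["weighted pipeline", "rep commit"]  # not enough history for historical/AI
    elif avg_cycle_days < 30:
        base = ["historical", "AI model"]  # short cycles, high deal volume
    else:
        base = ["weighted pipeline", "rep commit"]  # long cycles, discuss deals individually
    if not stages_reliable and "weighted pipeline" in base:
        base = ["rep commit"]  # fix stage hygiene before trusting stage weights
    return base

print(suggest_methods(clean_quarters=8, avg_cycle_days=60, stages_reliable=True))
```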
Most healthy B2B SaaS orgs run weighted pipeline plus rep commit, then add an AI model layer once they have the data and can defend it.
## What forecasts cannot tell you
A forecast says how much. It does not say why or what to do.
A weighted pipeline of $9.4M is just a number. The useful version is: of the open deals, which ones are single-threaded, which have a multi-threaded buying committee, which have a verified economic buyer, and which have a real close plan. That is the work that moves the forecast from a guess to a commitment.
This is where account planning and forecasting connect. The account plan tells you whether the deal is real. The forecast tells you what to expect if the work in the plan continues.
## Bring this into Salesforce with CRUSH
Forecasts are only as good as the account intelligence behind them. Our CRUSH platform sits inside Salesforce and keeps stakeholder maps, close plans, and whitespace visible at the deal and account level so reps and managers can defend their commit calls with evidence.
When the account plan lives in the CRM, the forecast review is not a guessing game. The deals at risk are the ones with shallow stakeholder coverage and missing close plans. The deals that are real have multi threaded buyers, named economic buyers, and clear next steps.