Bayesian A/B Sample Size Planner
Estimate the traffic each arm of a Bayesian A/B test needs before you can credibly claim a minimum uplift. Enter your baseline conversion rate, target lift, credibility threshold, and optional Beta priors to get per-variant session requirements plus the implied posterior after observing that lift.
This is a planning approximation; simulate with your actual prior and success metrics before launch.
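To make that "simulate before launch" advice concrete, here is a minimal sketch, assuming a Beta prior on a binary conversion rate and that observed conversions land exactly on the target lift. The function `implied_posterior` and its parameters are illustrative names, not the planner's internal API or formula.

```python
# Illustrative sketch (not the planner's internal formula): given a candidate
# per-variant traffic level, compute the posterior implied if the observed
# conversion rate landed exactly on the target lift.
from scipy.stats import beta

def implied_posterior(baseline, rel_lift, sessions, prior_alpha=1.0, prior_beta=1.0):
    lifted_rate = baseline * (1 + rel_lift)      # relative lift -> absolute rate
    conversions = sessions * lifted_rate         # expected conversions at that rate
    post_alpha = prior_alpha + conversions
    post_beta = prior_beta + sessions - conversions
    posterior = beta(post_alpha, post_beta)
    return {
        "posterior_mean": post_alpha / (post_alpha + post_beta),
        "p_beats_baseline": posterior.sf(baseline),  # P(true rate > baseline | data)
    }
```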
Examples
- 3.00% baseline, 15% lift, 95% credibility, flat prior ⇒ ≈41,100 sessions per variant (82,200 total) and a posterior mean near 3.45% (spot-checked in the snippet after this list)
- 5.50% baseline, 8% lift, 90% credibility, α=20 β=340 prior, 3 variants ⇒ ≈14,200 sessions per variant (42,600 total)
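The first example can be spot-checked with the sketch above: with a flat Beta(1, 1) prior, roughly 41,100 sessions converting at the lifted 3.45% rate give a posterior mean of about (1 + 1,418) / (2 + 41,100) ≈ 3.45%.

```python
# Spot-check of the first example using the sketch above (flat Beta(1, 1) prior).
result = implied_posterior(baseline=0.03, rel_lift=0.15, sessions=41_100)
print(result["posterior_mean"])     # ≈ 0.0345, i.e. a posterior mean near 3.45%
print(result["p_beats_baseline"])   # credibility that the variant beats the 3.00% baseline
```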
FAQ
How do informative priors reduce sample size?
Informative priors add weight to the posterior, effectively contributing α+β pseudo-sessions. When historical performance is stable, they shorten tests; otherwise keep priors weak to avoid bias.
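As a concrete illustration of the pseudo-session framing, using the Beta(20, 340) prior from the second example (the observed counts below are hypothetical, chosen only to show how counts combine):

```python
# The Beta(20, 340) prior behaves like 360 sessions already observed
# at a prior mean of 20 / 360 ≈ 5.6%.
prior_alpha, prior_beta = 20, 340
pseudo_sessions = prior_alpha + prior_beta            # 360
prior_mean = prior_alpha / pseudo_sessions            # ≈ 0.0556

# Combining with hypothetical observed data: counts simply add.
observed_sessions, observed_conversions = 5_000, 290  # illustrative figures only
post_alpha = prior_alpha + observed_conversions
post_beta = prior_beta + (observed_sessions - observed_conversions)
post_mean = post_alpha / (post_alpha + post_beta)     # pulled between prior mean and 290/5000
```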
What if I have more than one challenger?
Enter the total number of variants (control plus challengers). The calculator multiplies per-variant traffic to show total sessions required across all arms.
Does this assume fixed-horizon tests?
Yes. Sequential monitoring or Bayesian power curves require simulation. Use this output for planning the initial launch, then monitor posterior probabilities as data arrives.
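A rough sketch of what "monitor posterior probabilities as data arrives" can look like, assuming two arms with flat Beta(1, 1) priors and a Monte Carlo comparison of the two posteriors; the function name, batch figures, and priors are illustrative, not part of this planner.

```python
# Illustrative monitoring check (not part of the planner): after each batch of
# data, estimate P(variant rate > control rate) by sampling both posteriors.
import numpy as np

rng = np.random.default_rng(0)

def prob_variant_beats_control(control, variant, prior_alpha=1.0, prior_beta=1.0,
                               draws=100_000):
    # control / variant are (sessions, conversions) tuples observed so far
    c_n, c_x = control
    v_n, v_x = variant
    control_rates = rng.beta(prior_alpha + c_x, prior_beta + c_n - c_x, draws)
    variant_rates = rng.beta(prior_alpha + v_x, prior_beta + v_n - v_x, draws)
    return float(np.mean(variant_rates > control_rates))

# Check partway through a test (figures are illustrative only).
print(prob_variant_beats_control(control=(10_000, 300), variant=(10_000, 345)))
```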
Can I change the success metric?
The formula assumes a binary conversion. For revenue-per-visitor or average order value, model variance directly or convert the metric into a Bernoulli success indicator.
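For instance, one way to convert revenue per visitor into the binary framing the planner expects is to threshold it into a success indicator; the $50 cutoff and order values below are purely illustrative.

```python
# Turning a revenue-per-visitor metric into a Bernoulli success indicator
# (the $50 threshold and order values are purely illustrative).
order_values = [0.0, 120.0, 0.0, 45.0, 80.0]        # revenue per visitor, hypothetical
successes = [1 if value >= 50.0 else 0 for value in order_values]
conversion_rate = sum(successes) / len(successes)    # feeds the planner as a binary rate
```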
Additional Information
- Beta(α,β) priors encode past learnings as pseudo-conversions (α−1) and pseudo-non-conversions (β−1).
- Bayesian power focuses on achieving a desired posterior probability instead of rejecting a null hypothesis.
- Lift inputs should be relative (e.g., +10%); the planner converts them to an absolute delta from the baseline rate (see the worked example after this list).
- Traffic estimates assume equal allocation; reweight traffic manually if you plan asymmetric splits.
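As a worked illustration of the lift-conversion and equal-allocation bullets, using the first example's figures:

```python
# Worked illustration of the lift-conversion and equal-allocation bullets,
# using the first example's figures.
baseline, rel_lift = 0.03, 0.15
absolute_delta = baseline * rel_lift           # 0.0045, i.e. +0.45 percentage points
lifted_rate = baseline * (1 + rel_lift)        # 0.0345, i.e. 3.45%

per_variant_sessions, variants = 41_100, 2
total_sessions = per_variant_sessions * variants   # 82,200 with an equal 50/50 split
```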