Sample Ratio Mismatch Calculator

Detect sample ratio mismatch in A/B tests by comparing observed vs expected group sizes with a chi-square test.


Expected traffic split

Result

No SRM

The observed traffic split is consistent with the expected ratio. Traffic allocation appears healthy.

P-value
0.1552
Chi-square
2.02
Expected A
9,900
Expected B
9,900
Observed ratio
50:49
Expected ratio
1:1

Interpretation

No Sample Ratio Mismatch detected (p = 0.1552). The observed ratio 50:49 is consistent with the expected 1:1. Traffic allocation appears healthy.

What is SRM?

A Sample Ratio Mismatch occurs when the actual traffic split in an A/B test deviates from the intended allocation. Common causes include buggy assignment logic, bot traffic, redirects dropping users, or data pipeline issues. A p-value below 0.001 indicates the mismatch is unlikely to be due to chance.


Sample ratio mismatch calculator: detect traffic split issues in experiments

A sample ratio mismatch (SRM) calculator checks whether the observed traffic split in an A/B test matches the expected allocation ratio. SRM is a critical data quality check — if the actual ratio of users in each variant differs significantly from the planned split, the experiment results may be biased and unreliable.

How SRM detection works

The calculator uses a chi-square goodness-of-fit test to compare observed sample sizes against expected proportions.

For a two-variant test with expected ratio r:1 and total sample size N, the expected counts are E_A = N × r/(1+r) and E_B = N × 1/(1+r). The chi-square statistic is χ² = Σ(O − E)²/E with 1 degree of freedom. A p-value below 0.001 (the standard SRM threshold) indicates a significant mismatch.

χ² = Σ (Oᵢ − Eᵢ)² / Eᵢ

Chi-square goodness-of-fit statistic for sample counts.
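The check above can be sketched in a few lines of Python. This is a minimal standard-library example (the function name srm_check is illustrative, not part of the calculator); for one degree of freedom the chi-square p-value reduces to p = erfc(√(χ²/2)). The observed counts 10,000 and 9,800 are chosen to reproduce the result shown above (expected counts of 9,900 each under a 1:1 split).

```python
import math

def srm_check(observed_a, observed_b, ratio_a=1, ratio_b=1, alpha=0.001):
    """Chi-square goodness-of-fit test for SRM with two variants."""
    n = observed_a + observed_b
    # Expected counts under the planned ratio_a : ratio_b split
    expected_a = n * ratio_a / (ratio_a + ratio_b)
    expected_b = n * ratio_b / (ratio_a + ratio_b)
    chi2 = ((observed_a - expected_a) ** 2 / expected_a
            + (observed_b - expected_b) ** 2 / expected_b)
    # Survival function of chi-square with 1 df: p = erfc(sqrt(x / 2))
    p_value = math.erfc(math.sqrt(chi2 / 2))
    return chi2, p_value, p_value < alpha

# 10,000 vs 9,800 users under an intended 1:1 split
chi2, p, srm = srm_check(10_000, 9_800)
print(f"chi2={chi2:.2f}, p={p:.4f}, SRM={srm}")  # chi2=2.02, p=0.1552, SRM=False
```

For more than two variants, or to avoid the closed-form p-value, the same check can be done with `scipy.stats.chisquare`, which accepts arbitrary observed and expected count arrays.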

Frequently asked questions

What p-value threshold indicates SRM?

The standard threshold is p < 0.001. This is more conservative than the typical 0.05 used in hypothesis testing because SRM is a data quality check — you want to be very sure the mismatch exists before invalidating experiment results.

Can I still trust my experiment if SRM is detected?

Generally no. SRM indicates a systematic bias in how users were assigned to variants, which means the treatment and control groups are not comparable. You should investigate and fix the root cause before drawing conclusions from the experiment.
