An honest crash game produces multipliers that follow a specific distribution — a truncated power law with a house edge. Most rounds crash low; a small fraction reach high multipliers. This article explains the shape of that distribution, shows what it looks like on real data from three audited games, and explains how to spot a game whose distribution doesn't match the shape it should have.
What shape should the multiplier distribution have?
In a fair crash game with house edge h, the probability that the crash multiplier is at least x is:
P(multiplier ≥ x) = (1 - h) / x for x ≥ 1
This is an inverse relationship: as x doubles, the probability halves. The result is a distribution that is heavily weighted toward low values and has a long right tail of rare high values.
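The survival function is easy to sanity-check in code (a minimal sketch using the 3% edge from the table that follows; `p_reach` is a name introduced here for illustration):

```python
# Sanity-check sketch: evaluate P(multiplier >= x) = (1 - h) / x
# for a 3% house edge at the thresholds used in this article.
def p_reach(x, h=0.03):
    """Probability that a round's crash multiplier reaches at least x."""
    return min(1.0, (1 - h) / x)

for x in (1.0, 1.5, 2.0, 3.0, 5.0, 10.0, 20.0, 50.0, 100.0, 1000.0):
    print(f"{x:>7.2f}x -> {p_reach(x):8.3%}")
```

Note the halving property: `p_reach(2.0)` is exactly twice `p_reach(4.0)`, which is what "as x doubles, the probability halves" means in code.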
For a game with 3% house edge (97% RTP), the cumulative probabilities are:
| Multiplier | Probability of reaching it | Meaning |
|---|---|---|
| 1.00× | 97% | 3% of rounds crash instantly |
| 1.50× | 64.7% | About 2 in 3 rounds reach 1.5× |
| 2.00× | 48.5% | About half of rounds reach 2× |
| 3.00× | 32.3% | About 1 in 3 |
| 5.00× | 19.4% | About 1 in 5 |
| 10.0× | 9.7% | About 1 in 10 |
| 20.0× | 4.85% | About 1 in 20 |
| 50.0× | 1.94% | About 1 in 50 |
| 100× | 0.97% | About 1 in 100 |
| 1000× | 0.097% | About 1 in 1,000 |
This table describes what "normal" looks like. If you play 1,000 rounds on a fair 97% RTP game, you should see roughly 30 instant crashes, 485 rounds reaching 2× or higher, 97 reaching 10× or higher, and 10 reaching 100× or higher.
These numbers are not guarantees — they are statistical expectations. In any specific 1,000-round sample, the actual counts will vary. But they should not vary wildly from these expectations, and the pattern — many low, few high — should be unmistakable.
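To put numbers on "should not vary wildly", here is a sketch of the roughly two-standard-deviation bands implied by pure binomial variation over 1,000 rounds (`expected_band` is a helper introduced here for illustration):

```python
import math

# Sketch: expected counts over 1,000 rounds at 97% RTP, plus the roughly
# two-standard-deviation band you'd expect from binomial variation alone.
def expected_band(n, p):
    mean = n * p
    sd = math.sqrt(n * p * (1 - p))          # binomial standard deviation
    return mean, mean - 2 * sd, mean + 2 * sd

n = 1000
for label, p in [("instant crash", 0.03), ("reach 2x", 0.485),
                 ("reach 10x", 0.097), ("reach 100x", 0.0097)]:
    mean, lo, hi = expected_band(n, p)
    print(f"{label:>13}: expect ~{mean:.0f}, typical range {lo:.0f}-{hi:.0f}")
```

Counts well outside these bands in a single 1,000-round sample are unusual but not damning; counts outside them consistently, sample after sample, are the signature of a skewed game.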
Where does the power-law shape come from?
The shape is not arbitrary. It is a direct mathematical consequence of how crash points are generated.
In most crash game implementations, the crash point is derived from a uniform random variable (the hash output) through an inverse transformation. If U is uniformly distributed between 0 and 1, the crash point is:
multiplier = (1 - h) / U
When U is close to 1 (high hash value), the multiplier is at or near 1× — a low crash (values of (1 - h)/U below 1 are floored to an instant 1.00× crash). When U is close to 0 (low hash value), the multiplier is very high — a rare big round. The uniform distribution of U maps to the inverse distribution of multipliers.
This is why the distribution looks the way it does:
- Most of the probability mass is at low multipliers because most of the uniform range (say, 0.5 to 1.0) maps to multipliers between 1× and 2×.
- High multipliers are rare but possible because a small slice of the uniform range (say, 0.001 to 0.01) maps to multipliers between 97× and 970×.
- The house edge appears as instant crashes — a fixed percentage of rounds (equal to h) crash at exactly 1.00× before any cash-out is possible.
The beauty of this construction is its simplicity: one formula, one parameter (the house edge), and the entire distribution falls out mathematically. There are no "cold streaks" programmed in, no "hot cycles," no "loosening" or "tightening." There is only the formula, applied independently to each round's random input.
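Here is a minimal sketch of that construction (assumptions: a 3% edge, results below 1× floored to 1.00×, and Python's seeded PRNG standing in for the hash-derived uniform value a real game uses):

```python
import random

# Sketch of the inverse-transform construction described above.
H = 0.03  # assumed 3% house edge

def crash_point(u, h=H):
    """Map a uniform draw u in (0, 1] to a crash multiplier."""
    m = (1 - h) / u
    return 1.0 if m < 1.0 else m   # the floor is where the house edge lives

rng = random.Random(7)
sample = [crash_point(1.0 - rng.random()) for _ in range(100_000)]
print(f"instant crashes: {sum(m == 1.0 for m in sample) / len(sample):.2%}")
print(f"reached 10x:     {sum(m >= 10 for m in sample) / len(sample):.2%}")
```

The simulated fractions land near the theoretical 3% and 9.7%, with no distribution logic anywhere except the one-line formula.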
For a technical deep dive into how the random input is generated, see How Crash Game RNGs Work. For how the provably fair system ensures the formula is applied honestly, see our provably fair explainer.
What does a "normal" distribution look like on real data?
When you plot a histogram of 10,000+ crash multipliers from a fair game, it should show:
- A tall spike at or near 1.00× (instant crashes, roughly equal to the house edge percentage)
- A steep decline from 1.00× to about 3.00× (this range contains the majority of all rounds)
- A long, thin tail extending to the right (occasional high multipliers)
- No gaps, clusters, or periodicities in the distribution
What you should NOT see in a fair distribution:
- A secondary peak at any multiplier above 1× (would suggest outcomes are drawn from a mixture of distributions rather than the single formula)
- A sudden cutoff at a specific multiplier (would suggest a maximum multiplier cap below the theoretical max)
- Periodic clusters (e.g., a spike every 10× or 50×) that are not consistent with random variation
- A distribution that looks different before and after server seed rotations
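These qualitative features can be checked in a few lines, assuming the simplified generator m = max(1, 0.97/U) from earlier (a sketch, not the audit pipeline):

```python
import random

# Quick shape checks on a simulated sample of 50,000 multipliers:
# the instant-crash spike and the steep decline across coarse bins.
rng = random.Random(1)
sample = [max(1.0, 0.97 / (1.0 - rng.random())) for _ in range(50_000)]

spike = sum(m == 1.0 for m in sample) / len(sample)
edges = [1.0, 2.0, 4.0, 10.0, float("inf")]
counts = [sum(lo <= m < hi for m in sample) for lo, hi in zip(edges, edges[1:])]

print(f"instant-crash spike: {spike:.3%}   (should sit near the house edge)")
print("bin counts (should fall off steeply):", counts)
```

On a fair sample the bin counts decrease monotonically and the spike sits near 3%; a secondary bump or a missing tail bin shows up immediately in this kind of summary.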
Clash Watchdog AI's Column B audit formally tests the observed distribution against the theoretical distribution using chi-squared goodness-of-fit and Kolmogorov-Smirnov tests. These tests quantify the probability that the observed data came from the expected distribution. A p-value below our threshold triggers a Watchlist flag.
For the specific games under our review — Stake Crash, BC.Game Crash, Roobet Crash — distribution analysis is a core component of every audit report.
What does an abnormal distribution look like?
An abnormal distribution is one where the observed data deviates from the theoretical prediction in a statistically significant way. There are several patterns that indicate potential problems:
Excess instant crashes. If the declared house edge is 1% but 3% of rounds crash at 1.00×, the game is retaining more than advertised. This is the simplest form of parameter fraud — the operator increases the instant-crash rate beyond the declared value.
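A minimal sketch of how such a check might look, using a one-sided normal-approximation test (the function name and inputs are illustrative, not our audit's exact method):

```python
import math

# Sketch: one-sided z-test for excess instant crashes, given a declared
# house edge h and an observed count of 1.00x rounds out of n total.
def excess_instant_crash_p(observed, n, h):
    """Approximate p-value for seeing >= observed instant crashes in n rounds."""
    mean = n * h
    sd = math.sqrt(n * h * (1 - h))
    z = (observed - mean) / sd
    return 0.5 * math.erfc(z / math.sqrt(2))   # one-sided upper tail

# Declared 1% edge, but 3% of 10,000 rounds crashed instantly:
p = excess_instant_crash_p(observed=300, n=10_000, h=0.01)
print(f"p-value: {p:.2e}")
```

With 300 instant crashes where 100 were expected, the p-value is astronomically small; no plausible run of bad luck produces that pattern.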
Deficit in mid-range multipliers. If the distribution shows fewer 5×–20× outcomes than expected, the game may be using a modified formula that compresses the mid-range. This shifts value from players (who often target mid-range cash-outs) to the house without visibly affecting the extreme tails.
Truncated tail. If the distribution shows a sharp drop-off at a specific multiplier (say, 500×), the game may be implementing a hidden maximum multiplier that is not disclosed. This steals probability mass from the tail and redistributes it, effectively lowering the RTP.
Time-dependent distribution. If the distribution looks different during high-traffic periods versus low-traffic periods, or before and after seed rotations, the game may be adjusting parameters dynamically. In a fair game, the distribution should be stationary — it should look the same regardless of when you measure it.
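A stationarity check can be sketched as a two-sample Kolmogorov-Smirnov comparison between two time windows (stdlib-only; the seeds, window sizes, and the 15% "rigged" edge are illustrative):

```python
import bisect
import random

# Sketch: two-sample KS D statistic — the largest vertical gap between
# the empirical CDFs of two windows of crash multipliers.
def ks_two_sample(a, b):
    sa, sb = sorted(a), sorted(b)
    points = sorted(set(sa) | set(sb))
    return max(abs(bisect.bisect_right(sa, x) / len(sa)
                   - bisect.bisect_right(sb, x) / len(sb)) for x in points)

def window(rng, h, n):
    """Simulate n rounds with house edge h (simplified generator)."""
    return [max(1.0, (1 - h) / (1.0 - rng.random())) for _ in range(n)]

rng = random.Random(11)
d_same = ks_two_sample(window(rng, 0.03, 10_000), window(rng, 0.03, 10_000))
d_diff = ks_two_sample(window(rng, 0.03, 10_000), window(rng, 0.15, 10_000))
print(f"same parameters:     D = {d_same:.4f}")   # small: stationary
print(f"edge changed to 15%: D = {d_diff:.4f}")   # large: flag
```

A production test would also convert D into a p-value; the point here is that a parameter change between windows produces a gap far larger than sampling noise.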
Each of these patterns is detectable through statistical testing, but each requires different tests and different sample sizes. Our evidence tiers reflect this: higher tiers require larger samples and more comprehensive distribution testing.
How does RTP interact with the distribution shape?
RTP and the distribution are two views of the same mathematical object.
A game's RTP is the expected value of a $1 bet. The distribution supplies the probabilities behind that expectation: if you cash out at any fixed target t, you win t with probability P(multiplier ≥ t) = (1 - h)/t, so your expected return is t · (1 - h)/t = 1 - h. Every fixed cash-out strategy has the same expected return, and that shared value is the RTP.
Changing the RTP changes the distribution in a specific way: a lower RTP compresses the distribution toward lower values. Concretely:
- A 99% RTP game has 1% instant crashes and reaches 10× about 9.9% of the time.
- A 97% RTP game has 3% instant crashes and reaches 10× about 9.7% of the time.
- A 90% RTP game has 10% instant crashes and reaches 10× about 9.0% of the time.
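The lock-step relationship can be shown in a few lines: for any fixed cash-out target, the expected return of a $1 bet equals the RTP (a sketch; `expected_return` is a name introduced here, not part of any game's API):

```python
# Sketch: for a fixed cash-out target t, a $1 bet returns t with
# probability (1 - h) / t, so its expected value is 1 - h at every target.
def expected_return(rtp, target):
    p_win = min(1.0, rtp / target)   # P(multiplier >= target)
    return target * p_win

for rtp in (0.99, 0.97, 0.90):
    evs = {t: round(expected_return(rtp, t), 4) for t in (1.5, 2, 10, 100)}
    print(f"RTP {rtp:.0%}: instant crashes {1 - rtp:.0%}, EV by target {evs}")
```

Every target yields the same expected value, which is exactly why RTP is the single number that summarizes the whole distribution.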
The differences are small in percentage terms but add up over hundreds of rounds. And they apply across the entire distribution: the probability of reaching any threshold, from 1.5× to 1,000×, scales in direct proportion to the RTP.
This is why RTP matters even for players who target low cash-out multipliers (say, 1.5×). A lower RTP does not just reduce the rare big wins — it increases the frequency of instant crashes that eat your bet before you can act. A 10% instant crash rate versus a 1% instant crash rate is the difference between losing ten bets per hundred rounds to instant crashes and losing one.
How do we detect distribution anomalies in audits?
Our Column B audit uses three statistical tests, each designed to catch a different type of anomaly:
Chi-squared goodness-of-fit. We divide the multiplier range into bins (1.00–1.50, 1.50–2.00, 2.00–3.00, etc.) and compare the observed count in each bin to the expected count. The chi-squared statistic quantifies the total discrepancy. A large chi-squared value (low p-value) indicates the observed distribution is unlikely to have come from the theoretical one.
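A stdlib-only sketch of this test (the bin edges are illustrative, and 11.07 is the standard chi-squared critical value for df = 5 at alpha = 0.05, hard-coded here; a real pipeline may bin and threshold differently):

```python
import random

# Sketch: chi-squared goodness-of-fit on binned multipliers. Bin
# probabilities come from P(M >= x) = (1 - h)/x, with the instant-crash
# atom folded into the first bin.
def chi_squared_stat(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

h, n = 0.03, 10_000
cuts = [1.5, 2.0, 3.0, 5.0, 10.0]
tails = [1.0] + [(1 - h) / c for c in cuts] + [0.0]
probs = [a - b for a, b in zip(tails, tails[1:])]    # one probability per bin

rng = random.Random(3)
sample = [max(1.0, (1 - h) / (1.0 - rng.random())) for _ in range(n)]
edges = cuts + [float("inf")]
observed, lo = [], 1.0
for hi_edge in edges:
    observed.append(sum(lo <= m < hi_edge for m in sample))
    lo = hi_edge

stat = chi_squared_stat(observed, [n * p for p in probs])
print(f"chi-squared = {stat:.2f} (df = 5, flag if > 11.07)")
```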
Kolmogorov-Smirnov (KS) test. We compare the entire cumulative distribution function — not just binned counts — against the theoretical CDF. The KS test is sensitive to deviations at any point in the distribution, including the tails. It is particularly good at detecting shifts (the distribution is offset from where it should be) and compressions (the distribution is narrower than it should be).
Runs test on binned residuals. After computing the difference between observed and expected in each bin, we test whether the positive and negative residuals are randomly distributed or whether they cluster. Clustering suggests a systematic deviation (e.g., the game consistently under-delivers in the 5×–10× range) rather than random noise.
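A sketch of the runs test on residual signs, using the standard Wald-Wolfowitz mean and variance (the two sign sequences below are fabricated for illustration):

```python
import math

# Sketch: z-score for the number of runs in a sequence of +1/-1 residual
# signs. Too few runs (clustering) or too many (alternation) both flag.
def runs_test_z(signs):
    n_pos = signs.count(1)
    n_neg = signs.count(-1)
    n = n_pos + n_neg
    runs = 1 + sum(a != b for a, b in zip(signs, signs[1:]))
    mu = 2 * n_pos * n_neg / n + 1
    var = 2 * n_pos * n_neg * (2 * n_pos * n_neg - n) / (n ** 2 * (n - 1))
    return (runs - mu) / math.sqrt(var)

# Clustered signs (game under-delivers across whole multiplier ranges):
clustered = [1, 1, 1, 1, -1, -1, -1, -1, 1, 1, 1, 1, -1, -1, -1, -1]
# Frequently alternating signs (consistent with random noise, or beyond):
interleaved = [1, -1, 1, -1, 1, 1, -1, -1, 1, -1, 1, -1, 1, -1, 1, -1]
print(f"clustered:   z = {runs_test_z(clustered):+.2f}")
print(f"interleaved: z = {runs_test_z(interleaved):+.2f}")
```

A strongly negative z (few, long runs) is the pattern that matters for audits: it indicates the residuals are systematic, not noise.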
These three tests together provide comprehensive coverage. A game that passes all three is producing a distribution consistent with its declared parameters. A game that fails any one is flagged for deeper investigation.
The evidence tier determines the sample size and therefore the sensitivity: Tier 1 tests detect gross deviations (3%+ RTP error). Tier 3 Gold tests detect subtle deviations (0.5% RTP error). See our methodology for the exact parameters, and our game listings for which games have been tested.