Most CRM teams report campaign ROI using total revenue. That number looks great in a slide deck, but it is lying to you. It counts money that players would have spent anyway, and that means every decision you make from it is based on inflated results. Budgets go to campaigns that look like winners but barely break even. Strong performers get cut because the “real” numbers seem underwhelming.
This guide covers how to measure CRM campaign ROI properly: incremental revenue, control groups, outlier management, and static margin. These are the methods and formulas I have used across years of running CRM at scale in iGaming. No theory, no fluff. Just what works and why.
IN THIS ARTICLE
- 01. How do you measure CRM campaign ROI?
- 02. What is incremental revenue and why does it matter?
- 03. How do you calculate incremental revenue? Two methods compared
- 04. How do you handle outliers in CRM campaign measurement?
- 05. What should you measure beyond campaign ROI?
- 06. What does good CRM measurement look like in practice?
- 07. Frequently asked questions
01 How do you measure CRM campaign ROI?
This is the number that answers your manager’s Monday morning question. But the way most teams calculate it is wrong. They use total revenue from the campaign, which credits money that players would have spent anyway. The right way is to use incremental revenue.
And if your leadership team is making campaign decisions based on those inflated numbers, you are building strategy on bad data. Budgets get allocated to campaigns that look great on paper but barely break even in reality. Good campaigns get cut because they “only” returned 80% when measured properly. That is a problem that gets worse every quarter it goes unchecked.
Formula: Campaign ROI = ((Incremental Revenue – Total Cost of Campaign) / Total Cost of Campaign) x 100
If a casino spends €10,000 on a deposit bonus campaign and the incremental revenue (the extra revenue above what players would have spent without the campaign) is €25,000, the ROI is 150%.
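As a minimal sketch, the formula looks like this in Python (the figures are the ones from the example above; the function name is just for illustration):

```python
# Sketch of the campaign ROI formula from the text, not a production implementation.
def campaign_roi(incremental_revenue: float, total_cost: float) -> float:
    """ROI as a percentage: ((incremental revenue - cost) / cost) * 100."""
    return (incremental_revenue - total_cost) / total_cost * 100

# €10,000 spend, €25,000 incremental revenue:
print(campaign_roi(25_000, 10_000))  # → 150.0
```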
Two things most teams also get wrong: they undercount costs and they skip the control group.
In theory, total campaign cost includes bonus payouts, creative work, platform fees, staff time, and any extra support costs. In practice, it is very hard to calculate all those additional costs and assign them to a single campaign. How do you split your CRM platform fee across 30 campaigns? How do you allocate designer time spent on one email? Most operators cannot do this at the campaign level, and trying to force it creates more confusion than clarity.
That is why the most common practice is to use bonus cost as your campaign cost. It is the one number you can track cleanly per campaign. But always keep in mind that there is more behind it. If a campaign shows a positive ROI but the margins are very thin, it is most likely not profitable once you account for all the hidden costs on top. Use bonus cost for your day-to-day campaign reporting, but apply common sense: a campaign that barely breaks even on bonus cost alone is losing money in reality.
02 What is incremental revenue and why does it matter?
The basic ROI formula above has a blind spot. It assumes that all the revenue from your campaign group was caused by the campaign. But what if those players would have spent money anyway, even without the campaign?
That is where incremental revenue comes in. Incremental revenue is the extra money your campaign brought in on top of what would have happened without it. It is the true measure of whether your campaign actually changed player behaviour or just rewarded people for doing what they were already going to do.
How to calculate it
To measure incremental revenue, you need a control group (also called a holdout group). This is a small portion of your target audience that does not receive the campaign. Everything else stays the same. The control group acts as your baseline: it tells you what would have happened if you had done nothing.
- Step 1: Split your target audience. Send the campaign to most of them (the campaign group), and hold back a small portion who get nothing (the control group). A 90/10 split is common.
- Step 2: After the campaign ends, measure the average revenue per player in both groups.
- Step 3: Calculate the difference.
Formula: Incremental Revenue = (Revenue per Player in Campaign Group – Revenue per Player in Control Group) x Number of Players in Campaign Group
Worked example
You want to send a deposit bonus campaign to 1,000 players. Before you send it, you split them into two groups:
- Campaign group (900 players): They receive the bonus campaign.
- Control group (100 players): They receive nothing. They just continue playing as normal.
Both groups come from the same audience. The only difference is whether they got the campaign or not.
After one week, you measure the results:
- Campaign group: €45,000 total revenue. That is €50 per player on average (€45,000 / 900 = €50).
- Control group: €3,500 total revenue. That is €35 per player on average (€3,500 / 100 = €35).
Now here is the key step. The control group is a sample. Its only job is to answer one question: how much does a typical player in this audience spend when they get no campaign? The answer is €35 per player.
You use that €35 figure to estimate what the 900 campaign players would have spent if they had also received nothing. You multiply €35 by 900 (not by 100) because you are projecting the control group’s behaviour onto the larger campaign group.
- Expected revenue from 900 players without the campaign: 900 x €35 = €31,500
- Actual revenue from 900 players with the campaign: €45,000
- Incremental revenue: €45,000 – €31,500 = €13,500
That €13,500 is the extra revenue your campaign created. The other €31,500 would have happened anyway, campaign or not.
Now look at the ROI. Say the campaign cost €5,000:
Wrong way (using total revenue): ((€45,000 – €5,000) / €5,000) x 100 = 800%
Here €45,000 is the total revenue from all 900 players, €5,000 is the campaign cost, and the difference (€40,000) is treated as profit. This looks amazing on paper, but it counts money players would have spent anyway.
Right way (using incremental revenue): ((€13,500 – €5,000) / €5,000) x 100 = 170%
Here €13,500 is only the extra revenue the campaign actually created, €5,000 is the campaign cost, and the difference (€8,500) is the real profit. A 170% ROI means that for every €1 you spent, the campaign generated €1.70 in profit on top of returning your €1. That is a healthy result, but far thinner than 800% suggests: if your bonus costs had been higher or your control group had spent a little more per player, this campaign could have slid toward break-even or a loss.
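The whole worked example fits in a few lines. This is a sketch using the numbers from the text; the function names are mine:

```python
def incremental_revenue(campaign_total, campaign_n, control_total, control_n):
    """Project the control group's per-player baseline onto the campaign group."""
    per_player_uplift = campaign_total / campaign_n - control_total / control_n
    return per_player_uplift * campaign_n

def roi_pct(revenue, cost):
    """ROI as a percentage of campaign cost."""
    return (revenue - cost) / cost * 100

inc = incremental_revenue(45_000, 900, 3_500, 100)
print(f"incremental revenue: €{inc:,.0f}")          # €13,500
print(f"wrong way:  {roi_pct(45_000, 5_000):.0f}%")  # 800% using total revenue
print(f"right way:  {roi_pct(inc, 5_000):.0f}%")     # 170% using incremental
```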
The gap between 800% and 170% is the difference between reporting what leadership wants to hear and reporting the truth.
Using total campaign revenue instead of incremental revenue is how teams end up reporting 500%+ ROI on campaigns that, measured against a control group, barely broke even. In those cases, the campaign was most likely handing bonuses to players who would have deposited anyway.
03 How do you calculate incremental revenue? Two methods compared
You may see incremental revenue calculated in two different ways. Both are valid, but they answer different questions.
Option A: Based on the campaign group only
Incremental Revenue = (Revenue per Player in Campaign Group – Revenue per Player in Control Group) x Number of Players in Campaign Group
Using our example: (€50 – €35) x 900 = €13,500
This counts only the extra revenue from the players who actually received the campaign. You spent money on 900 players (bonuses, delivery, etc.), so you measure the return from those 900 players. This is the right approach for campaign-level ROI, because it matches your revenue to your actual costs.
When to use it: When you need to know if a specific campaign was profitable. When you are reporting ROI to your manager. When you are deciding whether to run this campaign again or cut it.
Option B: Based on the total targeted audience
Incremental Revenue = (Revenue per Player in Campaign Group – Revenue per Player in Control Group) x Total Players Targeted
Using our example: (€50 – €35) x 1,000 = €15,000
This projects the uplift onto the full audience, including the 100 control group players who did not receive the campaign. The logic: those 100 players were held back on purpose for measurement, not because they were ineligible. If you had sent the campaign to all 1,000, those extra 100 players would likely have generated the same per-player uplift. This approach is used by platforms like Optimove for their CRM Contribution metric, which measures how much of your total revenue was driven by CRM activity.
How does this connect to CRM Contribution? Here is a full example
Say your company made €500,000 in total revenue this month from all players. During that month, your CRM team ran three campaigns, each with a control group. Using Option B, you calculate the incremental revenue for each:
- Campaign 1 (deposit bonus): €15,000 incremental revenue
- Campaign 2 (reactivation offer): €8,000 incremental revenue
- Campaign 3 (VIP reward): €12,000 incremental revenue
- Total incremental revenue from CRM: €35,000
CRM Contribution = Total Incremental Revenue from CRM / Total Company Revenue x 100
CRM Contribution = €35,000 / €500,000 x 100 = 7%
That means 7% of your total monthly revenue exists because your CRM team ran those campaigns. The other 93% (€465,000) would have happened anyway without any CRM activity.
Important: This calculation only works when all your campaigns have control groups. The €35,000 in incremental revenue above comes from three campaigns that each had a control group, so you know the uplift is measured, not guessed.
In practice, not every campaign will have a control group. If only some of your campaigns are measured this way, you can estimate the total CRM uplift by adjusting for coverage.
For example, if 80% of your campaigns had control groups and those generated €35,000 in measured uplift, you can estimate the total uplift across all campaigns as €35,000 / 0.80 = €43,750. This is an approach recommended by Optimove in their CRM Contribution methodology: divide the measured uplift by the share of campaigns that had control groups to get a fair estimate of the full picture.
The more campaigns you measure with control groups, the more accurate your CRM Contribution number becomes.
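The contribution and coverage-adjustment arithmetic can be sketched like this (figures from the example above; the function name is mine, and the 8.75% figure is simply the coverage-adjusted uplift divided by total revenue):

```python
def crm_contribution_pct(measured_uplift, total_revenue, coverage=1.0):
    """CRM Contribution as a % of total revenue. `coverage` is the share of
    campaigns that had control groups; measured uplift is divided by it,
    as in the coverage-adjustment approach described in the text."""
    estimated_uplift = measured_uplift / coverage
    return estimated_uplift / total_revenue * 100

print(f"{crm_contribution_pct(35_000, 500_000):.1f}%")       # full coverage: 7.0%
print(f"€{35_000 / 0.80:,.0f}")                              # estimated uplift: €43,750
print(f"{crm_contribution_pct(35_000, 500_000, 0.80):.2f}%") # 80% coverage: 8.75%
```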
This is the number that justifies your CRM team’s budget and headcount. If your CRM team costs €20,000 per month and generates €35,000 in incremental revenue, the team pays for itself and then some. If leadership ever asks “what is CRM actually worth to us?”, this is the answer.
You use Option B here because you want the full picture of what CRM created, not just what happened within each campaign group. The control group players were held back for measurement, but the value they represent is real.
When to use it: When you are reporting the total value of CRM to leadership. When you need to answer “what percentage of our revenue did CRM create?” When you are justifying CRM budget or headcount across all campaigns and segments.
Which is better?
Neither is better overall. They serve different purposes. Option A is better for campaign decisions because it compares real costs against real results. Option B is better for strategic reporting because it shows the full value CRM delivers, including the revenue you left on the table by holding back a control group. If you use Option B for campaign ROI, you will overstate your return because you are counting revenue from players you did not spend money on. If you use Option A for total CRM reporting, you will understate the value of your team’s work.
The simple rule: Option A for campaign ROI. Option B for total CRM contribution.
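The two options differ only in the multiplier, which a short sketch makes concrete (figures from the worked example; function names are mine):

```python
def option_a_incremental(campaign_rpp, control_rpp, campaign_n):
    """Campaign-level view: uplift × players who actually got the campaign."""
    return (campaign_rpp - control_rpp) * campaign_n

def option_b_incremental(campaign_rpp, control_rpp, total_targeted):
    """CRM Contribution view: uplift × full targeted audience, control included."""
    return (campaign_rpp - control_rpp) * total_targeted

print(option_a_incremental(50, 35, 900))    # → 13500, for campaign ROI
print(option_b_incremental(50, 35, 1_000))  # → 15000, for CRM Contribution
```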
Quick comparison: Option A vs Option B
| | Option A (Campaign Group) | Option B (Total Audience) |
|---|---|---|
| Multiplied by | Number of players in campaign group | Total players targeted (including control) |
| Best for | Campaign-level ROI | CRM Contribution reporting |
| Answers | “Was this campaign profitable?” | “What % of revenue did CRM create?” |
| Risk if misused | Understates total CRM value | Overstates individual campaign ROI |
| Example result | €13,500 incremental revenue | €15,000 incremental revenue |
04 How do you handle outliers in CRM campaign measurement?
You run a deposit bonus campaign for 500 players. One player hits a €25,000 jackpot. That single payout wipes out the revenue from the other 499 players. Without that one result, the campaign returned 120% ROI. With it, the ROI is negative 40%.
This is the outlier problem. One big win or one heavy loss can move your campaign numbers more than hundreds of other players combined.
It works both ways. A high-value player who loses big during your campaign makes a weak offer look great. You run it again next month, that player is not there, and the “proven” campaign breaks even.
The smaller the group, the worse it gets. One outlier in a VIP campaign of 50 players can move the average by 20% or more. The same outlier in a mass campaign of 10,000 barely changes anything.
If you decide which campaigns to keep or cut based on numbers shaped by one or two players, you end up cutting good campaigns and scaling lucky ones. Over a few quarters, your CRM strategy is built on noise.
How to combat it
There are two ways to handle outliers, and the best teams use both together.
Approach 1: Clean your real data first
Before you calculate campaign ROI from actual results, reduce the impact of extreme values. Here is how:
1. Cap extreme values before calculating ROI. Set an upper and lower limit for player-level revenue, usually at the 95th and 5th percentile. Anyone above the upper limit gets capped to that value. Anyone below the lower limit gets raised to it. This is called winsorization. You keep all players in the data but limit how much one result can move the average.
2. Show median next to mean. Mean (average) revenue per player is what most teams report, but one outlier can pull it far up or down. Median is the middle value when you sort all players by revenue, so it is not affected by extremes. If your mean revenue per player is €80 but your median is €35, a few extreme results are distorting the picture. When mean and median are far apart, the average is misleading.
3. Show ROI with and without outliers. Report both numbers. If your campaign shows 200% ROI but drops to 40% when you remove the top and bottom 2% of players, leadership needs to see that. The 40% is closer to what you should expect next time.
4. Use bigger control groups. A control group of 20 players is too small. One outlier in the control group changes the baseline just as much as one in the campaign group. Use at least 100 players in your control group. For VIP or high-roller segments, use more.
How many players do you actually need? It depends on two things: how noisy your data is and how big a difference you are trying to find between test and control. A large difference (30%+ lift) is easy to spot with smaller groups. A small difference (5% lift) needs thousands of players to prove.
In iGaming, player-level revenue has very high variation because of jackpots and big losses. That means you need more players than you might expect:
- Mass campaigns: at least 1,000 players in each group (both test and control).
- Mid-tier segments: at least 500 per group.
- VIP segments: this is the hard part. You often do not have enough VIP players for a single run to give you a reliable answer. That is exactly why running VIP campaigns multiple times matters so much.
Using static margin (Approach 2 below) helps here too. Because static margin removes the big revenue swings, the data is less noisy, and you need fewer players to reach a trustworthy result.
If removing 2 or 3 players from your test or control group flips the result, your groups are too small to trust.
5. Run campaigns more than once before deciding. One campaign run is one data point. If you run the same campaign three months in a row and it returns 80%, 110%, and 95%, that is consistent. If it returns 300%, negative 50%, and 120%, that is random variation, not a real result.
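The sample-size question in point 4 has a standard statistical rule of thumb (this comes from basic power analysis, not from the article): for roughly 80% power at a 5% two-sided significance level, you need about n ≈ 2 × ((1.96 + 0.84) × σ / δ)² players per group, where σ is the standard deviation of player revenue and δ is the uplift you want to detect. A sketch with hypothetical figures:

```python
import math

def sample_size_per_group(sigma, delta, z_alpha=1.96, z_beta=0.84):
    """Rough per-group size for a two-sample comparison of means:
    n ≈ 2 * ((z_alpha + z_beta) * sigma / delta) ** 2.
    Defaults approximate 5% two-sided significance and 80% power."""
    return math.ceil(2 * ((z_alpha + z_beta) * sigma / delta) ** 2)

# Hypothetical: revenue std dev of €120 per player, detecting a €15 uplift.
print(sample_size_per_group(120, 15))  # → 1004, in line with "at least 1,000"
```

Note how the required size falls fast as the uplift you are hunting for grows: detecting a €30 uplift with the same variance needs only about a quarter as many players.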
After cleaning your data with these methods, the incremental revenue formula stays the same: Incremental Revenue = Cleaned Campaign Group NGR – Cleaned Control Group NGR (scaled to the campaign group's size). You are still using actual results, just without the extreme values. This tells you what the campaign really delivered this period.
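Capping at percentiles and comparing mean to median (points 1 and 2 above) needs nothing beyond the Python standard library. A sketch with made-up player data, where one jackpot win shows up as a large negative NGR from the operator's side:

```python
from statistics import mean, median, quantiles

def winsorize(values):
    """Cap values at roughly the 5th and 95th percentiles (winsorization).
    quantiles(n=20) returns 19 cut points in 5% steps; we use the outer two."""
    cuts = quantiles(values, n=20)
    lo, hi = cuts[0], cuts[-1]
    return [min(max(v, lo), hi) for v in values]

# Hypothetical data: 100 ordinary players plus one €25,000 jackpot winner,
# whose win appears as -25,000 revenue for the operator.
revenue = [20 + (i % 80) for i in range(100)] + [-25_000]
clean = winsorize(revenue)

print(f"mean:   raw {mean(revenue):9.1f} | cleaned {mean(clean):5.1f}")
print(f"median: raw {median(revenue):9.1f} | cleaned {median(clean):5.1f}")
```

The jackpot drags the raw mean negative while the cleaned mean and the median stay in normal territory, which is exactly the mean-vs-median gap the text warns about.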
Approach 2: Add static (theoretical) margin as a second view
Static margin looks at campaign performance from a different angle. Instead of using actual revenue (which swings when one player hits a jackpot), you calculate what the campaign should produce over time based on the mathematical edge of the games.
The formula: Expected Revenue = Turnover x Static Margin %
Example: a slot has a 4% house edge. Your campaign group generated €500,000 in turnover. The expected revenue is €500,000 x 4% = €20,000. It does not matter if one player won €25,000 or lost €25,000. The expected revenue stays the same because it is based on the long-term mathematical average, not the actual short-term result.
With static margin, turnover becomes your main campaign metric. The formula is Revenue = Turnover x Margin. If the margin is fixed, the only question is: did the campaign generate more betting activity? A good campaign increases turnover per player. A bad one does not. The jackpot that destroyed your actual NGR does not affect this calculation.
How to calculate incremental revenue with static margin: subtract the control group turnover from the campaign group turnover, then multiply by the static margin. If your campaign group generated €500,000 in turnover and the control group (scaled to the same size) generated €400,000, the extra turnover is €100,000. At a 4% static margin, the incremental revenue is €4,000. This removes the biggest source of noise: the direct revenue swing from a single win or loss.
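Here is the same calculation as a sketch (figures from the example; the function name is mine):

```python
def static_margin_incremental(campaign_turnover, scaled_control_turnover, margin):
    """Incremental revenue from the static-margin view: extra turnover × edge.
    The control turnover must already be scaled to the campaign group's size."""
    return (campaign_turnover - scaled_control_turnover) * margin

# €500,000 campaign turnover vs €400,000 scaled control turnover at 4%:
print(static_margin_incremental(500_000, 400_000, 0.04))  # → 4000.0
```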
One thing to keep in mind: static margin removes the outlier impact on the revenue side, but turnover can still be affected.
A player who hits a big jackpot now has more money in their account and might keep playing, placing more bets and pushing the group’s turnover higher. A player who loses everything early stops betting and generates less turnover. This effect is much smaller than the direct NGR swing from a jackpot, but it is there.
Static margin reduces the outlier problem. It does not remove it completely.
This approach works well for casino, where game margins are known and stable (3-5% for slots, 2-3% for table games). For sports betting, margins change by event and market, so you need to use an average margin by bet type (pre-match vs. live, football vs. niche sports).
Important trade-off: Static margin removes variance, but it does not account for bonus costs. Bonuses are real money out of your budget. Always subtract actual bonus costs when calculating campaign ROI. The formula becomes: Campaign ROI = ((Turnover x Static Margin) – Bonus Cost) / Total Campaign Cost x 100. If you can track delivery costs per campaign (SMS fees, push notification costs, etc.), subtract those too.
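That trade-off formula, sketched with hypothetical figures (€500,000 turnover, 4% margin, €5,000 in bonuses, and bonus cost used as the total campaign cost, as suggested earlier):

```python
def static_margin_roi(turnover, margin, bonus_cost, total_cost):
    """Static-margin ROI net of real bonus spend, per the formula in the text:
    ((turnover * margin) - bonus_cost) / total_cost * 100."""
    return (turnover * margin - bonus_cost) / total_cost * 100

print(static_margin_roi(500_000, 0.04, 5_000, 5_000))  # → 300.0
```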
Static margin also assumes that actual results will match the theory over time. Over thousands of bets, they will. Over one weekend campaign with 50 VIP players, they might not.
Why you need both views, not just one
Cleaned real data answers: “What did this campaign actually deliver this period, without the extreme results?”
Static margin answers: “What should this campaign produce over time, based on the math?”
When the two numbers are close, your campaign is performing as expected. When they are far apart, something is worth looking into. Maybe players moved to lower-margin games. Maybe bonus costs were higher than planned. Maybe the campaign attracted a different type of player than usual. The gap between your cleaned real results and the expected result is where the useful insight is.
Use cleaned real data for current reporting and short-term decisions. Use static margin for long-term planning. Show both to leadership so they see the full picture.
Here is a scenario that happens more often than you think. A reactivation campaign makes money seven out of eight months. In the one bad month, a returning player in the control group hits a big win, and suddenly the control group looks better than the campaign group. Someone looks at that month’s numbers and wants to kill the campaign. If the team had cleaned the data or checked the static margin view, they would have seen the campaign was bringing in more betting activity every single month. One outlier in one month nearly kills a campaign that works.
Never decide to keep or kill a campaign based on a single run without checking for outliers first.
Clean your real data, check the static margin view, and compare both before you decide.
What if your team does not have the tools to clean the data?
Not every CRM team has a data analyst or advanced reporting tools. If you cannot cap extreme values or run percentile calculations, you can still get a clear picture of campaign performance.
Report both views next to each other: your actual incremental revenue (Campaign Group NGR minus Control Group NGR, even with outliers in it) and the static margin view (extra turnover multiplied by the house edge). Then run the campaign more than once.
Three runs tell you more than one cleaned number ever could. If your actual results jump up and down between runs but the static margin view stays stable, the campaign works and the swings are just random variation. If both views are low across all three runs, the campaign is not working. You do not need advanced tools to see that. You need two columns in a spreadsheet and the patience to run the campaign more than once before making a call.
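The two-column report can literally be a loop over runs. A sketch with made-up numbers for three runs of the same campaign, where one jackpot month flips the actual result negative while the static-margin view stays stable:

```python
# Each run: (campaign NGR, control NGR scaled to campaign size,
#            extra turnover vs control, static margin). All figures hypothetical.
runs = [
    (18_000, 6_000, 150_000, 0.04),
    (-2_000, 5_500, 140_000, 0.04),  # jackpot month: actual NGR goes negative
    (14_000, 6_500, 145_000, 0.04),
]

for i, (camp_ngr, ctrl_ngr, extra_turnover, margin) in enumerate(runs, 1):
    actual = camp_ngr - ctrl_ngr          # column 1: actual incremental NGR
    expected = extra_turnover * margin    # column 2: static-margin view
    print(f"run {i}: actual €{actual:>7,.0f} | static margin €{expected:>6,.0f}")
```

The actual column swings from solidly positive to negative and back; the static-margin column barely moves. That pattern is the signature of an outlier, not a failing campaign.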
05 What should you measure beyond campaign ROI?
Not every team runs control groups, and setting them up takes effort. But without them, you are guessing. Start with your highest-cost campaigns first. If a campaign costs a lot in bonuses and the incremental lift is small, you know where to cut.
Beyond incremental vs. basic ROI, the most useful analysis breaks results down by three things:
Segment performance: Which player tiers, game types, or channels gave the best return? Pay extra attention to how your highest-value players respond. A campaign that generates a 200% ROI from your VIP segment and breaks even for everyone else is a very different story than a campaign that performs the same across all segments. Your top players drive a large share of revenue, and their behaviour in campaigns deserves its own line in your reporting.
Time-to-profit: How fast did the campaign pay back its costs?
Retention impact: Did the players stay active 30, 60, 90 days later, or did they take the bonus and disappear?
A campaign with 150% ROI that brings in players who churn in two weeks is worse than a campaign with 80% ROI that brings in players who stay for six months.
06 What does good CRM measurement look like in practice?
Picture a CRM team that reports campaign results like this: incremental revenue next to total revenue, cleaned data next to raw data, static margin next to actual NGR. Leadership sees a 170% ROI instead of an inflated 800%. At first, the numbers look smaller. But within a few months, something shifts.
The team stops cutting campaigns that actually work. They stop scaling campaigns that only looked good because of one lucky month. Budget goes to the offers that actually change player behaviour, not the ones that reward players for doing what they were already going to do.
After two quarters, the CRM team can point to exactly how much revenue their work created. Not estimated, not assumed, but measured. When budget conversations happen, they walk in with a CRM Contribution number backed by control groups across every major campaign. That changes the conversation from “what does CRM do for us?” to “how do we give CRM more resources?”
That is the difference accurate measurement makes. It does not just improve your reporting. It improves every decision that comes after.
07 Frequently asked questions
What is a good ROI for a CRM campaign?
There is no universal benchmark because it depends on your industry, player value, and campaign type. In iGaming, a campaign ROI of 100-200% measured on incremental revenue (not total revenue) is solid. Anything below 50% after bonus costs deserves a closer look. The key is to compare against your own historical performance, not an industry average that may use a completely different calculation method.
How big should my control group be?
For mass campaigns, aim for at least 100 players in the control group. For high-value or VIP segments where revenue per player varies a lot, use a larger holdout (15-20% of the audience if possible). The more variation in your data, the bigger the control group needs to be to give you a reliable baseline. If removing two or three players from your control group changes the result dramatically, it is too small.
Can I measure CRM campaign ROI without a control group?
You can calculate a basic ROI using total revenue minus costs, but you cannot measure true incremental impact without a control group. Without one, you do not know how much of the revenue would have happened anyway. Some teams compare campaign periods to non-campaign periods, but this introduces seasonal and other biases. Control groups are the only reliable way to isolate what the campaign actually caused.
What is the difference between CRM ROI and CRM Contribution?
CRM ROI measures whether a specific campaign was profitable: did it earn more than it cost? CRM Contribution measures the total value CRM creates for the business: what percentage of total company revenue exists because of CRM activity? ROI uses Option A (campaign group only). CRM Contribution uses Option B (total targeted audience). They answer different questions and both matter.
Should I use actual revenue or static margin to calculate campaign ROI?
Use both. Actual revenue (cleaned for outliers) tells you what the campaign delivered this period. Static margin tells you what the campaign should produce over time based on the mathematical edge. When the two numbers are close, your campaign is performing as expected. When they diverge, investigate why. Use actual revenue for current reporting and static margin for long-term planning.
