Experiments
An experiment splits traffic between multiple funnel variants within a single campaign. Each variant is a different funnel — you can test entirely different page layouts, copy, pricing, or conversion flows. The experiment tracks which variant performs better and lets you declare a winner with statistical confidence.
How It Works
When a campaign has a running experiment, incoming visitors are assigned to a variant instead of seeing the campaign’s default funnel. Each variant links to a separate funnel and has a traffic weight that controls what percentage of visitors see it. One variant is designated as the control (your baseline).
Insert image: The experiment overview page showing variant cards with traffic weights, session counts, and the control/challenger labels
Setting Up an Experiment
Navigate to Experiments
Open your campaign and go to the Experiments tab.
Create the experiment
Click Create Experiment and fill in:
| Field | Description |
|---|---|
| Name | A label for your reference (e.g., “Pricing page redesign”). |
| Hypothesis | Optional. What you expect to learn (e.g., “Shorter quiz increases purchase rate”). |
| Primary metric | The metric used to evaluate the winner: ARPU (average revenue per user) or Purchase CR (purchase conversion rate). |
Add variants
Each variant links to a funnel in your project. You need at least two variants.
- Control — one variant must be marked as the control. This is typically your current production funnel.
- Challenger(s) — the funnel(s) you are testing against the control.
You can either link an existing funnel or clone the control funnel to create a copy that you can modify independently.
Cloning duplicates the entire funnel — pages, components, variables, products — so you can make changes without affecting the original.
Set traffic weights
Assign a weight to each variant. Weights must sum to 100%, with a minimum of 1% per variant. For a standard two-variant test, a 50/50 split is common. If you want to limit risk, try 90/10 (control/challenger).
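The weight rules above can be sketched as a small validation check. This is a hypothetical helper for illustration — the function name and the dict shape are assumptions, not the product's API:

```python
# Hypothetical validation of variant traffic weights, mirroring the
# documented rules: at least two variants, a minimum of 1% each, and
# weights that sum to exactly 100%.
def validate_weights(weights: dict[str, int]) -> None:
    if len(weights) < 2:
        raise ValueError("an experiment needs at least two variants")
    if any(w < 1 for w in weights.values()):
        raise ValueError("each variant needs at least 1% of traffic")
    if sum(weights.values()) != 100:
        raise ValueError("weights must sum to 100%")

validate_weights({"control": 50, "challenger": 50})  # standard 50/50 split
validate_weights({"control": 90, "challenger": 10})  # lower-risk 90/10 split
```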
Insert image: The experiment setup form showing two variants (Control and Challenger) with their linked funnels, traffic weight sliders set to 50/50, and the Control toggle
Start the experiment
Click Start. The experiment status moves from DRAFT to RUNNING and traffic begins splitting immediately.
You cannot add or remove variants while an experiment is RUNNING. Pause the experiment first if you need to adjust variants.

Traffic Allocation
First-time visitors
When a visitor has no existing session (no cookie), they are assigned a variant via weighted random selection based on the current weights. The session is created on their first event, and the experiment assignment is recorded.
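Weighted random selection works like a weighted coin flip. A minimal sketch of the idea, assuming integer percentage weights (the product's actual selection code is not shown here):

```python
import random

# Sketch of weighted random assignment for a first-time visitor.
# Variant names and weights are illustrative.
def assign_variant(weights: dict[str, int], rng: random.Random) -> str:
    names = list(weights)
    # random.choices performs a weighted draw; k=1 returns one variant.
    return rng.choices(names, weights=[weights[n] for n in names], k=1)[0]

rng = random.Random(42)  # fixed seed so the sketch is reproducible
counts = {"control": 0, "challenger": 0}
for _ in range(10_000):
    counts[assign_variant({"control": 90, "challenger": 10}, rng)] += 1
# With a 90/10 split, roughly 9,000 of 10,000 draws land on control.
```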
Returning visitors
When a visitor returns with an existing session, assignment is deterministic — the same visitor always sees the same variant.
Sticky assignment
Assignment is sticky. If you change weights mid-experiment, returning visitors keep the variant they were originally assigned. Only new visitors are affected by the updated weights. This prevents statistical contamination from visitors switching variants partway through their journey.
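The sticky behavior can be sketched as "draw once, then always read back." Session storage is shown as a plain dict purely for illustration; how the product actually persists assignments is not documented here:

```python
import random

sessions: dict[str, str] = {}  # session_id -> assigned variant

def get_variant(session_id: str, weights: dict[str, int],
                rng: random.Random) -> str:
    if session_id not in sessions:
        # First-time visitor: weighted random draw, recorded once.
        names = list(weights)
        sessions[session_id] = rng.choices(
            names, weights=[weights[n] for n in names])[0]
    # Returning visitor: the recorded variant, regardless of current weights.
    return sessions[session_id]

rng = random.Random(7)
first = get_variant("abc123", {"control": 50, "challenger": 50}, rng)
# Even after the weights change mid-experiment, the same session
# keeps the variant it was originally assigned:
later = get_variant("abc123", {"control": 1, "challenger": 99}, rng)
assert first == later
```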
Reading Results
The experiment analytics dashboard shows per-variant metrics with 95% confidence intervals.
Metrics table
| Metric | Description |
|---|---|
| Sessions | Total sessions assigned to the variant. |
| Conversions | Unique paying customers from sessions in the variant. |
| Revenue | Total revenue from the variant’s sessions. |
| ARPU | Average revenue per user (revenue / sessions). Shown with a 95% CI. |
| ARRPU | Average revenue per paying user (revenue / conversions). Shown with a 95% CI. |
| Purchase CR | Purchase conversion rate (conversions / sessions). Shown with a 95% CI. |
| Paywall CR | Percentage of sessions that reached a paywall page (paywall views / sessions). Shown with a 95% CI. |
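The table's derived metrics all follow from three per-variant counts: sessions, paying sessions, and revenue. A sketch of the arithmetic, using an illustrative session record (field names are assumptions, not the product's schema):

```python
from dataclasses import dataclass

@dataclass
class Session:
    revenue: float     # 0.0 if the session never purchased
    saw_paywall: bool  # whether the session reached a paywall page

def variant_metrics(sessions: list[Session]) -> dict[str, float]:
    n = len(sessions)
    conversions = sum(1 for s in sessions if s.revenue > 0)
    revenue = sum(s.revenue for s in sessions)
    paywall_views = sum(1 for s in sessions if s.saw_paywall)
    return {
        "sessions": n,
        "conversions": conversions,
        "revenue": revenue,
        "arpu": revenue / n,                                    # revenue / sessions
        "arrpu": revenue / conversions if conversions else 0.0,  # revenue / conversions
        "purchase_cr": conversions / n,                          # conversions / sessions
        "paywall_cr": paywall_views / n,                         # paywall views / sessions
    }

data = [Session(9.99, True), Session(0.0, True),
        Session(0.0, False), Session(9.99, True)]
m = variant_metrics(data)
# 4 sessions, 2 conversions, 19.98 revenue:
# arpu = 4.995, arrpu = 9.99, purchase_cr = 0.5, paywall_cr = 0.75
```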
Confidence intervals
All rate and average metrics display 95% confidence intervals. When the intervals for two variants do not overlap, the difference is statistically significant. Overlapping intervals do not prove the variants perform the same — they mean the data is not yet conclusive at that threshold.
Confidence intervals require a minimum sample size to be meaningful. With fewer than 5 sessions, intervals are not computed. Let the experiment run until you have enough data before drawing conclusions.
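As a rough illustration of the non-overlap rule, here is one common way to build a 95% interval for a conversion rate (the normal-approximation interval) and to compare two of them. The product's exact CI method is not documented here, so treat this as a sketch of the concept rather than the dashboard's implementation:

```python
import math

def rate_ci(conversions: int, sessions: int, z: float = 1.96):
    """Normal-approximation 95% CI for a conversion rate."""
    p = conversions / sessions
    se = math.sqrt(p * (1 - p) / sessions)
    return (p - z * se, p + z * se)

def significantly_different(ci_a, ci_b) -> bool:
    # Intervals do not overlap when one ends before the other begins.
    return ci_a[1] < ci_b[0] or ci_b[1] < ci_a[0]

control = rate_ci(50, 1000)     # 5.0% purchase CR
challenger = rate_ci(90, 1000)  # 9.0% purchase CR
# With these sample sizes the intervals are clearly separated,
# so the difference is significant at the 95% level.
significantly_different(control, challenger)  # → True
```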
Time series
Daily charts show sessions, conversions, and revenue per variant over the experiment’s duration. Use these to spot trends, check for day-of-week effects, and confirm that results are stable over time rather than driven by a single day’s spike.
Insert image: The experiment results dashboard showing daily time series charts for sessions, conversions, and revenue per variant, with the metrics table above showing ARPU, Purchase CR, and confidence intervals
Pausing an Experiment
You can pause a running experiment at any time. While paused:
- New visitors see the campaign’s default funnel (no experiment assignment).
- Existing assigned visitors who return will still see their assigned variant.
- You can add or remove variants and adjust weights.
Resume by clicking Start again.
Completing an Experiment
When you have statistically significant results, declare a winner:
- Select the winning variant.
- Confirm the action.
What happens:
- The winning variant’s funnel becomes the only funnel linked to the campaign.
- All other variant funnels are unlinked from the campaign. They are not deleted — they remain in your project and can be reused or linked to other campaigns.
- The experiment status moves to COMPLETED with a timestamp.
- All historical experiment data (sessions, metrics, variant assignments) is preserved for future reference.
Completing an experiment is irreversible. The experiment cannot be restarted. If you want to run another test on the same campaign, create a new experiment.
Experiment Lifecycle
| Status | Description |
|---|---|
| DRAFT | Created but not started. You can add/remove variants and adjust weights. |
| RUNNING | Live. Traffic is being split between variants. |
| PAUSED | Temporarily stopped. No new assignments. Can be resumed. |
| COMPLETED | Winner declared. Historical data preserved. Cannot be restarted. |
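The lifecycle above can be read as a small state machine. This sketch encodes only the transitions the documentation describes (start, pause, resume via Start, complete); whether an experiment can be completed directly from PAUSED is not stated, so it is omitted here:

```python
# Transition table for the documented experiment lifecycle.
ALLOWED = {
    ("DRAFT", "start"): "RUNNING",
    ("RUNNING", "pause"): "PAUSED",
    ("PAUSED", "start"): "RUNNING",      # "Resume by clicking Start again"
    ("RUNNING", "complete"): "COMPLETED",
}

def transition(status: str, action: str) -> str:
    try:
        return ALLOWED[(status, action)]
    except KeyError:
        # COMPLETED is terminal: no action moves it anywhere.
        raise ValueError(f"cannot {action} an experiment in {status}") from None
```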