Create an experiment to test discount effects
Experiments enable you to A/B test two incentives in one campaign. By splitting your session traffic between two variants, you can identify which incentive most effectively drives customer behavior.
We'll walk through creating an experiment, from forming a hypothesis to turning your results into a full-scale campaign.
Plan your experiment
Before creating an experiment, you need a clear hypothesis. To get reliable results, your experiment should compare two variants that test a single variable, such as a discount type. This ensures that any difference in performance can be attributed to the incentive type alone, not to other factors.
For this tutorial, let's test the discount variable in two variants using the following hypothesis:
A fixed $5 discount drives a higher average session value than a 10% discount. Customers often perceive a fixed amount as more tangible, and it does not encourage them to reduce their cart size.
This hypothesis provides a specific benchmark to measure against when you analyze your results later.
Create the experiment
Set the experiment length
For best results, we recommend setting an experiment length that takes into account various factors affecting customer behavior.
Customer behavior changes based on the day of the week; for example, customers may be more likely to spend on weekends than on weekdays. When you create the experiment, use the Schedule settings to run your experiment for at least two weeks. This ensures you account for variations in customer behavior over time.
Set the variant assignment type
In Experiment type, there are two ways to assign customers to a variant:

- Random variant assignment: Talon.One randomly assigns customers to a variant and uses their `integrationId` to ensure a sticky assignment. This way, customers see the same incentive even if they change devices or refresh the page.
- External variant assignment: Choose this option if you use a third-party tool, such as Optimizely or Braze, to assign variants. By passing the `experimentVariantAllocations` object in your session updates, you ensure Talon.One applies the correct effects for each customer.
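As a rough illustration of the external option, the sketch below builds a session-update payload that forwards a third-party tool's variant choice to Talon.One. The `build_session_update` helper and the field names inside `experimentVariantAllocations` (`experimentId`, `variantName`) are assumptions for illustration; check the Integration API reference for the exact schema.

```python
# Sketch: building a customer session update that forwards an externally
# assigned variant to Talon.One. The payload shape shown here, including
# the experimentVariantAllocations fields, is illustrative only -- consult
# the Integration API reference for the exact schema.

def build_session_update(profile_id: str, experiment_id: int, variant_name: str) -> dict:
    """Return a hypothetical body for a customer session update request."""
    return {
        "customerSession": {
            "profileId": profile_id,
            "cartItems": [],
            # Tell Talon.One which variant your third-party tool picked,
            # so it applies that variant's effects to this session.
            "experimentVariantAllocations": [
                {"experimentId": experiment_id, "variantName": variant_name}
            ],
        }
    }

payload = build_session_update("customer-42", 123, "$5 off")
```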
For this tutorial, let's use the Random variant assignment type.
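To see why random assignment stays sticky, here is a conceptual sketch (not Talon.One's actual implementation) of deterministic bucketing on a customer's `integrationId`: hashing the same ID always yields the same variant.

```python
import hashlib

# Concept sketch: a deterministic hash of the customer's integrationId
# always maps to the same bucket, so the variant assignment is "sticky"
# across devices and page refreshes. This illustrates the idea only; it
# is not Talon.One's internal implementation.

def assign_variant(integration_id: str, variants=("10% off", "$5 off")) -> str:
    digest = hashlib.sha256(integration_id.encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)  # 50/50 split for two variants
    return variants[bucket]

# The same customer always lands in the same variant:
assert assign_variant("customer-42") == assign_variant("customer-42")
```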
Create the rule
Now that you have created the experiment, let's create a rule in the Rule Builder.
Condition
In our case, we need the following condition:

- Click Add condition and select Check attribute value. This condition allows you to check the value of an attribute against another value, or another attribute.
- Click Add an attribute and select the Session Total (Current Session) attribute.
- Select is greater than.
- In the field to the right of is greater than, type `50` and press Enter.
In this condition, we are checking the value of the Session Total (Current Session) attribute. If this value is greater than $50, this condition is true and Talon.One triggers the discount effect.
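Expressed in code, the condition's logic amounts to a strict comparison. This is a sketch for illustration only; the Rule Builder evaluates the rule for you inside Talon.One.

```python
# Sketch of the rule's condition logic: the discount effect triggers only
# when the session total is strictly greater than 50.

def condition_met(session_total: float, threshold: float = 50.0) -> bool:
    return session_total > threshold

assert condition_met(72.0)      # $72 cart: condition is true
assert not condition_met(50.0)  # exactly $50: "greater than" is strict
```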
Keep your first experiment simple by using fewer conditions. This ensures a larger pool of eligible customers for each variant, helping you get statistically significant results much faster.
Variant split
In Variant split, you can name your variants and define how Talon.One identifies and allocates traffic to each one.
- In Variant name, make the following changes:
  - Replace `Variant A` with `10% off`.
  - Replace `Variant B` with `$5 off`.
- Keep the Allocation fields set to 50% each.
Effects
For this tutorial, let's set a discount session total effect for each variant.
While this example focuses on comparing two incentives, you can also use a control group with no effects. This allows you to measure the baseline performance of your campaign without any incentives applied.
Let's set the Discount session total effect for the variant named 10% off:

- Set the Discount Name to `10% off`.
- Set the Discount value to `[Session.Total] * 10%`:
  - Click Add an attribute and select the Session Total (Current Session) attribute.
  - Back in the Discount value field, type `* 10%` to complete the discount value.
Let's also set the Discount session total effect for the variant named $5 off:

- Set the Discount Name to `$5 off`.
- Set the Discount value to `5`.
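To see what each variant's effect does to a session, here is a small sketch of the two discount calculations. This is illustrative only; Talon.One computes these effects for you when the rule triggers.

```python
# Sketch of the two variants' discount effects on a session total.
# The variant names match the ones configured in Variant split above.

def apply_variant_discount(session_total: float, variant: str) -> float:
    """Return the discounted session total under each variant's effect."""
    if variant == "10% off":
        discount = session_total * 0.10  # [Session.Total] * 10%
    elif variant == "$5 off":
        discount = 5.0                   # flat discount value of 5
    else:
        raise ValueError(f"unknown variant: {variant}")
    return round(session_total - discount, 2)

# On a $60 session, the two variants produce different discounted totals:
print(apply_variant_discount(60.0, "10% off"))  # 54.0
print(apply_variant_discount(60.0, "$5 off"))   # 55.0
```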
Start the experiment
To maintain the integrity of your data, Talon.One locks the conditions, effects, and variant split once the experiment is activated; these settings cannot be edited while the experiment is running. To change the experiment after activation, disable it and create a new one by copying it.
To copy an experiment:
- In the experiment list, to the right of the experiment, click the copy icon.

Tip: You can also copy an experiment in the experiment dashboard by clicking Copy Experiment to the right of the experiment.
Evaluate the experiment
After your experiment is live and gathering data, you can monitor its performance in the experiment dashboard.
Understand the result and metrics
Talon.One uses Welch's two-sided t-test to compare variants and provide Confidence scores. These scores determine if the experiment has yielded a winning variant or if the data is inconclusive.
To ensure statistical significance, Talon.One requires at least 100 closed sessions per variant before displaying confidence levels. If your customer traffic is low, your experiment needs more time to reach statistical significance.
Talon.One recalculates the confidence levels every five minutes. To ensure statistically significant results, we recommend waiting until the Confidence scores for at least two metrics reach 90% in the experiment dashboard. The winning variant can then provide a reliable foundation for your next campaign.
While all metrics provide insight into the performance of your experiment, Talon.One calculates the Confidence scores using only the three metrics that express averages: Avg. Session Value, Avg. Discounted Session Value, and Avg. Items per Session. These averages determine whether a difference in results is a repeatable trend or a result of random variance.
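The statistic behind those Confidence scores can be sketched as follows. This is a conceptual illustration of Welch's two-sided t-test on two samples of session values; Talon.One performs this calculation for you, and the sample data below is made up.

```python
from statistics import mean, variance

# Conceptual sketch of Welch's t-test, which compares two variants' means
# (e.g. Avg. Session Value) without assuming equal variances. Talon.One
# computes this for you; this only illustrates the underlying idea.

def welch_t(sample_a, sample_b):
    na, nb = len(sample_a), len(sample_b)
    se2_a = variance(sample_a) / na  # squared standard error, variant A
    se2_b = variance(sample_b) / nb  # squared standard error, variant B
    t = (mean(sample_a) - mean(sample_b)) / (se2_a + se2_b) ** 0.5
    # Welch-Satterthwaite approximation of the degrees of freedom
    df = (se2_a + se2_b) ** 2 / (se2_a**2 / (na - 1) + se2_b**2 / (nb - 1))
    return t, df

a = [52.0, 48.0, 55.0, 50.0, 45.0]  # made-up session values, variant A
b = [58.0, 54.0, 61.0, 56.0, 51.0]  # made-up session values, variant B
t, df = welch_t(a, b)
```

A larger absolute t value (relative to the degrees of freedom) means the difference between the variants is less likely to be random variance, which translates into a higher confidence score.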
To help you interpret the overall performance of your variants, the evaluation table provides the following metrics for closed sessions:
| Metric | Description |
|---|---|
| Avg. Session Value | The average pre-discount order value. |
| Avg. Discounted Session Value | The average post-discount order value. |
| Avg. Items per Session | The average number of cart items. |
| Sessions | The number of closed sessions. |
| Revenue | The pre-discount value of revenue generated from all purchases. |
| Discounts Given | The value of all given discounts. |
| Coupons Redeemed | The number of successful coupon redemptions. |
Avoid overgeneralizing your results. If 10% performs better at a $50 minimum session value, it doesn't mean this incentive is the better choice for every scenario. For higher-value carts, a flat $20 discount might be more compelling than a 10% reduction. Use these results to help you plan more targeted experiments for other customer segments and cart values.
Apply your findings
After you have identified a winning variant, such as the $5 discount outperforming the 10% discount with a 98% confidence score, follow these steps to scale your results:
1. Stop the experiment: Disable the experiment to finalize your results. Note that this immediately disables the experiment's incentives for all customers.

   To disable an experiment:
   - On the left-side menu of the experiment, click Dashboard.
   - Click Disable Experiment.

2. Create a standard campaign from the experiment: You can directly convert the winning variant into a campaign.

   To create a campaign from an experiment:
   - In Dashboard, to the right of the experiment, click Create Campaign from Experiment.

3. Iterate: Use your findings to form a new hypothesis. If the results showed you that a $5 discount works better than a 10% discount for this audience, your next experiment might test that $5 discount against a free shipping offer to see if you can improve your results even further.