Create an experiment to test discount effects

Experiments enable you to A/B test two incentives in one campaign. By splitting your session traffic between two variants, you can identify which incentive most effectively drives customer behavior.

We'll walk through creating an experiment, from forming a hypothesis to turning your results into a full-scale campaign.

Plan your experiment

Before creating an experiment, you need a clear hypothesis. To get reliable results, your experiment should compare two variants that test one variable, such as a discount type. This ensures that any difference in performance is attributed to the incentive type alone, not to other factors.

For this tutorial, let's test the discount variable in two variants using the following hypothesis:

A fixed $5 discount drives a higher average session value than a 10% discount. This is because customers often perceive fixed amounts as more tangible and are less likely to reduce their cart size.

This hypothesis provides a specific benchmark to measure against when you analyze your results later.

Create the experiment

Set the experiment length

For best results, we recommend setting an experiment length that takes into account various factors affecting customer behavior.

Customer behavior changes based on the day of the week; for example, customers may be more likely to spend on weekends than on weekdays. When you create the experiment, use the Schedule settings to run your experiment for at least two weeks. This ensures you account for variations in customer behavior over time.

Set the variant assignment type

In Experiment type, there are two ways to assign customers to a variant:

  • Random variant assignment: Talon.One randomly assigns customers to a variant and uses their integrationId to ensure a sticky assignment. This way, customers see the same incentive even if they change devices or refresh the page.
  • External variant assignment: Choose this option if you use a third-party tool, such as Optimizely or Braze, to assign variants. By passing the experimentVariantAllocations object in your session updates, you can ensure Talon.One applies the correct effects for each customer.
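
If you use external variant assignment, the allocation decided by your third-party tool travels with the session update. The sketch below shows what such a payload might look like; the exact shape of `experimentVariantAllocations`, and the IDs used here, are assumptions to verify against the Talon.One Integration API reference:

```python
import json

# Hypothetical variant allocation chosen by an external tool (e.g. Optimizely).
# Field names and IDs are illustrative; verify them against the
# Talon.One Integration API reference before using them.
session_update = {
    "customerSession": {
        "profileId": "customer_123",  # hypothetical integration ID
        "cartItems": [
            {"name": "T-shirt", "sku": "ts-01", "quantity": 1, "price": 60.0},
        ],
    },
    "experimentVariantAllocations": [
        {"experimentId": 42, "variantId": 1},  # hypothetical IDs
    ],
}

# Serialized body for the customer session update request.
body = json.dumps(session_update)
```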

For this tutorial, let's use the Random variant assignment type.
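
Sticky random assignment can be sketched as a deterministic hash of the integration ID, so the same customer always lands in the same variant. The hashing scheme below is purely illustrative, not Talon.One's actual implementation:

```python
import hashlib

def assign_variant(integration_id: str, variants=("10% off", "$5 off")) -> str:
    """Deterministically map a customer to a variant.
    Illustrative only: Talon.One's internal scheme may differ."""
    digest = hashlib.sha256(integration_id.encode()).digest()
    return variants[digest[0] % len(variants)]

# Same customer, same variant, on every device and page refresh.
assert assign_variant("customer_123") == assign_variant("customer_123")
```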

Create the rule

After you have created the experiment, let's create a rule in the Rule Builder.

Condition

In our case, we need the following condition:

  1. Click Add condition and select Check attribute value.

    This condition allows you to check the value of an attribute against another value, or another attribute.

    1. Click Add an attribute (the plus sign).
    2. Select the Session Total (Current Session) attribute.
  2. Select is greater than.

  3. In the field to the right of is greater than, type 50 and press Enter.

In this condition, we are checking the value of the Session Total (Current Session) attribute. If this value is greater than $50, this condition is true and Talon.One triggers the discount effect.
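
The condition amounts to a simple boolean check, sketched here with the attribute paraphrased as a function parameter:

```python
def condition_met(session_total: float, threshold: float = 50.0) -> bool:
    """Mirror of the Check attribute value condition:
    Session Total (Current Session) is greater than 50."""
    return session_total > threshold

# The rule's effects fire only when the condition is true.
condition_met(60.0)  # True: the session qualifies for the discount
condition_met(50.0)  # False: "greater than" excludes exactly 50
```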

tip

Keep your first experiment simple by using fewer conditions. This ensures a larger pool of eligible customers for each variant, helping you get statistically significant results much faster.

Variant split

In Variant split, you can name your variants and define how Talon.One identifies and allocates traffic to each one.

  1. In Variant name, make the following changes:
    1. Replace Variant A with 10% off.
    2. Replace Variant B with $5 off.
  2. Keep the Allocation fields set to 50% each.

Effects

For this tutorial, let's set a discount session total effect for each variant.

note

While this example focuses on comparing two incentives, you can also use a control group with no effects. This allows you to measure the baseline performance of your campaign without any incentives applied.

Let's set the Discount session total effect for the variant named 10% off:

  1. Set the Discount Name to 10% off.
  2. Set the Discount value to [Session.Total]*10%:
    1. Click Add an attribute (the plus sign).
    2. Select the Session Total (Current Session) attribute.
    3. Back in the Discount value field, type * 10% to complete the discount value.

Let's also set the Discount session total effect for the variant named $5 off:

  1. Set the Discount Name to $5 off.
  2. Set the Discount value to 5.
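
To make the two effects concrete, here is a sketch of how each variant's discount would apply to a qualifying session; the session total is made up:

```python
def percentage_discount(session_total: float, percent: float = 10.0) -> float:
    """Variant '10% off': [Session.Total] * 10%."""
    return session_total * percent / 100.0

def fixed_discount(session_total: float, amount: float = 5.0) -> float:
    """Variant '$5 off': a flat discount, capped at the session total."""
    return min(amount, session_total)

total = 80.0  # a session that passes the "greater than 50" condition
print(percentage_discount(total))          # 8.0
print(total - percentage_discount(total))  # 72.0 discounted session value
print(total - fixed_discount(total))       # 75.0 discounted session value
```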

Start the experiment

To maintain the integrity of your data, Talon.One locks the conditions, effects, and variant split once the experiment is activated; these settings cannot be edited while the experiment is running. To change the experiment after activation, disable it and create a new one by copying it, so that your final data stays reliable.

To copy an experiment:

  • In the experiment list, to the right of the experiment, click the copy icon.

    tip

    You can also copy an experiment in the experiment dashboard by clicking Copy Experiment to the right of the experiment.

Evaluate the experiment

After your experiment is live and gathering data, you can monitor its performance in the experiment dashboard.

Understand the result and metrics

Talon.One uses Welch's two-sided t-test to compare variants and provide Confidence scores. These scores indicate whether the experiment has yielded a winning variant or whether the data is inconclusive.

To ensure statistical significance, Talon.One requires at least 100 closed sessions per variant before displaying confidence levels. If your customer traffic is low, your experiment needs more time to reach statistical significance.

note

Talon.One recalculates the confidence levels every five minutes. To ensure statistically significant results, we recommend waiting until the Confidence scores for at least two metrics reach 90% in the experiment dashboard. The winning variant can then provide a reliable foundation for your next campaign.

While all metrics provide insight into the performance of your experiment, Talon.One calculates the Confidence scores using only the three metrics that express averages: Avg. Session Value, Avg. Discounted Session Value, and Avg. Items per Session. These averages determine whether a difference in results is a repeatable trend or a result of random variance.
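
Talon.One runs this test for you, but the statistic behind it can be sketched as follows. This sketch uses a normal approximation for the two-sided p-value, which is reasonable at the 100+ closed sessions per variant that Talon.One requires; the sample data is made up:

```python
import math
from statistics import mean, variance

def welch_t(sample_a, sample_b):
    """Welch's t statistic for two samples with unequal variances,
    with a normal-approximation two-sided p-value (adequate for
    the large samples required here)."""
    na, nb = len(sample_a), len(sample_b)
    se = math.sqrt(variance(sample_a) / na + variance(sample_b) / nb)
    t = (mean(sample_a) - mean(sample_b)) / se
    p = math.erfc(abs(t) / math.sqrt(2))  # two-sided
    return t, p

# Made-up session values: 100 closed sessions per variant.
fixed_5 = [72.0, 75.5, 71.0, 74.0, 73.5] * 20
percent_10 = [68.0, 70.5, 69.0, 67.5, 71.0] * 20
t, p = welch_t(fixed_5, percent_10)
# A small p (high confidence) suggests the difference between the
# variants is a repeatable trend, not random variance.
```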

To help you interpret the overall performance of your variants, the evaluation table provides the following metrics for closed sessions:

  • Avg. Session Value: The average pre-discount order value.
  • Avg. Discounted Session Value: The average post-discount order value.
  • Avg. Items per Session: The average number of cart items.
  • Sessions: The number of closed sessions.
  • Revenue: The pre-discount value of revenue generated from all purchases.
  • Discounts Given: The value of all given discounts.
  • Coupons Redeemed: The number of successful coupon redemptions.
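
As a sketch of how these per-variant metrics relate to the underlying closed sessions (the session data below is made up):

```python
# Made-up closed sessions for one variant:
# (pre-discount total, discount given, cart items).
closed_sessions = [
    (80.0, 5.0, 3),
    (60.0, 5.0, 2),
    (100.0, 5.0, 4),
]

sessions = len(closed_sessions)
revenue = sum(total for total, _, _ in closed_sessions)
discounts_given = sum(discount for _, discount, _ in closed_sessions)
avg_session_value = revenue / sessions
avg_discounted_value = (revenue - discounts_given) / sessions
avg_items = sum(items for _, _, items in closed_sessions) / sessions

print(avg_session_value)     # 80.0
print(avg_discounted_value)  # 75.0
print(avg_items)             # 3.0
```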
note

Avoid overgeneralizing your results. If the 10% discount performs better at a $50 minimum session value, that doesn't mean it is the better incentive for every scenario. For higher-value carts, a flat $20 discount might be more compelling than a 10% reduction. Use these results to help you plan more targeted experiments for other customer segments and cart values.

Apply your findings

After you have identified a winning variant, such as the $5 discount outperforming the 10% discount with a 98% confidence score, follow these steps to scale your results:

  1. Stop the experiment: Disable the experiment to finalize your results. Note that this immediately disables the experiment's incentives for all customers.

    To disable an experiment:

    1. On the left-side menu of the experiment, click Dashboard.
    2. Click Disable Experiment.
  2. Create a standard campaign from the experiment: You can directly convert the winning variant into a campaign.

    To create a campaign from an experiment:

    • In Dashboard, to the right of the experiment, click Create Campaign from Experiment.
  3. Iterate: Use your findings to form a new hypothesis. If the results showed you that a $5 discount works better than a 10% discount for this audience, your next experiment might test that $5 discount against a free shipping offer to see if you can improve your results even further.