Managing experiments

After activating an experiment, you can monitor its performance, manage its state, and evaluate results. Managing an experiment's state works in a similar way to managing a campaign's state.

An experiment's schedule affects its state in the same way a campaign's schedule affects the campaign's state.

note

Be aware of overlap when running multiple experiments or campaigns that target the same customer group. Overlapping promotions can dilute the statistical significance of your results, making it difficult to distinguish which incentive is driving customer behavior.

Evaluating experiment results

You can monitor the performance of an experiment in the experiment dashboard.

To open the experiment dashboard:

  1. From the left-side menu, click Experiments to open the experiment list.
  2. Click the name of an experiment.
  3. On the left-side menu of the experiment, click Dashboard.

To the right of the experiment's name, the experiment's state is displayed, for example, Running.

note

While Talon.One provides data-driven recommendations, we recommend reviewing the detailed metrics before making business decisions. For example, you can compare metrics, such as revenue gains versus discount costs.
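As a sketch of the kind of comparison this note describes, the snippet below weighs a revenue gain against the discount spend that produced it. All figures and variable names are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical figures for illustration only -- substitute the values
# shown in your own experiment dashboard.
variant_a_revenue = 50_000.0        # control variant (no incentive)
variant_b_revenue = 54_000.0        # variant with the incentive applied
variant_b_discount_cost = 2_500.0   # total discounts granted by variant B

revenue_gain = variant_b_revenue - variant_a_revenue   # 4000.0
net_lift = revenue_gain - variant_b_discount_cost      # 1500.0
print(f"Net lift after discount costs: {net_lift:.2f}")
```

A positive net lift suggests the incentive more than pays for itself; a negative one means the discount costs outweigh the extra revenue, even if the raw revenue metric looks better.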

The following details are shown below the summary:

  • Experiment ID: The ID of the experiment.
  • Campaign ID: The ID of the campaign that was generated upon experiment creation.
  • Created: The date and time when the experiment was created.
  • Start Date: The date and time when the experiment starts, as set under Schedule.
  • End Date: The date and time when the experiment ends, as set under Schedule.

The evaluation table displays the metrics and values used to evaluate your experiment's performance and generate data-driven recommendations. It includes the following columns:

  • Metrics: All data points that are evaluated.
  • Confidence: Values that indicate whether the performance difference between Variant A and Variant B is a statistically significant and repeatable result, or is due to random chance. The confidence scores are calculated using Welch's two-sided t-test and displayed as percentages.

    Note: You can only view the confidence score for metrics that express averages: Avg. Session Value, Avg. Discounted Session Value, and Avg. Items per Session.
  • Variant A/B: The respective variant's values for the data points listed under Metrics. These values are displayed as either currency or integers, depending on the metric type. If one variant has the best overall score, that variant is displayed as the Winning variant.
  • Difference (B vs A): The difference in results between the variants. In the left column, the difference values are displayed as either currency or integers, depending on the metric type. In the right column, all values are displayed as percentages.

    Note: If Variant A has the best overall score, these columns appear next to the Variant A column and are named Difference (A vs B).
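Talon.One does not document its exact implementation, but Welch's two-sided t-test itself is a standard procedure. The sketch below shows how a confidence percentage of this kind can be derived from two samples of per-session values; the function name and the normal approximation of the p-value (reasonable for large samples) are assumptions of this example, not the platform's code:

```python
import math

def welch_t_confidence(a, b):
    """Welch's two-sided t-test between two independent samples.

    Returns (t_statistic, confidence_percent), where confidence is
    1 - p expressed as a percentage. The p-value is approximated with
    the normal CDF, which is close to the t-distribution when both
    samples are large.
    """
    n1, n2 = len(a), len(b)
    m1, m2 = sum(a) / n1, sum(b) / n2
    # Unbiased sample variances (Welch's test does not pool them)
    v1 = sum((x - m1) ** 2 for x in a) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in b) / (n2 - 1)
    se = math.sqrt(v1 / n1 + v2 / n2)   # standard error of the difference
    t = (m1 - m2) / se                  # Welch's t statistic
    # Two-sided p-value via the normal approximation of the t-distribution
    p = 2 * (1 - 0.5 * (1 + math.erf(abs(t) / math.sqrt(2))))
    return t, (1 - p) * 100

# Hypothetical per-session values for two variants
t, conf = welch_t_confidence(
    [10.0, 11.0, 12.0, 10.5, 11.5],   # Variant A: mean 11
    [20.0, 21.0, 19.0, 20.5, 19.5],   # Variant B: mean 20
)
print(f"t = {t:.1f}, confidence = {conf:.1f}%")
```

With clearly separated samples like these the confidence approaches 100%, mirroring how the dashboard flags a performance difference as repeatable rather than random.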