Load testing is an important phase of every project and must be conducted with care and in coordination with Talon.One.
If you are using a prospect deployment, do not perform load testing.
Before launch, your Talon.One deployment may have fewer resources than it will have in production. Ask Talon.One to configure your deployment with production-ready resources.
To do so, send an email to firstname.lastname@example.org or to your Technical Account Manager and include the following information:
- Planned date / time and duration of test
- Type of test (Capacity, Stress, Soak, ...)
- Max requests per second
- The endpoints you plan to send requests to, and in what proportions
- Your availability for a call before testing starts
Once Talon.One has configured your deployment, you can start planning your load test, respecting the following best practices.
- Use Dry Runs when appropriate. For more information, see Dry requests.
- Use a realistic number of requests reflecting your peak traffic, or a reasonable multiplier over expected traffic.
- Send Update a Customer Session requests to Talon.One only when needed, for example when a customer adds an item or a coupon to their cart.
- Randomize profileIDs and sessionIDs to avoid concurrent session updates. To preserve data integrity, Talon.One processes one session update at a time per profileID. Under normal API usage, multiple Update a Customer Session calls for a given customer should not be submitted in parallel. If you send concurrent requests with the same profileID, they queue up and interfere with the load test.
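One way to guarantee that no two concurrent requests share a profileID is to generate fresh identifiers for each simulated user. The sketch below, a minimal example using only the Python standard library, shows one possible approach; the `loadtest_` prefixes are arbitrary names chosen for illustration:

```python
import random
import string

def random_id(prefix: str, length: int = 12) -> str:
    """Build a random identifier such as 'loadtest_profile_k3x9...'.

    Giving every simulated user a unique profileID and sessionID ensures
    that concurrent Update a Customer Session calls never queue behind
    each other on the same profile.
    """
    alphabet = string.ascii_lowercase + string.digits
    suffix = "".join(random.choices(alphabet, k=length))
    return f"{prefix}_{suffix}"

# Each virtual user in the load test gets its own pair of IDs.
profile_id = random_id("loadtest_profile")
session_id = random_id("loadtest_session")
```

Using a recognizable prefix like `loadtest_` also makes it easy to identify and clean up test profiles afterwards.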
Do more with one call: The Update a Customer Session endpoint offers a responseContent property that you can use to save API calls. For example, you can use this property to retrieve the customer profile information without calling another endpoint.
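As a sketch, a request body using responseContent to return the customer profile in the same response might look like the following. The exact body shape and field names are assumptions based on the V2 Integration API; check the API reference for your deployment before relying on them:

```python
import json

# Assumed request body for Update a Customer Session (V2 Integration API).
# "customerProfile" in responseContent asks Talon.One to include the profile
# in the response, avoiding a separate call to a profile endpoint.
body = {
    "customerSession": {
        "profileId": "loadtest_profile_001",  # hypothetical test profile
        "cartItems": [
            {"name": "T-shirt", "sku": "TSHIRT-1", "quantity": 1, "price": 19.99}
        ],
    },
    "responseContent": ["customerProfile"],
}

payload = json.dumps(body)
```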
Reduce response time: When you query the Update a Customer Session endpoint but do not need to run any rules, set the runRuleEngine parameter to false to skip Rule Engine execution and get even faster response times.
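For illustration, building such a request URL could look like the sketch below. The deployment host `mycompany.talon.one` and the `/v2/customer_sessions/{sessionId}` path are placeholders based on the V2 Integration API; substitute your own deployment URL and session ID:

```python
from urllib.parse import urlencode

# Placeholder deployment URL and session ID for illustration only.
base = "https://mycompany.talon.one/v2/customer_sessions/loadtest_session_001"

# runRuleEngine=false skips Rule Engine execution when you only need to
# persist session data, which shortens the response time.
query = urlencode({"runRuleEngine": "false"})
url = f"{base}?{query}"
```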