A Service Level Objective (SLO) is a specific performance goal that a service provider must meet as part of their Service Level Agreement (SLA) with their clients.* For example, a provider might commit that a web-based platform will be available to their client 99% of the time, with a discount applied to the client's bill if it is not. Both the vendor and the client have an interest in precisely tracking the platform's actual availability in case a dispute arises over the SLA.
*The terms XLO/XLA (Experience Level Objective/Agreement) have gained popularity as well. We use the terms SLO/SLA in our portal and documentation, and consider them to be interchangeable with XLO/XLA.
Catchpoint's SLO feature enables you to measure and track whether your services (or services provided by your vendors) are meeting defined SLOs.

Definitions
There are a few similar terms that are often used interchangeably and may sometimes be confusing:
- SLA/XLA (Service/Experience Level Agreement): a contract between a vendor and a client defining the level of performance the vendor must provide to the client, and the consequences if performance does not meet defined thresholds.
- SLO/XLO (Service/Experience Level Objective): a specific performance goal that a vendor must meet. (e.g. "99% uptime")
- SLI/XLI (Service/Experience Level Indicator): the exact metric(s) and threshold(s) that will be used to determine whether an SLO has been violated (e.g. "Test failure rate must not exceed 1% during any one-week period.").
SLO Measurement with Catchpoint
Catchpoint's SLO feature enables you to define one or more Objectives and apply them to your tests so that you can easily track whether your services (or services provided to you by your vendors) are meeting the objectives defined in your SLAs.
In Catchpoint, an Objective consists of a Metric, a Violation Condition, and a Goal.
The Metric is the performance characteristic we are tracking, and may be any of the following:
- Availability
- Test Time
- DNS
- Wait
- Response
- First Contentful Paint
- Largest Contentful Paint
- Cumulative Layout Shift
- Time to Interactive
Use Availability if your SLO simply measures whether the service is up and available. Use one of the other metrics to measure that specific performance characteristic.
The Violation Condition defines how many nodes must measure performance outside the threshold within a specific timeframe in order to count against the Objective.
The Goal is the percentage of time that the test must not be in the violation condition, as defined in the following section. If test performance falls below the goal for a given timeframe, then an SLO Violation will be indicated.
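To make the three parts of an Objective concrete, here is a minimal sketch of how they fit together as a data structure. This is purely illustrative: the class, field names, and values are assumptions for this example, not Catchpoint's actual API or configuration schema.

```python
from dataclasses import dataclass

@dataclass
class Objective:
    # Hypothetical representation of a Catchpoint Objective, for illustration only.
    metric: str           # the performance characteristic, e.g. "Availability" or "Response"
    min_nodes: int        # violation condition: at least this many nodes outside the threshold...
    window_minutes: int   # ...within this timeframe
    goal_percent: float   # percentage of time the test must not be in the violation condition

# An Objective matching the example later in this article:
slo = Objective(metric="Availability", min_nodes=2, window_minutes=30, goal_percent=99.0)
```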
SLO Calculation
Catchpoint evaluates a test's performance against an Objective by calculating the percentage of time that the test was not in the violation condition during a given time interval. Whenever a test run meets the violation condition, we begin counting the minutes until a subsequent test run no longer meets the violation condition. Those minutes are then counted against the Objective.
For example, suppose an SLO is tracking website availability using a test that runs on a single node every five minutes. One day at 9:00, a test run fails to connect, indicating the website is not available. We begin counting minutes against the SLO. The test run at 9:05 also fails, but then the test run at 9:10 is successful. This would result in a total of 10 minutes of downtime counted against the SLO.
We recommend applying SLOs to tests configured to run relatively frequently and/or on multiple test nodes. Suppose the test in the previous example were only running on a single node every 30 minutes. It would have failed at 9:00, and then presumably it would have succeeded at 9:30, but this would have been counted as 30 minutes of downtime instead of 10.
It would also be possible for a test running at 30 minute intervals to completely miss a 10 minute period of actual downtime if the downtime occurred entirely between test runs.
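The counting logic described above can be sketched in a few lines of code. The helper function and the run logs below are hypothetical, meant only to show why test frequency affects the downtime total; they are not Catchpoint's implementation.

```python
def downtime_minutes(runs):
    """Count minutes against the SLO: counting starts at the first run that
    meets the violation condition and stops at the next run that does not.

    runs: chronological list of (minute_of_day, ok) tuples.
    """
    total = 0
    violation_start = None
    for minute, ok in runs:
        if not ok and violation_start is None:
            violation_start = minute           # start counting at the failing run
        elif ok and violation_start is not None:
            total += minute - violation_start  # stop at the first successful run
            violation_start = None
    return total

# Test running every 5 minutes: failures at 9:00 and 9:05, success at 9:10.
five_min = [(535, True), (540, False), (545, False), (550, True)]
print(downtime_minutes(five_min))    # 10 minutes counted against the SLO

# The same outage seen by a test running every 30 minutes:
# failure at 9:00, success at 9:30.
thirty_min = [(510, True), (540, False), (570, True)]
print(downtime_minutes(thirty_min))  # 30 minutes counted against the SLO
```

The second run log shows the granularity problem from the paragraph above: the identical outage costs 30 minutes instead of 10 simply because the test runs less often.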
Example
Suppose you provide a web service to a client with an SLA that commits 99% availability during any given week-long period, and you monitor this service using a test targeting three nodes. You might configure an SLO as follows:
- Metric: Availability
- Violation Condition:
- At least 2 Nodes
- In the last 30 minutes
- Goal: Greater than or equal to 99% of the time
In this case, once a second node fails to reach the web service within any 30-minute timespan, we begin counting minutes against the SLO. As soon as a subsequent test run succeeds in reaching the web service, we stop counting. Assume that during a given week we counted a total of 200 minutes of downtime by this method.
In this case the SLO for the week would be calculated as follows:
(10,080 minutes in a week − 200 minutes of downtime) / 10,080 minutes = 98.02% uptime
This would be indicated as an SLO violation for the week, as the 99% uptime goal was not met.
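The weekly calculation above can be checked with a few lines of arithmetic; the variable names here are just for illustration.

```python
minutes_in_week = 7 * 24 * 60  # 10,080 minutes
downtime = 200                 # minutes counted against the SLO this week

uptime_percent = (minutes_in_week - downtime) / minutes_in_week * 100
print(round(uptime_percent, 2))  # 98.02
print(uptime_percent >= 99.0)    # False -> SLO violation for the week
```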