When configuring the trigger for a metric-based alert, you can choose Specific Value or Trailing Value.
Specific Value means the alert is triggered when the metric goes above or below a fixed threshold (e.g., trigger when Test Time is longer than 10,000 ms).
Trailing Value alerts are triggered based on a comparison of recent test performance to the test's performance over a longer time period (e.g., trigger when the average Test Time over the last 30 minutes is 50% greater than the average Test Time over the last 24 hours).
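The difference between the two trigger types can be sketched as follows. This is an illustrative sketch only; the function names and sample values are made up, not part of the product.

```python
def specific_value_trigger(test_time_ms: float, threshold_ms: float = 10_000) -> bool:
    """Specific Value: fire when the metric crosses a fixed threshold."""
    return test_time_ms > threshold_ms

def trailing_value_trigger(recent_avg_ms: float, historical_avg_ms: float,
                           increase_pct: float = 50.0) -> bool:
    """Trailing Value: fire when the recent average exceeds the
    historical average by more than increase_pct percent."""
    return recent_avg_ms > historical_avg_ms * (1 + increase_pct / 100)

print(specific_value_trigger(12_000))        # True: 12,000 ms > 10,000 ms
print(trailing_value_trigger(9_000, 5_000))  # True: 9,000 ms is 80% above 5,000 ms
```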
Trailing Value Configuration Settings

Trigger
To enable a Trailing Value alert, select Trailing Value under the Trigger section.
The next field specifies the Statistical Calculation that will be compared (e.g., Average, Percentile, Median).
Next, specify the Warning and Critical percentage thresholds. The time period that you select in this section represents the longer historical time period that recent performance will be compared against. For example, if you want to compare the most recent 30 minutes to the past day, you would select "1 Day" here.
- Utilize Per Node Historical Data: This option is displayed if you select Nodes or Runs under Conditions. When enabled, each node's recent performance is compared only against its own historical performance. When disabled, each node's recent performance is compared against the average historical performance of all nodes.
Conditions
- Average Across Nodes: Recent performance and historical performance will be averaged across all nodes.
- Nodes: Thresholds will be defined per node.
- Runs: Thresholds will be defined per run.
- Timeframe: This is the shorter (more recent) timeframe that will be compared against the longer historical period. For example, if you want to compare the most recent 30 minutes to the past day, you would select "30 minutes" here.
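The effect of the Utilize Per Node Historical Data option can be sketched as follows. The node names and millisecond values here are invented for illustration; they do not come from the product.

```python
from statistics import mean

# Hypothetical recent (Timeframe) and historical averages per node, in ms.
recent = {"Atlanta Zayo": 12_000, "Los Angeles VZN": 9_000}
historical = {"Atlanta Zayo": 5_000, "Los Angeles VZN": 8_000}
increase_pct = 50.0  # trigger threshold: 50% above baseline

# Option enabled: each node is compared against its own history.
per_node_breaches = {
    node: recent[node] > historical[node] * (1 + increase_pct / 100)
    for node in recent
}

# Option disabled: each node is compared against the average history of all nodes.
all_nodes_baseline = mean(historical.values())
pooled_breaches = {
    node: value > all_nodes_baseline * (1 + increase_pct / 100)
    for node, value in recent.items()
}
```

With these sample numbers, Atlanta Zayo breaches in both modes (12,000 ms is more than 50% above both its own 5,000 ms baseline and the pooled 6,500 ms baseline), while Los Angeles VZN breaches in neither.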
Examples
See the following examples to get a clearer idea of how these settings interact.
Example #1: Trailing Alert with Utilize Per Node Historical Data (UPNHD)


Since Utilize Per Node Historical Data is enabled, the system looks at each node's past statistical data within the historical window. The historical window ends 30 minutes before the alert report time; in this example, the alert report time is 14:10, so the historical window runs from 12:40 to 13:40. From the chart, take the statistical value for the historical window and multiply it by the percentage that you selected. If you select 100%, the recent value must be double the historical value to trigger the alert. In this example, the Atlanta Zayo and Los Angeles VZN nodes have met the alert condition.
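The 100% case described above can be verified arithmetically. The values here are hypothetical, since the actual chart values are not reproduced in this text.

```python
historical_value_ms = 5_000  # hypothetical statistical value from the historical window
trigger_pct = 100.0          # 100% increase => recent value must be double the baseline

threshold = historical_value_ms * (1 + trigger_pct / 100)
print(threshold)  # 10000.0 -> double the historical value

current_value_ms = 11_000    # hypothetical recent value for one node
alert_fires = current_value_ms > threshold
print(alert_fires)  # True
```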
Example #2: Trailing Alert with Average Across Nodes

When “Average Across Nodes” is selected under Conditions, the chosen compare value (shown in the screenshot below) for the current window is compared with the historical window (trailing value) to check whether the alert condition is satisfied.

In this example, since the compare value is the 75th percentile, we need to check whether the 75th percentile of the current window has increased by the specified percentage over the 75th percentile of the historical window.
The above alert settings would trigger:
- Warning alert if (75th percentile of current window) > (75th percentile of historical window + 3% of 75th percentile of historical window)
- Critical alert if (75th percentile of current window) > (75th percentile of historical window + 5% of 75th percentile of historical window)
The screenshots below show the historical window and the current window.
Historical window

Current window

75th percentile of current window = 26,524.00 ms
75th percentile of historical window (trailing value) = 25,612.00 ms
With these values, check whether 26,524 > (25,612 + 3% of 25,612)
26,524 > 26,380.36
Hence, this condition would trigger a Warning alert.
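The arithmetic above can be reproduced directly from the two percentile values given in the example:

```python
current_p75 = 26_524.00      # 75th percentile of the current window (ms)
historical_p75 = 25_612.00   # 75th percentile of the historical window (ms)

warning_threshold = historical_p75 * 1.03   # baseline + 3%
critical_threshold = historical_p75 * 1.05  # baseline + 5%

print(round(warning_threshold, 2))        # 26380.36
print(current_p75 > warning_threshold)    # True  -> Warning fires
print(current_p75 > critical_threshold)   # False -> Critical does not fire
```

The current window exceeds the 3% threshold but not the 5% threshold, which is why this example triggers a Warning alert rather than a Critical one.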
The time threshold can impact how frequently your test meets the trailing alert condition. You may receive more alerts depending on your test performance and your time threshold.
A schedule window analyzes the data points within the set schedule. If your test is consistently performing poorly, you may not receive the next alert until the beginning of your next scheduled window.
A rolling window is similar to a moving average: it analyzes data points by creating a series of averages over different subsets of the full data set. Every test run creates a new subset of the full data set, so it is possible to see multiple alerts triggered in the same time window. For more details, please visit the help section.
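The rolling-window behavior described above can be sketched as a moving average, where each new test run shifts the subset being averaged. This is a generic illustration of the concept, not the product's actual evaluation logic.

```python
from collections import deque

def rolling_averages(values, window=3):
    """Compute a moving average: each new value forms a new subset of
    at most `window` recent values, and the average is re-evaluated."""
    buf = deque(maxlen=window)  # oldest value drops out automatically
    out = []
    for v in values:
        buf.append(v)
        out.append(sum(buf) / len(buf))
    return out

# Each run re-evaluates the average over the most recent subset,
# so a trailing condition can be re-checked (and re-triggered) on every run.
print(rolling_averages([10, 20, 30, 40], window=3))  # [10.0, 15.0, 20.0, 30.0]
```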