Introduction
Endpoint Monitoring collects a wide range of metrics across the components of the service delivery chain to help identify the impact the device, network, and application each have on overall experience. Since looking through all of the metrics can be cumbersome, Catchpoint includes score metrics that represent a collection of related metrics in one simple measurement, reducing the time spent understanding the user experience and troubleshooting issues.
All scores range from 0 to 100, where 0 indicates a severe performance issue and 100 indicates a healthy system. Whether a given score indicates a problem can be situational, depending on the tolerance of end users with a given device, network, or application. For example, a user in a remote location may expect more network latency than a user in an office with a reliable network. As a general rule, however, any score below 70 often corresponds to frustration for the end user.

Experience Score
This is a composite of the Endpoint, Network, and Application scores. It is calculated as an average of the three scores and provides a summary of the overall experience. A component score is only included in the average when non-zero data exists for it, so if an Endpoint deployment isn't measuring any applications, the Application score is treated as null and excluded from the calculation rather than dragging the average down as a 0.
Since the Experience score is an average, a value in the range of 67 to 87 could still indicate a significant issue in one of the components. For example, if Endpoint and Network are both 100 and Application is 60, the Experience Score will be 87.
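The averaging and null handling described above can be sketched as follows. This is an illustrative reconstruction, not Catchpoint's exact implementation; the function name and rounding are assumptions.

```python
def experience_score(endpoint, network, application):
    """Average the component scores, skipping any that are None
    (i.e., components with no non-zero data)."""
    components = [s for s in (endpoint, network, application) if s is not None]
    return round(sum(components) / len(components)) if components else None

# Endpoint and Network healthy, Application degraded:
experience_score(100, 100, 60)    # -> 87
# No applications monitored: the missing score is excluded, not counted as 0:
experience_score(100, 100, None)  # -> 100
```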
However, the scores often trend together, so a dip in one score correlates with another and results in an overall lower Experience Score. For example, when a device is over-utilized, such as at 100% CPU, the Endpoint Score will be directly impacted, but the network and applications will appear slow as well. For this reason, it often makes sense to troubleshoot from left to right, starting with the Endpoint Score. If the Endpoint Score is fine, it may be that the network is impacting application performance. It is less common for the impact to flow from right to left, such as a slow application causing network latency or machine over-utilization.
Endpoint Score

The Endpoint Score shows the impact the endpoint device is having on user experience. It is calculated using the CPU, Memory, and WiFi Strength metrics. Each metric is scaled from bad to good in a range from 0 to 100 and then averaged. For example, 100% CPU utilization contributes a 0 for the CPU metric, while 0% CPU utilization contributes 100 to the Endpoint Score formula.
Please note that a WiFi strength of 0 occurs when the user is connected via ethernet. This does not impact the score, since it is treated as null.
Network Score

The Network Score shows the impact the network is having on user experience. It is calculated using the RTT and Packet Loss metrics from Ping and Traceroute tests. The metrics used to calculate the score are scaled from bad to good in a range from 0 to 100 and then averaged.
Traceroute and Ping test runs with RTT < 20ms are scored 100, runs with RTT > 100ms are scored 0, and scores scale linearly in between those bounds.
Packet Loss is similar to CPU in that the score is inverted from the actual packet loss percentage. Packet Loss has a 2x weighting in the score formula because packet loss issues can result in short network paths and thus a low or non-existent RTT, which would otherwise mask the problem. To illustrate, the Network Score is calculated as (packet loss score + packet loss score + RTT score) / 3.
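The RTT scaling and the double-weighted packet loss term can be sketched as follows (an illustrative reconstruction based on the thresholds stated above):

```python
def rtt_score(rtt_ms):
    """RTT <= 20ms scores 100, RTT >= 100ms scores 0, linear in between."""
    if rtt_ms <= 20:
        return 100.0
    if rtt_ms >= 100:
        return 0.0
    return 100.0 * (100 - rtt_ms) / 80  # linear interpolation over 20-100ms

def network_score(rtt_ms, packet_loss_pct):
    """Packet loss is inverted and counted twice, per its 2x weighting."""
    loss_score = 100 - packet_loss_pct
    return (loss_score + loss_score + rtt_score(rtt_ms)) / 3

network_score(60, 0)  # -> 50.0 RTT midpoint, no loss -> (100 + 100 + 50) / 3
```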
Application Score

The Application Score shows the impact one or more applications are having on user experience. It is calculated using Document Complete and Visually Complete metrics from RUM data, Downtime from both RUM and tests, Wait Time from Object tests, and the destination hop RTT and Packet Loss for Traceroute tests. The destination hop network data is included to indicate when network impact is specifically occurring within an application's network and not in the general internet or a user's local network. This is helpful when Catchpoint is used solely for running network tests like Traceroutes and not used for RUM data collection.
For RUM data, a 100 score is given to page views with a Document Complete or Visually Complete time < 3s, a 0 score to pages with a time > 20s, and scoring scales linearly in between.
Downtime per app and/or test is inverted so that 0% Downtime results in a score of 100 and 100% results in a score of 0.
For Object tests, a 100 score is given to test runs with a Wait time < 1s, a 0 score to runs > 5s, and scores scale linearly in between.
For destination hop Traceroute data, Packet Loss and RTT impact the scores in the same way as described in the Network Score.
Each metric's score is averaged, with Downtime carrying a 2x weight in the final formula.
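Putting the pieces above together, the Application Score might be sketched as below. The parameter names are assumptions, and only the metrics a deployment actually collects are included in the average, mirroring the null handling used by the other scores:

```python
def scale(value, good, bad):
    """Linear 100-to-0 scale: value <= good scores 100, value >= bad scores 0."""
    if value <= good:
        return 100.0
    if value >= bad:
        return 0.0
    return 100.0 * (bad - value) / (bad - good)

def application_score(doc_complete_s=None, visually_complete_s=None,
                      downtime_pct=None, wait_time_s=None,
                      dest_rtt_ms=None, dest_loss_pct=None):
    """Average the available metric scores; Downtime is counted twice (2x)."""
    parts = []
    if doc_complete_s is not None:
        parts.append(scale(doc_complete_s, 3, 20))       # RUM: 3s good, 20s bad
    if visually_complete_s is not None:
        parts.append(scale(visually_complete_s, 3, 20))
    if downtime_pct is not None:
        parts += [100 - downtime_pct] * 2                # inverted, 2x weight
    if wait_time_s is not None:
        parts.append(scale(wait_time_s, 1, 5))           # Object: 1s good, 5s bad
    if dest_rtt_ms is not None:
        parts.append(scale(dest_rtt_ms, 20, 100))        # destination hop RTT
    if dest_loss_pct is not None:
        parts.append(100 - dest_loss_pct)                # destination hop loss
    return sum(parts) / len(parts) if parts else None
```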