On April 15th, 2025, Catchpoint updated its Endpoint logging domain. This change may require customers to update policies and allow the new domain to pass through firewalls.
The domain change applies to new versions going forward.
The new Endpoint domain is e.3gl.net. We suggest allowlisting the domain with a wildcard: *.3gl.net. Endpoint software that uses the current domain, r.3gl.net, will continue to work.
Please reach out to your CSM and VE if you have any questions.
Endpoint Monitoring enables you to monitor the performance of network-based applications directly from end-users' devices. Endpoint Monitoring uses a browser extension and desktop application (Windows or Mac) to report the impact of variables such as the device/browser, office or home network, internet, and SaaS application performance.
You can perform both passive and active (synthetic) monitoring from Endpoints. Passive monitoring captures performance data from the end-user's actual interactions with network- and internet-based applications (RUM data). Active or synthetic monitoring involves pre-configured tests which run from the end-users' devices on a scheduled basis.
Endpoint Module Sections
You can access the Endpoint Module in your Portal under Control Center. Click each feature in the list below to learn more.
- Locations - Locations are configured based on their external IP addresses, enabling you to analyze endpoint data by physical location or VPN aggregator.
- Endpoints - Any devices with Endpoint Monitoring installed using the license key will appear in the Endpoints list, where you can manage their status.
- Employee Apps - Apps represent the SaaS applications your employees use. The Endpoint Monitoring feature includes pre-configured monitors for many popular SaaS apps, and you can add your own custom apps as well.
- Tests - Endpoint Tests are active (scheduled or instant) tests, similar to node-based tests, but which run on Endpoint devices.
- Network Devices - The Network Device list enables you to assign friendly names to known device IPs, making it easier to identify them when viewing your endpoint Traceroute data.
- Alerts - Endpoint Alerts can be configured per location, device, or app. Each alert configuration consists of a metric and its threshold along with a timeframe for which the alert should be active.
Analyzing Endpoint Data
By default, Endpoint Monitoring collects performance data from each device it is deployed to, enabling you to report on performance by device. You can also analyze data by Location, Employee App, or Endpoint Test.
Endpoint Monitoring Smartboard
Endpoint Smartboards provide a detailed view of performance for any Location, Device, App, or Test. Items on the page can be filtered to narrow the data and answer questions such as "why was the experience poor for a location?" The Smartboard also includes a network visualization showing hop-by-hop performance metrics gathered via traceroutes run from devices running Endpoint Monitoring.
Endpoint Metrics
The following metrics are reported by Endpoint Monitoring.
- Apdex: A customizable metric that aims to normalize performance reporting across dissimilar applications. It can be applied to different metrics and uses a range of values to represent user satisfaction. By default, Apdex is applied to the Document Complete metric with the following values:
- Green (satisfied): less than or equal to 5 seconds (normalized as values between .8 and 1)
- Orange (tolerating): greater than 5 and less than 10 seconds (normalized as values between .5 and .8)
- Red (frustrated): greater than or equal to 10 seconds (normalized as values between 0 and .5)
- Document Complete: The total time taken to load a page. This is when the page fires the onload event.
- Response: The total time taken to load the base HTML request of a page.
- Latency: Total round trip time from the device to an application endpoint measured by traceroute.
- Packet Loss: Total percent of dropped packets from the device to an application endpoint measured by traceroute.
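The default Apdex banding described above can be sketched as follows. The 5- and 10-second thresholds and the normalized ranges come from the defaults listed; mapping a given time to a specific point *within* each range via linear interpolation is an illustrative assumption, as is the function name:

```python
def apdex(doc_complete_s, t_satisfied=5.0, t_frustrated=10.0):
    """Classify a Document Complete time (in seconds) into an Apdex band.

    Band boundaries follow the documented defaults; the linear mapping
    to a point within each normalized range is an assumption.
    """
    if doc_complete_s <= t_satisfied:
        # Satisfied: normalized between .8 and 1
        frac = doc_complete_s / t_satisfied
        return "satisfied", 1.0 - 0.2 * frac
    if doc_complete_s < t_frustrated:
        # Tolerating: normalized between .5 and .8
        frac = (doc_complete_s - t_satisfied) / (t_frustrated - t_satisfied)
        return "tolerating", 0.8 - 0.3 * frac
    # Frustrated: the text only bounds this band between 0 and .5;
    # with no upper time bound given, we floor it at 0 here.
    return "frustrated", 0.0
```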
In addition to the data points above, Catchpoint generates the following composite metrics called "Scores" which help you evaluate overall performance. These scores all range from 0 to 100, where 100 indicates a healthy system.
Endpoint Score
The Endpoint Score is calculated using the endpoint device's CPU, Memory, and WiFi Strength metrics. To calculate the Endpoint Score, we first invert each of these metrics (e.g. if CPU usage is 16%, then the CPU Score = 100 - 16 = 84). The Endpoint Score is the average of these three scores. Note that if the device is connected via Ethernet rather than WiFi, the WiFi Strength metric is ignored when calculating the Endpoint Score.
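A minimal sketch of the calculation above (the function name and signature are illustrative; the "invert each metric" rule is applied uniformly, including to WiFi Strength, per the description):

```python
def endpoint_score(cpu_pct, mem_pct, wifi_strength_pct=None):
    """Average the inverted device metrics, skipping WiFi on Ethernet.

    Each metric is inverted per the description above (score = 100 - value).
    Passing wifi_strength_pct=None models an Ethernet-connected device.
    """
    scores = [100 - cpu_pct, 100 - mem_pct]
    if wifi_strength_pct is not None:
        scores.append(100 - wifi_strength_pct)
    return sum(scores) / len(scores)
```

For example, a device at 16% CPU and 40% memory on Ethernet scores (84 + 60) / 2 = 72.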
Network Score
The Network Score is calculated using RTT (Round Trip Time) and Packet Loss percentage metrics from Ping and Traceroute tests. To calculate the Network Score, we first calculate an RTT Score and a Packet Loss Score as follows:
- RTT Score - Test runs with RTT < 20ms receive a score of 100. Test runs with RTT > 100ms receive a score of 0. Scores scale linearly in between those thresholds.
- Packet Loss Score - Calculated as an inverted percentage, e.g. if 7% of packets were lost, then the Packet Loss Score would be 100 - 7 = 93.
The Network Score is then calculated as follows:
- Network Score = ((2x Packet Loss Score) + RTT Score)/3
Packet Loss has a 2x weighting in the score formula because packet loss can truncate the measured network path, producing an artificially low or missing RTT.
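The Network Score calculation above can be sketched as follows (function names are illustrative):

```python
def rtt_score(rtt_ms):
    """100 at or below 20 ms, 0 at or above 100 ms, linear in between."""
    if rtt_ms <= 20:
        return 100.0
    if rtt_ms >= 100:
        return 0.0
    return 100.0 * (100 - rtt_ms) / (100 - 20)

def network_score(rtt_ms, packet_loss_pct):
    """Weighted average per the formula above, Packet Loss counted 2x."""
    pl_score = 100 - packet_loss_pct  # inverted percentage
    return (2 * pl_score + rtt_score(rtt_ms)) / 3
```

For example, an RTT of 60 ms yields an RTT Score of 50; combined with 7% packet loss, the Network Score is (2 × 93 + 50) / 3 ≈ 78.7.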
Application Score
The Application Score is calculated using the Document Complete and Visually Complete metrics from passive Endpoint Monitoring data, the Downtime metric from both passive and active Endpoint data, Wait Time from Object tests, and the destination hop RTT and Packet Loss percentage from Traceroute tests. To calculate the Application Score, we first calculate scores for each of the metrics as follows:
- Document Complete Score - Page views where Document Complete < 3 seconds receive a score of 100. Page views where Document Complete > 20 seconds receive a score of 0. Scores scale linearly in between those thresholds.
- Visually Complete Score - Same method and thresholds as Document Complete Score
- Downtime Score - Inverted percentage, e.g. a downtime of 12% would result in a score of 100 - 12 = 88.
- Wait Time Score - Object Test runs where Wait Time < 1 second receive a score of 100. Object Test runs where Wait Time > 5 seconds receive a score of 0. Scores scale linearly in between those thresholds.
- Packet Loss and RTT scores are determined in the same way as for the Network Score.
The Application Score is calculated by averaging all of the scores above, with 2x weight given to the Downtime Score.
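A sketch of the Application Score, reusing one linear scaling helper for each thresholded metric. Function and parameter names are illustrative, as is implementing Downtime's 2x weight by counting its score one extra time in the average:

```python
def linear_score(value, best, worst):
    """Map a metric to 0-100: 100 at/below `best`, 0 at/above `worst`."""
    if value <= best:
        return 100.0
    if value >= worst:
        return 0.0
    return 100.0 * (worst - value) / (worst - best)

def application_score(doc_complete, visually_complete, downtime_pct,
                      wait_time, rtt_ms, packet_loss_pct):
    """Weighted average of the component scores, Downtime counted twice."""
    downtime_score = 100 - downtime_pct  # inverted percentage
    scores = [
        linear_score(doc_complete, 3, 20),       # thresholds from the text
        linear_score(visually_complete, 3, 20),  # same as Document Complete
        downtime_score,
        linear_score(wait_time, 1, 5),
        linear_score(rtt_ms, 20, 100),           # as in the Network Score
        100 - packet_loss_pct,
    ]
    # Downtime gets 2x weight: count its score one extra time
    return (sum(scores) + downtime_score) / (len(scores) + 1)
```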
Experience Score
The Experience Score is determined by calculating the average of the other three scores (Endpoint, Network, and Application), and is intended to provide a summary of the overall end-user experience. If no data is available for one of the other composite scores, then that score will be ignored when calculating the Experience Score.
Since the Experience Score is an average, a value in the range of 67 to 87 could still indicate a significant issue in one of the components. For example, if Endpoint and Network are both 100 and Application is 60, the Experience Score will be 87.
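A sketch of the Experience Score, treating a missing component score as None so it is ignored (the function name is illustrative):

```python
def experience_score(endpoint=None, network=None, application=None):
    """Average the available composite scores; ignore any that are None."""
    available = [s for s in (endpoint, network, application) if s is not None]
    if not available:
        return None
    return sum(available) / len(available)
```

With Endpoint = 100, Network = 100, and Application = 60, this gives 260 / 3 ≈ 87, matching the example above; if the Network Score were unavailable, the average of the remaining two would be 80.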
However, the scores often trend together so that a dip in one score correlates with another, resulting in an overall lower Experience Score. For example, when a device is over-utilized, such as 100% CPU, the Endpoint Score will be directly impacted, but the network and applications will usually appear to be slow as well. For this reason, it often makes sense to troubleshoot from left to right starting with the Endpoint Score. If the Endpoint Score is fine, it might be that the network is impacting performance. It is less common for the impact to go from right to left such as an application being slow causing network latency or machine over-utilization.