Puppeteer Test


Puppeteer scripts in Catchpoint are authored in JavaScript and structured asynchronously using a step-based approach to define browser interactions. These scripts are executed on designated nodes to simulate user behavior and measure end-user performance. During execution, all browser-initiated requests are logged, providing detailed telemetry. This data helps teams analyze customer experience and gain visibility into the performance of the underlying network infrastructure.
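
For example, a minimal step-based script might look like the following sketch. This is a generic illustration using the standard Puppeteer API; the URL and selector are placeholders, and the exact conventions for launching the browser and delimiting steps on Catchpoint nodes are covered in the scripting guide.

const puppeteer = require('puppeteer');

(async () => {
  // Launch Chrome and open a new page.
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Step 1: load the home page and wait for network activity to settle.
  await page.goto('https://www.example.com', { waitUntil: 'networkidle0' });

  // Step 2: interact with the page, e.g. follow a navigation link.
  await page.click('#nav-products');

  await browser.close();
})();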

The Catchpoint Puppeteer test supports Chrome.

Start scripting using the Puppeteer scripting guide.

Puppeteer Test Use Cases

End-User Performance Monitoring: Load a webpage and measure end-to-end performance from globally distributed Catchpoint nodes to capture real-world delays in resource loading, rendering, and network responsiveness.
Scripted Route Monitoring: Validate critical user journeys at scheduled intervals to ensure they function as expected, such as authentication flows or adding items to a cart (see the sketch after this list).
Client-Side and Dynamic Content Loading: Analyze how dynamic elements such as AJAX, lazy-loaded components, and Single Page Application (SPA) content are rendered during test execution.
Network Request Analysis: Analyze browser connection reuse to identify whether existing connections are efficiently maintained across requests, reducing overhead and improving load performance.
CDN Monitoring: Monitor CDN performance by capturing how browser requests are served from edge locations, validating content delivery and caching behavior.
Cloud Service Monitoring: Use Puppeteer to monitor how cloud-hosted applications perform from the end-user’s perspective, validating load times and responsiveness across regions.
Synthetic Monitoring for SLA Compliance: Use synthetic transactions to simulate real-user behavior and validate service-level agreements.
Custom Metrics: Capture values such as the name of the server serving the request to tailor monitoring to your app's unique behavior (also shown in the sketch after this list).
A/B Testing & Benchmarking: Compare performance across site versions and industry peers to optimize user experience and validate improvements.
Regression & CI/CD Testing: Validate that updates don’t break workflows by integrating transaction tests into CI/CD pipelines for early detection of performance regressions.
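
As an illustration of the scripted-route and custom-metrics use cases above, the sketch below runs a simple login journey while recording which server served each response. All URLs, selectors, and credentials are hypothetical placeholders; the response listener uses the standard Puppeteer API.

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Record which server served each response, e.g. from a "Server" header.
  page.on('response', (response) => {
    const server = response.headers()['server'];
    if (server) {
      console.log(`${response.url()} served by: ${server}`);
    }
  });

  // Step 1: open the login page.
  await page.goto('https://www.example.com/login', { waitUntil: 'networkidle0' });

  // Step 2: submit credentials and wait for the post-login navigation.
  await page.type('#username', 'test-user');
  await page.type('#password', 'test-password');
  await Promise.all([
    page.waitForNavigation({ waitUntil: 'networkidle0' }),
    page.click('#sign-in'),
  ]);

  // Step 3: verify the journey landed where expected.
  if (!page.url().includes('/account')) {
    throw new Error('Login journey did not reach the account page');
  }

  await browser.close();
})();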

Puppeteer Test Results

The Smartboard for a Puppeteer test highlights an increase in DNS resolution time.

Puppeteer Test Configuration

Below are the test properties, advanced settings, and metrics supported by the Puppeteer test.

Puppeteer Test Properties

Name: A name used to identify this test.
Description: Optional additional information about the test.
Monitor: The browser that Catchpoint will use to run the test. Options include:
  • Chrome
Script: The Puppeteer script that the test will execute.
Location: The Product/Folder location of this test (read-only).
Status: Determines whether this test is currently Active or Inactive.

Puppeteer Test Advanced Settings

40x or 50x Mark Successful: Allows the agent to not treat 40x and 50x response codes as failures. The Puppeteer test will continue executing the next line of code, ignoring the error.
Additional Monitor: Runs an additional Traceroute or Ping test from the same node to the destination.
Bandwidth Throttling: Artificially reduces available bandwidth to simulate connections from areas where bandwidth is limited, or to test a service that is optimized for limited bandwidth (see the sketch after this list).
Cross-Origin Iframe Do Not Allow: If enabled, enforces Chrome's cross-origin policy.
Debug Primary Host on Failure: Runs Ping, Traceroute, and DNS traversal to the primary host on test failure. By default, it runs DNS Traversal in the case of a DNS failure or when a DNS Alert Threshold is breached. It runs Ping and Traceroute in the case of a Connect, Wait, Load, or Response failure, or when the respective alert thresholds are breached. Ping and Traceroute also run when a Webpage Response or Webpage Response with Suspect Alert Threshold is breached and the main URL response was 20% or more of the Webpage Response.
Debug Referenced Host on Failure: Runs Ping, Traceroute, and DNS traversal to the referenced hosts on test failure.
Enable MTU Path Discovery: Enables collection of Traceroute Path MTU data.
Enforce Failure if test runs longer than: Causes the test to fail if it runs longer than the specified time. For Puppeteer, the default threshold is 30 seconds per step. Allowed on Backbone and Enterprise nodes only.
Filmstrip Capture: Captures a series of images of the page to show how it is displayed to the end user. An image is automatically captured each time there is a visual change in the viewport throughout the page-loading process.
Host Data: Captures metrics on a per-host basis.
HTTP Headers Capture: If enabled, collects HTTP request/response header information. You can specify when the capture should happen: On Test Failure - when the test fails. On Any Failure - if the test or any request fails or there are JavaScript errors. Always - every time the test runs. The data is available in the waterfall charts. This feature is not required for Insights; the two are independent of each other. It does not cost additional points.
Response Content or Metadata Capture: Collects HTTP response content and metadata.
Screenshot Capture: Captures a screenshot of the page.
SSL Errors Ignored: Forces the agent to ignore SSL errors. This can be useful when the SSL certificate on the host does not match the domain. By default, any SSL failure will cause the test to stop and fail.
Test Size Override Enabled: When enabled, test runs/steps will be allowed to exceed the default limit up to an absolute maximum of 30 MB for backbone and last-mile nodes, or 15 MB for wireless. This will impact point usage.
Third Party Zone: Classifies any data not matching the Self Zone as Third Party.
Tracing: Collects server-side telemetry for end-to-end visibility.
Verify Test on Failure: Runs the test again from the same node in the event of a failure. It will not run again when the test receives an HTTP 4xx or 5xx status code, or when the failure is Test Failure: [50061] - HTTP response header (root-request) did not satisfy the alert settings. If a second run does verify a failure, additional points are consumed, matching the type of test and any other advanced settings that are enabled. The first failed run will appear on the waterfall with the note "This was the first test for the interval, which failed and was repeated." but will not be included in performance charts, reports, or alerts. The second run, which verified the failure, will always be included in performance charts, reports, and alerts.
Viewport: Allows you to specify dimensions for the browser viewport. Useful for testing mobile sites or targeting specific resolutions. It affects the dimensions of the screenshot captured by the agent (see the sketch after this list).
Zone Data: Captures metrics for defined Zones (hostnames, paths, URLs, or IPs).
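
Viewport and Bandwidth Throttling are configured as platform settings rather than in the script itself; for context, the sketch below shows conceptually similar controls in the standard Puppeteer API. The dimensions, URL, and network profile are illustrative, and PredefinedNetworkConditions may be named differently in older Puppeteer versions.

const puppeteer = require('puppeteer');
const { PredefinedNetworkConditions } = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();

  // Comparable to the Viewport setting: emulate a phone-sized screen.
  await page.setViewport({ width: 390, height: 844 });

  // Comparable to Bandwidth Throttling: apply one of Puppeteer's
  // predefined network condition profiles.
  await page.emulateNetworkConditions(PredefinedNetworkConditions['Slow 3G']);

  await page.goto('https://www.example.com', { waitUntil: 'networkidle0' });
  await browser.close();
})();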

Supported Metrics

# Connection Failures: The number of times the system was unable to establish a TCP connection to the primary URL server.
# Connections: The total number of TCP connections established during the test.
# Content Load Errors: The total number of elements on the webpage that the test was unable to load or that generated errors during loading.
# CSS: Total number of CSS files downloaded during the test.
# DNS Failures: The number of times the system was unable to resolve the domain from the primary URL to an IP address.
# Flash: Total number of Flash files downloaded during the test.
# Font: Total number of font files downloaded during the test.
# Hosts: The total number of unique external hosts referenced by elements on the page.
# Html: Total number of HTML files downloaded during the test.
# Image: Total number of image files downloaded during the test.
# Items (Total): The total number of files included on the webpage. For Object monitor tests, this value is always one.
# JS Errors per Page: The average number of JavaScript errors on each webpage.
# Media: Total number of media files downloaded during the test.
# Other: Total count of all files not defined otherwise.
# Purged Runs: The number of test runs manually excluded from calculation for purposes of SLA accuracy.
# Redirect: Total number of redirects on the webpage.
# Response Failures: The number of times no response was received from the server for the primary URL.
# Runs: Total number of test runs for the defined time period.
# Script: Total number of JS files downloaded during the test.
# SSL Failures: The number of times a secure connection to the server for the primary URL could not be established.
# Test Errors: The total number of test runs that failed. This is the sum of all of the following types of test failures:
  • # DNS Failures
  • # Connection Failures
  • # SSL Failures
  • # Response Failures
  • # Timeout Failures
  • # Test Limit Errors
# Test Failures: The total number of elements that Catchpoint was unable to connect to, receive a response from, or load on the page.
# Test Limit Errors: The number of test runs that surpassed one of the following system boundaries:
  • Test takes longer than 30 seconds.
  • Test contains a URL that redirects more than 5 times in a row.
  • Test references more than 1,000 URLs.
  • Test references more than 255 different hosts.
# Tests with JS Errors: The number of individual test runs that resulted in at least one JavaScript error.
# Timeout Failures: The number of times a test failed because a server process did not complete and returned a timeout.
# XML: Total number of XML files downloaded during the test.
# Zones: The number of defined Zones containing hosts that were accessed during the test.
% Adjusted Availability: Ignoring any purged runs, the percentage of test runs where the primary URL server was reached and the test was completed (i.e., there was not a Test Error).
% Availability: The percentage of test runs where the primary URL server was reached and the test was completed (i.e., there was not a Test Error). Availability is calculated as:
(# Test Runs - # Test Errors) / # Test Runs
% Content Availability: The percentage of time that all the elements on the webpage were available. Content Availability is calculated as the number of times the test ran successfully with no loading errors, divided by the total number of times the test ran. If at least one object failed to load during a run, that run is regarded as having content that failed to load properly.
% Downtime: The percentage of test runs where the primary URL server was unavailable, unreachable, or otherwise failed (i.e., there was a Test Error). Downtime is calculated as:
# Test Errors / # Test Runs
% Frustrated: The percentage of test runs that exceeded the Apdex "Frustrated" threshold.
% Not Frustrated: The percentage of test runs that completed in less time than the Apdex "Frustrated" threshold. This is equivalent to: % Satisfied + % Tolerating.
% Ping Packet Loss: The percentage of ping packets sent that did not receive a response. Calculated as:
((# packets sent - # packets received) / # packets sent) * 100
% Satisfied: The percentage of test runs that completed in less time than the Apdex "Satisfied" threshold.
% Self Bottleneck: The percentage of Document Complete time for which hosts in the "self" zone were a bottleneck.
% Third Party Bottleneck: The percentage of Document Complete time for which hosts in the "third party" zone were a bottleneck.
% Tolerating: The percentage of test runs that exceeded the Apdex "Satisfied" threshold but completed in less time than the "Frustrated" threshold.
Apdex: A scoring mechanism that translates performance metrics of diverse applications into generic "User Satisfaction" levels using predefined response time thresholds. You can use the default Apdex thresholds or configure your own. For more details about Apdex, visit http://www.apdex.org/
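For illustration, using the standard Apdex formula with hypothetical thresholds: with a "Satisfied" threshold of 2 seconds and a "Frustrated" threshold of 8 seconds, a 1.5-second run counts as Satisfied, a 5-second run as Tolerating, and a 10-second run as Frustrated. The score is:
Apdex = (# Satisfied + # Tolerating / 2) / # Runs
So 70 Satisfied, 20 Tolerating, and 10 Frustrated runs out of 100 yield (70 + 20 / 2) / 100 = 0.80.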
Client Time (ms): The total amount of time during which no request was on the wire, from the start of the test or step to Document Complete.
Connect (ms): The time it took to establish a TCP connection with the server.
Content Load (ms): The time it took to load the entire content of the webpage after the connection was established with the primary URL server. This is the time from the end of Send (ms) until the final element, or object, on the page was loaded.
CSS (ms): Total time spent loading CSS files, in milliseconds.
CSS Bytes: Total size of CSS files downloaded during the test, in bytes.
Cumulative Layout Shift: Measures the unexpected shifting of webpage elements while the page is still loading. CLS looks at the proportion of the viewport that was impacted by layout shifts and the distance that the affected elements moved.
DNS (ms): The time it took to resolve the domain name to an IP address.
Document Complete (ms): The time from the initial URL request being issued until the browser triggered the "onload" event. Any inline requests or requests inserted via "document.write" must complete loading before the event is fired. Document Complete does not account for dynamic requests that may be generated later via JavaScript and/or DOM manipulation.
DOM Load (ms): The time it took to load the Document Object Model (DOM) for the webpage.
Downloaded Bytes: The total number of bytes downloaded from the primary URL of the test(s).
Experience Score: A composite metric that captures the overall experience of a user on a scale of 0-100.
File Size: The size of the content received from the host for a specific element, in bytes.
First Contentful Paint: The time when the browser rendered the first bit of content from the DOM. (This may be text, an image, an SVG, or even a <canvas> element.)
First Paint: The time when the browser first rendered anything visually different from what was on the screen prior to navigation.
Flash (ms): Total time spent loading Flash files, in milliseconds.
Flash Bytes: Total size of Flash files downloaded during the test, in bytes.
Font (ms): Total time spent loading font files, in milliseconds.
Font Bytes: Total size of font files downloaded during the test, in bytes.
Frames Per Second: Measures the performance of animations.
Html (ms): Total time spent loading HTML files, in milliseconds.
Html Bytes: Total size of HTML files downloaded during the test, in bytes.
Image (ms): Total time spent loading image files, in milliseconds.
Image Bytes: Total size of image files downloaded during the test, in bytes.
Largest Contentful Paint (ms): The time when the largest image or text block (by screen area) visible within the viewport was rendered.
Load (ms): The time from the first packet to the last packet of data for the response.
Media (ms): Total time spent loading media files, in milliseconds.
Media Bytes: Total size of media files downloaded during the test, in bytes.
Other (ms): Total time spent loading all files not defined otherwise.
Other Bytes: Total size of all files not defined otherwise.
Ping Round Trip (ms): Average time between sending a ping packet and receiving a response.
Redirect (ms): Time from the start of navigation to the end of the last redirect.
Render Start (ms): The time from initial navigation until the first visual content is painted to the browser display.
Response (ms): The total time from the initial request until receiving the last packet of response data. It is the sum of DNS + Connect + SSL + Send + Wait + Load for all elements.
Round Trip Delay (ms): The total amount of time that the NTP request packet and response packet spent traveling between the node and the NTP server.
Script (ms): Total time spent loading JS files, in milliseconds.
Self Downloaded Bytes: Total file size in bytes (including headers) downloaded from hosts in the "self" zone.
Send (ms): The time it took to send the request to the server.
Server Response (ms): The time from when DNS was resolved to receiving the last response packet from the server. (This shows the server response exclusive of DNS time.)
Signal Quality: Measures the quality of the WLAN connection in terms of data transfer speed. It indicates what percentage of the available network you are using to move data (upload/download). 99% is as good as it gets in terms of signal quality.
Signal Strength (dBm): The power the client's device is receiving from the Access Point/Wi-Fi router. A value of -30 dBm indicates excellent signal strength, while -70 dBm indicates very poor signal strength.
Speed Index: A calculated metric that represents how quickly the page rendered the initial user-visible content above the fold. A lower Speed Index indicates faster rendering of visible content.
SSL (ms): The time it took to complete the SSL handshake with the server.
Test Time (ms): One cohesive metric that applies to all test types and indicates the total duration of the test run. Test Time is equivalent to Response, Test Response (Transaction and web tests), and ping RTT (Traceroute tests), and is used when calculating Apdex. Test Time is not available for Request, Host, or Zone charting.
Throughput: Measures how efficiently the system was able to retrieve all elements, in kilobytes per second. Throughput is calculated as:
Throughput = Size / Time, where:
Size = (File Size + Header Size) / 1024 (converts bytes to KB)
Time = (Wait + Load) / 1000 (converts ms to seconds)
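For illustration, with hypothetical values: a 512,000-byte file plus 2,400 bytes of headers gives Size = 514,400 / 1024 ≈ 502 KB, and a Wait of 300 ms plus a Load of 1,700 ms gives Time = 2,000 / 1000 = 2 seconds, so Throughput ≈ 502 / 2 ≈ 251 KB/s.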
Time To First Byte (ms): The total time from the initial DNS request to receiving the first response packet from the server. This is calculated as:
DNS + Connect + SSL + Send + Wait
Time To Interactive: The time when the page first became interactive. TTI marks the point at which the webpage is both visually rendered and capable of reliably responding to user input.
Time to Title (ms): The time from initial navigation until the browser displayed the title of the page.
Total Downloaded Bytes: The total number of downloaded bytes for all elements of the webpage, including from the primary URL server and any redirects.
Visually Complete (ms): The time when the visual area of the page has finished loading, meaning that all visible elements of the web page are 100% loaded.
Wait (ms): The time from when the request was sent to the server until the first response packet was received. (Known as "First Byte" in some tools.)
Webpage Throughput: Measures how efficiently the system downloaded the content of the entire webpage. Webpage Throughput is calculated as:
(File Size + Header Size) / Webpage Response (ms)
Wire Time (ms): The total amount of time during which at least one request was on the wire, from the start of the test or step to Document Complete.
XML (ms): Total time spent loading XML files, in milliseconds.
XML Bytes: Total size of XML files downloaded during the test, in bytes.