Benchmarks


Catchpoint's Benchmarks feature helps you measure the performance of the most popular Content Delivery Network (CDN) products globally. It does this by running comprehensive CDN tests from a vast population of devices located around the world, providing invaluable insights to optimize your digital delivery strategy. The results of these tests are provided as an anonymized public dataset so you can understand how your CDN’s performance stacks up to the competition and ensure your CDN investments deliver ROI. Benchmarks can also be run for your own website utilizing Catchpoint Real User Monitoring (RUM) or with a dedicated JavaScript tag. This lets you track CDN performance specifically for your own traffic.

Accessing Public Benchmarks

To access the public anonymized benchmark data comprising results of CDN benchmark tests across the full collection of Catchpoint-Connected Devices, please submit this form: https://forms.office.com/r/9JmNpTt01b

Running Public Benchmarks on Your Website Using RUM Apps

Within a Site App, you can enable the Benchmark option, which instructs the RUM tracking code to run benchmark tests. This runs the public CDN benchmarks: tests targeting like-for-like assets on the most popular CDNs with identical configurations. These tests are managed by Catchpoint and provide a fair comparison across major CDN providers, using assets you don't need to manage within the CDN. Benchmarks run on RUM sites with tracking code version 4.0.6 or greater.

Private Benchmarks


Private benchmarks allow you to run custom tests from real user devices, comparing performance across CDNs and origin servers. Your private benchmark results are excluded from the public dataset.

Benefits:

  • Evaluate delivery infrastructure from real-world user perspectives.
  • Identify regional performance differences.
  • Determine the best-performing CDN or origin strategy.

Running Private Benchmarks

From Control Center > RUM, you can click the New + button and select Benchmark Test. This lets you define your own tests against assets you manage, which can live on your CDN or origin servers. Each test accepts two request URLs, mirroring the way public benchmark tests are configured: we recommend setting a 1-packet file as the first request to warm the cache, and a larger file (e.g., 100 KB) as the second. You can also assign weights to give some tests more priority in specific geographies, and you can exclude specific geographies as well:
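As a rough illustration of the pattern described above, a private benchmark test definition might be modeled like this. All field names here are hypothetical; actual tests are configured through the Control Center UI, not in code:

```javascript
// Hypothetical model of a private benchmark test definition.
// Field names are illustrative only; real tests are created in
// Control Center > RUM > New + > Benchmark Test.
const privateBenchmarkTest = {
  name: "CDN A vs origin",
  requests: [
    // First request: a tiny (1-packet) file to warm the cache
    // and measure latency.
    { url: "https://cdn-a.example.com/bench/1p.gif", role: "warm-up" },
    // Second request: a larger file (e.g., ~100 KB) to measure throughput.
    { url: "https://cdn-a.example.com/bench/100k.jpg", role: "throughput" },
  ],
  // Optional geographic weighting: a higher weight means the test runs
  // more often for users in that country; excluded countries never run it.
  geography: {
    weights: { US: 3, DE: 1 },
    exclude: ["CN"],
  },
};
```

The two-request shape mirrors the public benchmarks: the warm-up request isolates latency, while the second, larger request measures cached throughput.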

Running Benchmarks on Your Website Using a Dedicated JavaScript Tag

From Control Center > RUM, you can click the New + button and select Benchmark App. After filling in the required fields, a script will be provided for you to deploy on your website.

Analyzing Benchmark Data

From Explorer, you can view both the benchmark tests for your RUM sites and the public benchmark data. From the source selector, choose Benchmarks, then select the tests you wish to report data for.

Benchmark Data Stream

Data can be streamed in real time to your endpoint upon request; please reach out to customer support to request access. To reduce payload size, the stream reports values as IDs. The mappings between IDs and values are provided here:

benchmark_mapping.zip
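Consumers of the stream typically join the IDs back to human-readable names using those mapping tables. A minimal sketch of that step follows; every mapping value shown here is hypothetical, as the real tables ship in benchmark_mapping.zip:

```javascript
// Hypothetical mapping table -- the real ones are in benchmark_mapping.zip.
const cdnNames = { 1: "ExampleCDN-A", 2: "ExampleCDN-B" };

// Replace ID-coded fields in a streamed record with readable names,
// falling back to the raw ID when a mapping entry is missing.
function decodeRecord(record, mappings) {
  return {
    ...record,
    cdn: mappings.cdn[record.cdn] ?? record.cdn,
  };
}

const decoded = decodeRecord({ cdn: 1, latencyMs: 42 }, { cdn: cdnNames });
// decoded.cdn is now "ExampleCDN-A"
```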

How Benchmarks Work

Website-Side Behavior


  • Benchmark endpoints load within an iframe and initiate requests after the page’s onload event.
  • Requests are delayed by approximately 1 second post-onload to avoid impacting user experience.
  • Requests are made sequentially, not in parallel.
  • Currently, we test across 7 CDNs:
    • Amazon
    • Azure
    • CDN1 (Akamai)
    • Fastly
    • Cloudflare
    • Google
    • CDN Networks
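The sequencing described above (wait for onload, delay roughly one second, then issue requests one at a time) can be sketched as follows. This is an illustrative reimplementation, not Catchpoint's actual tag; the fetch function is injectable so the logic can be exercised outside a browser:

```javascript
// Illustrative sketch of the website-side benchmark flow.
const POST_ONLOAD_DELAY_MS = 1000; // ~1 second after onload

// Run each test URL sequentially (never in parallel) and time it.
async function runBenchmarks(urls, fetchFn) {
  const results = [];
  for (const url of urls) {
    const start = Date.now();
    await fetchFn(url); // wait for this request to finish
    results.push({ url, ms: Date.now() - start }); // before starting the next
  }
  return results;
}

// In a browser, the runner would be wired up roughly like this:
// window.addEventListener("load", () => {
//   setTimeout(() => runBenchmarks(testUrls, fetch), POST_ONLOAD_DELAY_MS);
// });
```

Running requests sequentially keeps the tests from competing with each other for bandwidth, so each measurement reflects a single transfer.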


Sample Rate per Country

The benchmark client-side script uses local storage (or, as a fallback, a cookie named _CP_CoI) to store country weights. These weights drive sampling at the country level, so benchmark tests run at a specific sample rate in each country.
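The storage fallback and per-country sampling decision can be modeled as below. This is an illustrative sketch, not the actual Catchpoint script; the localStorage key name is invented for the example, and only the _CP_CoI cookie name comes from the documentation:

```javascript
// Illustrative sketch of country-level sampling -- not Catchpoint's code.
function readWeights(storage, cookieString) {
  // Prefer localStorage when available ("cp_country_weights" is a
  // hypothetical key name)...
  const raw = storage && storage.getItem("cp_country_weights");
  if (raw) return JSON.parse(raw);
  // ...otherwise fall back to the _CP_CoI cookie.
  const match = /(?:^|;\s*)_CP_CoI=([^;]*)/.exec(cookieString || "");
  return match ? JSON.parse(decodeURIComponent(match[1])) : {};
}

// Decide whether this page view should run benchmarks, given a
// per-country sample rate between 0 (never run) and 1 (always run).
function shouldRunBenchmarks(weights, country, random = Math.random) {
  const rate = typeof weights[country] === "number" ? weights[country] : 0;
  return random() < rate;
}
```

Parameterizing the random source makes the sampling decision deterministic under test while behaving probabilistically in production.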


CDN-Side Behavior

Benchmark tests use specific URLs managed by Catchpoint to evaluate CDN performance.

1 Packet Object URLs

These are lightweight requests used to measure latency:

https://lpulse.perflib.com/ipulse/ad/cp.com/creative/1p.gif
https://kpulse.perflib.com/ipulse/ad/cp.com/creative/1p.gif
https://jpulse.perflib.com/ipulse/ad/cp.com/creative/1p.gif
https://apulse.perflib.com/ipulse/ad/cp.com/creative/1p.gif
https://xpulse.perflib.com/ipulse/ad/cp.com/creative/1p.gif
https://cpulse.perflib.com/ipulse/ad/cp.com/creative/1p.gif
https://ctchpnt.akamaized.net/ipulse/ad/cp.com/creative/1p.gif

100k Packet Object URLs

Used to measure throughput and larger payload handling:

https://lpulse.perflib.com/ipulse/ad/cp.com/creative/100.jpg
https://kpulse.perflib.com/ipulse/ad/cp.com/creative/100.jpg
https://jpulse.perflib.com/ipulse/ad/cp.com/creative/100.jpg
https://apulse.perflib.com/ipulse/ad/cp.com/creative/100.jpg
https://xpulse.perflib.com/ipulse/ad/cp.com/creative/100.jpg
https://cpulse.perflib.com/ipulse/ad/cp.com/creative/100.jpg
https://ctchpnt.akamaized.net/ipulse/ad/cp.com/creative/100.jpg
All of these URLs are configured to point to the same origin server, and every CDN is set up with the same goals, described below.

CDN Configuration Goals

  • Maximize caching via HTTP headers and CDN-specific settings.
  • Serve from cache without origin validation where possible.
  • Enable HTTP headers to support JavaScript-based data collection.
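As an illustration of these goals, a benchmark asset's response headers would look something like the following. The exact values are assumptions for the example, not Catchpoint's actual configuration:

```http
HTTP/1.1 200 OK
Content-Type: image/gif
Cache-Control: public, max-age=31536000, immutable
Access-Control-Allow-Origin: *
Timing-Allow-Origin: *
```

Cache-Control maximizes caching at the edge and in the browser, while Access-Control-Allow-Origin and Timing-Allow-Origin let the measurement JavaScript read timing details for cross-origin responses; without Timing-Allow-Origin, the Resource Timing API redacts detailed timings.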

Note

Despite these configurations, there's a ~0.2% to 0.4% chance that a CDN edge server may not have the asset cached and will call the origin. To maintain benchmark integrity, such cases are excluded from results.