MCP Server


The Catchpoint MCP Server redefines how teams analyze and act on Catchpoint data. It enables AI assistants to search, filter, and visualize information in one seamless flow, turning fragmented data into clear, actionable insight.

Built to integrate directly with existing LLM-based workflows, the MCP Server becomes the connective layer between Catchpoint data and intelligent automation powered by large language models. It enables AI-driven assistants, copilots, and chat interfaces to reason over live monitoring data, trigger follow-up actions, and surface insights naturally, bringing observability and AI operations into the same ecosystem.

Deploying the MCP Server

You can deploy the MCP server locally using the package below, which includes a README with installation instructions.

catchpoint-mcp-server.zip

To authenticate, use a Catchpoint REST API key from the Integrations page:

https://portal.catchpoint.com/ui/Symphony/Integrations/Api/RestApi

The key should be used in an Authorization HTTP header, like this:

Authorization: Bearer <key> 
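As a minimal sketch, here is how the header might be attached to a request using Python's standard library. The endpoint path shown is a placeholder for illustration, not a documented Catchpoint API route:

```python
import urllib.request

API_KEY = "your-rest-api-key"  # from the Integrations page above

# Attach the key as a Bearer token. The URL below is a placeholder,
# not a documented Catchpoint endpoint.
req = urllib.request.Request(
    "https://example.com/api/placeholder",
    headers={"Authorization": f"Bearer {API_KEY}"},
)
print(req.get_header("Authorization"))  # -> Bearer your-rest-api-key
```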

MCP Usage

Once connected, you can start interacting with the MCP server with queries like “find a list of tests containing the word ‘CDN’” or “chart a scatterplot breaking down errors for test ID 1234”.

The available tools include:

json_query

Evaluates a JSONata expression against a JSON object or array.

  • The "data" field should be a JavaScript object or array, not a JSON string.
  • The "expr" field should be a valid JSONata expression.
  • Returns the evaluated subset or value from the data.
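As an illustration of what the tool computes, here is the shape of a json_query call together with a plain-Python equivalent of the expression (the server evaluates the JSONata itself; the data and field names below are illustrative):

```python
import json

# Illustrative arguments for a "json_query" call: "data" is an object,
# not a JSON string, and "expr" is a JSONata expression.
call = {
    "name": "json_query",
    "arguments": {
        "data": {"tests": [
            {"name": "Homepage", "type": {"name": "Web"}},
            {"name": "API ping", "type": {"name": "API"}},
        ]},
        "expr": "$count(tests[type.name='Web'])",
    },
}

# Plain-Python equivalent of the JSONata expression above:
data = call["arguments"]["data"]
count = sum(1 for t in data["tests"] if t["type"]["name"] == "Web")
print(count)  # -> 1
```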

Query Strategy and Error Recovery

1. First attempt: try your JSONata expression.

2. Auto-retry on error (2-3 attempts):

  • Analyze the error message carefully.
  • Correct syntax based on the error (quotes, brackets, function names).
  • Simplify the expression if needed (remove sorting, use basic operations).
  • Do NOT ask the user for permission to retry.

3. Fall back to manual processing. If still failing after 2-3 attempts:

  • Stop using the "json_query" tool.
  • Process the original JSON response directly yourself.
  • Count, filter, or analyze the data manually.
  • Answer the user's question from the raw data.

4. Only escalate if necessary: if you cannot answer even with manual processing, explain the issue to the user.
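The retry-then-fallback strategy above can be sketched as follows. The `evaluate` function here is a hypothetical stand-in for the json_query tool, not part of the server's API:

```python
# Sketch of the recovery strategy: try up to 3 expression variants,
# then fall back to processing the raw data directly.
def query_with_recovery(evaluate, exprs, data, fallback):
    for expr in exprs[:3]:          # auto-retry without asking the user
        try:
            return evaluate(expr, data)
        except ValueError:
            continue                # correct the expression and retry
    return fallback(data)           # manual processing of the raw JSON

# Stub evaluator standing in for the json_query tool:
def evaluate(expr, data):
    if expr != "$count(tests)":
        raise ValueError("bad expression")
    return len(data["tests"])

result = query_with_recovery(
    evaluate,
    ["count(tests", "$count(tests)"],   # first attempt fails, retry works
    {"tests": [1, 2, 3]},
    fallback=lambda d: len(d["tests"]),
)
print(result)  # -> 3
```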

orchestrate_tools

Executes multiple MCP tools in a single coordinated chain. Default to orchestrator for multi-step operations, even for discovery tasks where you know the end goal.

Each step can specify:

  • a tool name,
  • its own arguments (optional),
  • a merge strategy ("replace" or "shallow-merge"),
  • and a required JSONata transform applied to that step’s output.

After each tool runs, its transform expression processes the raw output:

  • if it is not the last step, the result becomes the next tool’s input;
  • if it is the last step, the result becomes the final orchestrator output.

This enables complex multi-tool workflows (tool1 → transform1 → tool2 → transform2 → …) to execute entirely server-side, eliminating large intermediate payloads and minimizing token use.

Use the orchestrator when:

  • You need data from one tool to call another tool.
  • The user's request requires multiple related API calls.
  • You can identify the target metric/data even if you haven't seen the intermediate results.
  • You want to minimize latency and token usage.

If intermediate inspection or branching is required, invoke tools step-by-step instead.

⚠️ Critical: When chaining tools, each step's transform MUST produce output matching the EXACT parameter structure expected by the next tool, and the transformed output's properties must be in the correct format (e.g., times, enums) according to the next step's input schema.

Step-by-step process:

1. Check the next tool's required parameters: look at the next step's tool schema to see what parameters it expects.
2. Design your transform to create that exact structure: your transform expression must output an object with keys matching those parameter names.
3. Use the correct merge strategy:

  • "shallow-merge": when you want to combine the transform output WITH explicit args.
  • "replace": when the transform output should be the ONLY args (rare).
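A two-step chain might look like the payload below. The argument values are placeholders, and the transform's output key "testIds" is an assumption about the next tool's parameter name, shown only to illustrate the matching rule:

```python
import json

# Illustrative orchestrator call: search tests, then feed the matching
# IDs to the errors tool. "testIds" is a hypothetical parameter name.
chain = {
    "steps": [
        {
            "name": "test_search-tests",
            "args": {"name": "CDN"},
            "merge": "replace",
            "transform": '{"testIds": tests.id}',
        },
        {
            "name": "test_test-errors",
            "args": {"timeInterval": "1d"},
            "merge": "shallow-merge",
            "transform": "$",   # last step: pass the result through unchanged
        },
    ]
}
print(json.dumps(chain, indent=2))
```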

test_search-tests

Searches Catchpoint tests by name, type, and/or URL. Returns tests that match all provided filters (AND).

Each filter is optional.

⚠️ NOTE: If you only need to call this tool and then apply a JSON transformation (for example, filtering, aggregating, or reshaping its output), use the "orchestrate_tools" tool.

⚠️ CRITICAL RULE: The transform output must create an object with keys matching the NEXT tool's expected parameter names.

You can invoke the orchestrator with a single step: define this tool as the step's "name" and provide your JSONata expression in the step's "transform". Example:

{ "steps": [ { "name": "test_search-tests", "args": { ... }, "transform": "your JSONata expression" } ] }

This ensures consistent handling of all JSON transformations through the orchestrator, simplifies tool behavior, and aligns single-tool transformations with multi-tool workflows.

Examples:

  • Count all tests: "$count(tests)"
  • List distinct test types: "$distinct(tests.type.name)"
  • Group and count by type: "tests{type.name: $count(name)}"
  • Filter by name containing "ping": "tests[name ~> /ping/i]"
  • Count filtered results: "$count(tests[type.name='Web'])"
  • Get first 5 tests: "tests[0..4]"
  • Sort by name: "tests^(name)"
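To make the grouping expression "tests{type.name: $count(name)}" concrete, here is a plain-Python equivalent run over an illustrative search result (field names follow the examples above):

```python
# Plain-Python equivalent of grouping tests by type and counting each group.
tests = [
    {"name": "Homepage", "type": {"name": "Web"}},
    {"name": "CDN edge", "type": {"name": "Web"}},
    {"name": "DNS check", "type": {"name": "DNS"}},
]

by_type = {}
for t in tests:
    key = t["type"]["name"]
    by_type[key] = by_type.get(key, 0) + 1
print(by_type)  # -> {'Web': 2, 'DNS': 1}
```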

test_test-errors

Enables AI assistants to search and analyze errors across Catchpoint tests.

It provides:

  • Aggregated error counts across all relevant tests.
  • Breakdown by error type, including DNS, SSL, timeout, connection, and content failures.
  • Time-bucketed error distribution, allowing trend analysis over custom intervals.

Ideal for identifying recurring issues, spotting degradation patterns, and supporting root cause investigations across services and test environments.

⚠️ NOTE: If you only need to call this tool and then apply a JSON transformation (for example, filtering, aggregating, or reshaping its output), use the "orchestrate_tools" tool.

⚠️ CRITICAL RULE: The transform output must create an object with keys matching the NEXT tool's expected parameter names.

You can invoke the orchestrator with a single step: define this tool as the step's "name" and provide your JSONata expression in the step's "transform". Example:

{ "steps": [ { "name": "test_test-errors", "args": { ... }, "transform": "your JSONata expression" } ] }

This ensures consistent handling of all JSON transformations through the orchestrator, simplifies tool behavior, and aligns single-tool transformations with multi-tool workflows.

Examples:

  • Count all tests: "$count(testErrors)"
  • Select tests with 'Script Failure' error type: "testErrors[errorTypes[type = 'Script Failure']]"
  • Summarize test name and total error count: "testErrors.{ 'name': testName, 'errors': totalErrorCount }"
  • Calculate total error count across all tests: "$sum(testErrors.totalErrorCount)"
  • Calculate total error count for each error type: "( $e := testErrors[].errorTypes; $merge( $distinct($e.type).( $t := $; { ($t): $sum($e[type = $t].errorCount) } ) ) )"
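As an illustration of the per-error-type aggregation, here is a plain-Python equivalent run over a small, made-up testErrors payload (field names follow the examples above):

```python
# Sum error counts per error type across all tests, mirroring the
# JSONata aggregation example. The payload below is illustrative.
test_errors = [
    {"testName": "Homepage", "totalErrorCount": 3,
     "errorTypes": [{"type": "DNS", "errorCount": 1},
                    {"type": "Timeout", "errorCount": 2}]},
    {"testName": "CDN edge", "totalErrorCount": 2,
     "errorTypes": [{"type": "DNS", "errorCount": 2}]},
]

totals = {}
for t in test_errors:
    for e in t["errorTypes"]:
        totals[e["type"]] = totals.get(e["type"], 0) + e["errorCount"]
print(totals)  # -> {'DNS': 3, 'Timeout': 2}
```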

test_test-performance

To visualize the performance data, pass "visualizeTimeSeries" as "true"; the performance chart is then returned as SVG content in the "chartContent" field of the response.

IMPORTANT-1: When you receive SVG content in the "chartContent" field, you MUST display it as a rendered artifact, NOT as text.

IMPORTANT-2: The chart can display a maximum of 9 tests. If more than 9 test IDs are provided, only the first 9 tests will be visualized, and the "chartContent" field will contain the SVG for those first 9 tests only.

To display the SVG properly:

1. Create an artifact with type "image/svg+xml".
2. Use the SVG content from the "chartContent" field directly as the artifact content.
3. Never display the SVG as plain text in your response.

SVG color issue fix: Before rendering the SVG, replace all instances of fill="currentColor" with fill="#000" and stroke="currentColor" with stroke="#000" to ensure proper visibility of axis labels and tick marks.

⚠️ NOTE: If you only need to call this tool and then apply a JSON transformation (for example, filtering, aggregating, or reshaping its output), use the "orchestrate_tools" tool.

⚠️ CRITICAL RULE: The transform output must create an object with keys matching the NEXT tool's expected parameter names.

You can invoke the orchestrator with a single step: define this tool as the step's "name" and provide your JSONata expression in the step's "transform". Example:

{ "steps": [ { "name": "test_test-performance", "args": { ... }, "transform": "your JSONata expression" } ] }

This ensures consistent handling of all JSON transformations through the orchestrator, simplifies tool behavior, and aligns single-tool transformations with multi-tool workflows.

Examples:

  • Count all tests: "$count(tests)"
  • Filter tests with metric value > 1000: "tests[metricValue > 1000]"
  • Get names of slow tests: "tests[metricValue > 1000].testName"
  • Summarize test name and metric: "tests.{ 'name': testName, 'avgMetric': metricValue }"
  • Filter by test name: "tests[testName ~> /Login/i]"
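The color fix described above is a pair of literal string replacements, sketched here in Python with a tiny illustrative SVG fragment:

```python
# Replace currentColor fills/strokes with black before rendering,
# so axis labels and tick marks stay visible.
def fix_svg_colors(svg: str) -> str:
    return (svg
            .replace('fill="currentColor"', 'fill="#000"')
            .replace('stroke="currentColor"', 'stroke="#000"'))

chart = '<text fill="currentColor" stroke="currentColor">0ms</text>'
print(fix_svg_colors(chart))
# -> <text fill="#000" stroke="#000">0ms</text>
```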

search_outages

Searches outages (detected by Catchpoint) by Internet service name, city, country, region and label in the given time frame.

Returns outages that match all provided filters (AND).

  • Use "serviceName" to filter outages that occurred for the specified Internet service (case-insensitive)

  • Use "city" to filter outages by city (case-insensitive)

  • Use "country" to filter outages by country (case-insensitive)

  • Use "region" to filter outages by region (case-insensitive)

  • Use "ongoing" to filter only ongoing or already completed outages

  • Use "labelName" and "labelValue" to filter outages belonging to a key-value metadata (e.g., labelName="Customer", labelValue="<CUSTOMER_NAME>").

    ⚠️ NOTE: Labels are not added automatically to the tests used to detect outages; they must be attached manually.

    So if the Catchpoint client didn't add any custom label metadata to the tests, label-based search here will not work.

  • Use "timeInterval" to search outages within a relative time range from now (e.g., "1d" for last 24 hours, "1w" for last week)

  • Alternatively, use "startTime" and "endTime" (in UTC ISO 8601 format, e.g., 2024-01-21T00:00:00Z) to filter outages that occurred in the specified time frame.

  • If "timeInterval" is provided, "startTime" and "endTime" are ignored.

  • IMPORTANT: AI assistants without access to current time should use "timeInterval" instead of "startTime"/"endTime".

All filters are optional.
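Putting a few of the filters above together, a search_outages call might carry arguments like these (all values are placeholders; only the parameter names come from the description above):

```python
import json

# Illustrative search_outages arguments: ongoing outages for a service
# in one country over the last 24 hours.
args = {
    "serviceName": "ExampleCDN",   # placeholder service name
    "country": "Germany",
    "ongoing": True,
    "timeInterval": "1d",          # preferred over startTime/endTime
}
print(json.dumps(args, indent=2))
```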