MCP Integration

Debug webhooks from your AI assistant

Hooklistener's MCP server gives Claude Code, Codex CLI, Cursor, and Windsurf direct access to your webhook infrastructure. Create endpoints, automate actions, inspect payloads, monitor uptime, and store data — without leaving the terminal.

AI Assistant
> Create a debug endpoint called 'Stripe Webhooks'
I'll create that endpoint for you.
create_endpoint({ name: "Stripe Webhooks" })
→ https://hook.hooklistener.com/e/abc123
> Show me the last webhook that came in
Let me check the captured requests.
list_requests({ endpoint_id: "abc123", limit: 1 })
→ POST /webhook • 200 • 2.3s ago • stripe-signature: ...

15 tools your AI assistant can use

Every tool maps to a real Hooklistener action. Your AI assistant calls them automatically when you ask about webhooks, endpoints, actions, uptime, or stored data.

See the full tool reference in the MCP documentation.

Debug Endpoints

Create and manage webhook debugging endpoints

list_endpoints

List all debug endpoints in your organization. See names, URLs, and request counts at a glance.

get_endpoint

Get endpoint details including the webhook URL, creation date, and configuration.

create_endpoint

Create a new debug endpoint and get a unique webhook URL your services can send to.

Endpoint Actions

Automate responses and workflows on your endpoints

create_endpoint_action

Create an automation action on a debug endpoint — conditions, HTTP requests, scripts, and more.

list_endpoint_actions

List all automation actions configured on a debug endpoint.

delete_endpoint_action

Delete an automation action from a debug endpoint.

Captured Requests

Inspect webhooks that have been received

list_requests

List captured webhooks for an endpoint. Filter by HTTP method, path, or status code.

get_request

Get full request details: headers, body, query params, and timing information.

Uptime Monitors

Monitor your API health and availability

list_monitors

List all uptime monitors in your organization with their current status.

get_monitor_status

Get uptime percentage, average response time, and recent check results.

create_monitor

Create a new uptime monitor for any HTTP endpoint with customizable check intervals.

Datastore

Store and retrieve key-value data across workflows

get_datastore_key

Retrieve a datastore entry by key, with optional namespace scoping.

set_datastore_key

Create or update a datastore entry with any JSON value and optional TTL.

delete_datastore_key

Delete a datastore entry by key and namespace.

list_datastore_keys

List datastore entries with optional namespace and prefix filters.
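Taken together, a typical datastore round trip might look like the sketch below. This is illustrative pseudocode in the same call style as the transcripts above; the parameter names follow the tool descriptions, not a verified schema, and the `stripe` namespace and `retry_count` key are made-up examples.

```
set_datastore_key({ key: "retry_count", value: { attempts: 3 }, namespace: "stripe", ttl: 3600 })
get_datastore_key({ key: "retry_count", namespace: "stripe" })
list_datastore_keys({ namespace: "stripe", prefix: "retry_" })
delete_datastore_key({ key: "retry_count", namespace: "stripe" })
```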

Real use cases

See how developers use the MCP server to debug webhooks, set up infrastructure, and monitor APIs — all from their AI assistant.

Debug a failing Stripe webhook

When a Stripe webhook isn't working, ask your AI assistant to check what's coming in. Filter requests by status, inspect headers for signature verification, and view the full JSON payload — all without switching to the browser.

  • Filter captured requests by status code or HTTP method
  • Inspect Stripe-Signature headers for verification debugging
  • View the full JSON body to check event structure
  • Compare expected vs actual payloads side by side
AI Assistant
> Show me failed webhooks on "Stripe" endpoint
list_requests({ endpoint: "Stripe", status: "4xx" })
→ 3 requests • POST /webhook • 401 Unauthorized
> Show me the headers on the latest one
get_request({ id: "req_a1b2c3" })
→ stripe-signature: t=1709...,v1=5a2e...
→ content-type: application/json

Set up webhook infra for a new project

Starting a new integration? Ask your AI to create a debug endpoint and an uptime monitor in one conversation. Get back a webhook URL and health monitoring without touching the dashboard.

  • Create a named debug endpoint in seconds
  • Get a unique webhook URL to configure in third-party services
  • Set up an uptime monitor for your receiving API
  • All from a single AI conversation
AI Assistant
> Set up webhook infra for my new payments service
I'll create an endpoint and a monitor.
create_endpoint({ name: "Payments Service" })
→ https://hook.hooklistener.com/e/pay_xyz
create_monitor({ url: "https://api.example.com/health" })
→ Monitor created • checking every 60s

Check API health during an incident

During an outage, quickly ask your AI assistant for uptime stats. See response times, failure rates, and recent check results without navigating away from your code.

  • Get uptime percentage over the last 24 hours or 7 days
  • Check average response time and latency trends
  • View recent failures with status codes and timestamps
  • Triage issues faster without context-switching
AI Assistant
> What's the uptime on my API monitor?
get_monitor_status({ monitor_id: "mon_abc" })
→ Uptime: 99.2% • Avg: 245ms • Last 24h
→ 3 failures • last: 12min ago • 502 Bad Gateway
> Show me all monitors
list_monitors()
→ 4 monitors • 3 up • 1 degraded

Set up in under a minute

Pick your AI tool, connect with OAuth or an API key, and start using webhook tools immediately.

Terminal
# OAuth 2.0 (recommended) — signs in via browser automatically
claude mcp add --transport http hooklistener https://app.hooklistener.com/api/mcp

# Or use an API key (legacy)
claude mcp add --transport http hooklistener https://app.hooklistener.com/api/mcp \
  --header "Authorization: Bearer hklst_your_api_key_here"

# Add --scope project to share with your team (writes to .mcp.json)
# Add --scope user for all your projects
# Run /mcp to verify the server appears

Prerequisites: A Hooklistener account. OAuth 2.0 handles authentication automatically — just sign in when prompted. For API key authentication, generate one from Organization Settings > API Keys.
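Cursor and Windsurf read MCP servers from a JSON config file rather than a CLI command. A minimal sketch, assuming your client accepts remote HTTP servers via a `url` field and custom headers (check your tool's MCP documentation for the exact file location and schema; the API key below is a placeholder, and OAuth-capable clients can omit the `headers` block):

```json
{
  "mcpServers": {
    "hooklistener": {
      "url": "https://app.hooklistener.com/api/mcp",
      "headers": {
        "Authorization": "Bearer hklst_your_api_key_here"
      }
    }
  }
}
```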

For detailed setup instructions, see the MCP setup guide.

Frequently Asked Questions

Everything you need to know about using the MCP server

What is MCP and how does it work with Hooklistener?
MCP (Model Context Protocol) is an open standard that lets AI coding assistants call external tools. Hooklistener's MCP server exposes 15 tools across 5 categories — debug endpoints, endpoint actions, captured requests, uptime monitors, and a key-value datastore — so your AI assistant can interact with your webhook infrastructure directly from the terminal. See the full tool reference at docs.hooklistener.com.
Which AI tools support Hooklistener's MCP server?
Hooklistener works with any MCP-compatible client that supports the Streamable HTTP transport. This includes Claude Code, OpenAI Codex CLI, Cursor, and Windsurf. If your tool supports MCP over HTTP, you can connect it to Hooklistener.
How do I authenticate with the MCP server?
The recommended method is OAuth 2.0 — just add the MCP server and your client handles authentication automatically via a browser sign-in. Alternatively, you can use an API key (legacy): go to Organization Settings > API Keys and generate a key starting with hklst_. See docs.hooklistener.com for full authentication details.
Is the MCP server included in the free plan?
Yes. The free plan includes API access with debug endpoint, captured request, and datastore tools. Uptime monitoring and endpoint action tools require a paid plan. All plans include the same MCP server endpoint — your plan determines which tools are available.
Is my data secure when using MCP?
Yes. All communication with the MCP server uses HTTPS. Every request is authenticated, whether via OAuth or an API key, and data is scoped to your organization. Hooklistener hosts all data in Europe and never shares your webhook data with third parties.
What can I ask my AI assistant to do?
Anything the 15 tools support: create debug endpoints, set up automation actions, inspect captured requests ("show me the last webhook that came in"), check uptime status ("what's the uptime on my API monitor?"), store and retrieve key-value data, and more. Your AI assistant figures out which tool to call based on your natural language request.
How is this different from using the Hooklistener dashboard?
The dashboard gives you a full visual interface with real-time updates, charts, and advanced filtering. The MCP server gives your AI assistant programmatic access to the same data, so you can debug webhooks without leaving your terminal or IDE. They're complementary — use whichever fits your workflow.
What transport does the MCP server use?
Hooklistener uses the Streamable HTTP transport (JSON-RPC 2.0 over HTTP POST). This is the recommended transport for remote MCP servers. It doesn't require running a local process or stdio — just an HTTP endpoint and your API key.
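If you want to poke at the transport directly, the exchange is ordinary JSON-RPC 2.0 over HTTP POST. A minimal sketch: the request body follows the MCP spec's `tools/list` method, and the curl invocation is shown commented out because it needs live credentials (the `hklst_...` key is a placeholder, and a client would normally send an `initialize` request first).

```shell
# JSON-RPC 2.0 request body asking the server to list its tools.
BODY='{"jsonrpc":"2.0","id":1,"method":"tools/list"}'
echo "$BODY"

# To actually send it (requires a valid API key):
# curl -s https://app.hooklistener.com/api/mcp \
#   -H "Content-Type: application/json" \
#   -H "Accept: application/json, text/event-stream" \
#   -H "Authorization: Bearer hklst_your_api_key_here" \
#   -d "$BODY"
```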

Have more questions about the MCP server?

Read the full MCP documentation

Ready to debug webhooks from your AI assistant?

Connect your AI coding tool to Hooklistener in under a minute. 15 tools for endpoints, actions, requests, uptime, and storage — all from the terminal.

Free tier available • No credit card required • Setup in under a minute • Read the docs