MCP Integration

Debug webhooks from your AI assistant

Hooklistener's MCP server gives Claude Code, Codex CLI, Cursor, and Windsurf direct access to your webhook infrastructure. Create endpoints, inspect payloads, and monitor uptime — without leaving the terminal.

AI Assistant
> Create a debug endpoint called 'Stripe Webhooks'
I'll create that endpoint for you.
create_endpoint({ name: "Stripe Webhooks" })
→ https://hook.hooklistener.com/e/abc123
> Show me the last webhook that came in
Let me check the captured requests.
list_requests({ endpoint_id: "abc123", limit: 1 })
→ POST /webhook • 200 • 2.3s ago • stripe-signature: ...

8 tools your AI assistant can use

Every tool maps to a real Hooklistener action. Your AI assistant calls them automatically when you ask about webhooks, endpoints, or uptime.

Debug Endpoints

Create and manage webhook debugging endpoints

list_endpoints

List all debug endpoints in your organization. See names, URLs, and request counts at a glance.

get_endpoint

Get endpoint details including the webhook URL, creation date, and configuration.

create_endpoint

Create a new debug endpoint and get a unique webhook URL your services can send to.

Captured Requests

Inspect webhooks that have been received

list_requests

List captured webhooks for an endpoint. Filter by HTTP method, path, or status code.

get_request

Get full request details: headers, body, query params, and timing information.

Uptime Monitors

Monitor your API health and availability

list_monitors

List all uptime monitors in your organization with their current status.

get_monitor_status

Get uptime percentage, average response time, and recent check results.

create_monitor

Create a new uptime monitor for any HTTP endpoint with customizable check intervals.

Real use cases

See how developers use the MCP server to debug webhooks, set up infrastructure, and monitor APIs — all from their AI assistant.

Debug a failing Stripe webhook

When a Stripe webhook isn't working, ask your AI assistant to check what's coming in. Filter requests by status, inspect headers for signature verification, and view the full JSON payload — all without switching to the browser.

  • Filter captured requests by status code or HTTP method
  • Inspect Stripe-Signature headers for verification debugging
  • View the full JSON body to check event structure
  • Compare expected vs actual payloads side by side
AI Assistant
> Show me failed webhooks on "Stripe" endpoint
list_requests({ endpoint_id: "abc123", status: "4xx" })
→ 3 requests • POST /webhook • 401 Unauthorized
> Show me the headers on the latest one
get_request({ id: "req_a1b2c3" })
→ stripe-signature: t=1709...,v1=5a2e...
→ content-type: application/json

Set up webhook infra for a new project

Starting a new integration? Ask your AI to create a debug endpoint and an uptime monitor in one conversation. Get back a webhook URL and health monitoring without touching the dashboard.

  • Create a named debug endpoint in seconds
  • Get a unique webhook URL to configure in third-party services
  • Set up an uptime monitor for your receiving API
  • All from a single AI conversation
AI Assistant
> Set up webhook infra for my new payments service
I'll create an endpoint and a monitor.
create_endpoint({ name: "Payments Service" })
→ https://hook.hooklistener.com/e/pay_xyz
create_monitor({ url: "https://api.example.com/health" })
→ Monitor created • checking every 60s

Check API health during an incident

During an outage, quickly ask your AI assistant for uptime stats. See response times, failure rates, and recent check results without navigating away from your code.

  • Get uptime percentage over the last 24 hours or 7 days
  • Check average response time and latency trends
  • View recent failures with status codes and timestamps
  • Triage issues faster without context-switching
AI Assistant
> What's the uptime on my API monitor?
get_monitor_status({ monitor_id: "mon_abc" })
→ Uptime: 99.2% • Avg: 245ms • Last 24h
→ 3 failures • last: 12min ago • 502 Bad Gateway
> Show me all monitors
list_monitors()
→ 4 monitors • 3 up • 1 degraded

Set up in under a minute

Pick your AI tool, paste your API key, and start using webhook tools immediately.

Terminal
claude mcp add --transport http hooklistener https://app.hooklistener.com/api/mcp \
  --header "Authorization: Bearer hklst_your_api_key_here"

# Add --scope project to share with your team (writes to .mcp.json)
# Add --scope user for all your projects
# Run /mcp to verify the server appears

Prerequisites: A Hooklistener account and an API key. Generate one from Organization Settings > API Keys. The key starts with hklst_.
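The command above covers Claude Code. For clients that read a JSON `mcpServers` config file instead (Cursor and Windsurf, for example), the equivalent entry looks roughly like this — the exact file location and field names vary by client, so treat this shape as a sketch and check your tool's MCP docs:

```json
{
  "mcpServers": {
    "hooklistener": {
      "url": "https://app.hooklistener.com/api/mcp",
      "headers": {
        "Authorization": "Bearer hklst_your_api_key_here"
      }
    }
  }
}
```

Restart your client after saving the config so it picks up the new server.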

Frequently Asked Questions

Everything you need to know about using the MCP server

What is MCP and how does it work with Hooklistener?
MCP (Model Context Protocol) is an open standard that lets AI coding assistants call external tools. Hooklistener's MCP server exposes 8 tools — for creating debug endpoints, inspecting captured webhook payloads, and managing uptime monitors — so your AI assistant can interact with your webhook infrastructure directly from the terminal.
Which AI tools support Hooklistener's MCP server?
Hooklistener works with any MCP-compatible client that supports the Streamable HTTP transport. This includes Claude Code, OpenAI Codex CLI, Cursor, and Windsurf. If your tool supports MCP over HTTP, you can connect it to Hooklistener.
How do I get an API key?
Sign in to your Hooklistener account, go to Organization Settings > API Keys, and generate a new key. The key starts with hklst_ and is used to authenticate MCP requests. Keep it secure — treat it like a password.
Is the MCP server included in the free plan?
Yes. The free plan includes API access with debug endpoint and captured request tools. Uptime monitoring tools require a paid plan. All plans include the same MCP server endpoint — your plan determines which tools are available.
Is my data secure when using MCP?
Yes. All communication with the MCP server uses HTTPS. Your API key authenticates every request, and data is scoped to your organization. Hooklistener hosts all data in Europe and never shares your webhook data with third parties.
What can I ask my AI assistant to do?
Anything the 8 tools support: create debug endpoints ("set up a Stripe webhook endpoint"), inspect captured requests ("show me the last webhook that came in"), check uptime status ("what's the uptime on my API monitor?"), and create monitors. Your AI assistant figures out which tool to call based on your natural language request.
How is this different from using the Hooklistener dashboard?
The dashboard gives you a full visual interface with real-time updates, charts, and advanced filtering. The MCP server gives your AI assistant programmatic access to the same data, so you can debug webhooks without leaving your terminal or IDE. They're complementary — use whichever fits your workflow.
What transport does the MCP server use?
Hooklistener uses the Streamable HTTP transport (JSON-RPC 2.0 over HTTP POST). This is the recommended transport for remote MCP servers. It doesn't require running a local process or stdio — just an HTTP endpoint and your API key.
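To make that concrete, here's a minimal Python sketch of what a raw request in this transport looks like. The `tools/list` method name comes from the MCP specification, and the URL and headers match the setup section above; this is purely illustrative — your AI assistant builds these requests for you, so you never need to write this yourself. The request object is constructed but not sent.

```python
import json
import urllib.request

MCP_URL = "https://app.hooklistener.com/api/mcp"

def jsonrpc_body(method: str, params: dict, req_id: int = 1) -> str:
    """Build a JSON-RPC 2.0 request body as a JSON string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    })

# "tools/list" is the standard MCP method for discovering available tools.
body = jsonrpc_body("tools/list", {})

# Sending it is a plain HTTP POST with your API key — no local process, no stdio.
req = urllib.request.Request(
    MCP_URL,
    data=body.encode(),
    headers={
        "Authorization": "Bearer hklst_your_api_key_here",
        "Content-Type": "application/json",
        "Accept": "application/json, text/event-stream",
    },
    method="POST",
)
# urllib.request.urlopen(req)  # uncomment with a real API key to send it
```

The response is a JSON-RPC result listing the 8 tools described above, which is how your AI assistant discovers what it can call.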

Have more questions about the MCP server?

Check out our developer guides

Ready to debug webhooks from your AI assistant?

Connect your AI coding tool to Hooklistener in under a minute. Create endpoints, inspect payloads, and monitor uptime — all from the terminal.

Free tier available • No credit card required • Setup in under a minute