OpenAI Webhooks Guide — How to Receive Real-Time Callbacks from OpenAI APIs
OpenAI webhooks enable efficient handling of long-running API tasks like the Deep Research API. Instead of polling for completion, webhooks push real-time notifications when operations finish. This guide covers implementation, security, and best practices.
Does OpenAI offer native webhooks?
Short answer: not in the way most developers expect. For synchronous APIs like chat.completions.create (the classic Chat Completions endpoint) and responses.create (OpenAI's newer unified Responses API), OpenAI does not deliver webhooks — you read results from the HTTP response you opened, poll a job status endpoint, or use stream=true to receive tokens incrementally over the same connection. Streaming is useful for chat UIs, but it's not a webhook: your client has to stay connected, and the provider never POSTs to a URL you control.
There is one important exception: OpenAI's Deep Research API and a handful of newer async endpoints do support a genuine callback/webhook pattern, where OpenAI POSTs to a URL you register once the long-running job finishes. The rest of this guide walks through that setup in detail — signing secrets, signature verification, retry behavior, and local testing. If you landed here looking for webhooks on the standard chat endpoints, skip to the patterns below first.
In practice, developers building on OpenAI use one of three patterns to get "real-time" behavior:
- Polling — kick off a job, store the job ID, and poll the status endpoint on an interval until it's done. Simple, works everywhere, wastes requests.
- Streaming responses — open a single long-lived connection with `stream=true` and read tokens as they arrive. Great for chat UIs, but your client has to stay connected.
- Webhook bridges — use the native webhook support on endpoints that have it (Deep Research, batch, some async operations), or run your own worker that converts internal job completion events into outbound webhooks your orchestration layer listens to. Tools like Hooklistener are useful here for capturing, inspecting, and replaying those callbacks during development.
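The polling fallback can be sketched as a small helper. Here `fetch_status` is a stand-in for whatever status call your job type exposes (for example, a thin wrapper around the Batch retrieve endpoint), and the interval and timeout values are illustrative:

```python
import time

def poll_until_done(fetch_status, job_id, interval=2.0, timeout=600.0):
    """Poll a job-status callable until the job reaches a terminal state.

    fetch_status(job_id) is any callable returning a status string;
    the terminal state names mirror the Batch API's lifecycle.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = fetch_status(job_id)
        if status in ("completed", "failed", "expired", "cancelled"):
            return status
        time.sleep(interval)  # this is the wasted-request cost of polling
    raise TimeoutError(f"job {job_id} not done after {timeout}s")
```

The tradeoff is visible in the loop: every iteration that returns a non-terminal status is a request a webhook would have saved you.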
Which OpenAI Endpoints Support Webhooks?
Before writing any code, it's worth being precise about which OpenAI APIs actually push events to a URL you control, and which ones require polling or streaming instead:
- ✅ Batch API — emits `batch.*` events when asynchronous batch jobs reach a terminal state.
- ✅ Deep Research API — async research jobs can emit lifecycle events; see OpenAI's Deep Research documentation for the current event names.
- ❌ Chat Completions (`chat.completions.create`) — no webhooks. Use streaming or read the response.
- ❌ Responses API (`responses.create`) — no webhooks. Use `stream=true` or poll.
- ❌ Assistants API runs — no webhooks; poll the run status endpoint or consume the run stream.
- ❌ Files / Embeddings / Moderation — synchronous, no async callbacks.
Everything that follows in this guide is scoped to the two endpoint families on the "✅" list — Batch and Deep Research. If you're building on Chat Completions, Responses, or Assistants, the right primitive is streaming or polling, not webhooks.
Webhook Setup for Deep Research & Batch Endpoints
Step 1: Project Configuration
Webhook configuration for Batch and Deep Research jobs is done at the OpenAI project level. The exact dashboard labels have changed over time, so consult the current OpenAI documentation for the precise path — at a high level the steps are:
- Open the OpenAI dashboard and select the project that will call the Batch or Deep Research API
- Find the webhook configuration area in project settings (consult OpenAI's docs for the current location)
- Register a new webhook endpoint URL (must be HTTPS in production)
- Select the Batch and/or Deep Research events you want delivered
- Save the configuration and record the signing secret in your secrets manager
Note that this dashboard UI is only relevant if your project actually uses the async endpoints that emit webhooks. Projects that only call Chat Completions or the Responses API have nothing to configure here.
Important: Handle Your Signing Secret Like a Credential
As with most webhook platforms, treat the signing secret as a one-time credential and store it in a secrets manager (AWS Secrets Manager, GCP Secret Manager, HashiCorp Vault, Doppler, etc.) immediately after generation. Never commit it to source control, and rotate it if you suspect exposure.
Step 2: Webhook Endpoint Implementation
Your webhook endpoint must:
- Accept POST requests with JSON payloads
- Return a 2xx status code within a few seconds
- Verify webhook signatures for security
- Handle requests idempotently
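A framework-agnostic sketch of those requirements. The queue hand-off and status codes are illustrative; wire `handle_webhook` into your Flask/FastAPI route, and run signature verification (covered below) before trusting the body:

```python
import json
import queue

events = queue.Queue()  # hand events to a background worker; process async

def handle_webhook(raw_body: bytes) -> int:
    """Minimal handler skeleton: parse, enqueue, acknowledge fast.

    Returning quickly with 2xx is the key requirement; the real work
    happens off the request thread so the provider never times out.
    """
    try:
        event = json.loads(raw_body)
    except json.JSONDecodeError:
        return 400  # permanent failure: don't invite retries of bad payloads
    events.put(event)  # defer processing; idempotency checks happen here too
    return 200
```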
Webhook Signature Verification
Understanding OpenAI Signatures
OpenAI includes verification headers with each webhook delivery. Following the standard-webhooks specification, these are `webhook-id` (a unique delivery ID), `webhook-timestamp` (a Unix timestamp), and `webhook-signature` (one or more versioned HMAC-SHA256 signatures over the ID, timestamp, and raw body). The signature must be verified to ensure authenticity and to guard against replayed deliveries.
Implementation Examples
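Here is a sketch of standard-webhooks signature verification in Python. The `webhook-id`/`webhook-timestamp`/`webhook-signature` header names and the `whsec_` secret prefix follow the standard-webhooks spec; confirm them against OpenAI's current docs before relying on this:

```python
import base64
import hashlib
import hmac
import time

def verify_signature(secret: str, headers: dict, body: bytes,
                     tolerance: int = 300) -> bool:
    """Verify a standard-webhooks style signature.

    The signed content is "{id}.{timestamp}." + raw body; the secret is
    base64-encoded after an optional "whsec_" prefix.
    """
    msg_id = headers["webhook-id"]
    timestamp = headers["webhook-timestamp"]
    if abs(time.time() - int(timestamp)) > tolerance:
        return False  # reject stale deliveries (replay protection)
    encoded_key = secret.split("_", 1)[1] if secret.startswith("whsec_") else secret
    key = base64.b64decode(encoded_key)
    signed = f"{msg_id}.{timestamp}.".encode() + body
    expected = base64.b64encode(
        hmac.new(key, signed, hashlib.sha256).digest()
    ).decode()
    # The header may carry several space-separated "version,signature" pairs
    for candidate in headers["webhook-signature"].split():
        version, _, sig = candidate.partition(",")
        if version == "v1" and hmac.compare_digest(sig, expected):
            return True
    return False
```

Two details matter in practice: verify against the raw request bytes (not a re-serialized JSON object), and use `hmac.compare_digest` rather than `==` to avoid timing side channels.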
Local Development and Testing
Using ngrok for Local Testing
To receive Batch or Deep Research webhook deliveries on a local dev machine, expose your server through an HTTPS tunnel. ngrok is a common choice:
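Assuming your local server listens on port 8000 (the port is illustrative), the tunnel setup looks like:

```shell
# Start your local webhook server first
python app.py            # listening on http://localhost:8000

# Then open an HTTPS tunnel to it
ngrok http 8000          # ngrok prints a public https://... forwarding URL
```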
Configure the ngrok forwarding URL (the `https://...` address ngrok prints when the tunnel starts) in your OpenAI project webhook settings for local testing.
Testing Webhook Events
Test your webhook implementation by:
- Triggering actual OpenAI API operations that generate webhooks
- Using OpenAI's test event functionality (when available)
- Creating mock webhook payloads for unit testing
- Validating signature verification with test secrets
Webhook Event Types (Batch & Deep Research)
The event names below are scoped to the specific async endpoint that emits them. OpenAI updates these occasionally, so cross-check against the current Batch API and Deep Research API references before wiring up a production handler.
| Event | Emitted by | Meaning |
|---|---|---|
| `batch.completed` | Batch API | Batch job finished successfully and results are ready to download. |
| `batch.failed` | Batch API | Batch job terminated in a failed state. |
| `batch.expired` | Batch API | Batch job did not complete within the allowed window. |
| `batch.cancelled` | Batch API | Batch job was cancelled before completion. |

Deep Research lifecycle events are documented in OpenAI's Deep Research API reference — consult the current docs for the exact event names, since they have changed during the API's rollout.
Webhook Payload Structure (Illustrative)
OpenAI's webhook payloads follow the standard-webhooks envelope format: a top-level event wrapper with an id, type, created timestamp, and a data object whose shape depends on the event type. The example below shows the envelope structure only — payload shape is illustrative; consult OpenAI's documentation for the current schema.
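A minimal sketch of that envelope as a Python literal. The field names follow standard-webhooks conventions and the `data` contents are hypothetical:

```python
import json

# Illustrative envelope only — the real field names and data shape
# come from OpenAI's current schema.
example_event = {
    "id": "evt_abc123",         # unique event ID (use for idempotency)
    "type": "batch.completed",  # event name, as in the table above
    "created_at": 1720000000,   # Unix timestamp of the event
    "data": {                   # event-specific payload
        "id": "batch_abc123",   # ID of the batch job that finished
    },
}

def event_type(raw_body: bytes) -> str:
    """Pull the event type out of a raw webhook body for routing."""
    return json.loads(raw_body)["type"]
```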
Production Best Practices for Batch & Deep Research Webhooks
⚡ Handle Retries Properly
Like most webhook platforms, OpenAI retries failed deliveries with backoff if your endpoint returns a non-2xx status or times out. Check the current OpenAI documentation for the exact retry window and schedule. Return 2xx on success, 4xx for permanent failures you don't want retried, and 5xx for transient errors:
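That mapping can be made explicit in a small dispatcher. The event-type allowlist and the broad exception handling below are illustrative; `process` stands in for your own handler:

```python
KNOWN_EVENTS = {"batch.completed", "batch.failed",
                "batch.expired", "batch.cancelled"}

def status_for(event: dict, process) -> int:
    """Map processing outcomes to HTTP statuses that steer retries.

    2xx = delivered; 4xx = permanent, don't redeliver; 5xx = transient,
    retry with backoff.
    """
    if event.get("type") not in KNOWN_EVENTS:
        return 400  # unknown event we never want redelivered
    try:
        process(event)
    except Exception:
        return 500  # transient failure: let the provider retry
    return 200
```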
🔄 Implement Idempotency
Handle duplicate webhooks gracefully using event IDs:
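A minimal in-memory sketch; in production, swap the set for a persistent store (Redis, a database table with a unique constraint) so deduplication survives restarts:

```python
processed: set[str] = set()  # replace with durable storage in production

def handle_once(event: dict, process) -> bool:
    """Process each event ID at most once; safe under redelivery.

    Returns True if the event was processed, False if it was a duplicate.
    """
    event_id = event["id"]
    if event_id in processed:
        return False  # duplicate delivery: acknowledge without reprocessing
    process(event)
    processed.add(event_id)  # record only after processing succeeds
    return True
```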
📊 Monitoring and Logging
- Log all webhook receipts with timestamps
- Track processing success/failure rates
- Monitor signature verification failures
- Set up alerts for webhook downtime
- Build a dashboard for webhook health metrics
Troubleshooting Batch & Deep Research Webhooks
Common Issues and Solutions:
- ⚠️ Signature Verification Fails: Ensure you're using the correct signing secret and passing the raw request body to verification.
- ⚠️ Webhooks Not Received: Check that your endpoint is publicly accessible via HTTPS and returns 2xx status codes.
- ⚠️ Timeout Issues: Respond within a few seconds. Use async processing for long-running tasks.
- ⚠️ Duplicate Events: Implement idempotency checks using the event ID to handle retries gracefully.
Debug OpenAI Webhooks with Hooklistener
Hooklistener provides comprehensive webhook debugging for OpenAI integrations. Capture, inspect, and replay webhook events with signature verification, retry tracking, and team collaboration features.