OpenAI Webhooks Guide — How to Receive Real-Time Callbacks from OpenAI APIs

Updated September 25, 2025 · 10 min read

OpenAI webhooks enable efficient handling of long-running API tasks like the Deep Research API. Instead of polling for completion, webhooks push real-time notifications when operations finish. This guide covers implementation, security, and best practices.

Does OpenAI offer native webhooks?

Short answer: not in the way most developers expect. For synchronous APIs like chat.completions.create (the classic Chat Completions endpoint) and responses.create (OpenAI's newer unified Responses API), OpenAI does not deliver webhooks — you read results from the HTTP response you opened, poll a job status endpoint, or use stream=true to receive tokens incrementally over the same connection. Streaming is useful for chat UIs, but it's not a webhook: your client has to stay connected, and the provider never POSTs to a URL you control.

There is one important exception: OpenAI's Deep Research API and a handful of newer async endpoints do support a genuine callback/webhook pattern, where OpenAI POSTs to a URL you register once the long-running job finishes. The rest of this guide walks through that setup in detail — signing secrets, signature verification, retry behavior, and local testing. If you landed here looking for webhooks on the standard chat endpoints, skip to the patterns below first.

In practice, developers building on OpenAI use one of three patterns to get "real-time" behavior:

  • Polling — kick off a job, store the job ID, and poll the status endpoint on an interval until it's done. Simple, works everywhere, wastes requests.
  • Streaming responses — open a single long-lived connection with stream=true and read tokens as they arrive. Great for chat UIs, but your client has to stay connected.
  • Webhook bridges — use the native webhook support on endpoints that have it (Deep Research, batch, some async operations), or run your own worker that converts internal job completion events into outbound webhooks your orchestration layer listens to. Tools like Hooklistener are useful here for capturing, inspecting, and replaying those callbacks during development.
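The polling pattern above can be reduced to a small generic helper. This is a sketch, not an official SDK utility: `fetch_status` is any zero-argument callable that returns a status string — for example `lambda: client.batches.retrieve(batch_id).status` with the official Python SDK (check the current SDK docs for exact names and statuses):

```python
import time

# Terminal Batch API statuses (verify against current OpenAI docs).
TERMINAL_STATUSES = {"completed", "failed", "expired", "cancelled"}

def poll_until_done(fetch_status, interval_s=5.0, timeout_s=3600.0):
    """Call fetch_status() on an interval until the job reaches a
    terminal state, or raise TimeoutError when the deadline passes."""
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status = fetch_status()
        if status in TERMINAL_STATUSES:
            return status
        time.sleep(interval_s)
    raise TimeoutError("job did not reach a terminal state in time")
```

The same helper works for any job-style endpoint; only the `fetch_status` closure changes.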

Which OpenAI Endpoints Support Webhooks?

Before writing any code, it's worth being precise about which OpenAI APIs actually push events to a URL you control, and which ones require polling or streaming instead:

  • Batch API — emits batch.* events when asynchronous batch jobs reach a terminal state.
  • Deep Research API — async research jobs can emit lifecycle events; see OpenAI's Deep Research documentation for the current event names.
  • Chat Completions (chat.completions.create) — no webhooks. Use streaming or read the response.
  • Responses API (responses.create) — no webhooks. Use stream=true or poll.
  • Assistants API runs — no webhooks; poll the run status endpoint or consume the run stream.
  • Files / Embeddings / Moderation — synchronous, no async callbacks.

Everything that follows in this guide is scoped to the two endpoint families that actually support webhooks — Batch and Deep Research. If you're building on Chat Completions, Responses, or Assistants, the right primitive is streaming or polling, not webhooks.

Webhook Setup for Deep Research & Batch Endpoints

Step 1: Project Configuration

Webhook configuration for Batch and Deep Research jobs is done at the OpenAI project level. The exact dashboard labels have changed over time, so consult the current OpenAI documentation for the precise path — at a high level the steps are:

  1. Open the OpenAI dashboard and select the project that will call the Batch or Deep Research API
  2. Find the webhook configuration area in project settings (consult OpenAI's docs for the current location)
  3. Register a new webhook endpoint URL (must be HTTPS in production)
  4. Select the Batch and/or Deep Research events you want delivered
  5. Save the configuration and record the signing secret in your secrets manager

Note that this dashboard UI is only relevant if your project actually uses the async endpoints that emit webhooks. Projects that only call Chat Completions or the Responses API have nothing to configure here.

Important: Handle Your Signing Secret Like a Credential

As with most webhook platforms, treat the signing secret as a one-time credential and store it in a secrets manager (AWS Secrets Manager, GCP Secret Manager, HashiCorp Vault, Doppler, etc.) immediately after generation. Never commit it to source control, and rotate it if you suspect exposure.
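A minimal loading pattern: read the secret from the environment at startup and fail loudly if it is missing, so a misconfigured deployment can't silently skip verification. The boto3 call in the comment is an assumed example of populating the value from AWS Secrets Manager, not something this guide's setup requires:

```python
import os

def load_webhook_secret():
    """Load the OpenAI webhook signing secret without hardcoding it.

    In production the environment variable would typically be populated
    from a secrets manager, e.g. (assumed boto3 usage):
        boto3.client("secretsmanager").get_secret_value(
            SecretId="openai/webhook-secret")["SecretString"]
    """
    secret = os.environ.get("OPENAI_WEBHOOK_SECRET")
    if not secret:
        raise RuntimeError("OPENAI_WEBHOOK_SECRET is not set")
    return secret
```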

Step 2: Webhook Endpoint Implementation

Your webhook endpoint must:

  • Accept POST requests with JSON payloads
  • Return 2xx status code within a few seconds
  • Verify webhook signatures for security
  • Handle requests idempotently
# Python Flask example with OpenAI client
from flask import Flask, request, jsonify
from openai import OpenAI
import os

app = Flask(__name__)
client = OpenAI()
webhook_secret = os.environ.get('OPENAI_WEBHOOK_SECRET')

@app.route('/openai/webhook', methods=['POST'])
def handle_openai_webhook():
    try:
        # Verify and unwrap the webhook
        event = client.webhooks.unwrap(
            request.data,
            request.headers,
            secret=webhook_secret
        )
        # Process the event
        process_openai_event(event)
        return jsonify({"status": "success"}), 200
    except Exception as e:
        print(f"Webhook error: {e}")
        return jsonify({"error": str(e)}), 400

def process_openai_event(event):
    # Dispatch based on the event type string delivered by OpenAI.
    # Consult the current OpenAI docs for the exact event names
    # supported by Batch and Deep Research.
    if event.type == "batch.completed":
        handle_batch_completion(event.data)
    elif event.type == "batch.failed":
        handle_batch_failure(event.data)
    else:
        print(f"Unhandled event type: {event.type}")

Webhook Signature Verification

Understanding OpenAI Signatures

OpenAI includes verification headers with each webhook:

user-agent: OpenAI/1.0
content-type: application/json
webhook-id: msg_1234567890abcdef
webhook-timestamp: 1672531200
webhook-signature: v1,signature_here

The signature follows the standard-webhooks specification and must be verified to ensure authenticity.
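For understanding (and for unit tests without the SDK), the standard-webhooks scheme can be verified by hand: HMAC-SHA256 over the string `id.timestamp.body`, keyed with the base64-decoded secret (the part after the `whsec_` prefix), compared against the `v1,<base64>` entries in the signature header. This is a sketch of the spec's scheme — in production, prefer the SDK's unwrap helper, which also enforces timestamp tolerance:

```python
import base64
import hashlib
import hmac

def verify_signature(secret, msg_id, timestamp, body, signature_header):
    """Manually verify a standard-webhooks style signature."""
    key = base64.b64decode(secret.split("whsec_")[-1])
    signed_content = f"{msg_id}.{timestamp}.{body}"
    expected = base64.b64encode(
        hmac.new(key, signed_content.encode(), hashlib.sha256).digest()
    ).decode()
    # The header may hold several space-separated "v1,<sig>" entries
    # (e.g. during secret rotation); accept if any matches.
    for candidate in signature_header.split(" "):
        version, _, sig = candidate.partition(",")
        if version == "v1" and hmac.compare_digest(sig, expected):
            return True
    return False
```

Note that the raw body bytes must be used — re-serializing parsed JSON will almost always change whitespace or key order and break the match.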

Implementation Examples

# Python with OpenAI client (recommended)
from openai import OpenAI

client = OpenAI()

def verify_webhook(request_data, headers, secret):
    try:
        event = client.webhooks.unwrap(
            request_data,
            headers,
            secret=secret
        )
        return event
    except Exception as e:
        print(f"Signature verification failed: {e}")
        raise
// Node.js example
const OpenAI = require('openai');
const openai = new OpenAI();

// Signature verification needs the raw request body, so register the
// route with express.raw rather than express.json.
app.post('/webhook', express.raw({ type: 'application/json' }), (req, res) => {
  try {
    const event = openai.webhooks.unwrap(
      req.body,
      req.headers,
      process.env.OPENAI_WEBHOOK_SECRET
    );
    // Process the verified event
    processEvent(event);
    res.status(200).json({ received: true });
  } catch (error) {
    console.error('Webhook verification failed:', error);
    res.status(400).json({ error: 'Invalid signature' });
  }
});

Local Development and Testing

Using ngrok for Local Testing

To receive Batch or Deep Research webhook deliveries on a local dev machine, expose your server through an HTTPS tunnel. ngrok is a common choice:

# Install ngrok
npm install -g ngrok
# Start your local server
python app.py # Running on localhost:5000
# Create public tunnel
ngrok http 5000
# Use the HTTPS URL for webhook configuration
https://abc123.ngrok.io/openai/webhook

Configure this ngrok URL in your OpenAI project webhook settings for local testing.

Testing Webhook Events

Test your webhook implementation by:

  • Triggering actual OpenAI API operations that generate webhooks
  • Using OpenAI's test event functionality (when available)
  • Creating mock webhook payloads for unit testing
  • Validating signature verification with test secrets
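For the mock-payload approach, keeping dispatch logic out of the Flask route makes it trivially unit-testable: hand a plain dict to a pure function and assert on the result. The event names and `data` fields below are hypothetical — check OpenAI's current docs for the real schema:

```python
def dispatch(event, handlers):
    """Route a webhook event dict to a handler keyed by its type string.

    Unknown types are acknowledged but ignored, mirroring how a real
    endpoint should still return 2xx for event types it doesn't handle."""
    handler = handlers.get(event.get("type"))
    if handler is None:
        return {"status": "ignored", "type": event.get("type")}
    return handler(event.get("data", {}))

# Hypothetical mock payload for a unit test; real field names may differ.
mock_event = {
    "id": "evt_test",
    "type": "batch.completed",
    "data": {"batch_id": "batch_123"},
}
```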

Webhook Event Types (Batch & Deep Research)

The event names below are scoped to the specific async endpoint that emits them. OpenAI updates these occasionally, so cross-check against the current Batch API and Deep Research API references before wiring up a production handler.

Event             Emitted by   Meaning
batch.completed   Batch API    Batch job finished successfully and results are ready to download.
batch.failed      Batch API    Batch job terminated in a failed state.
batch.expired     Batch API    Batch job did not complete within the allowed window.
batch.cancelled   Batch API    Batch job was cancelled before completion.
Deep Research lifecycle events are documented in OpenAI's Deep Research API reference — consult the current docs for the exact event names, since they have changed during the API's rollout.

Webhook Payload Structure (Illustrative)

OpenAI's webhook payloads follow the standard-webhooks envelope format: a top-level event wrapper with an id, type, created timestamp, and a data object whose shape depends on the event type. The example below shows the envelope structure only — payload shape is illustrative; consult OpenAI's documentation for the current schema.

{
  "id": "evt_...",
  "object": "event",
  "type": "<event_type>",
  "created": 1234567890,
  "data": {
    // Endpoint-specific payload.
    // For batch.* events, this typically includes the batch job ID.
    // Consult OpenAI's documentation for the exact fields delivered
    // with each event type.
  }
}

Production Best Practices for Batch & Deep Research Webhooks

⚡ Handle Retries Properly

Like most webhook platforms, OpenAI retries failed deliveries with backoff if your endpoint returns a non-2xx status or times out. Check the current OpenAI documentation for the exact retry window and schedule. Return 2xx on success, 4xx for permanent failures you don't want retried, and 5xx for transient errors:

app.post('/webhook', (req, res) => {
  // Always return 2xx for successful processing
  // Return 4xx for permanent failures (don't retry)
  // Return 5xx for temporary failures (will retry)
  try {
    const event = verifyAndUnwrap(req);
    processEvent(event);
    res.status(200).json({ processed: true });
  } catch (error) {
    if (error.type === 'signature_invalid') {
      // Don't retry signature failures
      return res.status(400).json({ error: 'Invalid signature' });
    }
    // Temporary error - OpenAI will retry
    res.status(500).json({ error: 'Processing failed' });
  }
});

🔄 Implement Idempotency

Handle duplicate webhooks gracefully using event IDs:

# In-memory set for illustration only -- use a durable store
# (database or Redis) in production so restarts don't lose state
# and multiple workers share the same view.
processed_events = set()

def process_webhook(event):
    event_id = event.get('id')
    if event_id in processed_events:
        print(f"Duplicate event ignored: {event_id}")
        return {"status": "already_processed"}
    # Process the event
    result = handle_event(event)
    # Mark as processed
    processed_events.add(event_id)
    return result

📊 Monitoring and Logging

  • Log all webhook receipts with timestamps
  • Track processing success/failure rates
  • Monitor signature verification failures
  • Set up alerts for webhook downtime
  • Dashboard for webhook health metrics

Troubleshooting Batch & Deep Research Webhooks

Common Issues and Solutions:

  • ⚠️
    Signature Verification Fails: Ensure you're using the correct signing secret and passing the raw request body to verification.
  • ⚠️
    Webhooks Not Received: Check that your endpoint is publicly accessible via HTTPS and returns 2xx status codes.
  • ⚠️
    Timeout Issues: Respond within a few seconds. Use async processing for long-running tasks.
  • ⚠️
    Duplicate Events: Implement idempotency checks using the event ID to handle retries gracefully.
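For the timeout issue in particular, the standard fix is to acknowledge immediately and do the slow work in the background. A minimal in-process sketch using a queue and a worker thread (a real deployment would more likely use a task queue such as Celery, SQS, or Cloud Tasks):

```python
import queue
import threading

work_queue = queue.Queue()

def enqueue_event(event):
    """Ack fast: push the event onto a queue and return immediately,
    so the webhook response goes back well within the timeout."""
    work_queue.put(event)
    return {"status": "accepted"}

def worker(handle, stop):
    """Background loop that drains the queue and runs the slow handler."""
    while not stop.is_set():
        try:
            event = work_queue.get(timeout=0.1)
        except queue.Empty:
            continue
        handle(event)
        work_queue.task_done()
```

The webhook route calls `enqueue_event` and returns 200; processing failures are then retried by your own worker logic rather than by re-delivery from OpenAI.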

Debug OpenAI Webhooks with Hooklistener

Hooklistener provides comprehensive webhook debugging for OpenAI integrations. Capture, inspect, and replay webhook events with signature verification, retry tracking, and team collaboration features.

  • Real-time OpenAI webhook capture
  • Signature verification testing
  • Event replay and debugging tools
  • Retry and failure monitoring
Start Debugging OpenAI Webhooks →

Related OpenAI Resources