Introduction: Why Webhook Idempotency Matters
A webhook is only reliable if your consumer can handle the same message more than once. Most providers use at-least-once delivery, so retries and duplicate events are normal. If your handler assumes every delivery is unique, one transient failure can cause duplicate charges, duplicate orders, or repeated emails.
That is the problem an idempotent webhook handler solves. Webhook idempotency means processing the same event multiple times leads to the same final state as processing it once. If Stripe, GitHub webhooks, Shopify webhooks, Twilio, or another provider sends the same event again, your system should recognize it and avoid repeating side effects.
Without that protection, a single webhook can trigger real damage: billing a customer twice, creating two fulfillment jobs, or sending the same notification repeatedly. The fix is a reliability pattern built from event IDs, database-based duplicate event detection, retry-safe side effects, and tests that prove your logic survives provider retry logic and delayed deliveries.
This guide focuses on building that safety into your integration so your handler returns clean HTTP 2xx responses when appropriate, avoids duplicate work, and fits into a broader reliability strategy rather than a single code trick.
What Is an Idempotent Webhook Handler?
An idempotent webhook handler produces the same final system state even if the same event arrives multiple times. The goal is not just returning the same HTTP status; it is preventing duplicate side effects in your database, queue, or fulfillment system. A handler can reply 200 OK twice and still be wrong if it creates two orders, two shipment jobs, or two CRM records.
With Stripe, a payment_succeeded webhook should create one order and one fulfillment record, even if Stripe retries delivery. The same applies to GitHub webhooks, Shopify webhooks, and Twilio callbacks: retries, replays, and transient failures must not change business results. An idempotency key or event ID lets you detect duplicates before work happens, which is safer than ignoring repeated requests after side effects already ran.
Why Webhook Handlers Need to Be Idempotent
A non-idempotent webhook handler can turn one event into multiple database writes, duplicate charges, extra shipments, or repeated emails. In event-driven architecture and distributed systems, that creates inconsistent state: one service thinks an order is paid, another sees two payments, and support has to reconcile the mess manually.
Providers retry because they often cannot know whether your server processed the event before the timeout or network failure. That is why webhook retry handling and webhook retry logic are part of normal webhook design. Exactly-once delivery is uncommon, so you should design for at-least-once delivery and make your handler safe under retries.
Without that safety, monitoring fills with noisy alerts, customers get duplicate notifications, and production bugs become hard to reproduce because the failure depends on timing. Idempotency is a reliability requirement, not an optimization.
What Causes Duplicate Webhook Deliveries?
Webhook delivery usually follows at-least-once delivery, not exactly-once delivery. If your endpoint times out, returns a 5xx response, or drops the connection, providers such as Stripe, GitHub webhooks, Shopify webhooks, and Twilio commonly retry until they get a successful response. See webhook retry logic for the mechanics.
Those retries can produce exact duplicate payloads or semantically duplicate events with small metadata differences, such as a new delivery ID or timestamp. That means duplicate event detection must compare business meaning, not just raw JSON.
Out-of-order delivery is also normal in distributed systems. If one provider writes to multiple queues or services, a later event can arrive before an earlier one, so your handler must tolerate both duplicates and sequencing gaps.
How Do You Make a Webhook Handler Idempotent?
Build the handler as a gate, not a script: verify the request with webhook security best practices, extract an idempotency key or event ID, reserve it in a durable store with a database transaction, then run side effects only if that reservation succeeds.
Duplicate detection must happen before any irreversible work, because once you charge a card, create an order, or send a shipment, a later check cannot undo the damage. A unique constraint on the event ID prevents a race condition where two deliveries arrive together and both try to process the same webhook.
After the first successful reservation, return an HTTP 2xx response quickly and hand off work to a message queue or background job. That keeps the endpoint fast and makes downstream actions retry-safe too, so the whole workflow stays idempotent, not just the webhook handler. See handling webhooks in Express for implementation patterns.
Should You Use an Event ID or Payload Hash for Deduplication?
Use the provider’s event ID as your deduplication key whenever possible. Stripe event IDs, GitHub delivery IDs, and Shopify webhook IDs are stable, provider-issued, and easy to validate with a unique constraint or primary key. Use a payload hash, such as SHA-256, only when the provider does not send a reliable event ID; hashes can change if fields are reordered, normalized differently, or if the provider includes non-business metadata in the payload.
Composite business keys like order_id + event_type can work for domain-specific dedupe, but they can also collapse distinct events into one record if the business process legitimately emits multiple events for the same order.
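As a sketch, a dedupe-key helper can prefer the provider's event ID and fall back to a SHA-256 hash of a canonicalized payload. The field names here (id, delivery_id, timestamp) are assumptions for illustration, not any particular provider's schema:

```python
import hashlib
import json

def dedupe_key(payload: dict) -> str:
    # Prefer a provider-issued event ID when one exists.
    if "id" in payload:
        return payload["id"]
    # Fallback: hash a canonical form of the payload. Sorting keys and
    # dropping volatile delivery metadata keeps the hash stable across retries.
    stable = {k: v for k, v in payload.items()
              if k not in ("delivery_id", "timestamp")}
    canonical = json.dumps(stable, sort_keys=True, separators=(",", ":"))
    return hashlib.sha256(canonical.encode()).hexdigest()
```

Stripping the volatile fields is the important step: a retry with a new delivery ID or timestamp must still hash to the same key.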
Where Should Processed Webhook Events Be Stored?
Store processed events in a durable database table, not memory. In PostgreSQL, use INSERT ... ON CONFLICT DO NOTHING; in MySQL, use INSERT IGNORE or INSERT ... ON DUPLICATE KEY UPDATE to reserve the row atomically. Redis can support short-lived dedupe, but a restart clears state, so it is safer as a cache than as the source of truth.
A common pattern is a processed_webhooks table with columns such as event_id, provider, received_at, processed_at, and status. That gives you a durable audit trail and makes it easier to answer questions like “Was this webhook already handled?” or “Did processing fail after the event was recorded?” Keep a retention policy and cleanup job for old rows; in-memory deduplication fails after crashes, deploys, and node restarts.
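A sketch of that table and its maintenance queries, using SQLite syntax for brevity (column types and the 30-day retention window are illustrative; PostgreSQL or MySQL would use their own timestamp types and scheduling):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("""
    CREATE TABLE processed_webhooks (
        event_id     TEXT PRIMARY KEY,   -- provider event ID, the dedupe key
        provider     TEXT NOT NULL,
        received_at  TEXT NOT NULL DEFAULT CURRENT_TIMESTAMP,
        processed_at TEXT,
        status       TEXT NOT NULL DEFAULT 'received'
    )
""")

def already_handled(event_id: str) -> bool:
    # Answers "Was this webhook already handled?" from the durable record.
    row = db.execute(
        "SELECT 1 FROM processed_webhooks WHERE event_id = ?", (event_id,)
    ).fetchone()
    return row is not None

def cleanup(days: int = 30):
    # Retention job: drop rows older than the provider's retry window.
    db.execute(
        "DELETE FROM processed_webhooks WHERE received_at < datetime('now', ?)",
        (f"-{days} days",),
    )
```

The primary key on event_id doubles as the dedupe guard, and the status column lets you distinguish "recorded but not processed" from "fully processed" during incident review.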
How Do You Prevent Duplicate Database Writes From Webhooks?
Use the database as the lock. Insert the event record first, inside a transaction, and make the event ID unique. If the insert succeeds, continue with the business write. If it fails because the row already exists, stop immediately.
For example, if a Stripe payment webhook creates an order, the order insert should either happen in the same transaction as the dedupe record or be guarded by a second unique constraint on the order reference. That way, even if the webhook is retried, the database rejects the duplicate write instead of letting the application create a second row.
This is where upsert and insert on conflict patterns matter. They let you make the first write atomic and safe under concurrency, which is more reliable than checking for existence in application code before inserting.
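A minimal sketch of that double guard, using SQLite's INSERT OR IGNORE as the stand-in for PostgreSQL's ON CONFLICT DO NOTHING (table and column names are illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE processed_webhooks (event_id TEXT PRIMARY KEY)")
db.execute("CREATE TABLE orders (payment_ref TEXT UNIQUE, amount INTEGER)")

def record_payment(event_id: str, payment_ref: str, amount: int) -> bool:
    with db:  # dedupe record and business write commit (or roll back) together
        cur = db.execute(
            "INSERT OR IGNORE INTO processed_webhooks (event_id) VALUES (?)",
            (event_id,))
        if cur.rowcount == 0:
            return False  # duplicate delivery: the database already holds this event
        # Second guard: the UNIQUE payment_ref rejects a duplicate order even if
        # the same payment somehow arrives under a different event ID.
        db.execute(
            "INSERT OR IGNORE INTO orders (payment_ref, amount) VALUES (?, ?)",
            (payment_ref, amount))
    return True
```

The application never asks "does this row exist?" before writing; it attempts the write and lets the constraint decide, which is what closes the race window.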
How Do You Handle Out-of-Order Webhook Events?
Out-of-order events are common when providers retry, batch, or fan out messages through a message queue. A later event may arrive before an earlier one, especially in distributed systems where multiple workers process related updates independently.
The safest approach is to model the resource as a state machine. For example, an order might move from pending to paid to fulfilled, but not backward. If a fulfilled event arrives before paid, store it as pending reconciliation or ignore it until the prerequisite state exists. This is where a reconciliation job helps: it periodically compares expected state with actual state and repairs gaps.
If the provider includes sequence numbers or timestamps that are meaningful for business ordering, use them carefully. Do not assume timestamps alone are enough, because clock skew and retries can make them misleading.
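The state-machine idea can be sketched as a transition table plus a guard; the states here are the order example from above:

```python
# Allowed forward transitions for an order. Nothing moves backward.
TRANSITIONS = {
    "pending": {"paid"},
    "paid": {"fulfilled"},
    "fulfilled": set(),
}

def apply_event(current: str, target: str) -> tuple:
    if target == current:
        return current, "duplicate"   # replayed event: no-op
    if target in TRANSITIONS.get(current, set()):
        return target, "applied"
    # Prerequisite state missing (e.g. 'fulfilled' arrived before 'paid').
    return current, "deferred"        # park it for the reconciliation job
```

A "deferred" result is what the reconciliation job later retries once the prerequisite state exists, instead of the handler guessing an order from timestamps.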
Example Implementation and Testing
The flow, as a runnable Python sketch (SQLite stands in for the production database, and INSERT OR IGNORE plays the role of PostgreSQL's INSERT ... ON CONFLICT DO NOTHING):

import hmac, hashlib, json, sqlite3

SECRET = b"webhook-secret"  # shared HMAC secret from the provider
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE processed_webhooks (event_id TEXT PRIMARY KEY, provider TEXT, status TEXT)")

def run_business_logic(payload):
    ...  # create the order, enqueue fulfillment, and so on

def handle_webhook(raw_body: bytes, signature: str) -> int:
    expected = hmac.new(SECRET, raw_body, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(expected, signature or ""):
        return 401
    payload = json.loads(raw_body)
    event_id = payload["id"]
    with db:  # one transaction: reserve the event, then do the work
        cur = db.execute(
            "INSERT OR IGNORE INTO processed_webhooks VALUES (?, ?, 'received')",
            (event_id, payload.get("provider")))
        if cur.rowcount == 0:
            return 200  # duplicate delivery: already reserved, just acknowledge
        run_business_logic(payload)
        db.execute("UPDATE processed_webhooks SET status = 'processed' WHERE event_id = ?",
                   (event_id,))
    return 200
In Express.js or Node.js, read the raw body before JSON.parse so the HMAC signature can be verified against the exact bytes the provider signed; see handling webhooks in Express for details. In Python, AWS Lambda, Celery, BullMQ, RabbitMQ, or Kafka workers, keep the same order: verify, reserve, process, acknowledge. A unique constraint makes duplicate requests harmless because the second insert fails or becomes a no-op.
For external APIs, pass an idempotency key to Stripe or check order/email status before calling again. If the webhook triggers a downstream API that is not idempotent, wrap that call in your own dedupe record or state check so retries do not create duplicate side effects.
Test with repeated curl requests, replayed fixtures from webhook testing for developers, and forced 500s to trigger retries. Run concurrency tests with parallel deliveries to confirm no duplicate rows, charges, or emails are created. Add tests for duplicate payloads, out-of-order events, and signature failures so your webhook signature verification and dedupe logic are both covered.
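A concurrency test can be sketched in-process: fire parallel "deliveries" of the same event at the dedupe store and assert that only one wins. SQLite stands in for the production database here, with a lock because one SQLite connection is shared across threads; a server database serializes through the unique constraint itself:

```python
import sqlite3
import threading

# check_same_thread=False lets the demo threads share one connection.
db = sqlite3.connect(":memory:", check_same_thread=False)
db.execute("CREATE TABLE processed_webhooks (event_id TEXT PRIMARY KEY)")
lock = threading.Lock()
wins = []

def deliver(event_id: str):
    # Each parallel delivery tries to reserve the same event.
    with lock:
        cur = db.execute(
            "INSERT OR IGNORE INTO processed_webhooks (event_id) VALUES (?)",
            (event_id,))
        if cur.rowcount == 1:
            wins.append(event_id)  # only the first delivery reserves the row

threads = [threading.Thread(target=deliver, args=("evt_42",)) for _ in range(10)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

Ten deliveries, one reservation: the same assertion translates directly into an integration test against your real database and endpoint.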
Best Practices for Webhook Retry Handling
An idempotent webhook handler should verify the HMAC signature before any business logic, then acknowledge the request quickly with an HTTP 2xx response once the event is safely recorded. That keeps untrusted payloads out of your system and reduces retry pressure from providers. If you need deeper guidance on request validation and transport security, see webhook security best practices and webhook retry handling.
Push slow work into a background job or message queue so the webhook endpoint stays fast, but keep the worker idempotent too. A queue reduces timeout risk, not duplicate risk; retries can happen in the worker, in the queue, or after a crash. Use the same event ID or dedupe key at every stage, and be ready for a dead-letter queue when repeated failures need manual review.
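One way to keep the worker idempotent is to check the same dedupe key before running the side effect. A minimal sketch, where the in-memory set and list only stand in for the durable processed_webhooks store and the real side effect (as the pitfalls below note, process memory is not a substitute for durable state):

```python
import queue

jobs = queue.Queue()
done = set()        # stands in for the durable dedupe store shared by workers
emails_sent = []    # stands in for the real side effect

def worker():
    while not jobs.empty():
        event_id = jobs.get()
        if event_id in done:   # same dedupe key the webhook endpoint reserved
            continue           # duplicate job: skip the side effect
        emails_sent.append(event_id)
        done.add(event_id)

# A queue can legitimately hold the same job twice: redelivery, crash, retry.
jobs.put("evt_1"); jobs.put("evt_1"); jobs.put("evt_2")
worker()
```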
Avoid common mistakes that break an idempotent webhook handler:
- Using in-memory dedupe, which disappears on restart.
- Choosing non-unique payload fields like timestamps or status text as keys.
- Checking duplicates after side effects instead of before them.
- Assuming one delivery equals one processing attempt.
Treat observability as part of the design. Use structured logging with event IDs, duplicate counters, alerting on unusual duplicate rates, and tracing that follows a webhook through the queue, worker, and database. Validate your behavior with a webhook testing guide so retries, crashes, and duplicate deliveries are exercised before production.
Can Webhook Handlers Be Idempotent If They Trigger External API Calls?
Yes, but only if the external call is also protected. If the downstream API supports an idempotency key, send one. If it does not, record the intent before the call and make the call only when the stored state says it has not already happened.
For example, if a webhook triggers an email, payment capture, or shipment request, store a local record such as email_sent = true or shipment_requested_at in the same transaction as the webhook dedupe record when possible. If the external system fails after your local write, a retry should see the stored state and avoid sending the request again.
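A sketch of that intent-recording pattern, with SQLite and a stubbed send_email standing in for the real email provider (table and column names are illustrative):

```python
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE webhook_emails "
           "(event_id TEXT PRIMARY KEY, email_sent INTEGER NOT NULL DEFAULT 0)")
calls = []

def send_email(event_id: str):
    calls.append(event_id)  # stands in for the external, non-idempotent API call

def handle(event_id: str):
    # Record intent first; the primary key makes the reservation atomic.
    with db:
        db.execute("INSERT OR IGNORE INTO webhook_emails (event_id) VALUES (?)",
                   (event_id,))
    sent = db.execute("SELECT email_sent FROM webhook_emails WHERE event_id = ?",
                      (event_id,)).fetchone()[0]
    if sent:
        return  # stored state says the email already went out
    send_email(event_id)
    with db:
        db.execute("UPDATE webhook_emails SET email_sent = 1 WHERE event_id = ?",
                   (event_id,))
```

If the process crashes between send_email and the update, a retry may resend once; that is the residual at-least-once window you accept when the external API offers no idempotency key of its own.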
Why Should Webhook Endpoints Acknowledge Quickly?
Webhook providers usually treat a fast HTTP 2xx response as the signal that delivery succeeded. If your endpoint waits on slow database work, external APIs, or long-running business logic, the provider may time out and retry even though your code eventually finishes.
That is why the common pattern is: verify, dedupe, persist, acknowledge, then process asynchronously. Fast acknowledgment reduces duplicate deliveries, lowers provider retry pressure, and keeps your endpoint responsive under load.
What Happens If a Webhook Is Delivered Twice?
If your handler is not idempotent, the second delivery can create duplicate rows, duplicate charges, duplicate emails, or conflicting state. If your handler is idempotent, the second delivery becomes a no-op: it is recognized as already processed, logged for observability, and safely acknowledged.
That difference is why duplicate event detection, unique constraints, and transaction boundaries matter. The goal is not to prevent the provider from retrying; the goal is to make retries harmless.
How Do Database Unique Constraints Help With Webhook Deduplication?
A unique constraint turns deduplication into a database guarantee instead of an application guess. If two workers try to process the same event at the same time, only one insert succeeds. The other gets a conflict, which you can treat as proof that the webhook was already handled.
This is especially useful in PostgreSQL and MySQL, where atomic insert patterns prevent a race condition between “check if processed” and “mark as processed.” A primary key can serve the same role when the event ID is the row identifier. Combined with a database transaction, the unique constraint ensures the dedupe record and the business write stay consistent.
Final Checklist
Before shipping a webhook integration, confirm that you have:
- Verified the request with HMAC signature verification.
- Stored a durable dedupe record using an event ID or idempotency key.
- Used a unique constraint or primary key to prevent duplicate writes.
- Returned HTTP 2xx responses quickly.
- Moved slow work to a message queue or background job.
- Added observability with structured logging and monitoring.
- Tested duplicate deliveries, retries, crashes, and out-of-order events.
- Planned for a dead-letter queue and a reconciliation job.
That combination is what makes an idempotent webhook handler reliable in real distributed systems.