
Idempotency, End to End

Ravinder · 6 min read
Event-Driven Architecture · Distributed Systems · Architecture · Idempotency · Reliability

At-least-once delivery is the default guarantee of most production event brokers. Exactly-once delivery exists in Kafka with careful configuration, but it applies only within a Kafka-to-Kafka pipeline—the moment your consumer writes to a database or calls an external API, you are back to at-least-once. The correct response to this reality is not to demand stronger broker guarantees. It is to make your consumers idempotent.

An idempotent consumer is one where processing the same event twice produces the same result as processing it once. This is not the same as detecting and skipping duplicates—skipping is one implementation strategy. True idempotency means the operation itself is safe to repeat.

The Anatomy of a Duplicate

Duplicates arise from three sources:

Broker redelivery: a consumer processes an event but crashes before committing the offset. On restart, the broker redelivers from the last committed offset.

Producer retry: a producer times out waiting for the broker's acknowledgment and retries, but the broker had received the original. Both copies are published.

Replay: deliberate replaying of historical events (covered in Post 6) re-delivers every event in the window.

sequenceDiagram
    participant P as Producer
    participant B as Kafka Broker
    participant C as Consumer
    participant DB as Database
    P->>B: Publish OrderPlaced (attempt 1)
    B->>B: Persisted
    B--xP: ACK lost in network
    P->>B: Publish OrderPlaced (attempt 2 - retry)
    B->>B: Persisted (duplicate)
    B-->>P: ACK
    C->>B: Fetch messages
    B-->>C: OrderPlaced (copy 1)
    C->>DB: Write reservation
    C->>B: Fetch next
    B-->>C: OrderPlaced (copy 2)
    C->>DB: Write reservation (DUPLICATE!)
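The broker-redelivery case falls out of the standard at-least-once consumer loop: the offset is committed only after processing, so a crash between the two redelivers the event on restart. A minimal sketch with confluent_kafka — the group id, topic, and handle_event are illustrative:

import json
from confluent_kafka import Consumer

consumer = Consumer({
    "bootstrap.servers": "kafka:9092",
    "group.id": "inventory-service",   # illustrative
    "enable.auto.commit": False,       # commit manually, after processing
    "auto.offset.reset": "earliest",
})
consumer.subscribe(["orders"])         # illustrative topic

while True:
    msg = consumer.poll(1.0)
    if msg is None or msg.error():
        continue
    handle_event(json.loads(msg.value()))  # a crash here => offset not committed
    consumer.commit(message=msg)           # restart redelivers from last commit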

Idempotency Key Design

The foundation of deduplication is a stable, unique key per event occurrence. This is not the same as the aggregate ID. Two OrderPlaced events for different orders have different IDs. One OrderPlaced event delivered twice has the same ID.

The event envelope should always carry an id field generated by the producer:

{
  "id": "evt-a1b2c3d4-e5f6-7890-abcd-ef1234567890",
  "type": "OrderPlaced",
  "version": "1",
  "source": "order-service",
  "occurredAt": "2025-10-13T10:00:00Z",
  "data": {
    "orderId": "ord-123",
    "customerId": "cust-456",
    "totalAmount": 299.99
  }
}

The id must be:

  • Globally unique across all event types (use UUID v4 or UUID v7)
  • Generated once at the producer, stable across retries (see the sketch after this list)
  • Present in every event, not optional
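A sketch of producer-side envelope construction under those constraints — the id is generated once, before the first publish attempt, so every retry carries the same value (build_envelope is a hypothetical helper):

import uuid
from datetime import datetime, timezone

def build_envelope(event_type: str, source: str, data: dict) -> dict:
    # Build the envelope exactly once; reuse it verbatim for any retry.
    return {
        "id": f"evt-{uuid.uuid4()}",   # generated once, stable across retries
        "type": event_type,
        "version": "1",
        "source": source,
        "occurredAt": datetime.now(timezone.utc).isoformat(),
        "data": data,
    }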

For Kafka, use the idempotent producer configuration to prevent broker-side duplicates from producer retries:

from confluent_kafka import Producer
 
producer = Producer({
    "bootstrap.servers": "kafka:9092",
    "enable.idempotence": True,       # Exactly-once within the producer session
    "acks": "all",                    # Wait for all replicas
    "retries": 2147483647,            # Retry indefinitely
    "max.in.flight.requests.per.connection": 5
})
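Publishing with this configuration might look as follows (the topic name is illustrative); with idempotence enabled, the client attaches sequence numbers to each batch so the broker can discard retried duplicates:

import json

envelope = build_envelope("OrderPlaced", "order-service",
                          {"orderId": "ord-123", "customerId": "cust-456"})
producer.produce(
    "orders",
    key=envelope["data"]["orderId"],  # partition by aggregate for per-order ordering
    value=json.dumps(envelope),
)
producer.flush()  # block until delivery reports arrive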

The Idempotency Store

Consumer-side deduplication requires a store that tracks processed event IDs. The store must be checked and written atomically with the business operation.

Database-backed store (PostgreSQL):

CREATE TABLE processed_events (
    event_id     UUID        PRIMARY KEY,
    event_type   VARCHAR(255) NOT NULL,
    consumer_id  VARCHAR(255) NOT NULL,  -- Which consumer processed it
    processed_at TIMESTAMPTZ DEFAULT NOW()
);
 
CREATE INDEX idx_processed_events_cleanup
    ON processed_events (processed_at);

# `db`, `metrics`, and CONSUMER_ID are assumed module-level: a database
# client exposing transactions, a metrics client, and this consumer's
# stable name.
def handle_event(event: dict) -> None:
    event_id = event["id"]
    
    with db.transaction():
        # Atomically claim the event: the INSERT succeeds exactly once;
        # a conflict means another delivery already claimed it.
        result = db.execute("""
            INSERT INTO processed_events (event_id, event_type, consumer_id)
            VALUES (%s, %s, %s)
            ON CONFLICT (event_id) DO NOTHING
            RETURNING event_id
        """, [event_id, event["type"], CONSUMER_ID])
        
        if result.rowcount == 0:
            metrics.increment("events.duplicates_skipped", tags={"type": event["type"]})
            return  # Already processed
        
        # Business logic runs atomically with the idempotency record: if it
        # raises, the transaction rolls back and the claim is released.
        process_business_logic(event)

Redis-backed store (for stateless consumers or when the business operation is idempotent by nature):

import redis
 
r = redis.Redis(host="redis", port=6379)
DEDUP_TTL = 7 * 24 * 3600  # 7 days
 
def handle_event(event: dict) -> None:
    event_id = event["id"]
    key = f"processed:{event_id}"
    
    # SET NX: only set if not exists, atomic
    was_new = r.set(key, "1", ex=DEDUP_TTL, nx=True)
    
    if not was_new:
        return  # Duplicate
    
    process_business_logic(event)

The Redis approach is non-atomic with the business operation—if the business operation fails after the Redis set, the event is marked processed but the business effect was not applied. Use database-backed stores when atomicity is required.
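If re-processing is more tolerable than a lost effect, the order can be inverted — mark the key only after the business logic succeeds. A sketch under the same assumptions (handle_event_mark_after is a hypothetical variant); the race window means the operation itself must tolerate the occasional duplicate:

def handle_event_mark_after(event: dict) -> None:
    key = f"processed:{event['id']}"
    if r.exists(key):
        return  # Seen and completed before
    process_business_logic(event)  # may run twice if we crash before the SET
    r.set(key, "1", ex=DEDUP_TTL)  # mark only after success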

Idempotent by Design vs Deduplication

Some operations are naturally idempotent—no deduplication needed.

# Idempotent: setting a value to a specific state
def handle_order_status_update(event: dict) -> None:
    db.execute("""
        UPDATE orders SET status = %s, updated_at = %s
        WHERE order_id = %s
    """, [event["data"]["status"], event["occurredAt"], event["data"]["orderId"]])
    # Running this twice with the same event produces the same final state.
 
# NOT idempotent: incrementing a counter
def handle_order_placed(event: dict) -> None:
    db.execute("""
        UPDATE inventory SET reserved_quantity = reserved_quantity + %s
        WHERE sku = %s
    """, [event["data"]["quantity"], event["data"]["sku"]])
    # Running this twice with the same event over-counts.

Design operations to be idempotent where possible—upserts over inserts, set-state over increment, conditional updates over unconditional ones. Reserve deduplication stores for operations that cannot be made naturally idempotent.
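The counter example above can often be restructured along these lines — record each reservation as its own row keyed by the order, so a replayed event inserts nothing (the inventory_reservations table and its key are illustrative); reserved totals are then derived with SUM or maintained by trigger:

def handle_order_placed(event: dict) -> None:
    # One row per reservation, keyed by order: a duplicate event hits the
    # conflict and changes nothing.
    db.execute("""
        INSERT INTO inventory_reservations (order_id, sku, quantity)
        VALUES (%s, %s, %s)
        ON CONFLICT (order_id) DO NOTHING
    """, [event["data"]["orderId"], event["data"]["sku"], event["data"]["quantity"]])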

Composite Keys and Partial Processing

For multi-step operations, a single event may trigger multiple distinct business effects. Each step needs its own idempotency key:

def handle_order_placed(event: dict) -> None:
    event_id = event["id"]
    
    # Each step has a unique key combining event ID + step name
    if not already_done(f"{event_id}:reserve-inventory"):
        reserve_inventory(event)
        mark_done(f"{event_id}:reserve-inventory")
    
    if not already_done(f"{event_id}:update-analytics"):
        update_analytics(event)
        mark_done(f"{event_id}:update-analytics")
    
    if not already_done(f"{event_id}:send-confirmation"):
        send_confirmation_email(event)
        mark_done(f"{event_id}:send-confirmation")

This pattern handles partial processing failures: if the consumer crashes after reserving inventory but before sending the confirmation, replay will skip the completed steps and only execute the pending ones.
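The already_done and mark_done helpers are left undefined above; a minimal sketch backed by a dedicated table might look like this (step keys are strings rather than UUIDs, so they get their own table):

CREATE TABLE processed_steps (
    step_key     TEXT        PRIMARY KEY,
    processed_at TIMESTAMPTZ DEFAULT NOW()
);

def already_done(step_key: str) -> bool:
    row = db.execute(
        "SELECT 1 FROM processed_steps WHERE step_key = %s", [step_key]
    ).fetchone()
    return row is not None

def mark_done(step_key: str) -> None:
    # Where possible, commit this in the same transaction as the step's own
    # writes, so "done" and the side effect cannot diverge.
    db.execute("""
        INSERT INTO processed_steps (step_key) VALUES (%s)
        ON CONFLICT (step_key) DO NOTHING
    """, [step_key])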

Retention Policy

Idempotency stores must be cleaned up or they grow unbounded. Retention should match or exceed the broker's message retention:

graph LR
    A[Kafka retention: 30 days] --> B[Events replayable for 30 days]
    B --> C[Idempotency store retention: 35 days]
    C --> D[Cleanup job runs daily]
    D --> E["DELETE WHERE processed_at < NOW() - 35 days"]

If your idempotency store retains records for 7 days but your broker retains events for 30 days, a 15-day-old replay will re-process events that the store has already cleaned up. Keep idempotency records slightly longer than broker retention.
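The cleanup itself can be a single scheduled statement; the 35-day window below assumes the 30-day broker retention from the diagram:

-- Daily cleanup job: drop records older than broker retention plus margin
DELETE FROM processed_events
WHERE processed_at < NOW() - INTERVAL '35 days';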

Key Takeaways

  • At-least-once delivery is the practical reality for any consumer that writes to a database or calls external APIs—design consumers for it, not against it.
  • Idempotency keys must be generated by the producer, stable across retries, and present in every event envelope.
  • Database-backed idempotency stores provide atomicity between the deduplication check and the business operation; Redis stores are faster but require naturally idempotent business logic.
  • Operations that can be made naturally idempotent (upserts, set-state) are preferable to deduplication stores—they remove a failure mode entirely.
  • Use composite keys ({eventId}:{stepName}) for multi-step operations to enable partial processing recovery.
  • Idempotency store retention must exceed broker message retention to handle replay scenarios safely.