Event-Driven Architecture

Topology Patterns

Ravinder · 6 min read
Tags: Event-Driven Architecture · Distributed Systems · Architecture · Kafka · Pub/Sub

How your events flow between services is as important as what those events contain. Two systems can share identical event schemas yet have completely different operational characteristics depending on whether events pass through a central broker, fan out via a bus, route across a mesh, or flow point-to-point between services. Picking the wrong topology doesn't break things immediately; it breaks them at scale, or during an incident when you need to understand what is happening.

This post maps the major topologies, their tradeoffs, and the scenarios where each fits.

Pub/Sub

Publish-subscribe is the foundational pattern. Producers publish to a topic. Any subscriber that has registered interest receives a copy. The broker handles delivery.

graph LR
    P1[Order Service] -->|OrderPlaced| T[Topic: order.placed]
    T --> S1[Inventory Service]
    T --> S2[Notification Service]
    T --> S3[Analytics Service]

Characteristics:

  • Producers have no knowledge of subscribers
  • Adding a subscriber requires no change to the producer
  • Delivery semantics (at-least-once, exactly-once) are controlled at the broker level
  • Message retention means late-arriving subscribers can catch up

When to use: fanout scenarios where one domain event drives reactions in multiple bounded contexts. The canonical example is an e-commerce order pipeline: one OrderPlaced event feeds inventory, notifications, analytics, and fraud detection independently.

Watch out for: unbounded fan-out. If a single event fans out to 50 downstream consumers and 8 of those publish further events, you've created an event storm. Trace the downstream amplification before assuming pub/sub scales linearly.
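The fan-out behavior above can be sketched with an in-memory stand-in for the broker. This is purely illustrative (the `Topic` class is hypothetical; a real broker adds network delivery, retention, and delivery guarantees), but it shows the core contract: every registered subscriber receives its own copy, and the producer never learns who they are.

```python
from collections import defaultdict

class Topic:
    """Minimal stand-in for a broker topic: each subscriber gets a copy."""
    def __init__(self):
        self.subscribers = []

    def subscribe(self, handler):
        self.subscribers.append(handler)

    def publish(self, event):
        # The "broker" handles delivery; the producer knows nothing
        # about who is listening.
        for handler in self.subscribers:
            handler(event)

received = defaultdict(list)
order_placed = Topic()
for svc in ("inventory", "notification", "analytics"):
    order_placed.subscribe(lambda e, svc=svc: received[svc].append(e))

order_placed.publish({"type": "OrderPlaced", "orderId": "ord-456"})
# each of the three services now holds its own copy of the event
```

Adding a fourth subscriber is one more `subscribe` call; the producer side is untouched, which is exactly the decoupling pub/sub buys you.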

Event Bus

An event bus is a logical pub/sub layer, usually implemented in-process or within a single deployment boundary. The distinction from a durable broker is that messages are typically not persisted—fire and forget within the process.

graph TD
    subgraph "Application Process"
        A[Order Module] -->|publish| Bus[In-Memory Event Bus]
        Bus --> B[Inventory Module]
        Bus --> C[Audit Module]
        Bus --> D[Cache Invalidation]
    end

Characteristics:

  • Zero network overhead—events are method calls under the hood
  • No message persistence; if the process dies, in-flight events are lost
  • Simple to implement (MediatR, Guava EventBus, etc.)
  • Useful for decoupling modules within a monolith or within a single service

When to use: intra-service decoupling, when you want domain events to flow between modules inside a single service without coupling them via direct method calls. This is particularly useful during strangler-fig migrations: an event bus decouples modules before you split them into services.

Watch out for: leaking this pattern across service boundaries. An in-memory event bus that somehow becomes a distributed event bus is the distributed monolith anti-pattern in disguise.
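A minimal sketch of such a bus, in the MediatR style of dispatching by event type (class and variable names here are illustrative, not any library's API). Note the fire-and-forget publish: nothing is persisted, which is exactly the property that makes this pattern unsafe across service boundaries.

```python
from dataclasses import dataclass

@dataclass
class OrderPlaced:
    order_id: str

class EventBus:
    """In-process bus: dispatch is a plain method call, nothing persisted."""
    def __init__(self):
        self._handlers = {}

    def subscribe(self, event_type, handler):
        self._handlers.setdefault(event_type, []).append(handler)

    def publish(self, event):
        # Fire and forget: if the process dies here, in-flight events are lost.
        for handler in self._handlers.get(type(event), []):
            handler(event)

audit_log = []
reservations = []
bus = EventBus()
bus.subscribe(OrderPlaced, lambda e: audit_log.append(e.order_id))
bus.subscribe(OrderPlaced, lambda e: reservations.append(e.order_id))
bus.publish(OrderPlaced(order_id="ord-456"))
```

The order module publishes a domain event and never imports the audit or inventory modules; that is the whole point.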

Message Broker (Durable)

A durable message broker—Kafka, RabbitMQ, AWS SQS/SNS, Google Pub/Sub—persists messages and provides delivery guarantees. This is the most common topology for production event-driven systems.

graph LR
    subgraph Producers
        A[Payment Service]
        B[Order Service]
    end
    subgraph Broker
        T1[payments.processed]
        T2[orders.placed]
        T3[orders.cancelled]
    end
    subgraph Consumers
        C[Fulfillment Service]
        D[Analytics Pipeline]
        E[Fraud Service]
    end
    A --> T1
    B --> T2
    B --> T3
    T1 --> C
    T2 --> C
    T2 --> D
    T2 --> E
    T1 --> E

Characteristics:

  • Durability: messages survive producer and consumer restarts
  • Replay: consumers can re-read from a past offset (Kafka) or requeue failed messages (RabbitMQ)
  • Consumer groups allow horizontal scaling with message distribution
  • Partitioning enables ordering guarantees within a partition key

Kafka-specific pattern—partitioned topics for ordering:

// Producer sets partition key = customerId
// All events for customer cust-123 land on the same partition
// Consumers see ordered history per customer
 
{
  "topic": "orders",
  "partition_key": "cust-123",
  "value": {
    "type": "OrderPlaced",
    "orderId": "ord-456",
    "customerId": "cust-123"
  }
}
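The per-customer ordering guarantee falls out of deterministic partition assignment: hash the key, take it modulo the partition count, and every event carrying the same key lands on the same partition. A simplified sketch (Kafka's Java client default hashes the key with murmur2; `crc32` here just keeps the sketch in the standard library):

```python
from zlib import crc32

def partition_for(key: str, num_partitions: int) -> int:
    """Deterministic key -> partition mapping, simplified.

    Same key, same partition, every time, so a consumer of that
    partition sees an ordered history per key.
    """
    return crc32(key.encode("utf-8")) % num_partitions

events = [
    {"type": "OrderPlaced",  "orderId": "ord-456", "customerId": "cust-123"},
    {"type": "OrderShipped", "orderId": "ord-456", "customerId": "cust-123"},
]
# both events map to a single partition, so their relative order is preserved
partitions = {partition_for(e["customerId"], 12) for e in events}
```

The flip side is that ordering holds only within a partition: two different customers' events may interleave arbitrarily, which is usually exactly the constraint you want.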

When to use: cross-service communication where you need durability, replay, and consumer group scaling. The default choice for production EDA systems with multiple teams owning different consumers.

Watch out for: treating the broker as a database. Infinite retention in Kafka sounds appealing until your storage costs are ten times your compute costs and compaction runs affect latency.

Event-Driven Mesh

An event mesh is a distributed event routing layer—multiple brokers federated together, with events routed across regions, cloud providers, or organizational boundaries. Products like Solace, AWS EventBridge, and Red Hat AMQ support mesh topologies.

graph LR
    subgraph "Region: US-East"
        P1[Order Service] --> B1[Broker US-East]
    end
    subgraph "Region: EU-West"
        B2[Broker EU-West] --> C1[Fulfillment EU]
    end
    subgraph "Region: APAC"
        B3[Broker APAC] --> C2[Analytics APAC]
    end
    B1 <-->|federation| B2
    B1 <-->|federation| B3
    B2 <-->|federation| B3

Characteristics:

  • Events flow across geographic and organizational boundaries without producers knowing consumer locations
  • Routing rules filter which events replicate across links
  • Supports hybrid cloud and multi-cloud topologies
  • Higher operational complexity—you're managing multiple brokers and their interconnects

When to use: global systems where teams in different regions need to react to the same events, or when you need events to cross organizational trust boundaries (partner integrations, B2B event sharing).

Watch out for: latency amplification. An event that crosses three brokers before reaching its consumer has accumulated three broker-induced delays plus network transit. Design for this explicitly—not every event needs global distribution.
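The routing-rule filtering mentioned in the characteristics can be sketched as predicates on federation links. This is a toy model (real products such as EventBridge express rules as declarative event patterns, and the region names and rules here are invented for illustration):

```python
# Each federation link carries a predicate deciding whether an event
# replicates across it. Events that match no link stay local.
ROUTING_RULES = {
    # order events replicate to EU; only analytics-flagged events go to APAC
    ("us-east", "eu-west"): lambda e: e["type"].startswith("Order"),
    ("us-east", "apac"):    lambda e: e.get("analytics", False),
}

def replication_targets(event, source_region, links):
    """Regions this event should be forwarded to from source_region."""
    return [dst for (src, dst), allow in links.items()
            if src == source_region and allow(event)]

targets = replication_targets({"type": "OrderPlaced"}, "us-east", ROUTING_RULES)
# -> only eu-west; the event is not analytics-flagged, so APAC is skipped
```

Filtering at the link is what keeps a mesh from degenerating into "replicate everything everywhere," which is both the cost and the latency trap the warning above describes.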

Comparison Matrix

Property              In-Process Bus   Pub/Sub        Durable Broker   Event Mesh
Persistence           No               Varies         Yes              Yes
Replay                No               Sometimes      Yes              Yes
Cross-service         No               Yes            Yes              Yes
Cross-region          No               No             Complex          Yes
Operational overhead  Minimal          Low            Medium           High
Latency               Microseconds     Milliseconds   Milliseconds     Tens of ms

Choosing Your Topology

Start with the simplest topology that satisfies your durability and delivery requirements.

  • Modules within a single service: in-process event bus.
  • Cross-service within one datacenter/region: durable broker (Kafka or managed pub/sub).
  • Cross-region or cross-org: evaluate whether event mesh or a simpler replication approach (topic mirroring) solves the problem first.

The upgrade path is not symmetric: moving from a durable broker to a mesh is additive, but moving from an in-process bus to a durable broker requires retrofitting durability guarantees into every subscriber.
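The guidance above reduces to two scoping questions. As a toy sketch (purely illustrative, not a substitute for weighing delivery and durability requirements):

```python
def choose_topology(cross_service: bool, cross_region: bool) -> str:
    """Simplest topology satisfying the scoping requirements above."""
    if not cross_service:
        return "in-process event bus"
    if cross_region:
        return "topic mirroring, or an event mesh if routing rules are needed"
    return "durable broker"
```

Answering "no" to both questions and still reaching for Kafka is how teams end up operating a cluster to pass messages between two modules of one service.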

Key Takeaways

  • Pub/sub is the foundational pattern—producers publish to topics, consumers subscribe without producer awareness.
  • In-process event buses are powerful for intra-service decoupling and strangler-fig migrations but must not cross service boundaries.
  • Durable brokers like Kafka add persistence, replay, and consumer group scaling—they are the production default for cross-service EDA.
  • Event meshes solve cross-region and cross-org routing but carry significant operational overhead; evaluate simpler topic mirroring first.
  • Partition keys on durable brokers provide per-entity ordering without global ordering constraints.
  • The topology you choose determines your replay, debugging, and failure isolation capabilities—choose before you build consumers.