Event Modeling
Bad event names are a slow tax on your entire engineering organization. When an event is called UserDataChanged or ProcessEvent, every consumer has to read the payload to understand what actually happened. When OrderShipped and OrderCancelled are both subsumed into a single OrderStatusUpdated event, consumers filter by a status field instead of subscribing to the signal they care about. The event bus becomes a query API with extra steps.
Good event modeling is a design discipline, not a naming convention exercise. It forces you to think about what facts your system produces and who cares about those facts.
Facts vs Intent
The most important distinction in event modeling is between facts (things that happened) and commands/intent (things you want to happen).
Events record facts. They are immutable and past-tense. Consumers cannot reject an event—it has already happened.
Commands express intent. They are imperative and can fail. The recipient decides whether to execute them.
Mixing the two creates confusion. An event called SendWelcomeEmail is actually a command disguised as an event. When it fails, what do you emit? SendWelcomeEmailFailed? Now you have a command failure modeled as an event. Use past-tense names to enforce the distinction: WelcomeEmailSent is a fact; someone already decided to send it and it succeeded.
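The distinction can be made concrete in code. Below is a minimal sketch (names like `SendWelcomeEmail` and `handle` are illustrative, not a prescribed API): the command handler may reject its input and only emits the past-tense event once the fact is actually true.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

# Command: imperative intent; the handler may refuse to execute it.
@dataclass
class SendWelcomeEmail:
    user_id: str
    email: str

# Event: an immutable past-tense fact; consumers cannot reject it.
@dataclass(frozen=True)
class WelcomeEmailSent:
    user_id: str
    email: str
    occurred_at: str

def handle(cmd: SendWelcomeEmail) -> WelcomeEmailSent:
    """Execute the command; publish the event only after the fact is true."""
    if "@" not in cmd.email:
        # A command can fail -- in that case no event exists at all.
        raise ValueError(f"invalid email: {cmd.email}")
    # ... actually send the email here ...
    return WelcomeEmailSent(
        user_id=cmd.user_id,
        email=cmd.email,
        occurred_at=datetime.now(timezone.utc).isoformat(),
    )
```

Note that the failure path raises rather than emitting a `SendWelcomeEmailFailed` pseudo-event: failure of intent stays on the command side of the boundary.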
Naming Conventions
Pattern: {Aggregate}{PastTenseVerb} — OrderPlaced, PaymentProcessed, InventoryReserved, ShipmentDispatched.
The aggregate name scopes the event to its domain. The past-tense verb captures what happened. Avoid vague verbs: Updated, Changed, Modified, Processed are signals that the modeler hasn't committed to what actually changed.
Be specific:
Bad: UserUpdated
Good: UserEmailChanged
Good: UserAddressUpdated
Good: UserPasswordReset

Each fine-grained event lets consumers subscribe only to the change they care about. A notification service that only cares about email changes does not need to process every user update—it subscribes to UserEmailChanged and ignores the rest.
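A sketch of that subscription pattern, assuming a simple in-process bus (the `subscribe`/`publish` helpers are hypothetical, not a specific broker's API):

```python
from collections import defaultdict
from typing import Callable

# Handlers register for one specific event type, so a consumer
# never sees events it did not ask for.
_handlers: defaultdict[str, list] = defaultdict(list)

def subscribe(event_type: str, handler: Callable[[dict], None]) -> None:
    _handlers[event_type].append(handler)

def publish(event: dict) -> None:
    # Route on the event type alone -- no consumer-side status filtering.
    for handler in _handlers[event["type"]]:
        handler(event)

# The notification service subscribes only to email changes.
received: list[str] = []
subscribe("UserEmailChanged", lambda e: received.append(e["newEmail"]))

publish({"type": "UserAddressUpdated", "userId": "u-1"})  # ignored
publish({"type": "UserEmailChanged", "userId": "u-1", "newEmail": "new@example.com"})
```

Contrast this with a single UserUpdated stream, where every consumer would receive every event and re-implement the same filtering logic internally.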
Granularity
Granularity is the hardest decision in event modeling. Too coarse and consumers over-fetch. Too fine and you create event storms where a single user action produces dozens of events that consumers must correlate.
Coarse-grained (aggregate-level):
```json
{
  "type": "OrderUpdated",
  "orderId": "ord-123",
  "status": "SHIPPED",
  "items": [...],
  "shippingAddress": {...},
  "updatedAt": "2025-09-08T14:00:00Z"
}
```

Consumers get the full aggregate state. Easy to consume, but they can't tell what changed.
Fine-grained (field-level):
```json
{
  "type": "OrderStatusChanged",
  "orderId": "ord-123",
  "previousStatus": "PACKED",
  "newStatus": "SHIPPED",
  "changedAt": "2025-09-08T14:00:00Z"
}
```

Consumers know exactly what changed. But if 10 fields change simultaneously, you emit 10 events that must be consumed in order.
Practical heuristic: align events with business actions, not database mutations. An order being shipped is one business action—one event. The fact that it updates status, shippedAt, trackingNumber, and warehouseId is an implementation detail.
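A sketch of that heuristic in producer code (field names are illustrative, not a fixed schema): one business action mutates several fields but publishes exactly one event.

```python
from datetime import datetime, timezone

def ship_order(order: dict, tracking_number: str, warehouse_id: str) -> dict:
    """Shipping is one business action, so it emits one event,
    even though it touches several fields of the aggregate."""
    shipped_at = datetime.now(timezone.utc).isoformat()
    # Several column-level mutations...
    order.update(
        status="SHIPPED",
        shippedAt=shipped_at,
        trackingNumber=tracking_number,
        warehouseId=warehouse_id,
    )
    # ...but one event per action, not one event per mutated column.
    return {
        "type": "OrderShipped",
        "orderId": order["orderId"],
        "trackingNumber": tracking_number,
        "occurredAt": shipped_at,
    }
```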
What Belongs in the Payload
Three schools of thought:
Event notification only — minimal payload, consumers call back for state.
```json
{
  "type": "OrderShipped",
  "orderId": "ord-123",
  "occurredAt": "2025-09-08T14:00:00Z"
}
```

Keeps events small. Forces consumers to make a synchronous call to get data, creating coupling back to the producer's API.
Full event-carried state transfer (ECST) — complete snapshot in the payload.
```json
{
  "type": "OrderShipped",
  "orderId": "ord-123",
  "trackingNumber": "1Z999AA10123456784",
  "carrier": "UPS",
  "estimatedDelivery": "2025-09-12",
  "items": [
    { "sku": "WIDGET-42", "quantity": 2 }
  ],
  "shippedAt": "2025-09-08T14:00:00Z"
}
```

Consumers are self-sufficient. No callback needed. The downside: large payloads and schema evolution complexity.
Delta events — only what changed, plus enough context.
```json
{
  "type": "OrderShipped",
  "orderId": "ord-123",
  "changes": {
    "trackingNumber": "1Z999AA10123456784",
    "carrier": "UPS",
    "status": "SHIPPED"
  },
  "occurredAt": "2025-09-08T14:00:00Z"
}
```

Practical middle ground. Include the fields that changed and the identifiers needed for context. Avoid including entire nested objects if only one field changed.
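On the consumer side, a delta event is cheap to apply to a locally cached view. A minimal sketch (assuming the consumer keeps its projection as a plain dict):

```python
def apply_delta(projection: dict, event: dict) -> dict:
    """Merge a delta event's changed fields into a locally cached view,
    returning a new dict so the original projection stays untouched."""
    updated = dict(projection)
    updated.update(event["changes"])
    return updated
```

Applied to the OrderShipped delta above, a projection with `"status": "PACKED"` comes back with the shipped status, tracking number, and carrier filled in, with no callback to the producer.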
Versioning from Day One
Every event you publish is a contract with every consumer. Plan for versioning before your first consumer goes to production.
Use an explicit version field or embed version in the event type:
```json
// Version in type name
{ "type": "OrderPlaced.v2", ... }

// Version as field
{ "type": "OrderPlaced", "schemaVersion": 2, ... }
```

The envelope pattern separates routing metadata from business payload:
```json
{
  "id": "evt-789",
  "type": "OrderPlaced",
  "version": "2",
  "source": "order-service",
  "occurredAt": "2025-09-08T14:00:00Z",
  "correlationId": "req-456",
  "data": {
    "orderId": "ord-123",
    "customerId": "cust-789",
    "totalAmount": 149.99
  }
}
```

The envelope fields are stable infrastructure. The data field evolves with the business domain.
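That separation pays off at dispatch time: consumers can route on the stable envelope fields and only hand the evolving `data` payload to a version-specific handler. A sketch (the handler and registry names are illustrative):

```python
def handle_order_placed_v2(data: dict) -> str:
    # Only this version-specific handler reads the business payload.
    return f"placed order {data['orderId']} for {data['customerId']}"

# Routing table keyed on stable envelope fields only.
HANDLERS = {
    ("OrderPlaced", "2"): handle_order_placed_v2,
}

def dispatch(event: dict) -> str:
    """Route on (type, version) from the envelope; never peek into data here."""
    key = (event["type"], event["version"])
    if key not in HANDLERS:
        raise KeyError(f"no handler registered for {key}")
    return HANDLERS[key](event["data"])
```

Adding a v3 schema then means registering one new handler, without touching the dispatch logic or the v2 path.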
Event Topology and Naming Together
Naming and topology reinforce each other: when each event name tells you exactly what happened, consumers can be wired up by name alone, without reading payloads. InventoryInsufficient is a fact—it happened. The downstream reaction (cancel the order? waitlist it?) is a separate concern that belongs to the subscribers, not the producer.
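As a hypothetical illustration, a checkout flow's topology can be read straight off a subscription map (service names here are invented for the sketch):

```python
# Each key is a published fact; the services reacting to it are a
# separate, independently evolvable concern.
SUBSCRIPTIONS = {
    "OrderPlaced": ["payment-service"],
    "PaymentProcessed": ["inventory-service"],
    "InventoryReserved": ["shipment-service"],
    "InventoryInsufficient": ["order-service", "notification-service"],
}

def subscribers(event_type: str) -> list[str]:
    return SUBSCRIPTIONS.get(event_type, [])
```

Because InventoryInsufficient is its own named fact rather than a status value inside a generic InventoryUpdated event, wiring a new reaction to it means adding one entry here, with no changes to any producer.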
Key Takeaways
- Name events as past-tense facts, not commands—OrderShipped, not ShipOrder.
- Align event granularity with business actions, not database columns.
- Include enough payload data for consumers to act without calling back, but don't include entire aggregates when a delta suffices.
- Version events from the start using an envelope pattern—retrofitting versioning is painful.
- Fine-grained events over a single aggregate change require correlation logic; coarse-grained events require consumers to diff state themselves.
- Vague verbs (Updated, Changed, Processed) are a red flag—replace them with specific verbs that describe the business action.