Modular Monoliths in Practice
The Monolith Is Not the Problem
The complaints about monoliths are almost always complaints about undisciplined monoliths — codebases where every module imports every other module, where database tables are accessed from anywhere, where the "user" module has a circular dependency with the "billing" module and nobody is sure how it got there.
A disciplined monolith is a different thing entirely. Strict module boundaries, no cross-module database access, explicit public APIs between modules, and a test suite that verifies the boundaries hold. This is the modular monolith, and it is the architecture that a lot of teams should be running but are not.
This post covers how to enforce boundaries, how to partition builds and tests so boundary violations surface fast, and how to build exit ramps so that extracting a service later is a planned project rather than a panic.
What a Module Boundary Actually Means
A module boundary is not a folder. It is a contract.
When module A wants to use something from module B, it goes through B's public interface. It does not reach into B's internal packages. It does not query B's tables directly. It does not read B's configuration files.
In practice, this means:
```
src/
  billing/
    __init__.py            ← Public API: what other modules may import
    _internal/             ← Private: billing module internals only
      invoice_calculator.py
      payment_processor.py
    models.py              ← Public: shared data types
    service.py             ← Public: the façade other modules call
  orders/
    __init__.py
    service.py
```

```python
# ✓ from billing.service import charge_order
# ✗ from billing._internal.payment_processor import PaymentProcessor
```

The underscore prefix is a convention signal, not a hard enforcement. Hard enforcement requires tooling.
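What goes in the package root is the explicit export list. A sketch of billing/__init__.py (the names mirror the tree above; Invoice is illustrative, not from the post):

```python
# billing/__init__.py: the only import surface other modules may use.
# Everything not re-exported here is private to the billing module.
from billing.service import BillingService, charge_order
from billing.models import ChargeResult, Invoice

__all__ = ["BillingService", "charge_order", "ChargeResult", "Invoice"]
```

Keeping the list short is the point: every name you add here is a name you have to support when another module starts depending on it.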
Enforcing Boundaries With Tooling
Convention breaks down under deadline pressure. Every codebase that relies on "developers will follow the rules" ends up with violations. The fix is to make violations visible automatically.
Option 1: import-linter (Python)
```ini
# .importlinter
[importlinter]
root_package = src

[importlinter:contract:orders-isolation]
name = Orders module does not import billing internals
type = forbidden
source_modules =
    orders
forbidden_modules =
    billing._internal

[importlinter:contract:billing-isolation]
name = Billing module does not import orders internals
type = forbidden
source_modules =
    billing
forbidden_modules =
    orders._internal
```

Run this in CI. A boundary violation fails the build.
Option 2: ArchUnit (Java/Kotlin)
```java
@Test
public void modulesShouldRespectBoundaries() {
    JavaClasses classes = new ClassFileImporter()
        .importPackages("com.example");

    ArchRule rule = noClasses()
        .that().resideInAPackage("..orders..")
        .should().dependOnClassesThat()
        .resideInAPackage("..billing.internal..");

    rule.check(classes);
}
```

Option 3: Go internal packages

Go needs no third-party tool for this one: the compiler itself rejects imports of any internal package from outside its parent's subtree.

```go
// billing/internal/payment_processor.go
package internal

// Enforced by the Go toolchain: this package can be imported only by
// code rooted at billing/. An import from orders/ fails to compile.
```

Pick the approach that fits your ecosystem. The exact tool matters less than having one.
The Shared Database Problem
The hardest boundary to enforce in a monolith is the database boundary.
If the orders module has a foreign key into the billing tables, you have a tight coupling that tooling cannot fully catch. When you try to extract billing into a service later, you discover that the database schema is the real dependency graph — and it is worse than the code dependency graph.
The solution is logical database ownership — even when using a single physical database.
Each module owns a schema prefix. Only the owning module's code issues writes to its schema. Cross-module reads that cannot go through an API (e.g., complex joins) are a yellow flag — document them and plan to eliminate them.
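One cheap way to back this up in CI is a grep-level scan: flag any SQL write in a module's source that targets another module's table prefix. A minimal sketch, assuming tables carry a billing_/orders_ prefix (the ownership map and regex are illustrative, not from the post):

```python
import re

# Hypothetical module -> schema-prefix ownership map; adapt to your codebase.
OWNERSHIP = {
    "billing": "billing_",
    "orders": "orders_",
}

# Matches write statements and captures the table name that follows.
WRITE_PATTERN = re.compile(
    r"\b(INSERT\s+INTO|UPDATE|DELETE\s+FROM)\s+([a-z_]+)", re.IGNORECASE
)

def find_write_violations(module: str, source: str) -> list[str]:
    """Return tables written by `module` that belong to another module's prefix."""
    own_prefix = OWNERSHIP[module]
    violations = []
    for _, table in WRITE_PATTERN.findall(source):
        prefix = next(
            (p for p in OWNERSHIP.values() if table.lower().startswith(p)), None
        )
        if prefix is not None and prefix != own_prefix:
            violations.append(table)
    return violations
```

This will not catch writes built from dynamic strings or issued through an ORM, so treat it as a tripwire, not a proof; the real enforcement is the ownership rule itself.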
Build and Test Partitioning
One of the practical advantages of a modular monolith is that you can partition your test suite along module lines. This makes tests faster and makes it obvious which module owns a failing test.
```
tests/
  billing/
    unit/
    integration/        ← Only tests billing's own tables and external calls
  orders/
    unit/
    integration/
  cross-module/
    e2e/                ← Tests that span modules, run last
```

In your CI pipeline, run module-level tests in parallel, cross-module tests last.
```yaml
# GitHub Actions example
jobs:
  test-billing:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pytest tests/billing/ --cov=src/billing
  test-orders:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pytest tests/orders/ --cov=src/orders
  test-e2e:
    needs: [test-billing, test-orders]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pytest tests/cross-module/
```

The boundary check (import-linter or equivalent) runs in its own job that does not need the database. It is fast and it catches violations before any tests run.
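That job can be as small as installing the linter and running its CLI (a sketch; the job name is arbitrary):

```yaml
boundary-check:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v4
    - run: pip install import-linter
    - run: lint-imports   # reads .importlinter, exits non-zero on violation
```

Because it needs no database, no fixtures, and no services, it typically finishes in seconds and can gate everything else.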
Real-World Example: Shopify's Modular Monolith
Shopify ran a Rails monolith for years. By 2020, it had grown to a point where engineers were stepping on each other and the test suite was slow. Their solution was not to rewrite as microservices — it was to introduce component boundaries inside the monolith.
They introduced the packwerk gem to enforce package boundaries in Ruby. A package defines its public interface. Importing a private constant from another package fails the build. The monolith kept running as one deployment, but engineers worked in bounded namespaces.
The result: faster tests (each package could be tested in isolation), clearer ownership, and a path to extract services for components that genuinely needed independent scaling.
The key insight from Shopify's experience: the discipline of module boundaries made the monolith more maintainable, not a stepping stone to something else. Services were extracted selectively, not systematically.
Exit Ramps: From Module to Service
The exit ramp from a module to a service is a planned migration, not an emergency extraction. When you have genuine module boundaries, the extraction is mechanical:

1. Extract the interface: every caller already goes through the module's public façade.
2. Stand up the new service behind that same interface, initially against the shared database.
3. Deploy an adapter in the monolith that forwards façade calls to the new service.
4. Migrate the module's data into the service's own datastore.
5. Remove the module's code, and the adapter indirection, from the monolith.

Each phase is independently deployable. You are never in a state where you have half-extracted a module and the system is broken.
The critical constraint: do not start Phase 4 (data migration) until Phase 3 has been stable in production for at least two weeks. You want to know the HTTP interface is correct before you make the data migration irreversible.
```python
# Phase 3 pattern: adapter in the monolith that calls the new service.
# The rest of the monolith code sees no change.

class BillingService:
    """
    Previously: direct call to billing module internals
    Phase 3: adapter that calls the extracted billing service
    Phase 5: remove this file
    """

    def __init__(self, http_client: HttpClient):
        self._client = http_client
        self._base_url = settings.BILLING_SERVICE_URL

    def charge_order(self, order_id: str, amount_cents: int) -> ChargeResult:
        response = self._client.post(
            f"{self._base_url}/v1/charges",
            json={"order_id": order_id, "amount_cents": amount_cents},
            timeout=5.0,
        )
        response.raise_for_status()
        return ChargeResult(**response.json())
```

The adapter pattern lets the monolith call the service with no change to call sites. When you extract the service, the import changes — the interface does not.
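To keep Phase 3 reversible, both the in-process module and the HTTP adapter can sit behind the same interface, selected by a flag. A sketch with stubbed implementations (the BillingPort protocol, class names, and flag are illustrative, not from the post):

```python
from dataclasses import dataclass
from typing import Protocol


@dataclass
class ChargeResult:
    charge_id: str
    status: str


class BillingPort(Protocol):
    def charge_order(self, order_id: str, amount_cents: int) -> ChargeResult: ...


class InProcessBilling:
    """Pre-extraction: calls the billing module directly (stubbed here)."""

    def charge_order(self, order_id: str, amount_cents: int) -> ChargeResult:
        return ChargeResult(charge_id=f"local-{order_id}", status="charged")


class HttpBillingAdapter:
    """Phase 3: forwards to the extracted service (HTTP call stubbed here)."""

    def charge_order(self, order_id: str, amount_cents: int) -> ChargeResult:
        return ChargeResult(charge_id=f"remote-{order_id}", status="charged")


def billing_service(use_extracted_service: bool) -> BillingPort:
    # One flag flip per environment; rolling back is instant.
    return HttpBillingAdapter() if use_extracted_service else InProcessBilling()
```

Because every call site depends only on the interface, the rollback path is a config change, not a deploy.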
When Module Boundaries Are Not Enough
Module boundaries inside a monolith solve coupling. They do not solve:
- Independent scaling: If the billing module needs 10x more CPU than the rest of the system, you still scale the whole monolith.
- Fault isolation: A memory leak in the billing module can crash the orders module.
- Technology diversity: If billing needs a different language runtime, you need a separate process.
These are the legitimate reasons to extract a service. If none of them apply to a given module, keep it in the monolith. The boundary enforcement you already have is sufficient.
A useful heuristic: treat each module as a service candidate, not as a service. The candidate becomes a service only when it has a concrete, documented reason that cannot be solved inside the process boundary.
Getting Teams to Care About Boundaries
The hardest part of a modular monolith is not the tooling. It is getting engineers to treat the boundary as a real constraint rather than an inconvenient guideline.
A few things that work:
Make violations visible in code review. The CI check that catches boundary violations should post a comment on the PR explaining which rule was violated and why it exists.
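As a sketch, a GitHub Actions job could capture the linter output and post it back to the PR with the gh CLI (job name and paths are illustrative; your CI system will differ):

```yaml
boundary-check:
  runs-on: ubuntu-latest
  permissions:
    pull-requests: write    # needed for gh pr comment
  steps:
    - uses: actions/checkout@v4
    - run: pip install import-linter
    - name: Check boundaries, comment on failure
      env:
        GH_TOKEN: ${{ github.token }}
      run: |
        if ! lint-imports > lint-output.txt 2>&1; then
          gh pr comment "${{ github.event.pull_request.number }}" \
            --body-file lint-output.txt
          exit 1
        fi
```

The comment should name the violated contract, which import-linter's output already does; the build failure alone tells engineers something broke, the comment tells them why the rule exists.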
Assign module ownership. When a module has a named owner (team or individual), boundary violations become a conversation with someone who cares. Anonymous boundaries get violated without consequence.
Track violations over time. A dashboard that shows boundary violations per module per month makes architectural debt visible to engineering leaders. It creates pressure without requiring constant manual auditing.
Exempt nothing. The hardest boundary violations to fix are the ones that were "temporarily" exempted in a crunch. Exemptions become permanent. Enforce the boundary, accept the short-term pain, and do the refactor properly.
Key Takeaways
- A modular monolith is not an undisciplined monolith with aspirations. It requires enforced boundaries: no cross-module private imports, no cross-module database writes.
- Tooling enforces what convention cannot. Use import-linter, ArchUnit, or packwerk in CI so violations fail the build.
- Database ownership is the hardest boundary. Give each module a schema prefix and treat cross-module reads as technical debt to be documented and eliminated.
- Partition your test suite by module and run module tests in parallel. It speeds up CI and makes ownership of failures obvious.
- The exit ramp from module to service is a phased migration: extract the interface, deploy behind an adapter, migrate data, remove from monolith. Each phase is independently stable.
- Extract a module to a service only when you have a concrete, documented problem — independent scaling, fault isolation, or technology heterogeneity — that you cannot solve inside the process.