Multi-Tenancy
"We use namespaces for isolation" is a sentence I hear frequently. It is also a sentence that reveals a misunderstanding about what namespaces do and do not provide. Namespaces are a naming boundary and an RBAC scope. They are not a security boundary. A misconfigured pod in the finance namespace can — by default — reach pods in the engineering namespace. A container escape vulnerability in any pod threatens the node, and potentially the entire cluster.
Understanding the multi-tenancy model you are actually running, versus the one you think you are running, is the difference between a reasonable risk posture and a false sense of security.
The Isolation Spectrum
The spectrum runs from shared namespaces, through virtual clusters (vcluster), to a dedicated cluster per tenant. Each step to the right adds isolation and adds cost. The right choice depends on who your tenants are (internal teams vs. external customers), your compliance requirements, and how much operational complexity you can absorb.
Namespace-Based Multi-Tenancy: What It Actually Provides
Namespaces provide:
- Name scoping — resource names only need to be unique within a namespace
- RBAC scope — Role bindings are namespace-scoped; ClusterRole bindings are not
- ResourceQuota and LimitRange — per-namespace resource governance
- NetworkPolicy enforcement — when your CNI supports it
Namespaces do NOT provide:
- Node isolation — all namespaces share the same nodes
- API server isolation — all namespaces share the same API server (control plane vulnerability affects everyone)
- Kernel isolation — container escapes can affect neighboring pods regardless of namespace
- Secrets isolation — a ClusterAdmin can read all Secrets in all namespaces
For internal teams with mutual trust (your own engineering teams), namespace-based tenancy is reasonable. For external tenants (customers running their workloads), it is not sufficient.
```yaml
# A minimal namespace setup for an internal team tenant
apiVersion: v1
kind: Namespace
metadata:
  name: team-payments
  labels:
    kubernetes.io/metadata.name: team-payments
    team: payments
    environment: production
---
# ResourceQuota — limits what this namespace can consume
apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-payments-quota
  namespace: team-payments
spec:
  hard:
    requests.cpu: "16"
    requests.memory: 32Gi
    limits.cpu: "32"
    limits.memory: 64Gi
    count/pods: "80"
---
# LimitRange — sets defaults so pods without requests get sane values
apiVersion: v1
kind: LimitRange
metadata:
  name: team-payments-limits
  namespace: team-payments
spec:
  limits:
    - type: Container
      default:
        cpu: 500m
        memory: 512Mi
      defaultRequest:
        cpu: 100m
        memory: 128Mi
      max:
        cpu: "4"
        memory: 8Gi
---
# NetworkPolicy — default deny, then explicitly allow
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny
  namespace: team-payments
spec:
  podSelector: {}
  policyTypes:
    - Ingress
    - Egress
```
RBAC: The Access Control Layer
RBAC is the most important governance layer for namespace tenancy. The pattern that works: grant teams edit-level access to their own namespace, and nothing more.
```yaml
# Role — can deploy and manage workloads, cannot manage RBAC itself
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: team-lead
  namespace: team-payments
rules:
  - apiGroups: ["apps"]
    resources: ["deployments", "statefulsets", "replicasets"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
  - apiGroups: [""]
    resources: ["pods", "services", "configmaps", "events"]
    verbs: ["get", "list", "watch", "create", "update", "patch", "delete"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get", "list", "watch"] # Read only — no create/update
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: payments-team-lead
  namespace: team-payments
subjects:
  - kind: Group
    name: payments-team-leads # Maps to your SSO group
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: team-lead
  apiGroup: rbac.authorization.k8s.io
```
Notice: secrets is read-only. Teams should not be creating Secrets directly in a production cluster — that is how static credentials proliferate. Secrets should come from external-secrets-operator pulling from Secrets Manager, not from engineers running kubectl create secret.
vcluster: Virtual Clusters on Shared Infrastructure
vcluster creates a full Kubernetes control plane — API server, scheduler, controller manager — running as pods inside the host cluster. From the tenant's perspective, they have a complete Kubernetes cluster. From the host cluster's perspective, they have a namespace with some long-running pods.
```shell
# Install vcluster CLI
curl -L -o vcluster \
  "https://github.com/loft-sh/vcluster/releases/latest/download/vcluster-linux-amd64" \
  && chmod +x vcluster && mv vcluster /usr/local/bin

# Create a virtual cluster for team-a
vcluster create team-a \
  --namespace vcluster-team-a \
  --values values.yaml

# Connect to the virtual cluster
vcluster connect team-a --namespace vcluster-team-a
```
```yaml
# values.yaml — vcluster configuration
vcluster:
  image: rancher/k3s:v1.30.0-k3s1
sync:
  ingresses:
    enabled: true # Sync Ingress objects to host cluster's ingress controller
  storageclasses:
    enabled: false # Use host cluster's storage classes
isolation:
  enabled: true
  podSecurityStandard: restricted # Apply PSS to synced pods
  resources:
    limits:
      cpu: "2"
      memory: 4Gi
```
What vcluster provides over namespaces:
- Tenants get their own API server — they can install CRDs without affecting other tenants
- Separate kubeconfig — tenants cannot accidentally kubectl get pods -A and see other tenants
- CRD isolation — Operator CRDs in one vcluster do not pollute the host cluster's API
What vcluster does not provide: node isolation. Workload pods still schedule on the host cluster's nodes.
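When a tenant does need dedicated nodes on shared infrastructure, the standard Kubernetes mechanism is a taint on the tenant's node pool plus a matching toleration and nodeSelector on the workload. A minimal sketch, assuming an illustrative tenant=team-a label and taint (the Deployment name and image are placeholders):

```yaml
# Sketch: pin a tenant's workloads to dedicated nodes.
# Assumes nodes were prepared out of band, e.g.:
#   kubectl label nodes <node> tenant=team-a
#   kubectl taint nodes <node> tenant=team-a:NoSchedule
apiVersion: apps/v1
kind: Deployment
metadata:
  name: team-a-api # illustrative name
  namespace: vcluster-team-a
spec:
  replicas: 2
  selector:
    matchLabels:
      app: team-a-api
  template:
    metadata:
      labels:
        app: team-a-api
    spec:
      nodeSelector:
        tenant: team-a # only schedule on the tenant's nodes
      tolerations:
        - key: tenant # tolerate the taint that keeps other tenants off
          operator: Equal
          value: team-a
          effect: NoSchedule
      containers:
        - name: api
          image: nginx:1.27 # placeholder workload
```

The taint keeps other tenants off the nodes; the nodeSelector keeps this tenant on them. You need both.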
Cluster-per-Tenant: When It Is Justified
Cluster-per-tenant is the right model when:
- Tenants are external customers with regulatory compliance requirements (HIPAA, PCI-DSS, SOC2 Type II)
- Tenants have different Kubernetes version requirements
- Blast radius must be completely contained — one tenant's cluster failure cannot affect another
The operational cost is real. Each cluster needs its own upgrade cycle, addon management, and monitoring stack. The minimum viable approach is to use Cluster API or Terraform modules to make cluster provisioning fully automated, so adding a new tenant is a variable substitution in code, not a manual process.
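With Cluster API, that variable substitution can be as small as a templated Cluster manifest. A sketch, assuming the AWS infrastructure provider; the tenant name, namespace, and CIDR are illustrative:

```yaml
# Sketch: a Cluster API cluster definition for one tenant.
# Adding a tenant means substituting the name and applying this manifest.
apiVersion: cluster.x-k8s.io/v1beta1
kind: Cluster
metadata:
  name: tenant-acme # substitute per tenant
  namespace: tenants
spec:
  clusterNetwork:
    pods:
      cidrBlocks: ["192.168.0.0/16"] # illustrative pod CIDR
  controlPlaneRef:
    apiVersion: controlplane.cluster.x-k8s.io/v1beta1
    kind: KubeadmControlPlane
    name: tenant-acme-control-plane
  infrastructureRef:
    apiVersion: infrastructure.cluster.x-k8s.io/v1beta1
    kind: AWSCluster # assumed provider; swap for your infrastructure
    name: tenant-acme
```

The referenced KubeadmControlPlane and AWSCluster objects follow the same templating pattern and are omitted here for brevity.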
The Admission Control Layer
Regardless of tenancy model, admission control is the enforcement layer that prevents unsafe configurations from reaching nodes.
```yaml
# Kyverno — disallow privileged containers in tenant namespaces
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-privileged
      match:
        any:
          - resources:
              kinds: ["Pod"]
      exclude:
        any:
          - resources:
              namespaces: ["kube-system", "monitoring"]
      validate:
        message: "Privileged containers are not permitted."
        pattern:
          spec:
            containers:
              - =(securityContext):
                  =(privileged): "false"
```
At minimum, enforce: no privileged containers, no hostPath mounts, no hostNetwork, containers run as non-root. These four controls close the most common container escape vectors regardless of your tenancy model.
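The remaining controls follow the same pattern. A sketch of a companion policy covering hostNetwork and hostPath (the policy and rule names are illustrative; requiring non-root containers needs a separate rule with anyPattern across pod- and container-level securityContext, omitted here):

```yaml
# Sketch: companion Kyverno policy mirroring the structure above.
# =() is Kyverno's optional-field anchor; X() is its negation anchor.
apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-host-access # illustrative name
spec:
  validationFailureAction: Enforce
  rules:
    - name: check-host-network-and-path
      match:
        any:
          - resources:
              kinds: ["Pod"]
      exclude:
        any:
          - resources:
              namespaces: ["kube-system", "monitoring"]
      validate:
        message: "hostNetwork and hostPath volumes are not permitted."
        pattern:
          spec:
            =(hostNetwork): "false" # if hostNetwork is set, it must be false
            =(volumes):
              - X(hostPath): "null" # no volume may define hostPath
```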
Key Takeaways
- Namespaces are a naming boundary and RBAC scope, not a security boundary. Treat them as team organization, not isolation.
- For internal teams, namespace-based tenancy with ResourceQuotas, LimitRanges, NetworkPolicies, and tight RBAC is sufficient. For external customers, it is not.
- vcluster provides API isolation and CRD separation on top of namespace isolation without the cost of a dedicated cluster. It is the right middle ground for SaaS platforms serving many tenants.
- Cluster-per-tenant is justified for regulated workloads or when tenants need complete blast radius containment. Automation (Cluster API, Terraform) is required to keep it manageable.
- Secrets should not be created by teams directly in production clusters. Use external-secrets-operator pulling from a secrets manager to prevent static credential proliferation.
- Admission control (Kyverno or OPA Gatekeeper) is the enforcement layer that makes any tenancy model actually safe — block privileged containers, hostPath mounts, and hostNetwork, and require non-root containers, uniformly.