Building a multi-tenant SaaS product and pursuing FedRAMP authorization are individually difficult challenges. Combining them — building a multi-tenant SaaS that can serve both commercial and federal customers from a compliant architecture — requires specific architectural decisions from the start that are significantly harder to retrofit later.
This article covers the architecture patterns Rutagon uses when engineering multi-tenant platforms that need to operate within or alongside a FedRAMP authorization boundary.
The Core Tension: Efficiency vs. Isolation
Multi-tenancy is built on shared infrastructure — the economic model depends on one database cluster, one application tier, and one platform serving many customers. FedRAMP (like other FISMA-aligned frameworks) requires isolation of government agency data and strong controls over who can access what.
The tension is not irresolvable, but it requires deliberate architectural choices about where isolation lives in your stack.
Tenant Isolation Models
There are three primary isolation models for multi-tenant systems, each with different FedRAMP implications:
Model 1: Silo (Per-Tenant Infrastructure)
Each tenant gets its own dedicated infrastructure: separate database, separate application instances, separate network segments.
FedRAMP implications: Easiest to authorize — each tenant's system can be treated as a separate authorization boundary. Federal agency data never commingles with other tenant data at the infrastructure layer.
Cost: Highest. Per-tenant infrastructure eliminates most efficiency gains from multi-tenancy.
When to use: Required for tenants with classified or IL5 data; chosen for premium tiers serving large federal agencies willing to pay for full isolation.
Model 2: Bridge (Shared Application, Separate Databases)
All tenants share the application layer (same code, same pods), but each tenant has a dedicated database schema or database instance.
FedRAMP implications: Application layer is shared — all tenants' requests flow through the same application code. Data isolation at the database layer is strong but application layer vulnerabilities (IDOR, access control bugs) affect all tenants. Assessors evaluate the application layer's access control as a key control.
Cost: Medium. Database infrastructure multiplied by tenant count; application infrastructure shared.
When to use: Strong choice for FedRAMP Moderate systems where the application layer can be verified as rigorously access-controlled. Most production multi-tenant platforms Rutagon has shipped use this model.
Model 3: Pool (Shared Everything, Row-Level Security)
All tenants share the database, with a tenant_id column and row-level security policies enforcing that queries only return the requesting tenant's data.
FedRAMP implications: Highest risk for government tenants. A misconfigured query, a missing tenant filter, or a row-level security bug could expose one agency's data to another. Assessors examine the access control implementation closely and typically require proof-of-concept penetration testing targeting tenant isolation.
Cost: Lowest. Fully shared infrastructure scales more efficiently.
When to use: Acceptable for commercial tenants in lower-sensitivity FedRAMP Low systems. Not recommended for a FedRAMP Moderate authorization where data sensitivity is significant.
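In Postgres, the pool model's row-level security is typically keyed to a per-connection session setting that the application sets after authenticating the tenant. A sketch of the statements involved, held as Python strings so it can sit alongside the other examples in this article (table and setting names are illustrative, not a prescribed schema):

```python
# Sketch of Postgres row-level security for the pool model.
# The application sets app.current_tenant per transaction; the policy
# then filters every query against the shared table automatically.

ENABLE_RLS = """
ALTER TABLE documents ENABLE ROW LEVEL SECURITY;
ALTER TABLE documents FORCE ROW LEVEL SECURITY;  -- applies even to table owner
"""

TENANT_POLICY = """
CREATE POLICY tenant_isolation ON documents
    USING (tenant_id::text = current_setting('app.current_tenant'));
"""

# Per-request setup, run inside the transaction after authenticating the
# tenant; set_config(..., true) scopes the setting to the transaction.
SET_TENANT = "SELECT set_config('app.current_tenant', %(tenant_id)s, true);"

# Sanity check: the policy, not application code, carries the tenant filter.
assert "tenant_id" in TENANT_POLICY
assert "app.current_tenant" in SET_TENANT
```

The `FORCE ROW LEVEL SECURITY` line matters for assessors: without it, the table owner bypasses the policy entirely, which is exactly the kind of gap a tenant-isolation penetration test looks for.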
Authorization Boundary Definition
One of the most consequential decisions for a multi-tenant FedRAMP platform is the authorization boundary: what is "in scope" for the ATO package?
Option A: Government-tenant-only boundary The FedRAMP authorization boundary includes only the infrastructure and code paths that handle federal agency tenant data. Commercial tenants run on separate infrastructure outside the boundary. This is cleaner from an authorization standpoint but requires physical or logical separation between the government and commercial infrastructure stacks.
Option B: Platform-wide boundary The entire multi-tenant platform is included in the authorization boundary — all tenants, regardless of government status, operate under FedRAMP controls. This simplifies operations (one platform to maintain) but significantly increases the compliance burden for commercial customers who receive government-grade security controls they may not need.
Most mature FedRAMP multi-tenant platforms implement Option A — separate production infrastructure for government tenants, with shared development tooling but separate deployment pipelines. This pattern is analogous to how AWS GovCloud and commercial regions operate as isolated environments.
Data Partitioning Strategy
Regardless of isolation model, implement explicit data partitioning that is verifiable by an assessor:
```python
# Every database query must be scoped to tenant — enforced at the ORM layer
class TenantScopedQueryMixin:
    """
    Base mixin for all ORM models in the multi-tenant system.
    Automatically applies a tenant_id filter to every query.
    Raises TenantContextError if tenant context is not set on the request.
    """

    @classmethod
    def query(cls, *args, **kwargs):
        tenant_id = get_current_tenant_id()  # From authenticated JWT claim
        if tenant_id is None:
            raise TenantContextError("No tenant context in current request")
        return super().query(*args, **kwargs).filter_by(tenant_id=tenant_id)
```

This pattern makes tenant scoping impossible to forget at the individual query level — developers cannot write a query that bypasses tenant context without explicitly overriding the mixin. Any override must be auditable (logged, reviewed in code review) and rare.
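The helpers the mixin relies on (`get_current_tenant_id`, `TenantContextError`) are not shown in the article; one plausible wiring uses `contextvars`, with request middleware setting the tenant ID after validating the JWT and clearing it when the request ends. A minimal sketch, with illustrative names:

```python
import contextvars

# Hypothetical plumbing behind the mixin above; names are illustrative.
_current_tenant: contextvars.ContextVar = contextvars.ContextVar(
    "current_tenant", default=None
)

class TenantContextError(RuntimeError):
    """Raised when a query runs outside an authenticated tenant context."""

def set_current_tenant_id(tenant_id: str) -> contextvars.Token:
    """Called by request middleware after validating the JWT claim."""
    return _current_tenant.set(tenant_id)

def get_current_tenant_id():
    """Read the tenant bound to the current request context, if any."""
    return _current_tenant.get()

def clear_tenant(token: contextvars.Token) -> None:
    """Called when the request ends, so context never leaks across requests."""
    _current_tenant.reset(token)

# Usage inside a request lifecycle:
token = set_current_tenant_id("tenant-gov-agency-a")
assert get_current_tenant_id() == "tenant-gov-agency-a"
clear_tenant(token)
assert get_current_tenant_id() is None
```

Because `ContextVar` values are isolated per async task and per thread, a concurrent request cannot observe another request's tenant ID, which is the property the mixin's `TenantContextError` check depends on.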
Shared Services with Tenant-Aware Logging
Multi-tenant systems share infrastructure services: logging pipelines, monitoring systems, alerting. These shared services must be implemented with tenant context to avoid cross-tenant information leakage in observability systems:
- Logs: Every log line includes tenant_id in structured fields. Log access controls in CloudWatch or your SIEM restrict log queries to operators with appropriate access
- Metrics: Aggregate metrics (system health, resource utilization) are tenant-agnostic. Tenant-specific metrics (request count, error rate) are tagged with tenant context
- Traces: Distributed traces include tenant context as a span attribute. Trace queries can be filtered by tenant
For FedRAMP systems, the log aggregation system itself must be within the authorization boundary, and access to logs containing government tenant data must be role-controlled with audit logging of log access.
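One way to guarantee the tenant field appears on every log line is a `logging.Filter` that stamps each record from the request's tenant context. A sketch using only the standard library; `get_current_tenant_id` here is a stand-in for however your request middleware exposes tenant context:

```python
import json
import logging

def get_current_tenant_id():
    # Placeholder: in a real service this reads the authenticated request
    # context (e.g. a contextvar populated from the JWT claim).
    return "tenant-gov-agency-a"

class TenantContextFilter(logging.Filter):
    """Attach tenant_id to every record so structured logs always carry it."""
    def filter(self, record: logging.LogRecord) -> bool:
        record.tenant_id = get_current_tenant_id() or "unattributed"
        return True

class JsonFormatter(logging.Formatter):
    """Emit structured fields, including tenant_id, as one JSON object."""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "message": record.getMessage(),
            "tenant_id": getattr(record, "tenant_id", "unattributed"),
        })

handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
handler.addFilter(TenantContextFilter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("document uploaded")
```

Attaching the filter to the handler (rather than relying on developers to pass tenant fields manually) means a log line without tenant attribution cannot be emitted by accident, which is what the assessor will want to see demonstrated.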
Deployment Architecture for Multi-Tenant FedRAMP Systems
On AWS GovCloud, a typical multi-tenant FedRAMP Moderate architecture:
```
VPC (GovCloud)
├── Public subnets: ALB, NAT Gateway
├── Application subnets: EKS nodes (shared application tier)
│   ├── Namespace: tenant-gov-agency-a (dedicated pods for large tenants)
│   └── Namespace: shared-app (shared pods for standard tenants)
├── Data subnets:
│   ├── RDS for gov-agency-a (dedicated DB instance)
│   └── RDS cluster for shared tenants (separate schema per tenant, RLS enforced)
└── Management subnets: Bastion, monitoring agents
```

Network policies in Kubernetes enforce that pods in the tenant-gov-agency-a namespace cannot communicate with pods in the shared-app namespace — microsegmentation at the Kubernetes layer reinforcing the data-layer isolation.
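The microsegmentation described above might be expressed as a default-deny ingress NetworkPolicy in the government tenant's namespace. A sketch (namespace name follows the diagram; everything else is illustrative):

```yaml
# Default-deny ingress for the dedicated government tenant namespace;
# only traffic from pods in the same namespace is allowed, so pods in
# shared-app cannot reach tenant-gov-agency-a workloads.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: isolate-tenant-gov-agency-a
  namespace: tenant-gov-agency-a
spec:
  podSelector: {}           # applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}   # same-namespace pods only
```

A real deployment would add a matching egress policy and an explicit allowance for the ALB's target ports, but the default-deny posture is the control the assessor verifies first.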
CI/CD and Change Management for Multi-Tenant Systems
Changes to the shared application code affect all tenants simultaneously. Multi-tenant platforms require stricter change management than single-tenant systems:
- Feature flags: New features deploy to all tenants behind feature flags; enable for testing tenants first, then progressively roll out
- Schema migrations: Database schema changes must be backward-compatible (old code and new code must work with the same schema simultaneously) to support rolling deployments
- Canary deployments: Route a small percentage of traffic to new application code while monitoring per-tenant error rates; automatic rollback if error rate exceeds threshold for any tenant
- Change notification: Government agency tenants with FedRAMP ATO requirements may need advance notification of changes; this is a contractual and operational requirement, not just a technical one
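The canary rollback rule in the list above (abort if any tenant's error rate crosses a threshold) reduces to a simple evaluation over per-tenant request counters collected during the canary window. A sketch in which the threshold and the counter shape are illustrative:

```python
# Sketch: decide whether a canary deployment should roll back, based on
# per-tenant request/error counters collected during the canary window.
ERROR_RATE_THRESHOLD = 0.02  # illustrative: any tenant above 2% aborts

def should_rollback(canary_stats: dict) -> list:
    """Return tenants whose canary error rate exceeds the threshold.

    canary_stats maps tenant_id -> {"requests": n, "errors": m}.
    A non-empty return value means roll back for everyone, because the
    shared application tier serves all tenants from the same code.
    """
    breaching = []
    for tenant_id, stats in canary_stats.items():
        requests = stats.get("requests", 0)
        if requests == 0:
            continue  # no canary traffic for this tenant yet
        if stats.get("errors", 0) / requests > ERROR_RATE_THRESHOLD:
            breaching.append(tenant_id)
    return sorted(breaching)

stats = {
    "tenant-gov-agency-a": {"requests": 1000, "errors": 30},  # 3%: breaches
    "tenant-commercial-b": {"requests": 5000, "errors": 10},  # 0.2%: healthy
}
assert should_rollback(stats) == ["tenant-gov-agency-a"]
```

Evaluating per tenant rather than in aggregate is the point: a regression confined to one small tenant can disappear inside a platform-wide average while still constituting an isolation or availability incident for that agency.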
For the ATO package, document your change management procedures as part of CM-3 (Configuration Change Control) — assessors need to understand how you ensure changes don't break tenant isolation or introduce security regressions.
This architecture integrates with the FedRAMP cloud architecture patterns and CI/CD approval gates for regulated pipelines we implement across government programs.
Frequently Asked Questions
Can a single FedRAMP authorization cover both commercial and government tenants?
Technically yes, but operationally this is rarely advisable. FedRAMP requires all tenants on the authorized platform to operate under the FedRAMP security controls — this means your commercial customers are under the same audit, monitoring, and control requirements as government agencies. This adds operational cost and potential commercial friction. Most platforms seeking FedRAMP authorization create a separate government-only environment that obtains the authorization, while the commercial environment operates separately.
How do we handle government tenants that require data residency in specific GovCloud regions?
Build region affinity into your tenant provisioning system. When a government tenant is onboarded, the provisioning workflow deploys their dedicated resources (database, object storage, etc.) in the required GovCloud region and configures routing to ensure their data never transits through commercial regions. For EKS clusters, this means tenant-specific Kubernetes namespaces may span dedicated node groups in specific availability zones. Document region affinity in the SSP as part of the SC-28 protection of information at rest and SA-9 external system services controls.
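A provisioning workflow like the one described can encode region affinity as a hard precondition rather than a default, failing loudly instead of falling back to a commercial region. A sketch (the GovCloud region identifiers are real AWS regions; the tenant record shape is illustrative):

```python
# Regions acceptable for government tenant data; both are AWS GovCloud (US).
GOVCLOUD_REGIONS = {"us-gov-west-1", "us-gov-east-1"}

def resolve_tenant_region(tenant: dict) -> str:
    """Return the region to provision this tenant's dedicated resources in.

    Government tenants must land in their required GovCloud region;
    provisioning fails loudly rather than silently using a commercial region.
    """
    required = tenant.get("required_region")
    if tenant.get("is_government"):
        if required not in GOVCLOUD_REGIONS:
            raise ValueError(
                f"Government tenant {tenant['id']} requires a GovCloud "
                f"region, got {required!r}"
            )
        return required
    # Commercial tenants default to the platform's commercial home region.
    return required or "us-east-1"

assert resolve_tenant_region(
    {"id": "agency-a", "is_government": True, "required_region": "us-gov-west-1"}
) == "us-gov-west-1"
```

Making the check a raised error, not a logged warning, gives you an auditable artifact for the SSP: a government tenant physically cannot be provisioned outside its required region by this code path.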
What are the implications for penetration testing of a multi-tenant system?
FedRAMP requires regular penetration testing as part of CA-8 (Penetration Testing). For multi-tenant systems, the penetration test scope should explicitly include tenant isolation — test whether a malicious tenant can access another tenant's data through IDOR vulnerabilities, query manipulation, or token forgery. Government agencies with FedRAMP ATOs typically require their own penetration test results as part of the continuous monitoring program, not just the SaaS provider's platform-level testing.
How do we manage secrets across tenants in a multi-tenant system?
Each tenant should have its own secret namespace in AWS Secrets Manager or Parameter Store. Use IAM policies scoped to the application's service role to restrict secret access — the application can only retrieve secrets for the current tenant context. Never store cross-tenant secrets in shared namespaces. For tenant-specific API keys and credentials, consider envelope encryption where a per-tenant key encrypts tenant secrets, and the master key is managed by a KMS CMK with tenant-specific access policies.
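The per-tenant namespace convention might look like the following sketch, in which the path scheme and the ARN pattern are illustrative conventions, not a prescribed AWS layout (note the `aws-us-gov` partition in the ARN, which is what GovCloud resources use):

```python
def tenant_secret_name(tenant_id: str, secret_key: str) -> str:
    """Build a per-tenant Secrets Manager name, e.g. 'tenants/agency-a/db-password'.

    One prefix per tenant lets IAM policies grant the application's service
    role access to exactly one tenant subtree at a time.
    """
    if "/" in tenant_id:
        raise ValueError("tenant_id must not contain '/'")
    return f"tenants/{tenant_id}/{secret_key}"

def tenant_secret_policy_resource(region: str, account_id: str, tenant_id: str) -> str:
    """IAM Resource ARN pattern restricting a role to one tenant's secrets."""
    return (
        f"arn:aws-us-gov:secretsmanager:{region}:{account_id}:"
        f"secret:tenants/{tenant_id}/*"
    )

assert tenant_secret_name("agency-a", "db-password") == "tenants/agency-a/db-password"
```

Rejecting `/` in the tenant ID is not cosmetic: without it, a crafted tenant identifier could escape its prefix and match another tenant's subtree in the IAM Resource pattern.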
Does FedRAMP have specific requirements for multi-tenant systems that differ from single-tenant?
FedRAMP does not have a separate control baseline for multi-tenant systems — the same NIST 800-53 controls apply. What differs is how those controls are implemented. SC-4 (Information in Shared Resources) specifically addresses multi-tenant isolation. The 3PAO assessor will pay particular attention to how you implement SC-4, SC-7 (boundary protection), and AC-4 (information flow enforcement) in the context of shared infrastructure. The SSP must explicitly describe the tenant isolation architecture and how it satisfies each relevant control.
Rutagon architects FedRAMP-compliant cloud systems for government programs →
Discuss your project with Rutagon
Contact Us →