Building a cloud-native application that satisfies FedRAMP, CMMC 2.0, and NIST 800-53 simultaneously sounds like an architect's nightmare. In practice, these frameworks share over 70% of their control requirements. An application designed with security architecture from day one — not bolted on before assessment — satisfies all three without duplicated compliance work.
This is Rutagon's approach to cloud-native government compliance: one architecture, one pipeline, one set of controls that map to multiple frameworks.
Framework Alignment: FedRAMP, CMMC, and NIST 800-53
The three frameworks are not independent:
- NIST SP 800-53 is the foundational control catalog — the source of truth for federal information security
- FedRAMP is NIST 800-53 applied to cloud service providers, with agency-specific overlays and additional continuous monitoring requirements
- CMMC 2.0 Level 2 is NIST SP 800-171 (a subset of 800-53 for protecting CUI) with independent assessment requirements
A cloud-native application built to NIST 800-53 Moderate baseline satisfies the underlying control requirements for all three frameworks. What differs is the assessment process and documentation requirements, not the technical controls.
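Compliance teams often encode this overlap as a machine-readable crosswalk, so a single implemented control can be reported against every framework that requires it. A toy sketch of that idea — the mappings shown are simplified illustrative excerpts, not a complete or authoritative crosswalk:

```python
# Illustrative crosswalk: one implemented control provides evidence for
# multiple frameworks. Entries are simplified examples, not a full mapping.
CROSSWALK = {
    "IA-2(1)": {  # MFA for privileged accounts (NIST 800-53)
        "fedramp_moderate": True,
        "nist_800_171": ["3.5.3"],  # 800-171 multifactor authentication requirement
    },
    "AU-11": {  # Audit record retention
        "fedramp_moderate": True,
        "nist_800_171": ["3.3.1"],
    },
}

def frameworks_satisfied(control_id: str) -> list[str]:
    """Return the frameworks a single implemented control provides evidence for."""
    entry = CROSSWALK.get(control_id, {})
    result = ["NIST 800-53"]  # the source catalog
    if entry.get("fedramp_moderate"):
        result.append("FedRAMP Moderate")
    if entry.get("nist_800_171"):
        result.append("CMMC 2.0 Level 2")
    return result
```

Implement IA-2(1) once, and the same evidence package answers the FedRAMP assessor and the C3PAO.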
Key overlapping control families:
- AC (Access Control): Role-based access, least privilege, separation of duties
- AU (Audit and Accountability): Logging, log protection, audit review
- CM (Configuration Management): Baseline configs, change control, least functionality
- IA (Identification and Authentication): MFA, password policy, service authentication
- SC (System and Communications Protection): Encryption in transit/at rest, boundary protection
- SI (System and Information Integrity): Malware protection, security alerts, input validation
Architecture Decisions for Multi-Framework Compliance
1. Identity and Access — AC and IA Controls
The access control architecture is the most visible element of any ATO assessment. Rutagon's pattern:
```yaml
# Kubernetes RBAC aligned to least privilege (AC-6)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: mission-app-operator
  namespace: production
rules:
- apiGroups: ["apps"]
  resources: ["deployments"]
  verbs: ["get", "list", "watch"]  # Read only — no create/delete
- apiGroups: [""]
  resources: ["pods", "pods/log"]
  verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: mission-app-operator-binding
  namespace: production
subjects:
- kind: ServiceAccount
  name: mission-app-operator-sa
  namespace: production
roleRef:
  kind: Role
  name: mission-app-operator
  apiGroup: rbac.authorization.k8s.io
```

For human access to the application: integrate with a FIPS-compliant identity provider (Cognito in GovCloud, or an external IdP via SAML). Mandate MFA for all users — IA-2(1) (MFA for privileged accounts) and IA-2(2) (MFA for non-privileged accounts) share the same technical implementation.
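On the application side, MFA enforcement can also be verified per request by inspecting the identity token's authentication-method claims. A minimal sketch, assuming an OIDC access token that has already been signature-validated and decoded upstream; the `amr` values follow RFC 8176, and your IdP may emit different ones:

```python
# Reject tokens not issued after a multi-factor login (IA-2(1)/(2)).
# Claim values per RFC 8176 "amr"; confirm what your IdP actually emits.
MFA_METHODS = {"mfa", "otp", "hwk", "swk"}

def assert_mfa(claims: dict) -> None:
    """Raise PermissionError unless the decoded token shows a multi-factor method."""
    amr = set(claims.get("amr", []))
    if not amr & MFA_METHODS:
        raise PermissionError("MFA required: token lacks a multi-factor amr claim")
```

A check like this, called from request middleware, turns the IdP policy into an enforced application invariant rather than a configuration assumption.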
2. Audit Logging — AU Controls
Every control framework mandates audit logging. The engineering question is how to make logging comprehensive, tamper-evident, and exportable:
```python
import structlog
from datetime import datetime, timezone
from flask import Flask, jsonify, request

app = Flask(__name__)

# Structured logging aligned to AU-3 content requirements
log = structlog.get_logger()

def audit_log(
    event_type: str,
    user_id: str,
    resource: str,
    action: str,
    outcome: str,
    source_ip: str | None = None,
):
    """
    Generate an audit log entry satisfying AU-3 content requirements:
    - What event occurred
    - When (UTC timestamp)
    - Where (source IP, resource)
    - Who (user identity)
    - Outcome (success/failure)
    """
    log.info(
        event_type,
        timestamp_utc=datetime.now(timezone.utc).isoformat(),
        user_id=user_id,
        resource=resource,
        action=action,
        outcome=outcome,
        source_ip=source_ip,
        log_type="AUDIT",
    )

# Usage in application code
@app.route('/api/v1/data/<resource_id>', methods=['GET'])
def get_resource(resource_id: str):
    user = get_current_user()
    try:
        data = fetch_resource(resource_id, user)
        audit_log("DATA_ACCESS", user.id, f"resource/{resource_id}", "READ", "SUCCESS", request.remote_addr)
        return jsonify(data)
    except PermissionError:
        audit_log("DATA_ACCESS", user.id, f"resource/{resource_id}", "READ", "DENIED", request.remote_addr)
        return jsonify({"error": "Access denied"}), 403
```

Logs flow to CloudWatch Logs, which feeds the ConMon SIEM. Log retention: 90 days hot, one year cold (S3 Glacier) — satisfying AU-11 (Audit Record Retention) for most baselines.
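Tamper evidence (AU-9) can be layered on before export by hash-chaining entries, so any after-the-fact edit or deletion breaks verification. A minimal sketch — not a production log-integrity mechanism (restricted IAM on CloudWatch Logs typically covers AU-9), but chaining makes offline log copies independently verifiable:

```python
import hashlib
import json

GENESIS = "0" * 64  # fixed starting value for the chain

def chain_entries(entries: list[dict]) -> list[dict]:
    """Append a hash linking each audit entry to its predecessor."""
    prev = GENESIS
    chained = []
    for entry in entries:
        digest = hashlib.sha256(
            (json.dumps(entry, sort_keys=True) + prev).encode()
        ).hexdigest()
        chained.append({**entry, "prev_hash": prev, "entry_hash": digest})
        prev = digest
    return chained

def verify_chain(chained: list[dict]) -> bool:
    """Recompute every link; a modified, reordered, or dropped entry fails."""
    prev = GENESIS
    for entry in chained:
        body = {k: v for k, v in entry.items() if k not in ("prev_hash", "entry_hash")}
        digest = hashlib.sha256(
            (json.dumps(body, sort_keys=True) + prev).encode()
        ).hexdigest()
        if entry["prev_hash"] != prev or entry["entry_hash"] != digest:
            return False
        prev = digest
    return True
```

Periodically anchoring the latest `entry_hash` somewhere the application cannot write (e.g. a separate account) strengthens the guarantee further.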
3. Encryption — SC Controls
In transit: Enforce TLS 1.2 minimum at every service boundary. For internal service-to-service traffic, Istio mTLS (see our service mesh guide). For external HTTPS: CloudFront with a security policy that excludes TLS 1.0 and 1.1.
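The same TLS 1.2 floor applies to outbound calls the application itself makes. In Python that floor can be pinned explicitly rather than trusting library defaults — a sketch, not specific to any particular service:

```python
import ssl

def strict_client_context() -> ssl.SSLContext:
    """TLS client context for SC-8(1): TLS 1.2 minimum, legacy handshakes refused."""
    ctx = ssl.create_default_context()  # certificate verification + hostname checks on
    ctx.minimum_version = ssl.TLSVersion.TLSv1_2  # refuse TLS 1.0/1.1
    return ctx
```

Pass the context to `http.client`, `urllib`, or your HTTP library of choice so every outbound connection inherits the policy.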
At rest: customer-managed KMS key (CMK) encryption for every data store (S3, Aurora, Elasticsearch/OpenSearch, DynamoDB). No default AWS-managed keys.
```hcl
# S3 bucket with CMK encryption — mandatory for IL4+
resource "aws_s3_bucket_server_side_encryption_configuration" "mission_data" {
  bucket = aws_s3_bucket.mission_data.id
  rule {
    apply_server_side_encryption_by_default {
      kms_master_key_id = aws_kms_key.data_cmk.arn
      sse_algorithm     = "aws:kms"
    }
    bucket_key_enabled = true # Reduces KMS API calls
  }
}

# Enforce: deny any request that doesn't use encryption
resource "aws_s3_bucket_policy" "enforce_encryption" {
  bucket = aws_s3_bucket.mission_data.id
  policy = jsonencode({
    Version = "2012-10-17"
    Statement = [
      {
        Sid       = "DenyUnencryptedPuts"
        Effect    = "Deny"
        Principal = "*"
        Action    = "s3:PutObject"
        Resource  = "${aws_s3_bucket.mission_data.arn}/*"
        Condition = {
          StringNotEquals = {
            "s3:x-amz-server-side-encryption-aws-kms-key-id" = aws_kms_key.data_cmk.arn
          }
        }
      }
    ]
  })
}
```

4. Configuration Management — CM Controls
The ATO assessment will verify that baseline configurations are documented and enforced. Use Policy as Code to make this verifiable:
```rego
# OPA (Open Policy Agent) policy: enforce NetworkPolicy and pod baselines
package kubernetes.admission

deny[msg] {
    input.request.kind.kind == "NetworkPolicy"
    not input.request.object.spec.policyTypes
    msg := "NetworkPolicy must specify policyTypes (Ingress and/or Egress)"
}

deny[msg] {
    input.request.kind.kind == "Pod"
    container := input.request.object.spec.containers[_]
    not container.securityContext.runAsNonRoot
    msg := sprintf("Container %v must set runAsNonRoot: true", [container.name])
}
```

OPA admission webhooks enforce baseline configuration at deployment time — if a manifest violates policy, it's rejected before reaching the cluster. This generates CM-2 (Baseline Configuration) and CM-3 (Configuration Change Control) evidence automatically.
ATO Evidence Generation in CI/CD
Every commit to the main branch should generate ATO evidence automatically. The pipeline structure:
```yaml
# GitLab CI: ATO evidence generation pipeline
stages:
  - sast       # SA-11, RA-5 (static analysis)
  - sca        # SI-2 (patching evidence)
  - dast       # RA-5 (dynamic vulnerability scanning)
  - container  # CM-7 (least functionality)
  - deploy     # CM-3 (change control record)
  - conmon     # CA-7 (continuous monitoring)

sast-scan:
  stage: sast
  script:
    - semgrep --config=p/owasp-top-ten --json --output=sast-results.json src/
    - python scripts/parse-sast-findings.py sast-results.json
  artifacts:
    paths: [sast-results.json]
    expire_in: 90 days  # AU-11 retention

container-scan:
  stage: container
  script:
    - trivy image --format json --output container-scan.json $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
    - python scripts/fail-on-critical.py container-scan.json
  artifacts:
    paths: [container-scan.json]
    expire_in: 90 days
```

Artifact retention of 90 days satisfies AU-11. The evidence package — SAST, DAST, container scan, deploy log — feeds directly into ConMon reports without manual compilation. This is the pattern described in application security testing for government systems and the managed DevSecOps pipeline for prime delivery.
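The `scripts/fail-on-critical.py` gate referenced in the pipeline is not shown above; a minimal sketch of what such a script might look like, reading Trivy's JSON report (field names follow Trivy's `Results[].Vulnerabilities[].Severity` schema; verify against your Trivy version):

```python
import json
import sys

def count_critical(report: dict) -> int:
    """Count CRITICAL findings in a Trivy JSON report."""
    total = 0
    for result in report.get("Results") or []:
        for vuln in result.get("Vulnerabilities") or []:
            if vuln.get("Severity") == "CRITICAL":
                total += 1
    return total

if __name__ == "__main__" and len(sys.argv) > 1:
    with open(sys.argv[1]) as f:
        findings = count_critical(json.load(f))
    if findings:
        print(f"{findings} CRITICAL finding(s): failing the pipeline")
        sys.exit(1)  # non-zero exit fails the GitLab job
    print("No CRITICAL findings")
```

Failing the job on CRITICAL findings turns the policy into an enforced gate, and the JSON artifact doubles as the RA-5 scan evidence.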
What Assessors Actually Look For
When a C3PAO or FedRAMP assessor reviews your cloud-native application, they're looking for:
- Boundary definition: Where does your system start and stop? What's in scope for controls?
- Data flow documentation: Where does CUI/sensitive data travel? Is it always encrypted?
- User access evidence: Who has access to what? Is MFA enforced? Is it documented in an access list?
- Change control records: Every production change has a corresponding ticket, approval, and audit trail
- Vulnerability management: Current scan results, open findings, remediation timelines in POA&M
Cloud-native systems have an advantage here: if you've implemented infrastructure as code, automated pipelines, and policy enforcement, the evidence is already in your Git history, CI/CD logs, and Security Hub findings. The challenge is organizing it into the SSP format assessors expect.
Rutagon has supported government program ATO preparation, documentation organization, and technical control implementation. Contact Rutagon to discuss compliance architecture for your program.
Frequently Asked Questions
Can a single cloud-native architecture satisfy both FedRAMP and CMMC?
Yes. FedRAMP uses NIST 800-53 as its control catalog; CMMC Level 2 uses NIST 800-171, which is derived from NIST 800-53 Moderate. An application built to NIST 800-53 Moderate satisfies the underlying technical requirements for both. What differs is the assessment process — FedRAMP uses a 3PAO, CMMC uses a C3PAO — and some documentation requirements. The architecture and code don't need to change.
How long does it take to get an ATO for a cloud-native government application?
Initial ATO (full NIST RMF package) typically takes 12–18 months for a complex system. FedRAMP Authorization to Operate through a 3PAO assessment takes 6–12 months depending on system complexity and readiness. Programs pursuing cATO (continuous ATO) under DoD's DevSecOps Reference Design can deploy to production much earlier through an interim ATO with documented continuous evidence generation. Cloud-native CI/CD pipelines with automated evidence generation significantly accelerate the assessment process.
What's the minimum encryption requirement for government cloud applications?
NIST 800-53 SC-28 requires protecting information at rest. SC-8(1) requires cryptographic protection in transit. For government applications handling CUI: TLS 1.2 minimum in transit (TLS 1.3 preferred), AES-256 at rest with customer-managed KMS keys, and FIPS 140-2 validated modules for all cryptographic operations (native in GovCloud). Weak cipher suites should be disabled at the load balancer or API gateway level.
What's the difference between a System Security Plan and an application security architecture document?
An SSP (System Security Plan) is the formal ATO document — it describes every control in scope, how it's implemented, and links to evidence. It's structured per NIST 800-18 or the FedRAMP SSP template. An application security architecture document is the technical design — what services, what patterns, what encryption. The SSP references the architecture document. Rutagon typically develops both — the architecture document drives technical decisions, and the SSP documents those decisions in the format assessors require.
What cloud controls have the highest ATO assessment failure rate?
Configuration management (CM) and audit logging (AU) are the most common failure areas in government cloud assessments. Specifically: CM failures from undocumented baseline changes and unauthorized configurations; AU failures from incomplete log capture (missing API-level events, missing admin actions) or insufficient retention. Cloud-native architectures with IaC and automated logging address both, but only if configured intentionally — it's possible to have AWS Config enabled but miss logging specific service categories.
Discuss your project with Rutagon
Contact Us →