Security Technical Implementation Guides (STIGs) are the DoD's configuration standards for securing IT systems. For Kubernetes clusters in government environments, the DISA Kubernetes STIG defines hundreds of configuration requirements — from container image policies to network policy enforcement to audit logging configuration.
Manual STIG compliance cannot keep pace with modern deployment velocity. The only viable approach is automating the checks and enforcing controls in the CI/CD pipeline so that every deployment is either compliant or blocked.
The DISA Kubernetes STIG: What It Covers
The DISA Kubernetes STIG applies to any Kubernetes cluster processing DoD information at IL2+. Key control areas:
Container image controls:
- KUBER-L2-000010: Images must come from approved sources (DoD Iron Bank or equivalent approved registry)
- KUBER-L2-000020: Container images must not run as root
- KUBER-L2-000030: Images must be free of known critical CVEs
Pod security controls:
- KUBER-L2-000060: Pods must not use hostNetwork, hostPID, or hostIPC
- KUBER-L2-000070: Privileged containers are prohibited (any exception requires explicit justification and a waiver)
- KUBER-L2-000080: Containers must have resource limits defined (CPU and memory)
Network controls:
- KUBER-L2-000110: Network policies must restrict pod-to-pod communication to least privilege
- KUBER-L2-000120: The Kubernetes API server must not be internet-accessible
Audit controls:
- KUBER-L2-000180: Kubernetes API audit logging must be enabled
- KUBER-L2-000190: Audit logs must be sent to a centralized log aggregation system
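Taken together, the pod-level controls above translate into a pod spec along these lines. This is an illustrative sketch: the image path and resource values are examples, not prescribed values.

```yaml
# Illustrative pod satisfying the pod security controls above.
# Image path and resource values are examples, not requirements.
apiVersion: v1
kind: Pod
metadata:
  name: compliant-app
spec:
  hostNetwork: false          # KUBER-L2-000060
  hostPID: false
  hostIPC: false
  containers:
    - name: app
      # Approved source (KUBER-L2-000010); repo path is illustrative
      image: registry1.dso.mil/ironbank/example/app:1.0
      securityContext:
        runAsNonRoot: true    # KUBER-L2-000020
        privileged: false     # KUBER-L2-000070
      resources:              # KUBER-L2-000080
        limits:
          cpu: "500m"
          memory: 256Mi
        requests:
          cpu: "250m"
          memory: 128Mi
```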
Building Compliance into the Pipeline
The goal is to make STIG violations a build failure rather than an audit finding. Three integration points:
1. Image Scanning at Build Time
Every container image gets scanned for CVEs and configuration issues before being pushed to the registry. Trivy is the tool of choice for this — it covers both OS-level packages and application dependencies.
```yaml
# GitLab CI pipeline stage
stig-image-scan:
  stage: security
  image: aquasec/trivy:latest
  script:
    - >
      trivy image
      --severity CRITICAL,HIGH
      --exit-code 1
      --ignore-unfixed
      --format json
      -o trivy-results.json
      $CI_REGISTRY_IMAGE:$CI_COMMIT_SHA
  artifacts:
    reports:
      container_scanning: trivy-results.json
  rules:
    - if: $CI_COMMIT_BRANCH == "main"
    - if: $CI_MERGE_REQUEST_ID
```

--exit-code 1 makes the pipeline fail on critical or high CVEs. --ignore-unfixed reduces noise from vulnerabilities without available patches — a common practical adjustment that security teams approve in advance.
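Individually waived CVEs (as opposed to all unfixed ones) can be excluded via a .trivyignore file in the repository root, which Trivy picks up automatically. The CVE ID below is a placeholder for illustration; entries should map to formally approved waivers.

```
# .trivyignore: security-team-approved exceptions only.
# CVE ID below is a placeholder for illustration.
# Waived: vendor fix pending, risk accepted by AO.
CVE-2023-12345
```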
2. Kubernetes Manifest Validation with OPA/Gatekeeper
Open Policy Agent (OPA) with the Gatekeeper admission controller enforces policy at the Kubernetes API level — rejecting pod manifests that violate STIG controls before they're admitted to the cluster.
Example: Prohibiting root containers
```yaml
# ConstraintTemplate — defines the policy logic
apiVersion: templates.gatekeeper.sh/v1
kind: ConstraintTemplate
metadata:
  name: k8snostigroot
spec:
  crd:
    spec:
      names:
        kind: K8sNoStigRoot
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package k8snostigroot

        violation[{"msg": msg}] {
          container := input.review.object.spec.containers[_]
          # "not ... runAsNonRoot" fires when the field is false OR
          # missing entirely; a plain "!= true" check would silently
          # pass containers that omit securityContext.
          not container.securityContext.runAsNonRoot
          msg := sprintf(
            "KUBER-L2-000020: Container '%v' must set runAsNonRoot: true",
            [container.name]
          )
        }
---
# Constraint — enforces the policy
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sNoStigRoot
metadata:
  name: no-root-containers
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    namespaces: ["production", "staging"]
```

This policy generates a violation message that references the specific STIG control ID — making it immediately clear what the issue is and where to find the remediation guidance.
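A quick way to verify the constraint is to apply an intentionally non-compliant pod and confirm that Gatekeeper denies admission with the KUBER-L2-000020 message. The manifest below is a hypothetical test fixture; the image is arbitrary.

```yaml
# test-pod.yaml — intentionally violates the policy
apiVersion: v1
kind: Pod
metadata:
  name: stig-test-root
  namespace: staging
spec:
  containers:
    - name: app
      image: nginx:1.25        # illustrative image
      securityContext:
        runAsNonRoot: false    # triggers KUBER-L2-000020
# kubectl apply -f test-pod.yaml
# Expected: admission denied with a message citing KUBER-L2-000020
```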
3. Continuous Compliance Scanning with Kube-bench
Kube-bench runs the CIS Kubernetes Benchmark checks against your cluster configuration. While CIS benchmarks don't map 1:1 to DISA STIGs, substantial overlap exists — and kube-bench provides automated, repeatable cluster-level assessment.
```yaml
# CronJob to run kube-bench daily and export to S3
apiVersion: batch/v1
kind: CronJob
metadata:
  name: kube-bench-compliance
spec:
  schedule: "0 0 * * *"
  jobTemplate:
    spec:
      template:
        spec:
          serviceAccountName: kube-bench
          containers:
            - name: kube-bench
              image: aquasec/kube-bench:latest
              command: ["kube-bench", "--json"]
              env:
                - name: AWS_REGION
                  value: us-gov-west-1
          restartPolicy: OnFailure
```

Results export to S3 for long-term retention — continuous monitoring evidence for ATO maintenance.
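The CronJob above runs the scan but does not itself ship the JSON anywhere. One way to close that gap, assuming a custom image that bundles kube-bench with the AWS CLI (the image and bucket names are hypothetical) and a service account with s3:PutObject permission, is a shell wrapper in the container spec:

```yaml
# Sketch: run kube-bench, then copy the JSON report to S3.
# Image, bucket, and prefix are illustrative assumptions.
containers:
  - name: kube-bench
    image: registry.example.mil/security/kube-bench-aws:latest
    command: ["/bin/sh", "-c"]
    args:
      - >-
        kube-bench --json > /tmp/results.json &&
        aws s3 cp /tmp/results.json
        s3://compliance-evidence/kube-bench/report-$(date +%F).json
```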
Audit Logging Configuration
The STIG requires Kubernetes audit logging. The API server audit policy controls what gets logged:
```yaml
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  # Don't log kube-proxy health-check watches — reduces noise.
  # This rule must precede the catch-all Metadata rule below,
  # because the first matching rule wins.
  - level: None
    users: ["system:kube-proxy"]
    verbs: ["watch"]
    resources:
      - group: ""
        resources: ["endpoints", "services"]
  # Log all requests at RequestResponse level for CUI namespaces
  - level: RequestResponse
    namespaces: ["production", "cui-workloads"]
    resources:
      - group: ""
        resources: ["pods", "secrets", "configmaps"]
  # Log metadata only for everything else
  - level: Metadata
```

Audit logs feed to CloudWatch Logs for GovCloud deployments, then to a centralized Security Hub pipeline for cross-system correlation.
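The policy only takes effect when the API server is started with audit flags pointing at it. On a self-managed control plane the relevant kube-apiserver flags look like this (file paths and retention values are illustrative; on managed offerings such as EKS, control plane audit logging is enabled through the provider's cluster logging settings instead):

```
--audit-policy-file=/etc/kubernetes/audit/policy.yaml
--audit-log-path=/var/log/kubernetes/audit/audit.log
--audit-log-maxage=30       # days of retention
--audit-log-maxbackup=10    # rotated files to keep
--audit-log-maxsize=100     # MB before rotation
```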
Network Policy Enforcement
KUBER-L2-000110 requires network policies that restrict pod-to-pod communication to what's explicitly needed. The default posture should be deny-all, with explicit allow rules for required communication paths:
```yaml
# Default deny all ingress for production namespace
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-ingress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Ingress
---
# Allow specific service-to-service communication
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-api-to-database
  namespace: production
spec:
  podSelector:
    matchLabels:
      app: postgres
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: api-server
      ports:
        - protocol: TCP
          port: 5432
```

Network policies are enforced at the CNI level — Calico and Cilium are the most common CNI plugins supporting full NetworkPolicy enforcement in DoD environments.
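Least privilege cuts both ways: a default-deny egress policy is a common companion to the ingress policy above, with a DNS carve-out so pods can still resolve service names. A sketch (the kube-system selector relies on the automatic kubernetes.io/metadata.name label; DNS pod labels vary by distribution):

```yaml
# Default deny all egress, allowing only DNS to kube-system
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: default-deny-egress
  namespace: production
spec:
  podSelector: {}
  policyTypes:
    - Egress
  egress:
    - to:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: kube-system
      ports:
        - protocol: UDP
          port: 53
        - protocol: TCP
          port: 53
```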
Connecting to the ATO Process
STIG compliance automation generates two types of artifacts that matter for ATO:
- Pipeline scan reports — timestamped evidence that every deployment was checked against STIG requirements before going to production
- Continuous monitoring data — ongoing kube-bench and GuardDuty findings demonstrating that controls remain in place after authorization
Both feed the continuous monitoring requirement in NIST RMF Step 6. For more on building the cATO-ready pipeline, see our guides on continuous ATO automation and DevSecOps software factory architecture.
Discuss building your STIG-compliant Kubernetes environment → rutagon.com/contact
Frequently Asked Questions
What is the DISA Kubernetes STIG?
It's the Defense Information Systems Agency's Security Technical Implementation Guide for Kubernetes clusters. It defines configuration requirements that DoD-authorized Kubernetes environments must meet. Available for download on the DISA STIG portal at public.cyber.mil.
Can I use Amazon EKS for a DISA STIG-compliant cluster?
Yes. EKS on AWS GovCloud is used for DoD workloads. You're responsible for configuring the cluster, nodes, and workloads to meet STIG requirements — EKS handles the control plane but worker node configuration, network policies, and admission controls are customer-managed.
What's the difference between CIS Kubernetes Benchmark and DISA STIG?
Both define security hardening standards for Kubernetes, but from different angles. The CIS benchmark is a general industry hardening guide. The DISA STIG is specifically for DoD environments and maps to DoD-specific requirements, regulations, and approval processes. Significant overlap exists — satisfying the DISA STIG typically means satisfying most of the CIS benchmark.
How do I handle STIG waivers for legitimate exceptions?
Some STIG controls may have documented exceptions — privileged containers for a specific system component, for example. Waivers require formal documentation (waiver request, risk acceptance, and AO signature) and should be tracked in your security controls tracking system. OPA/Gatekeeper can be configured to allow specific named exceptions rather than blanket exclusions.
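For instance, a Gatekeeper constraint can scope a waiver narrowly rather than disabling the policy, using the match section's excludedNamespaces field. The namespace and waiver ID below are illustrative:

```yaml
# Scoped exception: the constraint still applies everywhere except
# the one waived namespace; the waiver ID is tracked in an annotation.
apiVersion: constraints.gatekeeper.sh/v1beta1
kind: K8sNoStigRoot
metadata:
  name: no-root-containers
  annotations:
    waiver-reference: "WVR-2024-007"   # illustrative tracking ID
spec:
  match:
    kinds:
      - apiGroups: [""]
        kinds: ["Pod"]
    excludedNamespaces: ["legacy-scanner"]   # waived per AO-signed waiver
```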
Does Istio service mesh help with STIG compliance?
Istio contributes to STIG compliance by enforcing mTLS between all services (network control requirements), generating detailed access logs (audit requirements), and enabling fine-grained traffic policies (least-privilege requirements). It doesn't satisfy all controls, but it significantly reduces the manual configuration burden for network and communication-related STIGs.
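As a concrete example, mesh-wide strict mTLS is a single resource, assuming Istio is installed in the default istio-system root namespace:

```yaml
# Require mTLS for all workload-to-workload traffic in the mesh
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
```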
Discuss your project with Rutagon
Contact Us →