
DoD Software Factory: The DevSecOps Stack

Updated March 2026 · 8 min read

The DoD Software Factory concept emerged from a fundamental problem: defense software development was too slow. Acquisition cycles of 3–5 years, annual releases, and authorization ceremonies that froze systems in amber made it impossible to keep pace with commercial technology or with adversary capability development.

A software factory changes this by providing the tooling, practices, and security infrastructure that let defense programs ship software at the velocity commercial teams achieve — without sacrificing the compliance posture that government programs require.

Here's how Rutagon builds software factories for DoD programs, what the stack looks like, and what "done" means in this context.

What a DoD Software Factory Is (and Isn't)

A software factory is not a particular tool or platform. It's the integration of tools, pipelines, practices, and security controls that enables rapid, repeatable, and compliant software delivery.

The reference implementation in the DoD is Platform One — the Air Force-originated software factory now serving programs across the services. Platform One's Big Bang baseline provides the core software factory capability as a managed service.

Not every program can or should operate on Platform One. Programs in environments where Platform One isn't reachable — certain GovCloud configurations, programs with specific network isolation requirements, or contract vehicles that don't include Platform One access — need to build equivalent capability.

The Rutagon software factory stack delivers that equivalent.

The Software Factory Stack

Layer 1: Source Code Management

The version control platform is the foundation of the factory. For DoD programs, requirements include:

  • Branch protection: Protected branches enforcing peer review before merge
  • Signed commits: GPG commit signing to verify author identity
  • Access control: Role-based repository access aligned to need-to-know
  • Audit logging: All repository activity logged and retained

GitLab Self-Managed on GovCloud is the most common choice for programs with data sensitivity requirements. GitHub Enterprise Government is also used in specific environments.

# GitLab repository protection configuration
protected_branches:
  - name: main
    push_access_level: 40        # Maintainers only
    merge_access_level: 30       # Developers and above
    code_owner_approval_required: true
    required_approval_count: 1
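Code-owner approval depends on a CODEOWNERS file in the repository. A minimal sketch — the group handles below are placeholders for the program's actual GitLab groups:

```
# CODEOWNERS — routes required merge-request approvals to the owning team.
# Group handles are illustrative.
*             @program/dev-leads
/terraform/   @program/platform-team
/charts/      @program/platform-team
```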

Layer 2: Hardened CI/CD Pipeline

The pipeline is where security gates live. Every merge to a protected branch triggers:

SAST (Static Application Security Testing) — GitLab's template is pulled in at the top level of .gitlab-ci.yml (include is a top-level keyword, not a job keyword), with scanner behavior tuned through documented CI variables:

include:
  - template: Security/SAST.gitlab-ci.yml

variables:
  SAST_EXCLUDED_PATHS: "spec, test, tests, tmp"

Dependency scanning, likewise included at the top level (the template's analyzer jobs run in the test stage by default):

include:
  - template: Security/Dependency-Scanning.gitlab-ci.yml

Container scanning (Iron Bank validation):

container-scan:
  stage: security
  image: aquasec/trivy:latest
  script:
    - trivy image --exit-code 1 
      --severity CRITICAL,HIGH 
      --ignore-unfixed 
      "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHA}"

IaC compliance scanning:

iac-compliance:
  stage: security
  script:
    - checkov -d ./terraform --framework terraform
      --check CKV_AWS_* --output junitxml
      --output-file-path checkov-results.xml

All security scan results are artifacts retained in the pipeline history — this is the evidence trail for the cATO posture.
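Retention is declared on each scan job. A sketch of the artifacts stanza, applied here to the container-scan job above and assuming the scan output is also written to a report file (the filename is illustrative):

```yaml
container-scan:
  artifacts:
    when: always            # keep evidence even when the gate fails the build
    paths:
      - trivy-report.json   # illustrative report path
    expire_in: never        # retain as part of the cATO evidence trail
```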

Layer 3: Iron Bank Container Registry

Platform One's Iron Bank is the DoD's hardened container image registry. Iron Bank images are:

  • Based on hardened base images that satisfy DISA STIG requirements
  • Continuously scanned for CVEs
  • Maintained with timely security patches
  • Authorized for use in DoD environments at appropriate impact levels

All containerized services in the software factory use Iron Bank base images. The pipeline enforces this:

# Verify base image comes from Iron Bank
verify-iron-bank:
  stage: validate
  script:
    - |
      BASE_IMAGE=$(grep "^FROM" Dockerfile | head -1 | awk '{print $2}')
      if [[ ! "$BASE_IMAGE" == registry1.dso.mil/* ]]; then
        echo "ERROR: Base image must be from Iron Bank (registry1.dso.mil)"
        exit 1
      fi

For programs operating outside Platform One where Iron Bank access isn't available, Rutagon builds equivalent hardening processes against DISA STIG benchmarks using the DISA STIG Viewer and automated STIG validation tooling.

Layer 4: Kubernetes and Helm Chart Management

Containerized workloads run on managed Kubernetes (EKS in GovCloud). Helm charts define the deployment configuration and are treated as first-class code artifacts:

  • Stored in source control alongside application code
  • Versioned with semantic versioning
  • Tested in non-prod environments before promotion to production
  • Scanned for security misconfigurations (Kubernetes-specific Checkov checks)

# Helm chart security scan in CI
helm-security-scan:
  stage: security
  script:
    - helm template ./chart/ > rendered-manifests.yaml
    - checkov -f rendered-manifests.yaml --framework kubernetes
      --check CKV_K8S_* --output junitxml
      --output-file-path checkov-results.xml

Layer 5: Automated STIG Compliance

STIG (Security Technical Implementation Guide) validation runs against every deployment. The automated STIG scan produces a compliance report that documents:

  • Total applicable STIG controls
  • Controls in compliance, broken out by severity category (CAT I/II/III)
  • Controls with open findings
  • Findings requiring POA&M entries

In a mature software factory, the STIG compliance score for production workloads is tracked over time. Regressions in compliance score fail the deployment pipeline.
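One way to sketch that regression gate as a pipeline step, assuming an earlier STIG scan step has already exported compliant/open control counts (the variable names and values here are illustrative):

```shell
#!/usr/bin/env bash
# Fail the deploy when the STIG compliance score drops below the tracked baseline.
set -euo pipefail

BASELINE=95      # minimum acceptable compliance percentage (illustrative)
COMPLIANT=97     # in a real pipeline these counts come from the scan output
OPEN=3

# Integer percentage of applicable controls with no open finding
SCORE=$(( 100 * COMPLIANT / (COMPLIANT + OPEN) ))

if (( SCORE < BASELINE )); then
  echo "STIG compliance regressed: ${SCORE}% < baseline ${BASELINE}%"
  exit 1
fi
echo "STIG compliance ${SCORE}% meets baseline ${BASELINE}%"
```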

Layer 6: Artifact Registry and Signing

Deployable artifacts (container images, Helm charts, IaC modules) are stored in a tamper-evident artifact registry with:

  • Immutable artifact storage (artifacts cannot be modified after push)
  • Cosign or similar container image signing
  • Software Bill of Materials (SBOM) generation for all deployable artifacts (per CISA guidance and DoD SBOM requirements)
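A publish-stage job covering both signing and SBOM generation might look like this — a sketch assuming syft and cosign are available in the job image and a cosign key is provisioned as a CI variable:

```yaml
sign-and-attest:
  stage: publish
  script:
    # Generate a CycloneDX SBOM for the built image
    - syft "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHA}" -o cyclonedx-json > sbom.json
    # Sign the image so deploy-time policy can verify provenance
    - cosign sign --key "${COSIGN_PRIVATE_KEY}" "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHA}"
    # Attach the SBOM to the image as a signed attestation
    - cosign attest --key "${COSIGN_PRIVATE_KEY}" --type cyclonedx
      --predicate sbom.json "${CI_REGISTRY_IMAGE}:${CI_COMMIT_SHA}"
  artifacts:
    paths:
      - sbom.json
```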

What a Running Software Factory Looks Like

In production, the software factory gives program teams:

  • A commit-to-deploy cycle measured in minutes, not months
  • Automated security compliance reporting without manual evidence collection
  • A defensible cATO posture maintained by the pipeline itself
  • SBOM visibility into every component in production

This is how defense programs keep pace with commercial software delivery velocity while maintaining the compliance posture that government work requires.

See our related work on security compliance in CI/CD, container security in production CI/CD, and Kubernetes-containerization capabilities.

Contact Rutagon to discuss software factory requirements →

Frequently Asked Questions

What is a DoD software factory?

A DoD software factory is an integrated set of tools, pipelines, security controls, and practices that enables defense programs to deliver software continuously at commercial velocity while maintaining compliance with DoD security requirements. The DoD's reference implementation is Platform One, but many programs build equivalent capability using GitLab, Kubernetes, Iron Bank containers, and automated security scanning in their own environments.

What is Iron Bank and why is it required for DoD software?

Iron Bank is Platform One's centralized repository of hardened, DISA-STIG-compliant container base images. Images in Iron Bank are continuously scanned for CVEs, maintained with timely patches, and authorized for use in DoD environments. DoD programs using containerized workloads are strongly encouraged (and in some cases required) to use Iron Bank base images rather than public container registry images, which may contain vulnerabilities or unauthorized software.

What automated security gates should every DoD CI/CD pipeline include?

At minimum, a DoD CI/CD pipeline should include: SAST scanning (static analysis of application source code), dependency scanning (checking for vulnerable third-party components), container scanning (CVE scanning of container images before deploy), IaC compliance scanning (Checkov or OPA enforcing security configuration standards), and DISA STIG automated compliance scanning. All results should be retained as pipeline artifacts for continuous monitoring evidence collection.

How does a software factory support continuous ATO?

A software factory generates the continuous monitoring evidence that cATO requires. Every CI/CD pipeline run produces: security scan results proving no new vulnerabilities were introduced, STIG compliance reports showing the current system posture, and artifact manifests documenting what was deployed. These artifacts, automatically generated and stored, satisfy the continuous monitoring requirements without manual evidence collection ceremonies.

Can a program achieve software factory capability without being on Platform One?

Yes. Platform One provides software factory capability as a managed service, which reduces the program's build and operate burden. Programs not on Platform One can build equivalent capability by deploying GitLab Self-Managed, Kubernetes (EKS on GovCloud), an internal container registry with Iron Bank-equivalent hardening processes, and the security scanning pipeline. The investment is higher upfront, but the operational result is comparable.