
Engineering Velocity at a Defense Tech Company

Updated April 2026 · 8 min read

The dominant narrative about government IT is that it moves slowly. Compliance requirements, approval processes, ATO timelines, and bureaucratic overhead conspire to produce systems that are years behind commercial software in capability and quality. This narrative is partially accurate — but it confuses process overhead with engineering capability.

Engineering velocity — the rate at which a team delivers working, reliable software — is not determined by the customer. It's determined by how the team builds.

The Real Source of Slowness

Government IT programs are slow for specific, fixable reasons:

Manual compliance processes: Teams that treat security compliance as a one-time audit event spend months before deployment assembling documentation, running scans, and preparing for assessment. Teams that build compliance evidence into their CI/CD pipeline generate that evidence automatically with every deployment.
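As a sketch of what "compliance evidence in the pipeline" can mean in practice, the fragment below assembles a content-hashed evidence manifest as a deployment step. This is illustrative Python, not Rutagon's actual tooling; the artifact names are hypothetical stand-ins for real scan reports and test results.

```python
import hashlib
import json
from datetime import datetime, timezone

def build_evidence_manifest(artifacts: dict[str, bytes], commit_sha: str) -> dict:
    """Assemble a per-deployment evidence manifest. Each artifact
    (scan report, test results, config snapshot) is content-hashed so
    an assessor can verify nothing was edited after the pipeline ran."""
    return {
        "commit": commit_sha,
        "generated_at": datetime.now(timezone.utc).isoformat(),
        "artifacts": {
            name: hashlib.sha256(body).hexdigest()
            for name, body in sorted(artifacts.items())
        },
    }

# Hypothetical artifacts produced earlier in the same pipeline run.
manifest = build_evidence_manifest(
    {"trivy-scan.json": b"{}", "unit-tests.xml": b"<testsuite/>"},
    commit_sha="abc1234",
)
print(json.dumps(manifest, indent=2))
```

Because the manifest is regenerated on every deployment, "assembling documentation" stops being a pre-assessment project and becomes a pipeline artifact.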

Late-stage quality gates: Programs that save testing and security review for the end of a release cycle discover problems when the cost of fixing them is highest. Programs with automated testing and security scanning at every commit find problems when the cost of fixing them is lowest.

Bespoke everything: Teams that build custom solutions from scratch for each new requirement spend months on infrastructure that should take days. Teams with pre-built, compliance-validated infrastructure modules deploy in days and focus engineering time on the mission-specific problem.

Approval ceremony instead of fast decisions: Large organizational decision-making structures add latency to every technical decision. Small, empowered engineering teams with clear authority over architecture decisions move faster.

None of these are compliance problems. They're engineering and organizational problems that compliance gets blamed for.

What Velocity Looks Like in Practice

At Rutagon, engineering velocity shows up in concrete delivery metrics:

Week 1 → deployed environment: New programs should have a baseline infrastructure environment (VPC, EKS, monitoring, logging, secrets management) deployed to AWS GovCloud within the first week. This is not aspirational — it is the result of pre-built Terraform modules that have already passed compliance review. The team does not reinvent VPC architecture for each engagement.

Every sprint → demonstrable software: Two-week sprints end with a working demonstration of newly implemented capability, not a status report about what was planned. If the sprint produces a new API endpoint, the endpoint exists, is tested, and is accessible to the government product owner at the sprint review. This is the delivery philosophy that Rutagon applies consistently.

Deployments → automated and frequent: Production deployments on active programs happen multiple times per week, not quarterly. Each deployment runs the full CI/CD pipeline: unit tests, integration tests, container scanning, SAST, DAST (where applicable), infrastructure drift detection, and compliance evidence generation. The pipeline takes 15-20 minutes. Deployments are never a high-anxiety event.
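The fail-fast ordering of those pipeline stages can be sketched as a minimal gate runner. This is illustrative Python; the stage names are hypothetical, and a real pipeline would shell out to the actual tools (test runners, Trivy, SAST scanners) rather than call lambdas.

```python
from typing import Callable

def run_pipeline(stages: list[tuple[str, Callable[[], bool]]]) -> tuple[bool, list[str]]:
    """Run stages in order, failing fast: a failed gate stops the
    deployment, and later stages (including deploy) never run."""
    completed: list[str] = []
    for name, gate in stages:
        if not gate():
            return False, completed
        completed.append(name)
    return True, completed

# Simulated run where the container scan fails and blocks the deploy.
ok, ran = run_pipeline([
    ("unit-tests", lambda: True),
    ("container-scan", lambda: False),  # simulated scan failure
    ("deploy", lambda: True),
])
```

The point of the structure is that "security gate" and "deployment blocker" are the same mechanism: nothing reaches production without every earlier stage passing.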

Incidents → detected fast, resolved faster: Monitoring alerts fire within minutes of degradation. On-call response patterns are defined and tested. Root cause analysis happens within 24 hours for significant incidents, with lessons applied before the next sprint closes.

The Role of Pre-Built Infrastructure

The highest-leverage investment a defense technology company can make in engineering velocity is building and maintaining a library of compliance-ready infrastructure modules.

Every government cloud program needs:

  • Multi-account AWS Organizations structure with security baselines
  • VPC architecture with appropriate segmentation
  • EKS cluster with STIG-aligned configuration
  • CI/CD pipeline with security scanning gates
  • Logging and monitoring infrastructure meeting NIST AU controls
  • Secrets management with no long-lived credentials
  • IAM patterns aligned with least privilege

Building these from scratch for each program is wasteful. The architectural decisions don't change materially between programs — only the specifics (account IDs, CIDRs, team names) vary. Terraform modules with well-designed variable interfaces deploy compliant baseline infrastructure in hours, not weeks.
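One way to picture that variable interface: the shared module stays fixed, and each program supplies only its specifics. A minimal sketch in illustrative Python (the variable names are assumptions for illustration, not an actual Rutagon module schema):

```python
import json

def render_tfvars(program: str, account_id: str, vpc_cidr: str, team: str) -> str:
    """Render the program-specific inputs for a shared infrastructure
    module. Everything architectural lives in the module itself; only
    these values change between engagements."""
    return json.dumps(
        {
            "program_name": program,
            "aws_account_id": account_id,
            "vpc_cidr": vpc_cidr,
            "owning_team": team,
        },
        indent=2,
    )

# A new program's entire deviation from the baseline fits in one call.
print(render_tfvars("example-program", "111111111111", "10.20.0.0/16", "platform"))
```

When the per-program surface area is this small, "deploy compliant baseline infrastructure" is a parameter-filling exercise rather than an architecture exercise.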

The compliance investment happens once, at module creation. Every subsequent program that uses the module inherits that compliance work. The SSP description of the infrastructure controls is templated and needs only program-specific details filled in.

This is the infrastructure-as-code philosophy behind our GovCloud Terraform approach, which we cover in detail separately.

Lean Team Architecture

Engineering velocity is also a team design problem. Larger teams are not faster teams. A team of 6 engineers with clear ownership of specific components, automated handoffs between components, and shared access to pre-built tooling outperforms a team of 20 engineers with unclear ownership, manual coordination, and each member reinventing their own tooling.

For defense technology programs, the team structure that maximizes velocity at compliance-appropriate quality is:

2-3 cloud/infrastructure engineers: Own the deployment platform, CI/CD pipeline, and compliance architecture. Responsible for baseline module maintenance and ATO evidence generation.

3-4 application engineers: Own the mission-specific application code. Work within the deployment platform the cloud engineers maintain. Focus entirely on the problem the government is trying to solve.

1 security/compliance lead (often shared across programs): Owns the security assessment process, translates compliance requirements into engineering requirements, coordinates with 3PAOs and government assessors. This role is most effective when compliance work has been automated — the lead validates and guides rather than manually assembling documentation.

This structure, familiar from production SaaS companies, is directly applicable to government programs. The key difference is that the compliance automation infrastructure must be present from day one — not added later.

Velocity and Compliance as Complementary

The defense technology companies that create the most value for government programs are those that have resolved the false tension between velocity and compliance.

Compliance does not slow down engineering. Compliance implemented manually, late in the development cycle, by engineers who don't understand it — that slows down engineering. Compliance embedded in the pipeline, automated in the infrastructure, documented continuously as development proceeds — that costs almost nothing in velocity and prevents the catastrophic slowdown of a failed ATO assessment.

The continuous ATO automation work Rutagon has built generates ATO evidence as a byproduct of normal development. Each deployment produces updated documentation, scan reports, and configuration state. Assessors get current evidence, not documentation assembled under deadline pressure.

From the government's perspective, a small business that demonstrates this capability — shipping reliable software quickly, with visible compliance posture, at reasonable cost — is exactly what SBIR/STTR programs, OTA vehicles, and small business set-asides are designed to access. The engineering track record that supports this capability comes from building and operating real production systems, not from learning government IT theory.

Frequently Asked Questions

How does a defense tech startup build engineering velocity without sacrificing security?

The answer is automation — security gates embedded in the CI/CD pipeline, not added on top of it. Container scanning with Trivy on every commit, SAST analysis integrated with the code review process, infrastructure compliance scanning before every apply. When security checks are automatic and fast, developers get feedback in minutes instead of days, and security becomes part of the normal flow rather than a separate phase that interrupts velocity.

What is a realistic deployment frequency for a government IT program?

It depends on the program's change management requirements. Programs on the Software Acquisition Pathway with agile delivery expectations should target multiple deployments per week to a test environment and at least weekly to production (or as frequently as the program's change approval process allows). Programs under more formal change management may target bi-weekly or monthly production deployments. The important metric is not deployment frequency in isolation — it is the combination of deployment frequency and deployment reliability. Frequent deployments that require significant manual coordination or frequently cause incidents do not constitute velocity.
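That combined metric can be made concrete with a small calculation over a deployment log, reporting frequency and the share of deployments that caused incidents together. This is illustrative Python with made-up dates, not a prescribed tool.

```python
from datetime import date

def deployment_stats(deploys: list[tuple[date, bool]]) -> dict:
    """deploys: (date, caused_incident) pairs. Velocity is frequency
    AND reliability together, so report both in one place."""
    if not deploys:
        return {"per_week": 0.0, "failure_rate": 0.0}
    days = (max(d for d, _ in deploys) - min(d for d, _ in deploys)).days or 1
    failures = sum(1 for _, bad in deploys if bad)
    return {
        "per_week": round(len(deploys) / days * 7, 2),
        "failure_rate": round(failures / len(deploys), 2),
    }

# Hypothetical week of deployments, one of which caused an incident.
stats = deployment_stats([
    (date(2026, 4, 1), False),
    (date(2026, 4, 3), False),
    (date(2026, 4, 7), True),
    (date(2026, 4, 8), False),
])
```

A program reporting only `per_week` can look fast while burning trust; tracking the pair makes "frequent and reliable" the explicit target.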

How do you maintain infrastructure modules across multiple government programs?

Treat infrastructure modules as a shared internal product with its own versioning, documentation, and maintenance roadmap. Semantic versioning (major.minor.patch) communicates breaking changes, new capabilities, and patches. Programs pin to a specific module version for stability, and engineering leadership decides when to upgrade programs to newer module versions. When a compliance control changes (e.g., a NIST control baseline update) the fix happens once in the module, and all programs can upgrade to inherit the fix. This is fundamentally the same pattern as managing open-source library dependencies.
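The pinning rule described above can be sketched as a compatibility check: minor and patch bumps are safe for a pinned program to take, while a major bump signals breaking changes and needs an explicit upgrade decision. Illustrative Python, assuming plain `major.minor.patch` version strings:

```python
def parse_semver(version: str) -> tuple[int, int, int]:
    """Parse a plain major.minor.patch string into comparable ints."""
    major, minor, patch = (int(part) for part in version.split("."))
    return major, minor, patch

def safe_upgrade(pinned: str, candidate: str) -> bool:
    """A program pinned to `pinned` can take `candidate` automatically
    only if the major version matches (no breaking changes) and the
    candidate is not a downgrade."""
    p, c = parse_semver(pinned), parse_semver(candidate)
    return c[0] == p[0] and c >= p

safe_upgrade("2.3.1", "2.4.0")  # minor bump: compatible
safe_upgrade("2.3.1", "3.0.0")  # major bump: requires an explicit decision
```

This is the same constraint solvers for open-source dependencies enforce, which is why the "shared internal product" framing carries over so directly.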

How does team size affect velocity on government programs?

Counter-intuitively, adding engineers to a program often reduces velocity initially and may not improve it long-term. Brooks's Law ("adding manpower to a late software project makes it later") applies in government IT as well. Small, highly capable teams with clear ownership and automated tooling outperform large teams with coordination overhead. For defense programs, the ideal team size for a task order is typically 4-8 engineers, scaled based on the scope and technical complexity of the work, not on the contract value or stakeholder expectations.

What makes a defense technology company competitive on technical evaluation factors?

Government source selection evaluations for IT programs score technical approach (how the company proposes to solve the problem), management approach (how they'll deliver and manage risk), and past performance (demonstrated relevant experience). Technical companies that demonstrate depth in compliance architecture, automation-first delivery methodology, and production-proven technology — rather than generic IT capability statements — score higher on technical evaluation factors. The most compelling technical proposals include specific, demonstrated patterns rather than theoretical approaches.
