"Earn the Next Contract" is one of Rutagon's four company values, and it's the one that most directly shapes how we operate. The principle is simple: every piece of work we deliver should be good enough that the client wants to give us more. Not because of relationships, not because of incumbent advantage, not because switching costs are high — because the work itself was exceptional. In government IT contracting, where past performance is evaluated on every competitive proposal, this philosophy isn't just aspirational. It's the primary growth mechanism for a small business.
This article explores how delivery-driven growth works in practice: how past performance compounds into competitive advantage, what "earning" the next contract actually looks like at the engineering level, and why this approach produces better outcomes for government clients and the company alike.
Past Performance as a Growth Engine
In federal procurement, past performance carries significant evaluation weight. FAR 15.304 requires agencies to evaluate past performance in most competitive negotiated acquisitions above the simplified acquisition threshold, and FAR 15.305 governs how that evaluation is conducted. For IT services, past performance typically represents 20-35% of the total evaluation score — often as much as technical approach and sometimes more than price.
The evaluation isn't abstract. Assessors review:
- CPARS ratings: Contractor Performance Assessment Reporting System ratings from previous government contracts
- Technical quality: Did the contractor deliver what was proposed, on time, at the stated quality level?
- Schedule performance: Were delivery milestones met on time?
- Management responsiveness: How did the contractor handle problems, changes, and communication?
- Small business participation: For primes, did they meet subcontracting goals?
A small business with three contracts rated "Exceptional" in CPARS has a measurable competitive advantage over a firm with ten contracts rated "Satisfactory." Volume of past performance matters, but quality matters more.
This is where "Earn the Next Contract" becomes a strategic asset. Every delivery is simultaneously a client engagement and a future proposal differentiator. The engineering decisions we make — deployment automation, documentation quality, operational transparency — directly feed the past performance narrative on the next proposal.
What "Earning" Looks Like in Practice
The gap between satisfactory and exceptional delivery is usually not about the technology. It's about the behaviors and practices that surround the technology:
Delivery Transparency
We operate with full transparency into our delivery process. Government technical teams have access to:
- Git repositories: Every commit, every pull request, every code review — visible to the client in real time
- CI/CD dashboards: Build status, test coverage, deployment frequency — the metrics that demonstrate engineering discipline
- Sprint boards: What we're working on, what's blocked, what's done — updated daily, not weekly
This isn't about inviting extra oversight — it's about building trust through evidence. When a contracting officer's representative can see that the team deploys to production three times per week with automated testing at 85% coverage, the "Exceptional" CPARS rating writes itself.
The DevOps pipeline approach we deploy makes this transparency automatic. The dashboard exists because the infrastructure demands it, not because we built a reporting layer on top of opaque processes.
Documentation That Survives the Contract
Government contracts end. Teams transition. The documentation left behind determines whether the system thrives or decays after the contractor moves on. We treat documentation as a first-class deliverable:
- Architecture Decision Records (ADRs): Why we chose this database, this framework, this deployment pattern — the reasoning, not just the result
- Runbooks: Step-by-step operational procedures for every routine and emergency operation
- System security documentation: SSP contributions, control implementation descriptions, evidence mapping
- Onboarding guides: How a new engineer gets productive on this system in their first week
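An ADR, for instance, doesn't need to be elaborate — a short, dated record of the decision and its trade-offs is enough. A minimal sketch (the project, system, and decision below are hypothetical, for illustration only):

```markdown
# ADR-0007: Use PostgreSQL for the case-management data store

Date: 2025-03-14
Status: Accepted

## Context
The system needs relational integrity across case records and audit
tables, and the agency's operations team already runs PostgreSQL for
two other applications.

## Decision
Use managed PostgreSQL rather than a document store.

## Consequences
- Schema migrations become a reviewed, version-controlled artifact.
- The agency can operate the database with existing staff skills.
- Horizontal write scaling is deferred until load data justifies it.
```

The "Consequences" section is what survives the contract: it tells the next team not just what was chosen, but what trade-offs they are inheriting.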
Most contractors deliver code and call it done. We deliver systems that the government can operate, maintain, and evolve independently. This approach generates the management responsiveness and technical quality ratings that win future evaluations.
Proactive Problem Resolution
Problems occur on every contract. The differentiator is how they're handled. Our approach:
- Identify early: Continuous monitoring and automated alerting catch issues before they become incidents. The observability patterns we deploy mean we usually know about problems before the government does.
- Communicate immediately: The government technical team hears about issues from us, not from their users. This means calling the COR at 7 AM when a deployment reveals an edge case, not waiting for the weekly status report.
- Fix completely: Root cause analysis, not just symptom treatment. When a production issue occurs, the fix includes the code change, the test that would have caught it, the monitoring rule that would have detected it earlier, and the process change that prevents recurrence.
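The "fix completely" step can be as concrete as pairing every patch with the regression test that would have caught it. A minimal Python sketch — the function, the bug, and the case-ID format are invented for illustration:

```python
def parse_case_id(raw: str) -> str:
    """Normalize a user-supplied case ID.

    Hypothetical incident: leading/trailing whitespace in pasted IDs
    caused production lookups to fail. The fix strips and uppercases
    input, and the regression test below pins that behavior.
    """
    cleaned = raw.strip().upper()
    if not cleaned:
        raise ValueError("case ID must not be empty")
    return cleaned


def test_parse_case_id_tolerates_whitespace():
    # The test that would have caught the incident before deployment.
    assert parse_case_id("  ab-1042 \n") == "AB-1042"


test_parse_case_id_tolerates_whitespace()
```

Checked into the same pull request as the fix, the test becomes permanent evidence — in the Git history the client can see — that the root cause was addressed, not just the symptom.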
This pattern builds the trust that converts "Satisfactory" CPARS ratings to "Exceptional" ones. Assessors specifically look for evidence of problem identification and resolution in past performance narratives.
The Compounding Effect
Past performance compounds. Each "Exceptional" rating makes the next contract win more likely, which creates more opportunities for exceptional delivery, which builds more past performance. For a small business, this compounding is the primary growth mechanism:
- Year 1: One contract, strong delivery, first CPARS rating
- Year 2: First CPARS rating supports winning a second contract; two concurrent deliveries building past performance
- Year 3: Two strong CPARS ratings differentiate in competitive evaluations; win rate increases; third and fourth contracts follow
The compounding effect works in reverse too. A single "Marginal" rating — from cutting corners on documentation, missing delivery milestones, or poor communication — takes years to overcome. Ratings remain available to evaluators for up to three years after contract completion, and evaluators weigh recent negative performance heavily.
This is why "Earn the Next Contract" is a value, not a strategy. It's not something we turn on for important contracts and relax for smaller ones. Every delivery gets the same engineering discipline, documentation quality, and communication cadence — because every delivery is building (or eroding) the past performance portfolio.
Engineering Decisions That Drive Past Performance
Specific engineering practices map directly to past performance evaluation criteria:
Infrastructure as Code = Schedule Performance
When infrastructure provisioning takes 45 minutes instead of 45 days, schedule milestones hit on time. The Terraform patterns we deploy eliminate the most common source of schedule delays in government IT — waiting for environments. No environment request tickets, no capacity planning meetings, no manual configuration drift.
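As a sketch of what this looks like, a new environment becomes a reviewed code change rather than a ticket queue. The module name, path, and variables below are illustrative, not an actual Rutagon configuration:

```hcl
# Hypothetical Terraform module call: a complete staging environment
# provisioned from the same reviewed code that defines production.
module "staging_environment" {
  source = "./modules/app-environment" # illustrative module path

  environment    = "staging"
  instance_count = 2
  vpc_cidr       = "10.1.0.0/16"

  tags = {
    Project = "example-system"
    Owner   = "delivery-team"
  }
}
```

Because the environment definition is version-controlled, the audit trail for schedule reporting is the Git history itself: the date an environment was requested and the date it existed are often the same commit.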
Automated Testing = Technical Quality
Automated test suites running in CI/CD pipelines with approval gates prevent the regression bugs that erode technical quality ratings. When every deployment is validated against hundreds of test cases before reaching production, the defect rate drops and the quality perception rises.
Security Automation = Management Responsiveness
Automated compliance reporting means security findings are detected, documented, and remediated in days rather than months. When an assessor identifies a finding, we can show not just the fix but the automated monitoring that ensures the finding doesn't recur. This demonstrates the security posture management that evaluators explicitly score under management responsiveness.
Continuous Deployment = Customer Satisfaction
Government users who see regular improvements — new features, bug fixes, performance enhancements — develop confidence in the delivery team. The Ship Don't Slide philosophy ensures that progress is visible and continuous, not backloaded into quarterly releases where half the planned features slip.
Building Past Performance Without Prior Government Contracts
New entrants to government contracting face a cold-start problem: you need past performance to win contracts, but you need contracts to build past performance. Our approach:
Commercial relevance: Production SaaS platforms and content platforms operating at scale demonstrate the same engineering capabilities government evaluators look for — availability, security, scalability, operational maturity. We reference these as relevant experience when government-specific past performance is limited.
Subcontracting: Performing as a subcontractor to a prime on government work builds past performance under the same evaluation criteria as prime contracts. The teaming approach we take ensures subcontract performance is documented and referenceable.
Micro-purchases and small contracts: Starting with smaller contract vehicles builds past performance incrementally. Each successful delivery — even a $10,000 micro-purchase — generates a performance record that supports the next, larger opportunity.
The trajectory is deliberately progressive: micro-purchase to simplified acquisition to full-and-open competition to IDIQ task orders. Each step is earned by the quality of the previous delivery. This is the practical reality of small business advantages in government IT — the path exists, but only if every step builds the credibility for the next one.
Measuring Delivery Quality Internally
We don't wait for CPARS ratings to assess delivery quality. Internal metrics track the behaviors that produce exceptional ratings:
| Metric | Target | Why It Matters |
|---|---|---|
| Deployment frequency | 3+ per week | Demonstrates continuous delivery capability |
| Mean time to recovery | < 1 hour | Shows operational readiness |
| Test coverage | > 80% | Prevents regression defects |
| Documentation currency | < 1 sprint behind | Ensures knowledge transfer readiness |
| Security finding closure | < 14 days | Demonstrates security responsiveness |
| Client communication | Daily async, weekly sync | Builds trust through transparency |
These metrics are visible to the delivery team — not as performance pressure, but as quality indicators. When deployment frequency drops or documentation falls behind, it's a signal that delivery quality is at risk, and the team self-corrects before it impacts the client.
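As a sketch of how such a self-check might be automated, the numeric targets from the table above can be encoded as simple thresholds. The metric names, values, and tooling here are hypothetical, not Rutagon's actual dashboard:

```python
# Illustrative thresholds from the delivery-quality table:
# ("min", x) means the metric should be at least x; ("max", x) at most x.
TARGETS = {
    "deployments_per_week":          ("min", 3),
    "mttr_hours":                    ("max", 1),
    "test_coverage_pct":             ("min", 80),
    "doc_lag_sprints":               ("max", 1),
    "security_finding_closure_days": ("max", 14),
}


def at_risk(metrics: dict) -> list[str]:
    """Return the names of metrics that miss their targets."""
    misses = []
    for name, (direction, threshold) in TARGETS.items():
        value = metrics[name]
        if direction == "min" and value < threshold:
            misses.append(name)
        elif direction == "max" and value > threshold:
            misses.append(name)
    return misses


# Hypothetical sprint snapshot: deployment frequency and documentation
# currency have slipped, so those two metrics are flagged.
print(at_risk({
    "deployments_per_week": 2,
    "mttr_hours": 0.5,
    "test_coverage_pct": 86,
    "doc_lag_sprints": 2,
    "security_finding_closure_days": 10,
}))
# prints ['deployments_per_week', 'doc_lag_sprints']
```

The point of a check like this is the feedback loop: a flagged metric is a prompt for the team to self-correct, weeks before it could surface in a CPARS narrative.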
Frequently Asked Questions
How long do CPARS ratings remain visible to evaluators?
Under FAR 42.1503, past performance information is available to source selection officials for up to three years after contract completion (six years for construction and architect-engineer contracts). Evaluators can access the full history of ratings within that window. Recent ratings carry more weight, but older ratings still contribute to the overall past performance picture.
Can a contractor dispute a negative CPARS rating?
Yes. The CPARS process includes a contractor review period where the contractor can provide a written response to any rating. For ratings below "Satisfactory," the reviewing official considers the contractor's response before finalizing. However, the most effective strategy is preventing negative ratings through proactive communication and problem resolution — not disputing them after the fact.
How do evaluators weigh past performance against price?
This varies by acquisition. Best-value tradeoff procurements allow agencies to pay a price premium for superior past performance. Lowest-price technically acceptable (LPTA) procurements reduce past performance to a pass/fail gate. For IT services, best-value is more common, and past performance weight of 20-35% is typical. In some cases, past performance is explicitly stated as more important than price.
What if a small business has no past performance?
FAR 15.305(a)(2)(iv) states that an offeror without a record of relevant past performance "may not be evaluated favorably or unfavorably" on past performance. In practice, this means new entrants receive a neutral rating rather than a penalty. However, competitors with strong past performance have a clear advantage. Building past performance through subcontracting, commercial work, and micro-purchases is the fastest path to competitiveness.
Does past performance transfer when a company is acquired?
Past performance generally transfers with the entity — if Company A acquires Company B, Company A can reference Company B's past performance. However, if key personnel responsible for that performance have departed, evaluators may discount the relevance. The work must be genuinely representative of the proposing entity's current capabilities.
Discuss your project with Rutagon
Contact Us →