Oracle licensing in government programs is an expensive, persistent dependency. For programs running on-premises Oracle, moving to AWS GovCloud creates two competing requirements: eliminate Oracle licensing costs (often $50K–$500K+ annually) while maintaining ATO continuity during migration. This is a constraint-heavy engineering problem — not just a database conversion.
Rutagon approaches Oracle-to-PostgreSQL migrations in government environments as a strangler-fig pattern: new writes go to PostgreSQL while legacy Oracle serves reads, until validation confirms parity, then cut over. Here's the technical approach.
Why PostgreSQL for Government Programs
PostgreSQL is not a compromise choice. For most federal application workloads, it's the superior option:
- Cost: Aurora PostgreSQL on GovCloud eliminates Oracle licensing fees. For programs with $100K+/year Oracle licenses, the Aurora infrastructure cost is a fraction.
- FIPS compliance: AWS GovCloud Aurora uses FIPS 140-2 validated cryptographic modules natively
- ATO path: Aurora PostgreSQL has established DISA PAs and FedRAMP authorizations. Oracle on-premises requires custom security assessment
- Open source: No vendor lock-in; the SQL dialect is standard; tooling ecosystem is vast
What you lose: Oracle-specific features — proprietary PL/SQL package syntax, Oracle Forms (if you're migrating a legacy UI as well), and some advanced partitioning syntax. Refactoring these is the bulk of the migration work.
Phase 1: Schema Assessment and Conversion
The first phase is an honest inventory of what needs to change.
AWS Schema Conversion Tool (SCT) automates the initial schema conversion:
# Install AWS SCT and run the conversion assessment
# SCT generates a report of automated vs. manual conversion items
import os

# Connection parameters for the source Oracle database
oracle_connection = {
    'host': 'oracle.gov.internal',
    'port': 1521,
    'service_name': 'MISSIONDB',
    'username': 'migration_user',
    'password': os.environ['ORACLE_PASSWORD']
}

# SCT outputs a conversion rate and a list of items requiring manual review
# Typical results: 70-85% automated, 15-30% requiring manual refactoring

Common manual conversion items:
- PL/SQL packages: Oracle packages → PostgreSQL schemas + functions
- Oracle-specific types: NUMBER → NUMERIC or BIGINT; VARCHAR2 → VARCHAR; DATE (Oracle stores date + time) → TIMESTAMP
- Sequences: Oracle sequences → PostgreSQL sequences (mostly compatible, slight syntax differences)
- Outer join syntax: Oracle's (+) outer join syntax → ANSI LEFT JOIN
- ROWNUM: Oracle's ROWNUM pagination → PostgreSQL LIMIT/OFFSET
- Hierarchical queries: Oracle CONNECT BY → PostgreSQL recursive CTEs (WITH RECURSIVE)
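The CONNECT BY rewrite is usually the least mechanical item on this list, so it is worth seeing concretely. Here is an illustrative sketch of the WITH RECURSIVE pattern, run against SQLite via Python only so the snippet is self-contained; SQLite's WITH RECURSIVE syntax matches PostgreSQL's for this case, and the `org_unit` table and its rows are hypothetical:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE org_unit (id INTEGER PRIMARY KEY, parent_id INTEGER, name TEXT);
    INSERT INTO org_unit VALUES
        (1, NULL, 'Agency'),
        (2, 1,    'Directorate A'),
        (3, 2,    'Branch A1'),
        (4, 1,    'Directorate B');
""")

# Oracle equivalent of the query below:
#   SELECT name, LEVEL FROM org_unit
#   START WITH parent_id IS NULL
#   CONNECT BY PRIOR id = parent_id;
rows = conn.execute("""
    WITH RECURSIVE tree(id, name, depth) AS (
        SELECT id, name, 1 FROM org_unit WHERE parent_id IS NULL
        UNION ALL
        SELECT o.id, o.name, t.depth + 1
        FROM org_unit o JOIN tree t ON o.parent_id = t.id
    )
    SELECT name, depth FROM tree ORDER BY depth, name
""").fetchall()
print(rows)
# → [('Agency', 1), ('Directorate A', 2), ('Directorate B', 2), ('Branch A1', 3)]
```

The recursive CTE's anchor member plays the role of START WITH, and the recursive member's join condition plays the role of CONNECT BY PRIOR; the explicit depth column replaces Oracle's LEVEL pseudocolumn.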
PL/SQL to PL/pgSQL: Common Patterns
-- Oracle PL/SQL package function
CREATE OR REPLACE PACKAGE BODY mission_data_pkg AS
FUNCTION get_latest_telemetry(p_satellite_id IN NUMBER)
RETURN SYS_REFCURSOR AS
v_cursor SYS_REFCURSOR;
BEGIN
OPEN v_cursor FOR
SELECT * FROM telemetry
WHERE satellite_id = p_satellite_id
ORDER BY receive_time DESC
FETCH FIRST 100 ROWS ONLY;
RETURN v_cursor;
END;
END mission_data_pkg;
-- Equivalent PostgreSQL function
CREATE OR REPLACE FUNCTION get_latest_telemetry(p_satellite_id INTEGER)
RETURNS SETOF telemetry AS $$
BEGIN
RETURN QUERY
SELECT * FROM telemetry
WHERE satellite_id = p_satellite_id
ORDER BY receive_time DESC
LIMIT 100;
END;
$$ LANGUAGE plpgsql;

The key difference: Oracle uses SYS_REFCURSOR for result sets; PostgreSQL uses RETURNS SETOF table_type or RETURNS TABLE(...). This is a common, high-volume refactoring item.
Phase 2: Data Migration Architecture
For government programs, the data migration must not interrupt operations. The strangler-fig approach:
┌────────────────────────────────────────────────────┐
│ Migration Architecture │
│ │
│ Writes ──► Application ──► PostgreSQL (GovCloud) │
│ │ │ │
│ │ Sync (DMS) │
│ │ ▼ │
│ └──► Oracle (Legacy Read) ◄────────┘
│ │ │
│ Legacy Apps Validation │
└────────────────────────────────────────────────────┘

AWS Database Migration Service (DMS) handles continuous replication from Oracle to Aurora PostgreSQL:
resource "aws_dms_replication_instance" "migration" {
  allocated_storage            = 100
  apply_immediately            = true
  auto_minor_version_upgrade   = true
  availability_zone            = "us-gov-west-1a"
  engine_version               = "3.5.2"
  multi_az                     = false
  preferred_maintenance_window = "sun:03:00-sun:04:00"
  publicly_accessible          = false
  replication_instance_class   = "dms.r5.xlarge"
  replication_instance_id      = "oracle-pg-migration"
  replication_subnet_group_id  = aws_dms_replication_subnet_group.main.id
  vpc_security_group_ids       = [aws_security_group.dms.id]
}

resource "aws_dms_endpoint" "oracle_source" {
  endpoint_id   = "oracle-source"
  endpoint_type = "source"
  engine_name   = "oracle"
  server_name   = "oracle.gov.internal"
  port          = 1521
  database_name = "MISSIONDB"
  username      = "dms_migration_user"
  password      = data.aws_secretsmanager_secret_version.oracle_creds.secret_string
}

resource "aws_dms_endpoint" "aurora_target" {
  endpoint_id   = "aurora-pg-target"
  endpoint_type = "target"
  engine_name   = "aurora-postgresql"
  server_name   = aws_rds_cluster.gov_database.endpoint
  port          = 5432
  database_name = "mission_data"
  ssl_mode      = "verify-full"
  username      = "migration_user"
  password      = data.aws_secretsmanager_secret_version.pg_creds.secret_string
}

DMS provides CDC (Change Data Capture) — it replicates ongoing changes from Oracle while the initial full load completes. This keeps PostgreSQL current with Oracle through the validation phase.
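The Terraform above provisions the replication instance and endpoints; the replication task itself also takes a table-mapping JSON document telling DMS which schemas and tables to replicate. A minimal sketch follows, with illustrative schema and rule names; it also applies the near-universal Oracle-to-PostgreSQL transformation of lowercasing identifiers, since Oracle stores unquoted names uppercase while PostgreSQL folds them to lowercase:

```python
import json

# Selection rule: replicate every table in the MISSIONDB schema.
# Transformation rule: lowercase table names so the PostgreSQL side
# ends up with idiomatic unquoted identifiers.
table_mappings = {
    "rules": [
        {
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-missiondb",
            "object-locator": {"schema-name": "MISSIONDB", "table-name": "%"},
            "rule-action": "include"
        },
        {
            "rule-type": "transformation",
            "rule-id": "2",
            "rule-name": "lowercase-tables",
            "rule-target": "table",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "convert-lowercase"
        }
    ]
}

# The serialized string is what the aws_dms_replication_task
# resource's table_mappings argument accepts
print(json.dumps(table_mappings, indent=2))
```

Keeping this document in code (or Terraform) rather than hand-editing it in the console makes the replication scope reviewable under the same change control as the rest of the migration.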
Phase 3: Validation Before Cutover
Cutover without validation is a federal program risk. The validation framework:
import psycopg2
import cx_Oracle

class MigrationValidator:
    def __init__(self, oracle_conn, pg_conn):
        self.oracle = oracle_conn
        self.pg = pg_conn

    def validate_row_counts(self, tables: list) -> dict:
        """Compare row counts between Oracle and PostgreSQL for all tables"""
        results = {}
        for table in tables:
            oracle_count = self._get_count(self.oracle, table)
            pg_count = self._get_count(self.pg, table)
            results[table] = {
                'oracle_count': oracle_count,
                'pg_count': pg_count,
                'match': oracle_count == pg_count,
                'delta': abs(oracle_count - pg_count)
            }
        return results

    def validate_checksums(self, table: str, key_column: str, sample_size: int = 1000) -> dict:
        """Spot-check row checksums for data integrity"""
        oracle_sample = self._get_sample(self.oracle, table, key_column, sample_size)
        pg_sample = self._get_sample(self.pg, table, key_column, sample_size)
        mismatches = []
        for key, oracle_data in oracle_sample.items():
            pg_data = pg_sample.get(key)
            if oracle_data != pg_data:
                mismatches.append({'key': key, 'oracle': oracle_data, 'pg': pg_data})
        return {
            'sample_size': sample_size,
            'mismatch_count': len(mismatches),
            'mismatch_rate': len(mismatches) / sample_size,
            'mismatches': mismatches[:10]  # Log first 10 for investigation
        }

    def _get_count(self, conn, table: str) -> int:
        # COUNT(*) syntax is identical in both dialects
        cur = conn.cursor()
        cur.execute(f"SELECT COUNT(*) FROM {table}")
        return cur.fetchone()[0]

    def _get_sample(self, conn, table: str, key_column: str, n: int) -> dict:
        # FETCH FIRST works on Oracle 12c+ and PostgreSQL; returns {key: row}
        cur = conn.cursor()
        cur.execute(f"SELECT * FROM {table} ORDER BY {key_column} FETCH FIRST {n} ROWS ONLY")
        cols = [d[0].lower() for d in cur.description]
        key_idx = cols.index(key_column.lower())
        return {row[key_idx]: row for row in cur.fetchall()}

Acceptance criteria before cutover:
- Row counts match within 0.001% (accounting for in-flight transactions)
- Spot checksum validation: <0.01% mismatch rate
- All application test cases pass against PostgreSQL
- Performance benchmarks: PostgreSQL query times within 20% of Oracle for critical paths
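These criteria can be encoded as an explicit go/no-go gate over the validator output above. The following is a hypothetical sketch: cutover_gate and its parameter names are not from the source, and the thresholds from the list are expressed as rates (0.001% = 1e-5, 0.01% = 1e-4):

```python
def cutover_gate(row_counts: dict, checksum_result: dict,
                 max_count_delta_rate: float = 1e-5,   # 0.001% row-count delta
                 max_mismatch_rate: float = 1e-4):     # 0.01% checksum mismatches
    """Return (go, failures): go is True only if every criterion passes."""
    failures = []
    for table, r in row_counts.items():
        denom = max(r['oracle_count'], 1)  # avoid divide-by-zero on empty tables
        if r['delta'] / denom > max_count_delta_rate:
            failures.append(f"{table}: row count delta {r['delta']} exceeds threshold")
    if checksum_result['mismatch_rate'] > max_mismatch_rate:
        failures.append(f"checksum mismatch rate {checksum_result['mismatch_rate']:.6f} too high")
    return len(failures) == 0, failures

# Example: one table off by 50 rows out of 1,000,000 (0.005%) blocks cutover
go, why = cutover_gate(
    {'telemetry': {'oracle_count': 1_000_000, 'pg_count': 999_950, 'delta': 50}},
    {'mismatch_rate': 0.0}
)
print(go, why)  # go is False: the telemetry delta exceeds the 0.001% threshold
```

Making the gate a function rather than a judgment call means the cutover decision is reproducible evidence for the migration plan, not a meeting outcome.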
Phase 4: ATO Continuity During Migration
The migration must not void the existing ATO. Key considerations:
System boundary: DMS runs within the GovCloud boundary — not as an internet-facing migration. The replication connection uses a Site-to-Site VPN or Direct Connect between the Oracle environment and GovCloud.
Data classification: CUI and classified data handling must be documented during the migration phase. DMS logs provide the audit trail; CMK encryption ensures data is protected at rest in the target Aurora environment.
SSP update: Update the SSP to document the migration phase as a system change, the DMS architecture as a temporary component, and the PostgreSQL target as the permanent replacement. Brief the ISSO before beginning.
Testing in non-production first: Complete the full migration cycle (schema convert, data load, validate, cutover) in a non-production environment before touching production. Document the non-production test results as evidence for the production migration plan.
See our government IT modernization approach for the broader context of legacy system migrations in federal programs.
Rutagon implements database migrations as part of broader cloud modernization programs. Contact Rutagon to assess your Oracle dependency and plan the path to PostgreSQL.
Frequently Asked Questions
How long does an Oracle to PostgreSQL migration take for a government program?
Timeline depends heavily on database size, PL/SQL complexity, and application coupling. A moderately complex government database (1TB, 50+ stored procedures, 3–5 dependent applications) typically takes 4–8 months from assessment to production cutover. Schema conversion and testing are the longest phases. Data migration with DMS runs in days for most government database sizes once the schema is ready.
Does Oracle to PostgreSQL migration require a new ATO?
Not necessarily. If the migration is treated as a significant system change under the existing ATO, you update the SSP to document the new architecture and brief the ISSO. The ATO authority typically accepts the update without requiring a full re-authorization, provided the security posture doesn't degrade and the change control process is followed. If the migration also changes the impact level or system boundary substantially, a full re-authorization may be required.
What's the biggest risk in Oracle to PostgreSQL migration?
PL/SQL-heavy applications are the biggest risk. Stored procedures, packages, and database triggers that implement business logic in Oracle require careful refactoring — a mistake can silently corrupt data. The validation phase (row counts, checksums, application functional tests) is what de-risks this. Never skip validation, and always test in non-production before production cutover.
Can AWS DMS handle Oracle CDC (Change Data Capture) during migration?
Yes. AWS DMS supports Oracle CDC using LogMiner or Binary Reader. This is how the strangler-fig migration works: full load to PostgreSQL, then CDC keeps PostgreSQL current with Oracle changes during the validation and cutover preparation phase. DMS CDC latency is typically under 5 minutes for most government database workloads, meaning PostgreSQL is near-current with Oracle throughout the migration window.
What happens if data validation fails before cutover?
Cutover should not proceed if validation shows material discrepancies. The team investigates root cause — typically a PL/SQL trigger behavior difference between Oracle and PostgreSQL, a type conversion issue, or a DMS transformation rule gap. Fix the issue, re-run the full load (or reload the affected tables), and re-validate. This is why non-production migrations must complete successfully before production is attempted.
Discuss your project with Rutagon
Contact Us →