
Overview

If you have an existing audit log implementation (database tables, log files, or another service), you can migrate to Immutable incrementally without downtime.

Strategy

1. Map your schema

Map your existing audit log fields to Immutable’s event structure:
Your Field           Immutable Field
user_id              actor_id
user_email           actor_name
event_type           action
target_type          resource (resource type)
target_id            resource_id
details              metadata
organization_id      tenant_id
created_at           occurred_at
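The snippets below reference an `AuditEvent` type for the legacy row, which this guide never defines. A minimal sketch, assuming the field names from the table above (adjust types to match your actual schema), together with a pure mapping helper:

```typescript
// Hypothetical shape of a legacy audit-log row — field names follow the
// mapping table above; adjust to your real schema.
interface AuditEvent {
  id: string;
  user_id: string;
  user_email: string;
  event_type: string;
  target_type: string;
  target_id: string;
  details: Record<string, unknown>;
  organization_id: string;
  created_at: Date;
}

// Pure translation from the legacy shape to Immutable's event structure.
function toImmutableEvent(row: AuditEvent) {
  return {
    actor_id: row.user_id,
    actor_name: row.user_email,
    action: row.event_type,
    resource: row.target_type,
    resource_id: row.target_id,
    metadata: row.details,
    tenant_id: row.organization_id,
    occurred_at: row.created_at.toISOString(),
  };
}
```

Keeping the mapping in one pure function means the dual-write and backfill steps below can share it instead of repeating the field list.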
2. Dual-write in production

Start writing to both your existing system and Immutable simultaneously. This lets you validate that Immutable captures everything correctly before cutting over.
// Dual-write: existing system + Immutable
async function trackAction(data: AuditEvent) {
  // Write to existing system
  await db.auditLog.create({ data });

  // Write to Immutable
  await client.track({
    actor_id: data.user_id,
    actor_name: data.user_email,
    action: data.event_type,
    resource_id: data.target_id,
    resource: data.target_type,
    metadata: data.details,
    tenant_id: data.organization_id,
    occurred_at: data.created_at,
  });
}
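One judgment call during dual-write is failure isolation: if Immutable is briefly unreachable, you probably do not want the request to fail, since the old system is still the source of truth. A sketch with the two writers and the error sink injected as parameters (all names here are illustrative, not part of the SDK):

```typescript
// Dual-write with failure isolation: the legacy write stays
// authoritative, while an Immutable error is logged for later
// reconciliation instead of being rethrown.
async function trackActionSafely<T>(
  data: T,
  writeLegacy: (d: T) => Promise<void>,
  writeImmutable: (d: T) => Promise<void>,
  logError: (err: unknown) => void
): Promise<void> {
  await writeLegacy(data); // must succeed — old system is source of truth
  try {
    await writeImmutable(data);
  } catch (err) {
    logError(err); // don't fail the request; re-send later via backfill
  }
}
```

Events missed during an outage can be re-sent afterwards with the same idempotent backfill approach described in the next step.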
3. Backfill historical data

Use the batch endpoint to import historical events, setting an idempotency_key on each so the import is safe to retry:
// Process in batches of 100
const historicalEvents = await db.auditLog.findMany({ take: 100, skip: offset });

await client.trackBatch(
  historicalEvents.map((event) => ({
    actor_id: event.user_id,
    actor_name: event.user_email,
    action: event.event_type,
    resource_id: event.target_id,
    resource: event.target_type,
    metadata: event.details,
    tenant_id: event.organization_id,
    occurred_at: event.created_at.toISOString(),
    idempotency_key: `migration_${event.id}`, // Safe to retry
  }))
);
Use a unique idempotency_key per historical event (e.g. prefixed with migration_). This ensures safe replay — if the import job crashes partway through, you can re-run it without creating duplicate events.
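Putting the single batch above into a loop, a sketch of a complete backfill driver with the page reader and batch sender injected — `fetchPage` and `sendBatch` are placeholders for your database query and your `client.trackBatch` call:

```typescript
// Walk the legacy table in fixed-size pages and forward each page as a
// batch. Because every event carries an idempotency_key, the whole loop
// can be re-run after a crash without creating duplicates.
async function backfill<Row, Event>(
  fetchPage: (offset: number, limit: number) => Promise<Row[]>,
  toEvent: (row: Row) => Event,
  sendBatch: (events: Event[]) => Promise<void>,
  batchSize = 100
): Promise<void> {
  for (let offset = 0; ; offset += batchSize) {
    const rows = await fetchPage(offset, batchSize);
    if (rows.length === 0) break; // no more history
    await sendBatch(rows.map(toEvent));
  }
}
```

For very large tables, ordering the legacy query by primary key and paginating by "last seen id" instead of offset is usually faster, but the offset version keeps the sketch short.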
4. Validate

Compare event counts and spot-check specific events between your old system and Immutable:
# Count events in Immutable for a date range
curl "https://getimmutable.dev/api/v1/events?from=2026-01-01T00:00:00Z&to=2026-03-31T23:59:59Z&limit=1" \
  -H "Authorization: Bearer imk_your_api_key_here"
Check the pagination.total field in the response to compare against your old system’s count.
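To automate the comparison, the same query can be issued from code. A sketch that reads pagination.total, with the HTTP client injected so it is easy to test — the URL and response shape mirror the curl example above; everything else is an assumption:

```typescript
// Minimal shape of the fetch-like client we need (injectable for tests).
type FetchLike = (
  url: string,
  init: { headers: Record<string, string> }
) => Promise<{ ok: boolean; status: number; json(): Promise<any> }>;

// Read the total event count for a time window from pagination.total.
async function immutableCount(
  from: string,
  to: string,
  apiKey: string,
  fetchImpl: FetchLike
): Promise<number> {
  const url =
    "https://getimmutable.dev/api/v1/events" +
    `?from=${encodeURIComponent(from)}&to=${encodeURIComponent(to)}&limit=1`;
  const res = await fetchImpl(url, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`Immutable API returned ${res.status}`);
  const body = await res.json();
  return body.pagination.total;
}
```

Run this per date range (or per tenant) and diff against the old system's count; mismatched windows are the ones worth spot-checking event by event.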
5. Cut over

Once you are confident in the data, remove the dual-write to your old system and use Immutable exclusively.

Gradual Rollout

If you prefer a more cautious approach, roll out by tenant:
  1. Start tracking events to Immutable for a single tenant.
  2. Verify the data is correct.
  3. Gradually add more tenants.
  4. Once all tenants are migrated, remove the old system.
Use the tenant_id field to isolate events per customer, making it easy to query and verify each tenant’s data independently.
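The per-tenant gate can be as simple as a set lookup in front of the Immutable write. A sketch — the hard-coded set stands in for whatever config or feature-flag service you actually use:

```typescript
// Tenants already migrated to Immutable. In practice this would be read
// from config or a feature-flag service, not hard-coded.
const migratedTenants = new Set<string>(["tenant_a", "tenant_b"]);

// Gate the Immutable write per tenant during the gradual rollout.
function shouldTrackToImmutable(tenantId: string): boolean {
  return migratedTenants.has(tenantId);
}
```

In the dual-write path, check this gate before calling client.track, and expand the set as each tenant's data is verified.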

Tips

  • Set occurred_at on backfilled events to preserve the original timestamp. Without it, all imported events will have the import time as their occurred_at.
  • The hash chain is ordered by created_at (the server-side receive time), so backfilled events will appear in the chain in import order rather than their original chronological order. This is expected and correct.
  • Use event schemas in permissive mode during migration to discover all action strings your old system uses, then switch to strict mode once you have defined patterns for all of them.