Migrations

Granit provides two complementary tools for schema management:

  1. Applying migrations — at startup, via --migrate, or via a SQL script
  2. Handling breaking schema changes — Expand & Contract for zero-downtime deployments

| | CLI mode | Standard (at startup) | SQL script (CI/CD) | Expand & Contract |
|---|---|---|---|---|
| Package | Granit.Persistence.Hosting | EF Core built-in | EF Core built-in | Granit.Persistence.Migrations |
| When | Production, K8s init container | Dev / single instance | Production DDL-only | Zero-downtime breaking changes |
| Complexity | Low | Low | Medium | High |
| Multi-module | ✅ Topological order, auto-discovery | Manual per DbContext | Manual per DbContext | Per DbContext |

Most production applications use --migrate to apply schema changes and Expand & Contract for any migration that must rename, split, or backfill a column without downtime.


Granit.Persistence.Hosting adds a first-class --migrate CLI mode to any Granit application. Running dotnet run --migrate (or docker run myapp --migrate in CI/CD) applies all pending EF Core migrations in dependency-graph order, runs data seeders, and exits cleanly. The HTTP server never starts.

This replaces ad-hoc MigrateAsync() calls scattered across Program.cs and eliminates the startup-time migration race condition in multi-replica deployments.

  1. Add the package

    dotnet add package Granit.Persistence.Hosting
  2. Register migration support in Program.cs

    await builder.AddGranitAsync<AppHostModule>();
    builder.AddGranitMigrateSupport(); // registers the runner + lock

    WebApplication app = builder.Build();
    await app.UseGranitAsync();

    if (app.HasGranitMigrateFlag())
    {
        await app.RunGranitMigrationsAsync(); // migrate → seed → flush logs
        return; // clean exit — no HTTP server
    }

    app.UseAuthentication();
    app.UseAuthorization();
    // ... remaining middleware ...
    await app.RunAsync();

    return from top-level statements is the correct exit mechanism — it respects all using / finally blocks. RunGranitMigrationsAsync disposes the application and flushes Serilog sinks before returning.

  3. Mark your modules as migratable

    On each GranitModule subclass that owns EF Core migrations, implement IMigratableModule<TContext>:

    [DependsOn(
        typeof(GranitPersistenceModule),
        typeof(GranitAuthorizationEntityFrameworkCoreModule))]
    public sealed class AppCoreModule : GranitModule, IMigratableModule<AppDbContext>
    {
        public override void ConfigureServices(ServiceConfigurationContext context)
        {
            // ... DbContext registration, etc.
        }
    }
  4. Run migrations

    # Local development
    dotnet run --project src/MyApp.Host -- --migrate
    # Docker / K8s init container
    docker run myapp --migrate
    # CI/CD pipeline (before `kubectl rollout`)
    docker run --rm myapp:$VERSION --migrate
flowchart TD
    A[dotnet run --migrate] --> B[UseGranitAsync — module init]
    B --> C[HasGranitMigrateFlag?]
    C -->|no| D[Normal startup]
    C -->|yes| E[Discover IMigratableModule<T> in topological order]
    E --> F[TryAcquire distributed lock]
    F -->|lock unavailable| G[Log warning — skip]
    F -->|acquired| H[MigrateAsync per module]
    H --> I[EnsureCreated — Expand & Contract table]
    I --> J[SeedAsync — if SeedAfterMigration = true]
    J --> K[Release lock — return exit code 0]

The discovery step reads the topologically sorted module list from GranitApplication (respecting [DependsOn] declarations). Modules that do not implement IMigratableModule<T> — including all internal Granit DbContexts such as BackgroundJobsDbContext or WebhooksDbContext — are silently skipped. Only host application DbContexts that own migration files should implement the interface.

Granit.Persistence.Hosting
public interface IMigratableModule<TContext> where TContext : DbContext;

Marker interface — no methods to implement. One module = one DbContext = one IMigratableModule<TContext> declaration. If your application has two DbContexts (e.g., CoreDbContext and SecurityDbContext), create two modules:

public sealed class AppCoreModule : GranitModule, IMigratableModule<CoreDbContext> { ... }
[DependsOn(typeof(AppCoreModule))]
public sealed class AppSecurityModule : GranitModule, IMigratableModule<SecurityDbContext> { ... }

[DependsOn] controls migration order: CoreDbContext is migrated before SecurityDbContext because AppSecurityModule depends on AppCoreModule.

builder.AddGranitMigrateSupport(options =>
{
    options.CliFlag = "--migrate";                // default
    options.SeedAfterMigration = true;            // default
    options.Timeout = TimeSpan.FromMinutes(5);    // default
    options.MaxRetries = 3;                       // default
    options.RetryDelay = TimeSpan.FromSeconds(5); // default
    options.SeedOnStartup = false;                // default — see below
});

| Option | Default | Description |
|---|---|---|
| CliFlag | "--migrate" | CLI argument that triggers migration mode. Environment variables are intentionally not supported — see safety rules. |
| SeedAfterMigration | true | Run IDataSeeder after all migrations complete. |
| Timeout | 5 minutes | Total timeout for the migration run. |
| MaxRetries | 3 | Retry attempts per DbContext on transient DB failures. |
| RetryDelay | 5 seconds | Wait between retry attempts. |
| SeedOnStartup | false | Re-enable data seeding at normal startup. Unsafe for multi-replica deployments. |

By default, AddGranitMigrateSupport() registers NullMigrationLock — a no-op that always succeeds, suitable for single-instance deployments and init containers (which run exactly once).

For multi-instance scenarios where multiple pods could start simultaneously, you need a real distributed lock. The IGranitMigrationLock interface returns null when the lock cannot be acquired — signalling the runner to skip migrations on that instance:

public interface IGranitMigrationLock
{
    // Returns null if the lock could not be acquired (another instance is migrating).
    Task<IAsyncDisposable?> TryAcquireAsync(string resource, CancellationToken cancellationToken);
}

For PostgreSQL, Granit.Persistence.Postgres provides NpgsqlAdvisoryMigrationLock out of the box — zero configuration required. Call AddGranitPostgres() before AddGranitMigrateSupport() and the advisory lock is registered automatically via TryAddSingleton:

builder.AddGranitPostgres(); // registers NpgsqlAdvisoryMigrationLock
builder.AddGranitMigrateSupport(); // uses it automatically

See Persistence — PostgreSQL for how the advisory lock works and why it requires a raw connection instead of the EF Core connection pool.

For SQL Server or custom databases, implement IGranitMigrationLock and register it manually before AddGranitMigrateSupport():

builder.Services.AddSingleton<IGranitMigrationLock, MyCustomMigrationLock>();
builder.AddGranitMigrateSupport();
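
As an illustration, a SQL Server lock could be built on sp_getapplock. This is a sketch, not part of Granit: only the IGranitMigrationLock contract above is real; the class name, constructor, and connection handling are assumptions.

```csharp
using System.Data;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.Data.SqlClient;

// Hypothetical sketch: a session-scoped applock that is held until the
// returned handle (the connection) is disposed.
public sealed class SqlServerMigrationLock : IGranitMigrationLock
{
    private readonly string _connectionString;

    public SqlServerMigrationLock(string connectionString)
        => _connectionString = connectionString;

    public async Task<IAsyncDisposable?> TryAcquireAsync(
        string resource, CancellationToken cancellationToken)
    {
        var connection = new SqlConnection(_connectionString);
        await connection.OpenAsync(cancellationToken);

        await using var command = connection.CreateCommand();
        command.CommandText = "sp_getapplock";
        command.CommandType = CommandType.StoredProcedure;
        command.Parameters.AddWithValue("@Resource", resource);
        command.Parameters.AddWithValue("@LockMode", "Exclusive");
        command.Parameters.AddWithValue("@LockOwner", "Session");
        command.Parameters.AddWithValue("@LockTimeout", 0); // fail fast, do not queue
        var returnValue = command.Parameters.Add("@rc", SqlDbType.Int);
        returnValue.Direction = ParameterDirection.ReturnValue;

        await command.ExecuteNonQueryAsync(cancellationToken);

        if ((int)returnValue.Value! >= 0)
            return connection; // session lock is released when the connection closes

        await connection.DisposeAsync(); // another instance holds the lock
        return null;
    }
}
```

Returning the open connection as the IAsyncDisposable ties the lock's lifetime to the connection: disposing it closes the session and releases the applock.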

When ITenantEnumerator is registered in the DI container, the runner automatically migrates each tenant’s database in isolation:

// ITenantEnumerator enumerates active tenant IDs
// ITenantDbIsolator configures the DbContext for a specific tenant
// (e.g., sets search_path for schema-per-tenant, or swaps the connection string)
await foreach (Guid tenantId in tenantEnumerator.GetActiveTenantIdsAsync(ct))
{
    await isolator.IsolateAsync(dbContext, tenantId, ct).ConfigureAwait(false);
    await dbContext.Database.MigrateAsync(ct).ConfigureAwait(false);
}

See Multi-tenancy for how to implement these interfaces.

AddGranitMigrateSupport() disables DataSeedingHostedService at normal startup by default. Seeding only runs during --migrate mode (when SeedAfterMigration = true).

builder.AddGranitMigrateSupport(options =>
{
    // Dev convenience: re-enable seeding at startup (single instance only)
    options.SeedOnStartup = true;
});

| Topology | Setup |
|---|---|
| Simple app | dotnet run --migrate once before dotnet run |
| Docker | docker run myapp --migrate in CI/CD before kubectl rollout |
| K8s init container | args: ["--migrate"] in the init container spec; pods start only after init succeeds |
| K8s rolling update | --migrate applies Expand migration; Expand & Contract backfills at runtime |
| Microservice | Same — typically one IMigratableModule<T> per service |
| SaaS schema-per-tenant | ITenantEnumerator iterates tenants; ITenantDbIsolator sets search_path |
| Integration tests | No --migrate arg → HasGranitMigrateFlag() returns false → no-op; WebApplicationFactory works normally |
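
The init-container topology can be sketched as a Deployment fragment. All names here (image tag, secret name) are illustrative assumptions, not Granit requirements; the only Granit-specific part is the --migrate argument:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: myapp
spec:
  template:
    spec:
      initContainers:
        - name: migrate
          image: myapp:1.2.3         # same image as the app container
          args: ["--migrate"]        # triggers Granit migration mode, then exits 0
          envFrom:
            - secretRef:
                name: myapp-db       # connection string shared with the app
      containers:
        - name: app
          image: myapp:1.2.3
```

Because the init container must exit 0 before the app containers start, every pod in the ReplicaSet sees a fully migrated schema, and the no-op NullMigrationLock is sufficient.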

No environment variable support for CliFlag. GranitMigrateOptions.CliFlag reads CLI arguments only, never environment variables. This is intentional: if --migrate could be set via an env var and that var was accidentally copied to a Kubernetes Deployment, every pod restart would trigger a migration run, causing a CrashLoopBackOff cascade.

return, not Environment.Exit(). Top-level statement return ensures all IAsyncDisposable / finally blocks execute before the process terminates. RunGranitMigrationsAsync handles log flushing internally — do not call Environment.Exit() after it.


The standard EF Core migration workflow: one migration file per schema change, applied at startup in development or via SQL script in production.

dotnet ef migrations add AddAppointmentNotes \
--project src/MyApp.Host \
--context AppDbContext \
--output-dir Migrations/App

EF Core diffs the current model (OnModelCreating) against the last snapshot and generates the migration file.

// Program.cs — after builder.Build(), before RunAsync()
await using var scope = app.Services.CreateAsyncScope();
var factory = scope.ServiceProvider
    .GetRequiredService<IDbContextFactory<AppDbContext>>();
await using var db = await factory.CreateDbContextAsync();
await db.Database.MigrateAsync();

Works well with Testcontainers in integration tests. Blocks startup until migrations complete, which is acceptable for a single instance with a maintenance window.
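
To illustrate the Testcontainers pairing, a test might spin up a disposable PostgreSQL instance and run the same MigrateAsync call against it. This sketch uses the Testcontainers.PostgreSql package; the AppDbContext options wiring is an assumption about the host application:

```csharp
using Microsoft.EntityFrameworkCore;
using Testcontainers.PostgreSql;

// Start a throwaway PostgreSQL container for the test run.
PostgreSqlContainer postgres = new PostgreSqlBuilder().Build();
await postgres.StartAsync();

var options = new DbContextOptionsBuilder<AppDbContext>()
    .UseNpgsql(postgres.GetConnectionString())
    .Options;

await using (var db = new AppDbContext(options))
{
    // Same call as at startup, but against the container's database.
    await db.Database.MigrateAsync();
}

await postgres.DisposeAsync();
```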

# Add a migration
dotnet ef migrations add AddPatientFullName \
--project src/MyApp.Host \
--context AppDbContext \
--output-dir Migrations/App
# Apply pending migrations
dotnet ef database update \
--project src/MyApp.Host \
--context AppDbContext
# Generate idempotent SQL script
dotnet ef migrations script --idempotent \
--project src/MyApp.Host \
--context AppDbContext \
-o migrations/app.sql
# List applied / pending migrations
dotnet ef migrations list \
--project src/MyApp.Host \
--context AppDbContext
# Remove the last unapplied migration
dotnet ef migrations remove \
--project src/MyApp.Host \
--context AppDbContext
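
The --idempotent script guards each migration against the __EFMigrationsHistory table, so the same file can be re-run safely. An abridged illustration of the generated shape (PostgreSQL provider; the timestamp prefix and product version are placeholders, and the real file is produced by the command above):

```sql
CREATE TABLE IF NOT EXISTS "__EFMigrationsHistory" (
    "MigrationId" character varying(150) NOT NULL,
    "ProductVersion" character varying(32) NOT NULL,
    CONSTRAINT "PK___EFMigrationsHistory" PRIMARY KEY ("MigrationId")
);

DO $EF$
BEGIN
    -- The migration body runs only if it has not been recorded yet.
    IF NOT EXISTS (SELECT 1 FROM "__EFMigrationsHistory"
                   WHERE "MigrationId" = '20240101000000_AddPatientFullName') THEN
        ALTER TABLE "Patients" ADD "FullName" text NULL;
        INSERT INTO "__EFMigrationsHistory" ("MigrationId", "ProductVersion")
        VALUES ('20240101000000_AddPatientFullName', '8.0.0');
    END IF;
END $EF$;
```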

Standard migrations handle most schema changes without service interruption:

  • Adding a table or nullable column (with or without a default)
  • Adding or removing an index
  • Changing column constraints (max length, nullability)
  • Dropping an unused column or table

These changes are backwards-compatible — the old application version continues running against the new schema during the deployment window.
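
For instance, a backwards-compatible migration might add a nullable column and an index in one step. This uses the standard MigrationBuilder API; the table and column names are illustrative:

```csharp
// Additive migration: safe to apply while old pods keep running, because
// existing code never references the new column or index.
public partial class AddAppointmentNotes : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.AddColumn<string>(
            name: "Notes", table: "Appointments", nullable: true);

        migrationBuilder.CreateIndex(
            name: "IX_Appointments_Notes", table: "Appointments", column: "Notes");
    }

    protected override void Down(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.DropIndex(name: "IX_Appointments_Notes", table: "Appointments");
        migrationBuilder.DropColumn(name: "Notes", table: "Appointments");
    }
}
```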

Some changes require the old and new application versions to coexist during rolling updates:

  • Renaming a column — old code still references the old name
  • Changing a column type — old code expects the old type
  • Splitting a column — old and new code write to different columns simultaneously
  • Backfilling computed data — millions of rows cannot be updated in one transaction

These are the cases where Expand & Contract is necessary.


Granit.Persistence.Migrations implements the Expand & Contract pattern: split a breaking schema change into three safe, backwards-compatible phases, each deployed independently.

stateDiagram-v2
    [*] --> Expand: ADD COLUMN (nullable)
    Expand --> Migrate: Background batch backfill
    Migrate --> Contract: DROP COLUMN (old)
    Contract --> [*]

| Phase | Schema change | Application behavior |
|---|---|---|
| Expand | ALTER TABLE ADD COLUMN (nullable) | Writes to both old and new columns |
| Migrate | Background batch UPDATE | Reads new column, falls back to old |
| Contract | ALTER TABLE DROP COLUMN (old) | Reads/writes new column only |

Each phase is a separate EF Core migration deployed independently. The schema is never incompatible with running instances.

Expand & Contract and --migrate are complementary, not competing:


| Concern | --migrate | Expand & Contract |
|---|---|---|
| Schema DDL (CREATE TABLE, ADD COLUMN) | ✅ | ❌ |
| Data backfill (batch UPDATE) | ❌ | ✅ |
| Schema cleanup (DROP old column) | ✅ | ❌ |
| Timing | Deployment-time (before pods start) | Runtime (background worker, after all pods updated) |

A typical zero-downtime release looks like:

Release v2 — Expand:
1. docker run myapp:v2 --migrate
└── Applies Expand migration: ADD COLUMN patient_full_name (nullable)
2. kubectl rollout (zero downtime)
└── Pods write to both first_name/last_name and full_name
Release v2 runtime:
3. MigrationBatchWorker backfills full_name from first_name + ' ' + last_name
Release v3 — Contract:
4. docker run myapp:v3 --migrate
└── Applies Contract migration: DROP COLUMN first_name, DROP COLUMN last_name

[MigrationCycle(MigrationPhase.Expand, "patient-fullname-v2")]
public partial class AddPatientFullName : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        migrationBuilder.AddColumn<string>(
            "FullName", "Patients", nullable: true);
    }
}

Register a batch delegate that backfills data in chunks:

registry.Register<AppDbContext>(
    "patient-fullname-v2",
    async (context, batch, ct) =>
    {
        var patients = await context.Set<Patient>()
            .Where(p => p.FullName == null)
            .OrderBy(p => p.Id)
            .Take(batch.Size)
            .ToListAsync(ct)
            .ConfigureAwait(false);

        foreach (var p in patients)
            p.FullName = $"{p.FirstName} {p.LastName}";

        await context.SaveChangesAsync(ct).ConfigureAwait(false);

        return new MigrationBatchResult(
            patients.Count,
            patients.Count < batch.Size
                ? null
                : patients[^1].Id.ToString());
    });

Batch delegates must be idempotent — re-processing already-migrated rows must have no side effects.
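
To round out the cycle, the Contract phase from the release walkthrough might look like the following sketch. MigrationPhase.Contract and the class name are assumptions inferred by symmetry with the Expand attribute shown above:

```csharp
// Hedged sketch of the Contract phase for the same cycle id.
[MigrationCycle(MigrationPhase.Contract, "patient-fullname-v2")]
public partial class DropPatientNameParts : Migration
{
    protected override void Up(MigrationBuilder migrationBuilder)
    {
        // Safe only after the backfill has finished and every running pod
        // reads FullName exclusively.
        migrationBuilder.DropColumn("FirstName", "Patients");
        migrationBuilder.DropColumn("LastName", "Patients");
    }
}
```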

{
  "GranitMigrations": {
    "DefaultBatchSize": 500,
    "BatchExecutionTimeout": "00:05:00"
  }
}

| | Granit.Persistence.Migrations | + .Wolverine |
|---|---|---|
| Dispatch | In-memory Channel&lt;T&gt; | Durable outbox |
| Restart safety | Lost batches on crash | Survives restarts |
| Multi-instance | Single node only | Distributed |

Use Granit.Persistence.Migrations.Wolverine when batch processing must survive application restarts or run across multiple nodes.