# Getting Started

This guide walks you from zero to a running flow in about five minutes.
## Prerequisites

- .NET 8 SDK or later
- A running SQL Server or PostgreSQL instance (optional: the in-memory backend needs no external database for local development)
## Install NuGet Packages

Pick the runtime adapter and storage backend that match your environment:

```shell
# Core (always required)
dotnet add package FlowOrchestrator.Core

# Runtime adapter — choose one
dotnet add package FlowOrchestrator.Hangfire    # Hangfire-backed execution
dotnet add package FlowOrchestrator.InMemory    # In-process Channel<T>-backed execution (no Hangfire needed)
dotnet add package FlowOrchestrator.ServiceBus  # Azure Service Bus (cloud-native, multi-replica)

# Storage backend — choose one
dotnet add package FlowOrchestrator.SqlServer   # SQL Server via Dapper
dotnet add package FlowOrchestrator.PostgreSQL  # PostgreSQL via Npgsql
# (FlowOrchestrator.InMemory also provides in-process storage)

# Optional REST API + SPA dashboard
dotnet add package FlowOrchestrator.Dashboard
```
## Define Your First Flow

A flow is a plain C# class implementing `IFlowDefinition`. The manifest declares triggers and steps; steps are connected by `RunAfter` dependencies.

```csharp
using FlowOrchestrator.Core.Abstractions;

public sealed class HelloWorldFlow : IFlowDefinition
{
    // Stable GUID — must never change after first deployment
    public Guid Id { get; } = new Guid("00000000-0000-0000-0000-000000000001");

    public string Version => "1.0";

    public FlowManifest Manifest { get; set; } = new FlowManifest
    {
        Triggers = new FlowTriggerCollection
        {
            // Trigger manually from dashboard or REST API
            ["manual"] = new TriggerMetadata { Type = TriggerType.Manual },

            // Also fire every minute automatically
            ["scheduled"] = new TriggerMetadata
            {
                Type = TriggerType.Cron,
                Inputs = new Dictionary<string, object?> { ["cronExpression"] = "*/1 * * * *" }
            }
        },
        Steps = new StepCollection
        {
            ["system_check"] = new StepMetadata
            {
                Type = "LogMessage",
                Inputs = new Dictionary<string, object?> { ["message"] = "System health check starting..." }
            },
            ["system_ready"] = new StepMetadata
            {
                Type = "LogMessage",
                // Only runs after system_check succeeds
                RunAfter = new RunAfterCollection { ["system_check"] = [StepStatus.Succeeded] },
                Inputs = new Dictionary<string, object?> { ["message"] = "All systems operational." }
            }
        }
    };
}
```
> **Note**
> The flow `Id` is a stable identifier stored in the database and used to route jobs across all runtimes. Never change it after a flow has been deployed to an environment with existing run history.
## Write a Step Handler

Step handlers contain the actual business logic. Register them by type name — the name in `StepMetadata.Type` must match exactly.

```csharp
using FlowOrchestrator.Core.Abstractions;
using Microsoft.Extensions.Logging;

// Input class — properties map to keys in StepMetadata.Inputs
public sealed class LogMessageInput
{
    public object? Message { get; set; } // object? because expressions resolve to JsonElement
}

public sealed class LogMessageHandler : IStepHandler<LogMessageInput>
{
    private readonly ILogger<LogMessageHandler> _logger;

    public LogMessageHandler(ILogger<LogMessageHandler> logger) => _logger = logger;

    public ValueTask<object?> ExecuteAsync(
        IExecutionContext ctx,
        IFlowDefinition flow,
        IStepInstance<LogMessageInput> step)
    {
        var msg = step.Inputs.Message?.ToString() ?? step.Key;
        _logger.LogInformation("[Flow {RunId}] {Step}: {Message}", ctx.RunId, step.Key, msg);
        return ValueTask.FromResult<object?>(new { Logged = msg });
    }
}
```
## Register Everything

### SQL Server

```csharp
var connStr = builder.Configuration.GetConnectionString("FlowOrchestrator")!;

builder.Services.AddHangfire(c => c
    .SetDataCompatibilityLevel(CompatibilityLevel.Version_170)
    .UseSimpleAssemblyNameTypeSerializer()
    .UseRecommendedSerializerSettings()
    .UseSqlServerStorage(connStr));
builder.Services.AddHangfireServer();

builder.Services.AddFlowOrchestrator(options =>
{
    options.UseSqlServer(connStr);
    options.UseHangfire();
    options.AddFlow<HelloWorldFlow>();
});

builder.Services.AddStepHandler<LogMessageHandler>("LogMessage");

// Optional dashboard
builder.Services.AddFlowDashboard(builder.Configuration);
```
### PostgreSQL

```csharp
var pgConnStr = builder.Configuration.GetConnectionString("FlowOrchestratorPg")!;

builder.Services.AddHangfire(c => c
    .UsePostgreSqlStorage(pgConnStr)); // Hangfire.PostgreSql package
builder.Services.AddHangfireServer();

builder.Services.AddFlowOrchestrator(options =>
{
    options.UsePostgreSql(pgConnStr);
    options.UseHangfire();
    options.AddFlow<HelloWorldFlow>();
});
```
### In-Memory (Dev / Testing)

The in-memory runtime uses a `Channel<T>`-backed step dispatcher and a `PeriodicTimer`-driven cron scheduler — no Hangfire packages are needed.

```csharp
builder.Services.AddFlowOrchestrator(options =>
{
    options.UseInMemory();
    options.UseInMemoryRuntime(); // Channel<T> dispatcher + PeriodicTimer cron
    options.AddFlow<HelloWorldFlow>();
});
```
> **Warning**
> `UseInMemory()` must be called explicitly — there is no silent fallback. All run data is lost when the process restarts.
## Map the Dashboard

Add these two lines after `var app = builder.Build();`:

```csharp
app.UseHangfireDashboard("/hangfire"); // Hangfire's own dashboard (optional)
app.MapFlowDashboard("/flows");        // FlowOrchestrator dashboard + REST API
```
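Putting the pieces together, a minimal in-memory `Program.cs` might look like the following sketch. It uses only the registrations shown in this guide and skips the Hangfire and SQL pieces entirely, so it is suitable for a first local run:

```csharp
var builder = WebApplication.CreateBuilder(args);

// In-memory storage + runtime: no external database or Hangfire needed
builder.Services.AddFlowOrchestrator(options =>
{
    options.UseInMemory();
    options.UseInMemoryRuntime();
    options.AddFlow<HelloWorldFlow>();
});
builder.Services.AddStepHandler<LogMessageHandler>("LogMessage");
builder.Services.AddFlowDashboard(builder.Configuration);

var app = builder.Build();
app.MapFlowDashboard("/flows");
app.Run();
```

Swap the `UseInMemory()` / `UseInMemoryRuntime()` pair for the SQL Server or PostgreSQL registrations above when you move beyond local development.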
## Add a Health Check (optional but recommended)

For production, expose a `/health` endpoint that probes flow-store reachability so a load balancer can drop traffic when the database is unavailable:

```csharp
builder.Services
    .AddHealthChecks()
    .AddFlowOrchestratorHealthChecks(); // probes IFlowStore reachability

app.MapHealthChecks("/health");
```

The check resolves whichever `IFlowStore` you registered (SQL Server, PostgreSQL, or in-memory). The probe budget defaults to 5 seconds and is configurable. See Production Checklist for the full operational story.
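If you register other health checks alongside FlowOrchestrator's, the standard ASP.NET Core `HealthCheckOptions` filter lets you serve the flow-store probe on its own endpoint. A sketch, assuming FlowOrchestrator tags its checks (the tag name `"floworchestrator"` below is an assumption, not a documented value — verify it against your registration):

```csharp
using Microsoft.AspNetCore.Diagnostics.HealthChecks;

// Serve only the flow-store probe at /health/flows.
// The tag name is an assumed value for illustration.
app.MapHealthChecks("/health/flows", new HealthCheckOptions
{
    Predicate = registration => registration.Tags.Contains("floworchestrator")
});
```

`HealthCheckOptions.Predicate` is a standard ASP.NET Core API; filtering by tag keeps a slow database probe from affecting an unrelated liveness endpoint.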
## Run the App

```shell
dotnet run
```

Navigate to `http://localhost:5000/flows`. You will see `HelloWorldFlow` listed in the Flows tab. Click **Trigger** to fire a manual run, or wait for the cron trigger to fire.
## Trigger via REST API

```http
POST /flows/api/flows/00000000-0000-0000-0000-000000000001/trigger
Content-Type: application/json

{}
```

The response includes the `runId`:

```json
{ "runId": "3fa85f64-5717-4562-b3fc-2c963f66afa6" }
```

Use that ID to poll the run status:

```http
GET /flows/api/runs/3fa85f64-5717-4562-b3fc-2c963f66afa6
```
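The trigger-then-poll sequence can be scripted with a plain `HttpClient`. A sketch, assuming the app from this guide is listening on `http://localhost:5000` and that the run payload exposes a `status` field with a `Running` value (the field name and value are assumptions about the response shape, which the guide does not document):

```csharp
using System.Net.Http.Json;
using System.Text.Json;

using var http = new HttpClient { BaseAddress = new Uri("http://localhost:5000") };

// Fire a manual run of HelloWorldFlow via the REST API
var trigger = await http.PostAsJsonAsync(
    "/flows/api/flows/00000000-0000-0000-0000-000000000001/trigger", new { });
trigger.EnsureSuccessStatusCode();
var runId = (await trigger.Content.ReadFromJsonAsync<JsonElement>())
    .GetProperty("runId").GetString();

// Poll once per second until the run leaves the Running state
JsonElement run;
do
{
    await Task.Delay(TimeSpan.FromSeconds(1));
    run = await http.GetFromJsonAsync<JsonElement>($"/flows/api/runs/{runId}");
} while (run.GetProperty("status").GetString() == "Running");

Console.WriteLine($"Run {runId} finished with status {run.GetProperty("status")}");
```

In real automation you would also cap the number of polls and handle non-2xx status responses.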
## Next Steps
- Core Concepts — understand flows, manifests, steps, and run lifecycle
- Step Handlers — write more complex handlers with typed outputs
- Triggers — cron schedules, webhooks, and idempotency
- Polling Steps — wait for external systems without blocking threads
> **Tip**
> When you are ready for production, read Versioning Flows before changing any deployed flow, and walk through the Production Checklist before go-live.
## Troubleshooting

### Aspire AppHost: stuck "Running" runs after a hard kill
If you're using the Aspire AppHost for local development and start seeing many runs
stuck in Running status that never complete (especially after a force-kill of the host
process or a Docker Desktop restart), the underlying SQL Server / PostgreSQL data volume
is likely corrupted. Hangfire stores its job queue in the same database, and once
the queue tables are damaged the worker can no longer dispatch step jobs.
Symptoms:

- The Hangfire dashboard at `/hangfire/retries` shows jobs failing with Postgres error codes `XX001` (`missing chunk number 0 for toast value`) or `XX002` (`right sibling's left-link doesn't match`), or SQL Server errors complaining about consistency / corruption.
- `/flows/api/runs/stats` shows `activeRuns` growing each minute while `completedToday` stays flat.
The cause is `WithDataVolume()` in the AppHost — it persists the database across
restarts, and a hard kill (`Stop-Process -Force`, OOM, host crash) interrupts the WAL flush.
Containers stopped this way leave the database in a half-committed state.
Fix — drop the corrupted volumes and let Aspire recreate them on next start:

```shell
# Stop the AppHost first, then:
docker volume ls --filter "name=floworchestrator" -q | xargs -r docker volume rm
dotnet run --project ./FlowOrchestrator.AppHost
```
Prevention — always shut down the AppHost with Ctrl+C, never `Stop-Process -Force`.
The graceful path gives Postgres / SQL Server several seconds to flush the WAL and unmount
cleanly. For CI environments running Aspire on disposable runners, consider switching
to `WithLifetime(ContainerLifetime.Session)` so volumes are wiped between runs.
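In the AppHost project, the switch might look like the following sketch. `WithLifetime` and `ContainerLifetime.Session` are standard .NET Aspire APIs; the resource name and the `AddPostgres` choice are illustrative, not taken from this project:

```csharp
var builder = DistributedApplication.CreateBuilder(args);

// Session lifetime: the container and its data are torn down when the
// AppHost exits, so a corrupted volume can never survive into the next run.
// (Contrast with .WithDataVolume(), which persists data across restarts.)
var pg = builder.AddPostgres("floworchestrator-db")
                .WithLifetime(ContainerLifetime.Session);

builder.Build().Run();
```

Use this only where losing the database between runs is acceptable, which is exactly the trade-off you want on disposable CI runners.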