This is the abridged developer documentation for Granit.
# Architecture
> Design decisions, patterns, and ADRs
This section documents the architectural decisions and design patterns used throughout the Granit framework.

## Sections

* **[Pattern Library](./patterns/)** — 51 design patterns with their concrete implementation in Granit, organized by category (architecture, cloud/SaaS, GoF, data, concurrency, .NET idioms, security)
* **[ADRs](./adr/)** — 16 Architecture Decision Records documenting key technology choices (Serilog, Redis, Wolverine, Scriban, ClosedXML, etc.)

## Design principles

Granit is built on a set of explicit architectural principles:

* **Convention over configuration** — sensible defaults, explicit overrides
* **Module isolation** — each module owns its DbContext, its DI registrations, and its public API surface
* **CQRS everywhere** — `IReader` and `IWriter` interfaces are never merged
* **Soft dependencies** — modules access cross-cutting concerns (tenancy, time, user context) via `Granit.Core` interfaces, not direct package references
* **Compliance by design** — GDPR and ISO 27001 constraints are architectural decisions, not afterthoughts
# Architecture Decision Records
> Key technical decisions and their rationale
Architecture Decision Records (ADRs) document significant technical decisions made during the development of Granit. Each ADR follows a consistent template: Context, Decision, Evaluated Alternatives, Justification, and Consequences.

## ADR index

| # | Title | Status | Date | Scope |
| --- | --- | --- | --- | --- |
| [001](001-observability/) | Observability Stack — Serilog + OpenTelemetry | Accepted | 2026-02-21 | Granit.Observability |
| [002](002-redis/) | Redis via StackExchange.Redis — Distributed Cache | Accepted | 2026-02-21 | Granit.Caching |
| [003](003-testing-stack/) | Testing Stack — xUnit v3, NSubstitute and Bogus | Accepted | 2026-02-21 | granit-dotnet |
| [004](004-asp-versioning/) | Asp.Versioning — REST API Versioning | Accepted | 2026-02-22 | Granit.ApiVersioning |
| [005](005-wolverine-cronos/) | Wolverine + Cronos — Messaging, CQRS and Scheduling | Accepted | 2026-02-22 | Granit.Wolverine |
| [006](006-fluentvalidation/) | FluentValidation — Business Validation Framework | Accepted | 2026-02-24 | Granit.Validation |
| [007](007-testcontainers/) | Testcontainers — Containerized Integration Tests | Accepted | 2026-02-24 | Integration Tests |
| [008](008-smartformat-pluralization/) | SmartFormat.NET — CLDR Pluralization | Accepted | 2026-02-26 | Granit.Localization |
| [009](009-scalar-api-documentation/) | Scalar.AspNetCore — Interactive API Documentation | Accepted | 2026-02-26 | Granit.ApiDocumentation |
| [010](010-scriban-template-engine/) | Scriban — Text Template Engine | Accepted | 2026-02-27 | Granit.Templating.Scriban |
| [011](011-closedxml-excel-generation/) | ClosedXML — Excel Spreadsheet Generation | Accepted | 2026-02-27 | Granit.DocumentGeneration.Excel |
| [012](012-puppeteersharp-pdf-rendering/) | PuppeteerSharp — HTML to PDF Rendering | Accepted | 2026-02-28 | Granit.DocumentGeneration.Pdf |
| [013](013-magicknet-image-processing/) | Magick.NET — Image Processing | Accepted | 2026-02-28 | Granit.Imaging.MagickNet |
| [014](014-migration-shouldly/) | Migrate FluentAssertions to Shouldly | Accepted | 2026-02-28 | granit-dotnet |
| [015](015-sep-csv-parsing/) | Sep — High-Performance CSV Parsing | Accepted | 2026-03-01 | Granit.DataExchange.Csv |
| [016](016-sylvan-data-excel-parsing/) | Sylvan.Data.Excel — Streaming Excel File Reading | Accepted | 2026-03-01 | Granit.DataExchange.Excel |
# Architecture Decision Records
> Key technical decisions for the Granit TypeScript/React SDK
Architecture Decision Records (ADRs) for the Granit Frontend SDK document significant technical decisions made during the development of the TypeScript and React packages. Each ADR follows the same template as backend ADRs: Context, Decision, Alternatives Considered, Justification, Consequences, and Re-evaluation Conditions.

## ADR index

| # | Title | Status | Date | Scope |
| --- | --- | --- | --- | --- |
| [001](001-source-direct/) | TypeScript Source-Direct — No Build Step | Accepted | 2026-02-27 | All `@granit/*` packages |
| [002](002-pnpm-workspace/) | pnpm Workspace Monorepo | Accepted | 2026-02-27 | granit-front |
| [003](003-react-19/) | React 19 as Minimum Version | Accepted | 2026-02-27 | All React packages |
| [004](004-headless-packages/) | Headless Packages — Hooks Only, UI in Consumer Apps | Accepted | 2026-03-06 | All `@granit/*` packages |
| [005](005-keycloak/) | Keycloak as Authentication Provider | Accepted | 2026-02-27 | @granit/authentication |
| [006](006-tanstack-query/) | TanStack Query for Data Fetching | Accepted | 2026-03-04 | All React data-fetching packages |
| [007](007-vitest/) | Vitest as Test Runner | Accepted | 2026-02-27 | granit-front |
| [008](008-opentelemetry/) | OpenTelemetry for Distributed Tracing | Accepted | 2026-03-04 | @granit/tracing |
# ADR-001: TypeScript Source-Direct — No Build Step
> Export raw TypeScript source files via package.json exports, relying on Vite for on-the-fly transpilation
> **Date:** 2026-02-27
> **Authors:** Jean-Francois Meyers
> **Scope:** All `@granit/*` packages

## Context

The `@granit/*` packages are consumed exclusively by Vite-based applications. In local development, packages are linked via `pnpm link:` and resolved through Vite aliases. Vite transpiles TypeScript on-the-fly via esbuild.

Maintaining a build step (`tsc`, `tsup`, `rollup`) for each package would introduce:

* An additional `watch` process during development
* A propagation delay for modifications reaching applications
* Configuration complexity (source maps, declaration files, dual CJS/ESM)
* A risk of desynchronization between source and compiled artifacts

## Decision

The `@granit/*` packages export their raw `.ts` source files directly via the `exports` field in `package.json`:

```json
{
  "exports": {
    ".": "./src/index.ts"
  }
}
```

No `dist/` directory is generated or committed. Consumer applications resolve imports via Vite aliases pointing to the sources. A `publishConfig` with `dist/` exports and a `tsup` build step is provided for npm publication to GitHub Packages.
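The `publishConfig` arrangement mentioned above is not reproduced in this abridged documentation; the sketch below shows one plausible shape, assuming a `tsup`-generated `dist/` layout (the package name, file paths, and script are illustrative, not the repository's actual manifest). pnpm applies `publishConfig` field overrides, including `exports`, when the package is published:

```json
{
  "name": "@granit/example",
  "type": "module",
  "exports": {
    ".": "./src/index.ts"
  },
  "publishConfig": {
    "exports": {
      ".": {
        "types": "./dist/index.d.ts",
        "import": "./dist/index.js"
      }
    }
  },
  "scripts": {
    "build": "tsup src/index.ts --format esm --dts"
  }
}
```

In development, consumers resolve `"."` to `./src/index.ts`; the published artifact instead points at the built `dist/` files, giving the two consumption modes described in this ADR.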
## Alternatives considered

### Option 1: Source-direct with Vite transpilation (selected)

* **Advantage**: instant HMR, zero build configuration, direct source maps
* **Disadvantage**: coupled to Vite (or any bundler capable of TypeScript transpilation)

### Option 2: tsup/rollup watch mode

* **Advantage**: standard npm distribution, compatible with any consumer
* **Disadvantage**: additional `watch` process per package, propagation delay, dual CJS/ESM complexity, source map indirection

### Option 3: tsc --watch with project references

* **Advantage**: official TypeScript tooling, incremental compilation
* **Disadvantage**: slow watch mode on large workspaces, declaration file management, no tree-shaking

## Justification

| Criterion | Source-direct | tsup/rollup watch | tsc --watch |
| --- | --- | --- | --- |
| HMR latency | Instant | ~1-3s per change | ~2-5s per change |
| Dev configuration | None | Per-package config | tsconfig references |
| Source maps | Direct to source | Through dist/ layer | Through dist/ layer |
| Refactoring support | Full (IDE resolves source) | Partial (may resolve dist/) | Good |
| CI type-check | `pnpm tsc --noEmit` | `pnpm tsc --noEmit` | `pnpm tsc --build` |
| npm publication | Requires `tsup` build | Ready | Ready |

## Packages used

| Package | Role |
| --- | --- |
| `tsup` | Build step for npm publication only (`publishConfig`) |
| `esbuild` (via Vite) | On-the-fly TypeScript transpilation in development |

## Consequences

### Positive

* Instant HMR: modifications in a package are reflected immediately in the application
* Zero build configuration in development
* Direct source maps to the original code (no intermediate layer)
* Fluid refactoring: TypeScript tools (rename, find references) traverse packages seamlessly
* Simplified CI: `pnpm tsc --noEmit` is sufficient for type checking

### Negative

* Coupled to Vite: consumer applications must use a bundler capable of TypeScript source transpilation
* Publication: a build step is required before npm publication, creating two consumption modes (source-direct vs dist)
* Compatibility: packages are not directly usable by a Node.js project without transpilation

## Re-evaluation conditions

This decision should be re-evaluated if:

* A consumer application needs to use a bundler that cannot transpile TypeScript (unlikely)
* The number of packages grows beyond a point where Vite's on-the-fly transpilation becomes a performance bottleneck
* TypeScript natively supports running `.ts` files without transpilation (Node.js `--strip-types`)

## References

* Vite TypeScript support:
* tsup:
# ADR-002: pnpm Workspace Monorepo
> Use pnpm as the exclusive package manager for the monorepo with workspace:* protocol for internal dependencies
> **Date:** 2026-02-27
> **Authors:** Jean-Francois Meyers
> **Scope:** granit-front

## Context

The Granit frontend framework is composed of multiple packages with inter-package dependencies. The package manager must:

* Manage a multi-package workspace with local symbolic links
* Efficiently resolve shared dependencies (hoisting)
* Offer fast resolution and installation performance
* Support the `workspace:*` protocol for internal dependencies

## Decision

Use **pnpm** as the exclusive package manager for the monorepo. Configuration in `pnpm-workspace.yaml`:

```yaml
packages:
  - 'packages/@granit/*'
```

Inter-package dependencies use `workspace:*` in `peerDependencies`:

```json
{
  "peerDependencies": {
    "@granit/api-client": "workspace:*"
  }
}
```

## Alternatives considered

### Option 1: pnpm (selected)

* **Advantage**: content-addressable store, isolated `node_modules`, native `workspace:*` protocol, fast installs
* **Disadvantage**: less widespread than npm, pnpm-specific lock file

### Option 2: npm workspaces

* **Advantage**: zero additional tooling (ships with Node.js)
* **Disadvantage**: aggressive hoisting creates phantom dependencies, no `workspace:*` protocol, slower performance

### Option 3: Yarn Berry (PnP)

* **Advantage**: Plug’n’Play eliminates `node_modules`, zero-installs
* **Disadvantage**: PnP complicates Vite and TypeScript tool integration, zero-installs bloat the repository

## Justification

| Criterion | pnpm | npm workspaces | Yarn Berry |
| --- | --- | --- | --- |
| Dependency isolation | Strict (non-flat) | Hoisted (phantom deps) | Strict (PnP) |
| Install performance | Fast (content-addressable) | Moderate | Fast (zero-install) |
| `workspace:*` protocol | Native | Not supported | Supported |
| Vite compatibility | Works out of the box | Works | Requires PnP configuration |
| Cross-package commands | `pnpm -r exec`, `--filter` | `npm -ws exec` | `yarn workspaces foreach` |
| Community adoption | Growing rapidly | Universal | Declining |

## Consequences

### Positive

* Strict isolation: no phantom dependencies thanks to pnpm’s non-flat `node_modules`
* Performance: content-addressable store shared across projects, near-instant installs after the first run
* `workspace:*`: automatic resolution of local packages with version replacement at publication
* `pnpm -r exec`: command execution across all packages (lint, tsc)
* `--filter`: targeted execution on a specific package

### Negative

* Adoption: pnpm is less widespread than npm, which may surprise new contributors
* Lock file: `pnpm-lock.yaml` is incompatible with other managers, imposing pnpm on all contributors
* Overrides: pnpm-specific syntax in `package.json` (`pnpm.overrides`)

## Re-evaluation conditions

This decision should be re-evaluated if:

* npm workspaces gains native `workspace:*` support and strict isolation
* A new package manager emerges with compelling advantages over pnpm
* The pnpm project is abandoned or maintenance slows significantly

## References

* pnpm:
* pnpm workspaces:
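To illustrate the version replacement at publication mentioned in this ADR: when a package is published with `pnpm publish`, pnpm rewrites `workspace:*` ranges to the linked package's actual version. A declaration such as `"@granit/api-client": "workspace:*"` would land in the published manifest roughly as follows (the version number is illustrative):

```json
{
  "peerDependencies": {
    "@granit/api-client": "1.4.0"
  }
}
```

`workspace:^` and `workspace:~` variants are rewritten to `^`- and `~`-prefixed ranges instead of an exact version, so the choice of suffix controls how strictly published consumers are pinned.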
# ADR-003: React 19 as Minimum Version
> Require React 19 as the minimum peer dependency to leverage React Compiler, use() hook, and Actions
> **Date:** 2026-02-27
> **Authors:** Jean-Francois Meyers
> **Scope:** All `@granit/*` React packages

## Context

The `@granit/*` packages declare `react: "^19.0.0"` in their `peerDependencies`. This requires consumer applications to use React 19 as a minimum. React 19 introduces several features leveraged by the framework:

* **React Compiler**: automatic re-render optimization (replaces manual `useMemo`/`useCallback`)
* **`use()` hook**: read promises and contexts during render
* **Actions**: native integration with forms and mutations
* **Suspense improvements**: native streaming SSR support
* **`ref` as prop**: eliminates the need for `forwardRef` in wrapper components

## Decision

Set React 19 as the minimum version in all `@granit/*` packages that have a peer dependency on React:

```json
{
  "peerDependencies": {
    "react": "^19.0.0"
  }
}
```

React 18 and earlier versions are not supported.

## Alternatives considered

### Option 1: React 19 minimum (selected)

* **Advantage**: modern APIs without conditions or polyfills, React Compiler benefits, simpler codebase
* **Disadvantage**: excludes applications still on React 18

### Option 2: React 18+ with conditional features

* **Advantage**: broader compatibility
* **Disadvantage**: conditional branches for multiple major versions, cannot use new APIs unconditionally, maintenance burden

### Option 3: React 18 minimum

* **Advantage**: maximum compatibility
* **Disadvantage**: cannot leverage React 19 features, technical debt from day one

## Justification

| Criterion | React 19 min | React 18+ conditional | React 18 min |
| --- | --- | --- | --- |
| API surface | Full React 19 | Partial (conditional) | React 18 only |
| React Compiler | Automatic | Not available | Not available |
| Code complexity | Simple | Higher (version branches) | Simple |
| Forward-looking | Yes | Partial | No |
| Consumer constraint | React 19 required | React 18+ | React 18+ |

## Consequences

### Positive

* Modern API: framework code uses new APIs without conditions or polyfills
* Performance: automatic benefit from React Compiler in consumer applications
* Simplicity: no conditional branches for multiple React major versions
* Forward-looking: all consumer applications are already on React 19

### Negative

* Exclusion: applications still on React 18 cannot use `@granit/*` packages
* Ecosystem: some third-party libraries may not yet support React 19 (transitory risk)

## Re-evaluation conditions

This decision should be re-evaluated if:

* A major consumer application cannot upgrade to React 19
* React 20 introduces breaking changes that require multi-version support

## References

* React 19 release notes:
* React Compiler:
# ADR-004: Headless Packages — Hooks Only, UI in Consumer Apps
> Framework packages export only hooks, types, providers, and utilities — no React components
> **Date:** 2026-03-06
> **Authors:** Jean-Francois Meyers
> **Scope:** All `@granit/*` packages

## Context

Initially, the monorepo contained UI packages that exposed React components (buttons, modals, tables, etc.) alongside business hooks. This approach caused several problems:

* **Tight coupling** between business logic and visual presentation
* **Design divergence**: different consumer applications use different design systems (components, tokens, spacing)
* **Duplication**: each application ended up wrapping UI components to adapt them to its design system
* **Heavy UI dependencies**: Tailwind CSS, lucide-react, shadcn/ui had to be aligned between the framework and applications

A refactoring (2026-03-06) eliminated the UI layer from the framework.

## Decision

The `@granit/*` packages are strictly **headless**: they export only hooks, types, providers, and utility functions. No React component (JSX) is exported by the framework. UI components live exclusively in consumer applications, each implementing its own design system.
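As a minimal sketch of what "headless" means in practice: the framework ships pure state logic and types, and each application renders that state with its own design system. The names below (`SelectionState`, `toggleSelection`, `isSelected`) are illustrative, not actual `@granit/*` exports.

```typescript
// Illustrative headless core: pure state logic and types, no JSX,
// no styling. A consumer app wraps this in its own hook/component
// using its own design system.
export interface SelectionState {
  readonly selected: ReadonlySet<string>;
}

// Immutable toggle: returns a new state with the id added or removed.
export function toggleSelection(
  state: SelectionState,
  id: string,
): SelectionState {
  const next = new Set(state.selected);
  if (next.has(id)) {
    next.delete(id);
  } else {
    next.add(id);
  }
  return { selected: next };
}

export function isSelected(state: SelectionState, id: string): boolean {
  return state.selected.has(id);
}
```

Because the logic is framework-agnostic and side-effect free, it can be unit-tested without a DOM, which is the test-complexity advantage claimed in the justification table below.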
## Alternatives considered

### Option 1: Headless packages (selected)

* **Advantage**: clean separation between logic and presentation, no UI dependency in the framework, stable TypeScript API contract
* **Disadvantage**: component duplication across consumer applications

### Option 2: Shared component library with theming

* **Advantage**: consistent UI across applications, single implementation
* **Disadvantage**: theme configuration complexity, design system lock-in, heavy dependencies in the framework

### Option 3: Headless + optional UI package

* **Advantage**: best of both worlds — headless core with optional UI
* **Disadvantage**: double maintenance burden, risk of UI package becoming mandatory over time, version alignment complexity

## Justification

| Criterion | Headless | Shared components | Headless + optional UI |
| --- | --- | --- | --- |
| Logic/UI separation | Complete | Coupled | Complete |
| Design freedom | Full | Constrained | Full |
| Framework dependencies | Minimal | Heavy (Tailwind, etc.) | Minimal core |
| Test complexity | Low (hooks only) | Higher (DOM, styles) | Low core |
| Component duplication | Yes | No | Partial |
| Maintenance | Low | High | Medium |

## Consequences

### Positive

* Clean separation between logic (framework) and presentation (application)
* Design freedom: each application chooses its design system without constraint
* Fewer dependencies: packages no longer need Tailwind, lucide-react, or any component library
* Simplified testing: testing hooks is simpler than testing components (no DOM, no styles)
* Stable API: the public interface is a TypeScript contract (types + hook signatures), independent of visual rendering

### Negative

* Component duplication: consumer applications implement their own table, export modal, etc.
* No cross-app UI consistency: the framework does not guarantee visual homogeneity between applications
* Initial effort: each new application must implement the UI layer for each `@granit/*` package it uses

## Re-evaluation conditions

This decision should be re-evaluated if:

* All consumer applications converge on the same design system
* A headless component library (e.g. Radix, Ark UI) provides sufficient abstraction to share UI without design coupling
* The duplication cost exceeds the coupling cost

## References

* Headless UI pattern:
* TanStack Table (headless reference):
# ADR-005: Keycloak as Authentication Provider
> Use Keycloak for OIDC/OAuth 2.0 authentication to meet HDS compliance and enable self-hosted European deployment
> **Date:** 2026-02-27
> **Authors:** Jean-Francois Meyers
> **Scope:** @granit/authentication, @granit/react-authentication, @granit/react-authorization

## Context

The platform operates in a healthcare data hosting (HDS) context that imposes strict authentication requirements:

* **OpenID Connect / OAuth 2.0**: standard protocols required
* **Complete audit**: connection and session traceability
* **Multi-tenant**: realm isolation per tenant
* **HDS compliance**: the provider must be hostable in France on private infrastructure (no US cloud)
* **Open source**: no commercial lock-in

## Decision

Use **Keycloak** as the authentication provider via the following packages:

* `@granit/authentication` — OIDC types (framework-agnostic)
* `@granit/react-authentication` — Keycloak init hook, auth context factory, mock provider
* `@granit/react-authorization` — Permission checking hooks (RBAC)

The packages encapsulate `keycloak-js` and expose:

* A React authentication context (`BaseAuthContextType`)
* Integration hooks (`useAuth`, `useKeycloakInit`, `usePermissions`)
* A mock provider for tests and local development
* A 401 interceptor for revoked session handling

The `BaseAuthContextType` is an extensible interface: consumer applications add their own fields (e.g. `register`, `hasAdminRole`).
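The extension pattern can be sketched as follows. The fields shown for `BaseAuthContextType` are assumptions for illustration; the real interface in `@granit/authentication` may differ, and `AppAuthContextType` is a hypothetical consumer-side name.

```typescript
// Assumed minimal shape of BaseAuthContextType (illustrative; the
// actual interface in @granit/authentication may declare more fields).
interface BaseAuthContextType {
  isAuthenticated: boolean;
  token: string | undefined;
  login: () => void;
  logout: () => void;
}

// A consumer application extends the base contract with its own
// fields, e.g. a role check (name is illustrative).
interface AppAuthContextType extends BaseAuthContextType {
  hasAdminRole: () => boolean;
}

// Mock context value for tests and local development, in the spirit
// of the mock provider mentioned in this ADR (values are fake).
const mockAuthContext: AppAuthContextType = {
  isAuthenticated: true,
  token: "mock-token",
  login: () => {},
  logout: () => {},
  hasAdminRole: () => true,
};
```

Because the extension is purely structural, framework hooks that only require `BaseAuthContextType` accept the extended context unchanged.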
## Alternatives considered

### Option 1: Keycloak (selected)

* **Advantage**: open source (Apache 2.0), self-hosted, standard protocols, CNCF project, large community, realm-per-tenant isolation
* **Disadvantage**: operational burden (deployment, updates, realm backup, monitoring), `keycloak-js` API sometimes unstable between major versions

### Option 2: Auth0

* **Advantage**: fully managed SaaS, excellent developer experience
* **Disadvantage**: **incompatible with HDS** — hosted in the US, Cloud Act applies, commercial lock-in

### Option 3: Azure AD B2C

* **Advantage**: native Azure integration, managed service
* **Disadvantage**: Azure dependency, US Cloud Act applies, complex configuration for custom flows

### Option 4: Custom OIDC implementation

* **Advantage**: full control, no external dependency
* **Disadvantage**: enormous implementation effort, security risk, no community review, certification burden

## Justification

| Criterion | Keycloak | Auth0 | Azure AD B2C | Custom |
| --- | --- | --- | --- | --- |
| HDS compliance | Yes (self-hosted EU) | No (US) | No (US Cloud Act) | Yes |
| Open source | Yes (Apache 2.0) | No | No | Yes |
| OIDC/OAuth 2.0 | Native | Native | Native | Manual |
| Multi-tenant (realms) | Native | Per-tenant plans | Partial | Manual |
| Operational cost | Self-hosted | SaaS fee | Azure fee | Development cost |
| Community | CNCF, Red Hat | Large | Large | None |

## Packages used

| Package | Role |
| --- | --- |
| `keycloak-js` | Official Keycloak JavaScript adapter |

## Consequences

### Positive

* HDS compliance: Keycloak is self-hosted on private European infrastructure
* Standards: OpenID Connect, OAuth 2.0, SAML 2.0 supported natively
* Extensibility: themes, SPIs, identity federation
* Community: CNCF project, active Red Hat maintenance
* Isolation: one realm per tenant, no shared data

### Negative

* Operations: Keycloak must be deployed and maintained (updates, realm backup, monitoring)
* Complexity: realm, client, and role configuration is non-trivial
* `keycloak-js`: the official client library has occasionally unstable APIs between major versions

## Re-evaluation conditions

This decision should be re-evaluated if:

* A European managed identity service emerges with HDS certification
* Keycloak maintenance burden becomes disproportionate
* `keycloak-js` is deprecated in favor of a generic OIDC client

## References

* Keycloak:
* keycloak-js:
# ADR-006: TanStack Query for Data Fetching
> Use TanStack Query v5 as the data fetching layer with intelligent caching, structured query keys, and typed mutations
> **Date:** 2026-03-04
> **Authors:** Jean-Francois Meyers
> **Scope:** All `@granit/react-*` packages with data fetching

## Context

The data-fetching React packages (`@granit/react-querying`, `@granit/react-data-exchange`, `@granit/react-authorization`, `@granit/react-authentication-api-keys`, etc.) need a caching and synchronization layer with:

* **Intelligent cache**: avoid duplicate requests, targeted invalidation
* **Pagination**: native support for page/pageSize pagination
* **Mutations**: optimistic updates with cache invalidation
* **Retry and refetch**: network error resilience
* **DevTools**: cache inspection during development

## Decision

Use **TanStack Query v5** (`@tanstack/react-query ^5.0.0`) as the data fetching layer. The dependency is declared in `peerDependencies`:

```json
{
  "peerDependencies": {
    "@tanstack/react-query": "^5.0.0"
  }
}
```

Public hooks (`usePermissions`, `useApiKeys`, `useIdentityCapabilities`, etc.) encapsulate `useQuery` and `useMutation` from TanStack Query.
## Alternatives considered

### Option 1: TanStack Query v5 (selected)

* **Advantage**: rich API, typed query keys, mutations with invalidation, complete DevTools, large adoption, Orval compatibility
* **Disadvantage**: peer dependency, ~13 kB gzip, learning curve

### Option 2: SWR (Vercel)

* **Advantage**: simpler API, smaller bundle
* **Disadvantage**: no structured mutations, no typed query keys, limited DevTools, less suitable for complex CRUD operations

### Option 3: Custom hooks (useState + useEffect)

* **Advantage**: zero dependency, full control
* **Disadvantage**: reinventing cache, deduplication, pagination, and invalidation is a major effort with subtle bug risks

## Justification

| Criterion | TanStack Query | SWR | Custom hooks |
| --- | --- | --- | --- |
| Cache management | Excellent | Good | Manual |
| Query key typing | Native | Manual | Manual |
| Mutation support | `useMutation` + invalidation | Manual | Manual |
| Pagination | Built-in | Partial | Manual |
| DevTools | Complete | Basic | None |
| Bundle size | ~13 kB gzip | ~4 kB gzip | 0 kB |
| Ecosystem (Orval) | Compatible | Partial | N/A |

## Consequences

### Positive

* Shared cache: all queries share the same `QueryClient`, enabling cross-invalidation
* Structured query keys: key factories (`permissionKeys`, `apiKeyKeys`) ensure key consistency across hooks
* Typed mutations: `useMutation` with `onSuccess` / `onError` / `onSettled` for business flows (export, import, workflow transitions)
* DevTools: cache inspection, query replay in development
* Ecosystem: compatible with Orval for automatic hook generation

### Negative

* Peer dependency: consumer applications must install and configure `@tanstack/react-query` (QueryClientProvider)
* Bundle size: ~13 kB gzip (acceptable for an SPA)
* Learning curve: concepts of stale time, gc time, invalidation, and query keys can be complex for new contributors

## Re-evaluation conditions

This decision should be re-evaluated if:

* React introduces a built-in data fetching primitive that replaces TanStack Query
* TanStack Query v6 introduces breaking changes that require major migration effort
* A significantly lighter alternative emerges with equivalent features

## References

* TanStack Query:
* Query key factories:
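The structured query keys mentioned in this ADR follow the key-factory pattern. A sketch of what a factory like `permissionKeys` might look like (the exact shape of the real factories is an assumption):

```typescript
// Illustrative query key factory, in the style of the permissionKeys /
// apiKeyKeys factories named in this ADR. Hierarchical keys let
// invalidation target a whole subtree (e.g. every "list" query).
const permissionKeys = {
  all: ["permissions"] as const,
  lists: () => [...permissionKeys.all, "list"] as const,
  list: (userId: string) => [...permissionKeys.lists(), userId] as const,
};

// Usage inside a hook (sketch):
//   useQuery({ queryKey: permissionKeys.list(userId), queryFn: ... })
// Targeted invalidation after a mutation:
//   queryClient.invalidateQueries({ queryKey: permissionKeys.lists() })
```

Because `invalidateQueries` matches key prefixes, invalidating `permissionKeys.lists()` refetches every per-user list while leaving unrelated cache entries untouched.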
# ADR-007: Vitest as Test Runner
> Use Vitest for TypeScript unit testing with native ESM support and zero transpilation configuration
> **Date:** 2026-02-27 **Authors:** Jean-Francois Meyers **Scope:** granit-front ## Context [Section titled “Context”](#context) The monorepo needs a unit test framework capable of: * Executing TypeScript tests without prior transpilation * Supporting ESM (`"type": "module"` in `package.json`) * Offering a performant watch mode for development * Generating coverage reports (lcov, html) * Working in a pnpm multi-package workspace ## Decision [Section titled “Decision”](#decision) Use **Vitest** (v4) as the test framework for all packages in the monorepo: ```bash pnpm test # watch mode (development) pnpm test:coverage # v8 coverage (CI) ``` Tests are co-located with source code: * `src/**/*.test.ts` for simple unit tests * `src/__tests__/` for more complex test suites The minimum coverage target is 80% on all new code. ## Alternatives considered [Section titled “Alternatives considered”](#alternatives-considered) ### Option 1: Vitest (selected) [Section titled “Option 1: Vitest (selected)”](#option-1-vitest-selected) * **Advantage**: uses the same transformation pipeline as Vite (esbuild), native ESM and TypeScript support, Jest-compatible API, fast file-system-based watch mode * **Disadvantage**: younger ecosystem than Jest, some Jest plugins have no Vitest equivalent ### Option 2: Jest [Section titled “Option 2: Jest”](#option-2-jest) * **Advantage**: industry standard, extensive plugin ecosystem * **Disadvantage**: experimental and unstable ESM support, TypeScript configuration requires `ts-jest` or `@swc/jest`, slow watch mode on large workspaces ### Option 3: Node.js native test runner [Section titled “Option 3: Node.js native test runner”](#option-3-nodejs-native-test-runner) * **Advantage**: zero dependency, ships with Node.js * **Disadvantage**: limited API, no watch mode, no coverage integration, no mocking framework ## Justification [Section titled “Justification”](#justification) | Criterion | Vitest | Jest | Node.js test runner | | 
------------------------ | ------------------------ | ------------------- | ------------------- | | TypeScript transpilation | esbuild (automatic) | ts-jest / @swc/jest | Manual | | ESM support | Native | Experimental | Native | | Watch mode | File-system-based (fast) | Polling (slow) | None | | API compatibility | Jest-compatible | Standard | Limited | | Coverage | v8 (built-in) | istanbul / v8 | Experimental | | Workspace-aware | `vitest.workspace.ts` | Custom config | No | ## Consequences [Section titled “Consequences”](#consequences) ### Positive [Section titled “Positive”](#positive) * Zero transpilation configuration: Vitest uses esbuild, same as Vite — `.ts` files are transpiled on-the-fly * Native ESM: no issues with `import`/`export`, `import.meta`, etc. * Performance: parallel execution, file-system-based watch mode (no polling) * Jest-compatible API: `describe`, `it`, `expect`, `vi.fn()` — minimal learning curve for developers coming from Jest * v8 coverage: lcov and html reports without additional dependencies * Workspace-aware: `vitest.workspace.ts` for multi-package configuration ### Negative [Section titled “Negative”](#negative) * Ecosystem: some Jest plugins have no Vitest equivalent (marginal risk) * Maturity: Vitest is more recent than Jest (mitigated by massive adoption in the Vite ecosystem) ## Re-evaluation conditions [Section titled “Re-evaluation conditions”](#re-evaluation-conditions) This decision should be re-evaluated if: * Jest achieves stable ESM support with comparable performance * Vitest development slows or the project is abandoned * A new test runner emerges with compelling advantages ## References [Section titled “References”](#references) * Vitest: * Vitest workspace:
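The conventions in this ADR (co-located `src/**/*.test.ts` files, dedicated `src/__tests__/` suites, built-in v8 coverage with lcov/html reports) translate into a per-package Vitest config file. This is a sketch only: Vitest's config surface has shifted across major versions, and the globs below are illustrative assumptions, not the monorepo's actual settings.

```typescript
// packages/<name>/vitest.config.ts — illustrative globs, not the real monorepo config
import { defineConfig } from 'vitest/config'

export default defineConfig({
  test: {
    // co-located unit tests, plus more complex suites under src/__tests__/
    include: ['src/**/*.test.ts', 'src/__tests__/**/*.test.ts'],
    coverage: {
      provider: 'v8',             // built-in v8 coverage, no extra dependency
      reporter: ['lcov', 'html'], // reports consumed by CI
    },
  },
})
```

A root-level `vitest.workspace.ts` can then list these per-package configs so `pnpm test` discovers every package in the workspace.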
# ADR-008: OpenTelemetry for Distributed Tracing
> Implement OpenTelemetry browser tracing for end-to-end request correlation between frontend and .NET backend
> **Date:** 2026-03-04 **Authors:** Jean-Francois Meyers **Scope:** @granit/tracing, @granit/react-tracing, @granit/logger-otlp ## Context [Section titled “Context”](#context) The platform requires distributed tracing to: * Follow requests end-to-end (frontend → .NET backend → database) * Diagnose performance issues * Correlate frontend logs with backend traces * Feed an observability backend (Grafana Tempo) The .NET backend already uses OpenTelemetry (see [ADR-001](/architecture/adr/001-observability/)). The frontend tracing must use the same standard for seamless correlation. ## Decision [Section titled “Decision”](#decision) Use **OpenTelemetry** via the `@granit/tracing` package, which encapsulates: * `WebTracerProvider` for SDK initialization * `OTLPTraceExporter` (HTTP) for trace export * Auto-instrumentations: `fetch`, `XMLHttpRequest`, `document-load` * `TracingProvider` (React context) for activation in the component tree * `useTracer` and `useSpan` hooks for custom spans * `getTraceContext` for non-React integration (e.g. `@granit/logger-otlp`) The `@granit/logger-otlp` package extends `@granit/logger` to inject trace IDs into logs, enabling log-to-trace correlation in Grafana. 
All OpenTelemetry dependencies are declared as `peerDependencies`: ```json { "peerDependencies": { "@opentelemetry/api": "^1.9.0", "@opentelemetry/sdk-trace-web": "^2.6.0", "@opentelemetry/exporter-trace-otlp-http": "^0.213.0", "@opentelemetry/instrumentation-fetch": "^0.213.0", "@opentelemetry/instrumentation-xml-http-request": "^0.213.0", "@opentelemetry/instrumentation-document-load": "^0.57.0", "@opentelemetry/resources": "^2.6.0", "@opentelemetry/semantic-conventions": "^1.40.0", "@opentelemetry/context-zone": "^2.6.0" } } ``` ## Alternatives considered [Section titled “Alternatives considered”](#alternatives-considered) ### Option 1: OpenTelemetry (selected) [Section titled “Option 1: OpenTelemetry (selected)”](#option-1-opentelemetry-selected) * **Advantage**: CNCF standard, vendor-agnostic, same protocol as .NET backend, auto-instrumentation for HTTP and page load, self-hostable collector * **Disadvantage**: 9 peer dependencies, web SDK less mature than server-side ### Option 2: Proprietary solution (Datadog, New Relic) [Section titled “Option 2: Proprietary solution (Datadog, New Relic)”](#option-2-proprietary-solution-datadog-new-relic) * **Advantage**: complete SaaS solution (logs + traces + metrics + APM) * **Disadvantage**: **incompatible with data sovereignty** — US company subject to Cloud Act, high cost per host/GB ### Option 3: Custom trace propagation [Section titled “Option 3: Custom trace propagation”](#option-3-custom-trace-propagation) * **Advantage**: minimal dependencies, full control * **Disadvantage**: non-standard, no auto-instrumentation, no ecosystem tooling, no correlation with backend traces ## Justification [Section titled “Justification”](#justification) | Criterion | OpenTelemetry | Datadog / New Relic | Custom | | -------------------- | ------------------------- | ---------------------------- | ----------- | | Sovereignty | Self-hosted collector | No (US) | Self-hosted | | CNCF standard | Yes | Partial (proprietary agents) | No | 
| Backend correlation | Same protocol (.NET OTel) | Proprietary | Manual | | Auto-instrumentation | fetch, XHR, page load | Full | None | | Cost | Infrastructure only | Per-host/GB | Development | ## Packages used [Section titled “Packages used”](#packages-used) | Package | Role | | ------------------------------------------------- | ----------------------------------------- | | `@opentelemetry/api` | Core tracing API | | `@opentelemetry/sdk-trace-web` | Browser tracer provider | | `@opentelemetry/exporter-trace-otlp-http` | OTLP HTTP trace export | | `@opentelemetry/instrumentation-fetch` | Auto-instrumentation for `fetch()` | | `@opentelemetry/instrumentation-xml-http-request` | Auto-instrumentation for XHR | | `@opentelemetry/instrumentation-document-load` | Page load tracing | | `@opentelemetry/resources` | Resource metadata (service name, version) | | `@opentelemetry/semantic-conventions` | Standard attribute names | | `@opentelemetry/context-zone` | Zone.js context propagation | ## Consequences [Section titled “Consequences”](#consequences) ### Positive [Section titled “Positive”](#positive) * Standard compliance: same OTLP protocol as the .NET backend, seamless end-to-end correlation * Vendor-agnostic: the observability backend can be changed without modifying frontend code * Automatic correlation: trace IDs propagate automatically between frontend and backend * Auto-instrumentation: HTTP requests (fetch, XHR) and page load are traced automatically * Data sovereignty: the OTLP collector is self-hosted on European infrastructure ### Negative [Section titled “Negative”](#negative) * Peer dependency count: 9 OpenTelemetry packages add configuration overhead in consumer applications * Web SDK maturity: the OpenTelemetry web SDK is less mature than server-side SDKs (Node.js, .NET) * Performance: instrumentation adds slight overhead to HTTP requests (mitigated by graceful degradation when the collector is absent) ## Re-evaluation conditions [Section titled 
“Re-evaluation conditions”](#re-evaluation-conditions) This decision should be re-evaluated if: * The OpenTelemetry web SDK is deprecated in favor of a different approach * Browser-native tracing APIs emerge (e.g. Performance Observer extensions) * The number of peer dependencies becomes a significant maintenance burden ## References [Section titled “References”](#references) * OpenTelemetry JS: * OpenTelemetry Web SDK: * ADR-001 (.NET Observability): [ADR-001](/architecture/adr/001-observability/)
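The end-to-end correlation described above rides on the W3C Trace Context `traceparent` header, which the fetch/XHR instrumentations attach to outgoing requests and the .NET backend reads back. As a minimal, dependency-free sketch (function and type names are illustrative, not the `@granit/tracing` API), parsing that header looks like this:

```typescript
// Minimal W3C Trace Context "traceparent" parser (sketch).
// Format: version "00" - 32-hex trace-id - 16-hex parent-id - 2-hex flags,
// e.g. "00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01".
interface TraceContext {
  traceId: string
  spanId: string
  sampled: boolean
}

function parseTraceparent(header: string): TraceContext | null {
  const match = /^00-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$/.exec(header.trim())
  if (!match) return null
  const [, traceId, spanId, flags] = match
  // all-zero trace-id or parent-id is invalid per the spec
  if (/^0+$/.test(traceId) || /^0+$/.test(spanId)) return null
  return { traceId, spanId, sampled: (parseInt(flags, 16) & 0x01) === 1 }
}

const ctx = parseTraceparent('00-0af7651916cd43dd8448eb211c80319c-b7ad6b7169203331-01')
console.log(ctx?.sampled) // prints "true"; ctx?.traceId is what gets injected into log records
```

This is the mechanism behind log-to-trace correlation: the same `traceId` appears in the frontend log record, the backend span, and the Grafana Tempo trace.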
# ADR-001: Observability Stack — Serilog + OpenTelemetry
> Adoption of Serilog and OpenTelemetry for structured logging, distributed tracing, and metrics with data sovereignty compliance
> **Date:** 2026-02-21 **Authors:** Jean-Francois Meyers **Scope:** granit-dotnet (Granit.Observability) ## Context [Section titled “Context”](#context) The Granit framework provides the `Granit.Observability` module which encapsulates the configuration of structured logging, distributed tracing, and metrics. The choice of instrumentation libraries determines: * **ISO 27001 traceability**: structured timestamped logs retained for 3 years * **Distributed tracing**: request correlation across modules, Wolverine messages and HTTP calls * **Metrics**: performance monitoring and alerting * **Data sovereignty**: no telemetry data may leave European infrastructure Observability data is exported via the OTLP protocol to a self-hosted Grafana stack: **Loki** (logs), **Tempo** (traces), **Mimir** (metrics). ## Decision [Section titled “Decision”](#decision) * **Logging**: Serilog (`Serilog.AspNetCore` + `Serilog.Sinks.OpenTelemetry`) * **Tracing & metrics**: OpenTelemetry .NET SDK (7 packages) * **Export**: OTLP (OpenTelemetry Protocol) to the Grafana stack ## Alternatives considered [Section titled “Alternatives considered”](#alternatives-considered) ### Option 1: Serilog + OpenTelemetry (selected) [Section titled “Option 1: Serilog + OpenTelemetry (selected)”](#option-1-serilog--opentelemetry-selected) * **Logging**: Serilog — structured logging, enrichers (context, tenant, user), OTLP sink to unify the pipeline * **Tracing**: OpenTelemetry — CNCF standard, automatic instrumentation (ASP.NET Core, HTTP, EF Core), W3C Trace Context propagation * **Export**: OTLP to Loki/Tempo/Mimir (self-hosted in Europe) ### Option 2: Microsoft.Extensions.Logging + OpenTelemetry only [Section titled “Option 2: Microsoft.Extensions.Logging + OpenTelemetry only”](#option-2-microsoftextensionslogging--opentelemetry-only) * **Advantage**: no third-party dependency for logging * **Disadvantage**: limited enrichment, no dedicated sink for advanced structured formats, less flexible configuration 
than Serilog (namespace filtering, destructuring) ### Option 3: NLog + OpenTelemetry [Section titled “Option 3: NLog + OpenTelemetry”](#option-3-nlog--opentelemetry) * **Advantage**: NLog is mature and performant * **Disadvantage**: less developed OTLP sink ecosystem than Serilog, XML configuration (vs fluent API), modern .NET community leans toward Serilog ### Option 4: Application Insights (Azure Monitor) [Section titled “Option 4: Application Insights (Azure Monitor)”](#option-4-application-insights-azure-monitor) * **Advantage**: native .NET integration, out-of-the-box dashboard * **Disadvantage**: **incompatible with data sovereignty** — data hosted on Azure (US Cloud Act), variable cost based on ingestion volume, vendor lock-in ### Option 5: Datadog / New Relic [Section titled “Option 5: Datadog / New Relic”](#option-5-datadog--new-relic) * **Advantage**: complete SaaS solution (logs + traces + metrics + APM) * **Disadvantage**: **incompatible with data sovereignty** — data outside EU (or EU region but US company subject to Cloud Act), high cost per host/GB ## Justification [Section titled “Justification”](#justification) | Criterion | Serilog + OTel | MEL + OTel | NLog + OTel | App Insights | Datadog | | -------------------- | -------------- | ----------- | ----------- | ------------ | --------- | | Sovereignty | Self-hosted | Self-hosted | Self-hosted | No (Azure) | No (US) | | CNCF standard | Yes (OTel) | Yes (OTel) | Yes (OTel) | Partial | Partial | | Log enrichment | Excellent | Basic | Good | Good | Excellent | | Native OTLP sink | Yes | No | Partial | N/A | N/A | | .NET community | Very large | Standard | Medium | Large | Medium | | Cost | Infra only | Infra only | Infra only | Variable | High | | ISO 27001 compliance | Yes | Yes | Yes | Risk | Risk | ## Packages used [Section titled “Packages used”](#packages-used) | Package | Role | | --------------------------------------------------- | ------------------------------------------- | | 
`Serilog.AspNetCore` | ASP.NET Core integration, request enrichers | | `Serilog.Sinks.OpenTelemetry` | Log export via OTLP | | `OpenTelemetry` | Core SDK | | `OpenTelemetry.Api` | Instrumentation API (ActivitySource, Meter) | | `OpenTelemetry.Extensions.Hosting` | `IHostBuilder` integration | | `OpenTelemetry.Instrumentation.AspNetCore` | Automatic HTTP request traces | | `OpenTelemetry.Instrumentation.Http` | Automatic `HttpClient` call traces | | `OpenTelemetry.Instrumentation.EntityFrameworkCore` | Automatic EF Core query traces | | `OpenTelemetry.Exporter.OpenTelemetryProtocol` | OTLP export to Loki/Tempo/Mimir | ## Consequences [Section titled “Consequences”](#consequences) ### Positive [Section titled “Positive”](#positive) * Sovereignty compliance: zero telemetry data leaving European hosting * CNCF standard: portability to any OTLP-compatible backend * Full correlation: logs, traces, and metrics via the same TraceId * Serilog contextual enrichment: tenant, user, module, correlation-id * Unified Grafana dashboard for the entire platform ### Negative [Section titled “Negative”](#negative) * Grafana stack maintenance (Loki, Tempo, Mimir) falls on the SRE team * Initial configuration more complex than a SaaS solution * Serilog is a third-party dependency (maintenance risk, although very stable) ## Re-evaluation conditions [Section titled “Re-evaluation conditions”](#re-evaluation-conditions) This decision should be re-evaluated if: * A European managed observability service emerges (ISO 27001 certified) * OpenTelemetry .NET SDK reaches feature parity with Serilog for structured logging * The maintenance burden of the self-hosted Grafana stack becomes disproportionate ## References [Section titled “References”](#references) * Initial commit: `52f1444` (2026-02-21) * Serilog: * OpenTelemetry .NET:
# ADR-002: Redis via StackExchange.Redis — Distributed Cache
> Selection of Redis via StackExchange.Redis as the distributed cache backend for L1+L2 caching
> **Date:** 2026-02-21 **Authors:** Jean-Francois Meyers **Scope:** granit-dotnet (Granit.Caching, Granit.Caching.StackExchangeRedis, Granit.Caching.Hybrid) ## Context [Section titled “Context”](#context) The Granit framework provides cache abstractions (`Granit.Caching`) and requires a distributed cache backend for: * **Performance**: reducing latency for frequent reads (settings, translations, templates, permissions) * **Scalability**: shared cache across Kubernetes instances (sticky sessions impossible in an ISO 27001 context — high availability required) * **Idempotency**: HTTP idempotency key storage * **SignalR**: Redis backplane for real-time notifications The choice of cache backend determines the implementation of `Granit.Caching.StackExchangeRedis` and the L1+L2 pattern of `Granit.Caching.Hybrid`. ## Decision [Section titled “Decision”](#decision) **Redis** via **StackExchange.Redis** as the distributed cache backend (L2), combined with `Microsoft.Extensions.Caching.Hybrid` for the L1+L2 pattern. 
## Alternatives considered [Section titled “Alternatives considered”](#alternatives-considered) ### Option 1: Redis via StackExchange.Redis (selected) [Section titled “Option 1: Redis via StackExchange.Redis (selected)”](#option-1-redis-via-stackexchangeredis-selected) * **License**: MIT (StackExchange.Redis) * **Advantage**: de facto standard, native `IDistributedCache` integration, HybridCache support, Pub/Sub for invalidation, SignalR backplane ### Option 2: Memcached [Section titled “Option 2: Memcached”](#option-2-memcached) * **Advantage**: simple, lightweight, performant for pure key-value * **Disadvantage**: no Pub/Sub, no advanced data structures, no persistence, no SignalR backplane ### Option 3: NCache [Section titled “Option 3: NCache”](#option-3-ncache) * **Advantage**: .NET native solution, advanced topologies * **Disadvantage**: commercial license, limited community, no standard HybridCache integration ### Option 4: Microsoft Garnet [Section titled “Option 4: Microsoft Garnet”](#option-4-microsoft-garnet) * **Advantage**: Redis protocol compatible, superior performance * **Disadvantage**: recent project (2024), no managed service, stability risk for ISO 27001 production use ## Justification [Section titled “Justification”](#justification) | Criterion | SE.Redis | Memcached | NCache | Garnet | | ------------------- | ----------------- | ----------- | ----------- | ---------- | | Client license | MIT | Apache-2.0 | Freemium | MIT | | IDistributedCache | Native MS | Third-party | Third-party | Compatible | | HybridCache .NET 10 | Yes | No | No | Compatible | | Pub/Sub | Yes | No | Yes | Yes | | SignalR backplane | Yes (MS official) | No | No | Untested | | Maturity | 10+ years | Mature | Mature | Recent | ## Consequences [Section titled “Consequences”](#consequences) ### Positive [Section titled “Positive”](#positive) * Native integration with Microsoft DI (`IDistributedCache`, `HybridCache`) * MIT client, stable and very widely adopted * Pub/Sub for 
cache invalidation and SignalR backplane * Transparent L1+L2 HybridCache pipeline via `Granit.Caching` ### Negative [Section titled “Negative”](#negative) * Redis is an additional infrastructure dependency to operate * Redis 7.4+ license (SSPL): to monitor if self-hosted * Complex object serialization requires a consistent strategy ## Re-evaluation conditions [Section titled “Re-evaluation conditions”](#re-evaluation-conditions) This decision should be re-evaluated if: * Microsoft Garnet reaches production maturity and offers a managed service * The Redis license (SSPL) becomes problematic for self-hosted deployment * Cache needs evolve toward a pattern incompatible with Redis (e.g. geographically distributed cache) ## References [Section titled “References”](#references) * Initial commit: `76378865` (2026-02-21) * StackExchange.Redis:
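The L1+L2 pattern mentioned above can be shown in a language-agnostic sketch: a per-instance in-memory L1 in front of a shared distributed L2 (Redis in Granit), with Pub/Sub-driven invalidation of L1. All names here are illustrative; this is the shape of the pattern, not the `Granit.Caching` implementation.

```typescript
// Sketch of the L1+L2 read-through pattern (illustrative, not the Granit API).
interface DistributedCache {
  get(key: string): Promise<string | null>
  set(key: string, value: string, ttlSeconds: number): Promise<void>
}

class HybridCacheSketch {
  private l1 = new Map<string, string>() // per-instance memory cache

  constructor(private l2: DistributedCache) {}

  async getOrCreate(key: string, factory: () => Promise<string>, ttlSeconds = 300): Promise<string> {
    const local = this.l1.get(key)
    if (local !== undefined) return local // L1 hit: no network round-trip

    const shared = await this.l2.get(key)
    if (shared !== null) {                // L2 hit: warm L1 for subsequent reads
      this.l1.set(key, shared)
      return shared
    }

    const value = await factory()         // miss: compute once, populate both levels
    await this.l2.set(key, value, ttlSeconds)
    this.l1.set(key, value)
    return value
  }

  // a Redis Pub/Sub invalidation message would trigger this on every instance
  invalidate(key: string): void {
    this.l1.delete(key)
  }
}
```

The invalidation hook is why Pub/Sub matters in the comparison table: without it, each instance's L1 would serve stale values until TTL expiry.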
# ADR-003: Testing Stack — xUnit v3, NSubstitute and Bogus
> Selection of xUnit v3, NSubstitute, and Bogus as the foundational testing stack
> **Date:** 2026-02-21 **Authors:** Jean-Francois Meyers **Scope:** granit-dotnet, consuming applications ## Context [Section titled “Context”](#context) The Granit framework applies the “tests are part of the DoD” principle: each package has a `*.Tests` project and no code can be shipped without test coverage. The choice of testing stack is therefore foundational for the entire platform. Requirements: * **Test framework**: parallelism, native CancellationToken, DI in tests, xUnit.v3 support for the new APIs * **Mocking**: dependency substitution (services, repositories, HTTP clients) with a clear API and no license issues * **Test data**: realistic and localized (FR) data generation * **Coverage**: code coverage collection for CI (Cobertura/OpenCover) * **CI**: result export in TRX/JUnit format for CI integration ## Decision [Section titled “Decision”](#decision) | Role | Library | License | | -------------- | ------------------- | ------------ | | Test framework | xUnit v3 | Apache-2.0 | | Mocking | NSubstitute | BSD-3-Clause | | Test data | Bogus | MIT | | Coverage | coverlet.collector | MIT | | CI report | JunitXml.TestLogger | MIT | ## Alternatives considered [Section titled “Alternatives considered”](#alternatives-considered) ### Test framework [Section titled “Test framework”](#test-framework) #### xUnit v3 (selected) [Section titled “xUnit v3 (selected)”](#xunit-v3-selected) * Native CancellationToken (`TestContext.Current.CancellationToken`) * Parallelism by default (test collections) * Dominant adoption in the .NET open source ecosystem * Native `IAsyncLifetime` support for async setup/teardown #### NUnit [Section titled “NUnit”](#nunit) * Mature framework, rich in attributes (`[TestCase]`, `[SetUp]`, `[TearDown]`) * Disadvantage: less natural parallelism, modern .NET community leans more toward xUnit, no native CancellationToken in tests #### MSTest [Section titled “MSTest”](#mstest) * Official Microsoft framework * Disadvantage: limited features 
compared to xUnit/NUnit, low adoption in .NET open source, less expressive API #### TUnit [Section titled “TUnit”](#tunit) * Recent framework based on source generators (no runtime reflection) * Disadvantage: young project (v1.x), single maintainer, limited ecosystem (Testcontainers, Verify primarily target xUnit/NUnit) * Re-evaluation planned via a future ADR when the project reaches sufficient maturity (cf. [ADR-014](014-migration-shouldly.md)) ### Mocking [Section titled “Mocking”](#mocking) #### NSubstitute (selected) [Section titled “NSubstitute (selected)”](#nsubstitute-selected) * Clear and readable API (`service.Method().Returns(value)`) * BSD-3-Clause — no license issues * No verbose `Setup`/`Verify` syntax #### Moq [Section titled “Moq”](#moq) * Most popular historical library * **License issue**: SponsorLink (v4.20+) injected telemetry code into builds, creating a compliance risk and a community trust crisis. Incompatible with GDPR/ISO 27001 security policy #### FakeItEasy [Section titled “FakeItEasy”](#fakeiteasy) * Pleasant fluent API (`A.CallTo(() => ...).Returns(...)`) * Disadvantage: more verbose syntax than NSubstitute, smaller community ### Test data [Section titled “Test data”](#test-data) #### Bogus (selected) [Section titled “Bogus (selected)”](#bogus-selected) * Realistic data generation with locales (fr, fr\_BE, en, etc.) 
* Fluent API (`new Faker().RuleFor(...)`) * Support for complex types and business rules #### AutoFixture [Section titled “AutoFixture”](#autofixture) * Automatic generation without configuration * Disadvantage: unrealistic data (random strings), less control over business values, less intuitive syntax #### Faker.NET [Section titled “Faker.NET”](#fakernet) * Port of Faker.js * Disadvantage: less rich API than Bogus, less maintained ## Justification [Section titled “Justification”](#justification) ### Test framework [Section titled “Test framework”](#test-framework-1) | Criterion | xUnit v3 | NUnit | MSTest | TUnit | | ------------------------ | ---------- | --------- | --------- | ---------- | | Native CancellationToken | Yes | No | No | Yes | | Default parallelism | Yes | Partial | Partial | Yes | | .NET OSS adoption | Dominant | Strong | Low | Emerging | | Maturity | 15+ years | 20+ years | 20+ years | < 2 years | | License | Apache-2.0 | MIT | MIT | Apache-2.0 | ### Mocking [Section titled “Mocking”](#mocking-1) | Criterion | NSubstitute | Moq | FakeItEasy | | ---------------- | ------------ | ------------------- | ---------- | | License | BSD-3-Clause | MIT (+ SponsorLink) | Apache-2.0 | | SponsorLink risk | No | Yes | No | | API conciseness | Excellent | Good | Medium | | Community | Large | Very large | Medium | ### Test data [Section titled “Test data”](#test-data-1) | Criterion | Bogus | AutoFixture | Faker.NET | | ------------------ | ---------------- | ----------- | --------- | | FR locales | Yes (fr, fr\_BE) | No | Partial | | Realistic data | Yes | No (random) | Yes | | Fluent API | Yes | Partial | No | | Active maintenance | Yes | Yes | Low | ## Consequences [Section titled “Consequences”](#consequences) ### Positive [Section titled “Positive”](#positive) * Coherent and modern stack, adopted by the majority of the .NET ecosystem * Zero license risk (no SponsorLink, no commercial license) * Native CancellationToken xUnit v3: interruptible tests, faster 
CI * Realistic and localized test data (French names, SIRET, etc.) * Cobertura coverage for SonarQube/SonarCloud and CI ### Negative [Section titled “Negative”](#negative) * Migration from another test framework would be costly if necessary * xUnit v3 is recent: some third-party tools may have a support lag * Bogus generates pseudo-random data (non-deterministic by default — use `Seed` for reproducibility)
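The seeding caveat above is worth making concrete. Bogus draws from a pseudo-random stream, so fixing the seed makes every run reproduce the same data and thus the same failures. The principle, shown here in a tiny language-agnostic sketch (the class, the PRNG choice, and the sample pool are all illustrative, not Bogus itself):

```typescript
// Why seeding matters: a deterministic generator in the spirit of Bogus's
// Randomizer.Seed (illustrative names; mulberry32 is just a small PRNG).
class SeededFaker {
  private state: number
  constructor(seed: number) { this.state = seed | 0 }

  private next(): number {
    this.state = (this.state + 0x6d2b79f5) | 0
    let t = Math.imul(this.state ^ (this.state >>> 15), this.state | 1)
    t = (t + Math.imul(t ^ (t >>> 7), t | 61)) ^ t
    return ((t ^ (t >>> 14)) >>> 0) / 4294967296 // value in [0, 1)
  }

  pick<T>(items: T[]): T {
    return items[Math.floor(this.next() * items.length)]
  }
}

const firstNames = ['Camille', 'Hugo', 'Léa', 'Mathis'] // fr-style sample pool
const a = new SeededFaker(42)
const b = new SeededFaker(42)
console.log(a.pick(firstNames) === b.pick(firstNames)) // prints "true": same seed, same draw
```

Unseeded runs are fine for exploratory data, but CI tests should pin the seed so a red build can be replayed byte-for-byte.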
# ADR-004: Asp.Versioning — REST API Versioning
> Adoption of Asp.Versioning for semantic REST API versioning with OpenAPI integration
> **Date:** 2026-02-22 **Authors:** Jean-Francois Meyers **Scope:** granit-dotnet (Granit.ApiVersioning) ## Context [Section titled “Context”](#context) The platform REST APIs must support versioning to allow contract evolution without breaking existing clients. This need is particularly critical in a healthcare context (ISO 27001) where third-party integrators (laboratories, EHR systems) have long update cycles. Versioning must be: * **Explicit**: each endpoint declares its version * **Negotiable**: the client chooses the version via URL, header or query string * **Documented**: versions appear in the OpenAPI spec (Scalar UI) ## Decision [Section titled “Decision”](#decision) **Asp.Versioning.Mvc** (+ ApiExplorer) for semantic API versioning. ## Alternatives considered [Section titled “Alternatives considered”](#alternatives-considered) ### Option 1: Asp.Versioning (selected) [Section titled “Option 1: Asp.Versioning (selected)”](#option-1-aspversioning-selected) * **License**: MIT (.NET Foundation) * **Advantage**: official .NET Foundation package (formerly Microsoft.AspNetCore.Mvc.Versioning), support for URL segment (`/api/v1/...`), header (`api-version`), query string (`?api-version=1`), media type. 
ApiExplorer integration for OpenAPI * **Maturity**: 8+ years, migrated from the historical Microsoft package ### Option 2: Manual URL versioning (routing convention) [Section titled “Option 2: Manual URL versioning (routing convention)”](#option-2-manual-url-versioning-routing-convention) * **Advantage**: zero dependency, simple for basic cases * **Disadvantage**: no version negotiation, no sunset policies, code duplication between versions, no automatic OpenAPI integration ### Option 3: Custom naming convention (namespace-based) [Section titled “Option 3: Custom naming convention (namespace-based)”](#option-3-custom-naming-convention-namespace-based) * **Advantage**: clear code organization by namespace/version * **Disadvantage**: requires a homegrown framework, no standard, maintenance and documentation burden on the team ## Justification [Section titled “Justification”](#justification) | Criterion | Asp.Versioning | Manual URL | Custom | | ------------------ | --------------------------- | ---------- | --------- | | .NET standard | Yes (.NET Foundation) | No | No | | Version modes | URL, header, QS, media type | URL only | Variable | | Integrated OpenAPI | Yes (ApiExplorer) | No | No | | Sunset policies | Yes | No | No | | Maintenance effort | None (community) | High | Very high | ## Consequences [Section titled “Consequences”](#consequences) ### Positive [Section titled “Positive”](#positive) * .NET ecosystem standard, abundant documentation * Multi-modal versioning (URL segment by default in Granit) * Automatic integration with Scalar UI via ApiExplorer * Sunset headers for progressive deprecation of old versions ### Negative [Section titled “Negative”](#negative) * Preview version (10.0.0-preview\.1) for .NET 10 — to monitor * Initial configuration required (default convention in `GranitApiVersioningModule`)
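The multi-modal negotiation in the table (URL segment, query string, header) can be sketched client-side in a few lines. This is an illustration of the idea, not Asp.Versioning's actual resolution logic, and the precedence order shown is an assumption:

```typescript
// Sketch of multi-modal API version resolution (illustrative precedence:
// URL segment, then query string, then header, then the default version).
function resolveApiVersion(
  requestPath: string,
  headers: Record<string, string>,
  defaultVersion = '1.0',
): string {
  const url = new URL(requestPath, 'http://localhost') // base only used for parsing
  const segment = /\/api\/v(\d+(?:\.\d+)?)\//.exec(url.pathname) // /api/v1/...
  if (segment) return segment[1]
  const qs = url.searchParams.get('api-version')                 // ?api-version=2.0
  if (qs) return qs
  const header = headers['api-version']                          // api-version: 2.0
  if (header) return header
  return defaultVersion
}
```

Granit defaults to the URL-segment mode, so `/api/v1/...` is the form third-party integrators see; the other modes remain available for clients that cannot change their URLs.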
# ADR-005: Wolverine + Cronos — Messaging, CQRS and Scheduling
> Selection of Wolverine for messaging and CQRS with PostgreSQL transport, and Cronos for cron scheduling
> **Date:** 2026-02-22 **Authors:** Jean-Francois Meyers **Scope:** granit-dotnet (Granit.Wolverine, Granit.Wolverine.Postgresql, Granit.BackgroundJobs) ## Context [Section titled “Context”](#context) The platform requires: * **Asynchronous messaging**: sending commands and events between modules (domain events, integration events) with delivery guarantees * **Transactional outbox**: messages must be persisted in the same transaction as business changes (eventual consistency without loss) * **CQRS**: command/query separation with an integrated mediator * **Background jobs**: execution of recurring tasks (synchronization, cleanup, reports) with cron scheduling and multi-instance resilience * **No external broker**: for the MVP, avoid the operational complexity of a broker such as RabbitMQ or Kafka — PostgreSQL must suffice as transport Cronos is used as the cron expression parser in the `Granit.BackgroundJobs` module for recurring job scheduling. ## Decision [Section titled “Decision”](#decision) * **Wolverine** (WolverineFx) as message bus, mediator and handler framework with PostgreSQL outbox * **Cronos** as cron expression parser for background job scheduling ## Alternatives considered [Section titled “Alternatives considered”](#alternatives-considered) ### Messaging / Mediator [Section titled “Messaging / Mediator”](#messaging--mediator) #### Wolverine (selected) [Section titled “Wolverine (selected)”](#wolverine-selected) * **License**: MIT (JasperFx) * **Outbox**: native EF Core transactional (`WolverineFx.EntityFrameworkCore`) * **Transport**: native PostgreSQL (`WolverineFx.Postgresql`) — no broker required * **Pipeline**: composable middleware (validation, retry, DLQ, logging) * **Handlers**: convention-based (no interface to implement), auto-discovery * **Integration**: native FluentValidation middleware, multi-tenancy support #### MassTransit [Section titled “MassTransit”](#masstransit) * **License**: Apache-2.0 * **Advantage**: very mature, large community, 
multi-transport support (RabbitMQ, Azure SB, Amazon SQS, in-memory) * **Disadvantage**: requires an external broker for production (RabbitMQ minimum), more verbose configuration, EF Core outbox available but less integrated than Wolverine, no native PostgreSQL-as-transport #### MediatR [Section titled “MediatR”](#mediatr) * **License**: Apache-2.0 * **Advantage**: simple, lightweight, pure mediator pattern * **Disadvantage**: no outbox, no transport, no retry/DLQ, no scheduling — only an in-process mediator. Requires combining with another tool for asynchronous messaging #### Brighter [Section titled “Brighter”](#brighter) * **License**: MIT * **Advantage**: outbox support, middleware pipeline * **Disadvantage**: smaller community, less comprehensive documentation, more complex configuration than Wolverine #### NServiceBus [Section titled “NServiceBus”](#nservicebus) * **License**: commercial (Particular Software) * **Advantage**: complete enterprise solution, saga support, monitoring * **Disadvantage**: paid license, incompatible with the project’s OSS strategy ### Scheduling / Cron [Section titled “Scheduling / Cron”](#scheduling--cron) #### Cronos (selected) [Section titled “Cronos (selected)”](#cronos-selected) * **License**: MIT * **Advantage**: lightweight and fast cron parser, optional seconds support, next occurrence calculation without state * **Usage**: integrated in `RecurringJobAttribute` and `CronSchedulerAgent` #### Quartz.NET [Section titled “Quartz.NET”](#quartznet) * **Advantage**: complete scheduler with persistence, clustering, advanced triggers * **Disadvantage**: oversized (complete scheduler when Wolverine already handles execution), responsibility duplication, heavy configuration #### Hangfire [Section titled “Hangfire”](#hangfire) * **License**: LGPL-3.0 (core), commercial (Pro) * **Advantage**: built-in dashboard, recurring jobs, automatic retry * **Disadvantage**: overlap with Wolverine (transport, retry, DLQ), restrictive license for 
advanced features #### NCrontab [Section titled “NCrontab”](#ncrontab) * **Advantage**: simple and lightweight cron parser * **Disadvantage**: no seconds support, less modern API than Cronos, reduced maintenance ## Justification [Section titled “Justification”](#justification) ### Messaging [Section titled “Messaging”](#messaging) | Criterion | Wolverine | MassTransit | MediatR | Brighter | NServiceBus | | -------------------- | --------- | ----------- | ----------- | -------- | ----------- | | License | MIT | Apache-2.0 | Apache-2.0 | MIT | Commercial | | EF Core outbox | Native | Yes | No | Yes | Yes | | PostgreSQL transport | Native | No | N/A | No | No | | Broker required | No | Yes (prod) | N/A | Yes | Yes | | Middleware pipeline | Yes | Yes | Yes | Yes | Yes | | FluentValidation | Native | Third-party | Third-party | No | No | | Convention-based | Yes | Partial | No | No | No | | Multi-tenancy | Yes | Yes | No | No | Yes | ### Scheduling [Section titled “Scheduling”](#scheduling) | Criterion | Cronos | Quartz.NET | Hangfire | NCrontab | | --------- | ----------- | ------------------ | ------------------ | ----------- | | License | MIT | Apache-2.0 | LGPL/Commercial | Apache-2.0 | | Scope | Parser only | Complete scheduler | Complete scheduler | Parser only | | Seconds | Optional | Yes | No | No | | Weight | Very light | Heavy | Medium | Light | ## Consequences [Section titled “Consequences”](#consequences) ### Positive [Section titled “Positive”](#positive) * No external broker: PostgreSQL suffices as transport (operational simplicity) * Transactional outbox: zero message loss, guaranteed eventual consistency * Unified Wolverine pipeline: validation, retry, DLQ, logging, tracing * Lightweight Cronos: just a parser, orchestration is handled by Wolverine * MIT license for the entire stack ### Negative [Section titled “Negative”](#negative) * Wolverine is less known than MassTransit (smaller community) * Dependency on JasperFx (primary maintainer: Jeremy D. 
Miller) * If a broker need arises (RabbitMQ, Kafka), migration required (Wolverine supports RabbitMQ and Azure SB, but not Kafka natively) * PostgreSQL-as-transport has throughput limits vs a dedicated broker
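The stateless next-occurrence calculation that motivated the Cronos choice can be sketched with its public API (a minimal illustration; the `RecurringJobAttribute` and `CronSchedulerAgent` wiring mentioned in this ADR is Granit-specific and omitted):

```csharp
using System;
using Cronos;

// Stateless next-occurrence calculation: parse once, then derive the
// next fire time from any reference instant (no scheduler state).
var expression = CronExpression.Parse("0 */15 * * * *", CronFormat.IncludeSeconds);

DateTime? nextUtc = expression.GetNextOccurrence(DateTime.UtcNow);
Console.WriteLine($"Next run (UTC): {nextUtc}");

// Time-zone-aware overload, e.g. for tenant-local schedules.
var paris = TimeZoneInfo.FindSystemTimeZoneById("Europe/Paris");
DateTimeOffset? nextLocal = expression.GetNextOccurrence(DateTimeOffset.UtcNow, paris);
Console.WriteLine($"Next run (Europe/Paris): {nextLocal}");
```

The six-field expression relies on the optional seconds support called out above; orchestration of the computed occurrence stays in Wolverine.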
# ADR-006: FluentValidation — Business Validation Framework
> Adoption of FluentValidation for composable business validation with Wolverine pipeline integration
> **Date:** 2026-02-24 **Authors:** Jean-Francois Meyers **Scope:** granit-dotnet (Granit.Validation, Granit.Wolverine) ## Context [Section titled “Context”](#context) The platform requires a validation framework for: * **Business validation**: complex and composable rules (address, SIRET, IBAN, email, locale) with standardized error codes * **Wolverine integration**: automatic command validation before execution via the middleware pipeline (`WolverineFx.FluentValidation`) * **Error codes**: mapping to RFC 7807 ProblemDetails for HTTP responses * **Extensibility**: custom reusable validators across modules ## Decision [Section titled “Decision”](#decision) **FluentValidation** as the business validation framework. ## Alternatives considered [Section titled “Alternatives considered”](#alternatives-considered) ### Option 1: FluentValidation (selected) [Section titled “Option 1: FluentValidation (selected)”](#option-1-fluentvalidation-selected) * **License**: Apache-2.0 * **Advantage**: composable fluent API (`RuleFor(x => x.Email).EmailAddress()`), native Wolverine integration, large community, easy custom validators * **Maturity**: 15+ years, de facto standard for .NET validation ### Option 2: DataAnnotations only [Section titled “Option 2: DataAnnotations only”](#option-2-dataannotations-only) * **Advantage**: native .NET, zero dependency, integrated with model binding * **Disadvantage**: limited to simple validations (attributes), no composition, no complex conditional validation, no Wolverine middleware integration, non-standardized error codes ### Option 3: MiniValidation [Section titled “Option 3: MiniValidation”](#option-3-minivalidation) * **License**: MIT * **Advantage**: lightweight, based on DataAnnotations with extensions * **Disadvantage**: no composable rules, no Wolverine integration, limited community, does not cover complex business cases ### Option 4: Custom validation (no framework) [Section titled “Option 4: Custom validation (no 
framework)”](#option-4-custom-validation-no-framework) * **Advantage**: full control, no dependency * **Disadvantage**: considerable development and maintenance effort, reinventing the wheel, no standards, no middleware pipeline ## Justification [Section titled “Justification”](#justification) | Criterion | FluentValidation | DataAnnotations | MiniValidation | Custom | | ---------------------- | --------------------- | --------------- | -------------- | ------ | | License | Apache-2.0 | Native .NET | MIT | N/A | | Composable rules | Yes | No | No | Manual | | Wolverine middleware | Native | No | No | Manual | | Conditional validation | Yes (When/Unless) | No | No | Manual | | Error codes | Yes (WithErrorCode) | Limited | Limited | Manual | | Community | Very large | Standard | Low | N/A | | RFC 7807 mapping | Via Granit.AspNetCore | Manual | Manual | Manual | ## Consequences [Section titled “Consequences”](#consequences) ### Positive [Section titled “Positive”](#positive) * Declarative and readable validation in each module * Wolverine integration: commands are validated before execution (DLQ on failure) * Standardized Granit error codes (e.g. `VALIDATION.EMAIL.INVALID`) * Reusable validators across packages (`AddressValidator`, `SiretValidator`) * Automatic mapping to ProblemDetails RFC 7807 via `GranitExceptionHandler` ### Negative [Section titled “Negative”](#negative) * Third-party dependency for validation (risk of major breaking changes) * Partial duplication with DataAnnotations for simple cases (Granit convention: use FluentValidation even for simple cases, for consistency)
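To illustrate the composable rules and `WithErrorCode` mapping described above, a minimal validator sketch (the command and its fields are hypothetical; the error codes follow the `VALIDATION.*` convention cited in this ADR). With the `WolverineFx.FluentValidation` middleware registered, such a validator runs automatically before the command handler:

```csharp
using FluentValidation;

// Hypothetical command used for illustration only.
public record RegisterContactCommand(string Email, string? Siret, bool IsCompany);

public sealed class RegisterContactValidator : AbstractValidator<RegisterContactCommand>
{
    public RegisterContactValidator()
    {
        RuleFor(x => x.Email)
            .NotEmpty().WithErrorCode("VALIDATION.EMAIL.REQUIRED")
            .EmailAddress().WithErrorCode("VALIDATION.EMAIL.INVALID");

        // Conditional validation (When/Unless): SIRET only applies to companies.
        When(x => x.IsCompany, () =>
        {
            RuleFor(x => x.Siret)
                .NotEmpty()
                .Length(14)
                .WithErrorCode("VALIDATION.SIRET.INVALID");
        });
    }
}
```

On failure, the error codes carried by `ValidationFailure` are what the RFC 7807 ProblemDetails mapping surfaces to HTTP clients.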
# ADR-007: Testcontainers — Containerized Integration Tests
> Adoption of Testcontainers for ephemeral PostgreSQL containers in integration tests
> **Date:** 2026-02-24 **Authors:** Jean-Francois Meyers **Scope:** granit-dotnet (Granit.Wolverine.Postgresql.IntegrationTests) ## Context [Section titled “Context”](#context) Granit integration tests require a real PostgreSQL database to validate DBMS-specific behaviors: EF Core migrations, Wolverine outbox, multi-tenant global filters, JSONB queries, etc. In-memory alternatives (EF Core InMemory, SQLite) do not faithfully reproduce PostgreSQL behavior and mask bugs that only appear in production. ## Decision [Section titled “Decision”](#decision) **Testcontainers** (`Testcontainers.PostgreSql`) to orchestrate ephemeral PostgreSQL containers in integration tests. ## Alternatives considered [Section titled “Alternatives considered”](#alternatives-considered) ### Option 1: Testcontainers (selected) [Section titled “Option 1: Testcontainers (selected)”](#option-1-testcontainers-selected) * **License**: MIT * **Advantage**: real PostgreSQL container started on demand, complete isolation per test, automatic cleanup, .NET fluent API, xUnit support via `IAsyncLifetime` * **CI**: compatible with GitHub Actions (service containers) and GitLab CI ### Option 2: EF Core InMemory [Section titled “Option 2: EF Core InMemory”](#option-2-ef-core-inmemory) * **Advantage**: fast, zero infrastructure dependency * **Disadvantage**: no real SQL (no migrations, no FK constraints, no JSONB, no transactions), false sense of confidence, bugs masked in production ### Option 3: SQLite (EF Core) [Section titled “Option 3: SQLite (EF Core)”](#option-3-sqlite-ef-core) * **Advantage**: real SQL without a server, fast * **Disadvantage**: SQL dialect different from PostgreSQL (no JSONB, no schemas, different types), non-portable migrations, different transactional behavior ### Option 4: Shared PostgreSQL test database [Section titled “Option 4: Shared PostgreSQL test database”](#option-4-shared-postgresql-test-database) * **Advantage**: no Docker, speed (no container startup) * **Disadvantage**: 
shared state between tests (difficult isolation), manual cleanup, non-reproducible CI (depends on external server), conflicts between developers ## Justification [Section titled “Justification”](#justification) | Criterion | Testcontainers | InMemory | SQLite | Shared DB | | ------------------- | -------------------- | --------- | -------- | --------- | | PostgreSQL fidelity | Full | None | Partial | Full | | Isolation | Per test | Per test | Per test | Difficult | | CI reproducibility | Yes | Yes | Yes | No | | Speed | Medium (\~3-5s init) | Very fast | Fast | Fast | | Zero external infra | Yes (Docker) | Yes | Yes | No | | EF Core migrations | Yes | No | Partial | Yes | ## Consequences [Section titled “Consequences”](#consequences) ### Positive [Section titled “Positive”](#positive) * Tests faithful to production behavior (real PostgreSQL) * Complete isolation: each test suite has its own database * Reproducible CI without external dependency * Early detection of DBMS-related bugs (types, constraints, transactions) ### Negative [Section titled “Negative”](#negative) * Requires Docker on development machines and in CI * Container startup time (\~3-5 seconds per test suite) * Higher memory consumption than in-memory alternatives
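The `IAsyncLifetime` integration mentioned in Option 1 can be sketched as follows (test class, image tag and assertion are illustrative; shown with xUnit v3-style `ValueTask` members, whereas xUnit v2's `IAsyncLifetime` returns `Task`):

```csharp
using System;
using System.Threading.Tasks;
using Testcontainers.PostgreSql;
using Xunit;

// Each test class gets its own ephemeral PostgreSQL container;
// IAsyncLifetime drives startup and automatic cleanup.
public sealed class OutboxTests : IAsyncLifetime
{
    private readonly PostgreSqlContainer _postgres = new PostgreSqlBuilder()
        .WithImage("postgres:16-alpine")
        .Build();

    public ValueTask InitializeAsync() => new(_postgres.StartAsync());

    public ValueTask DisposeAsync() => _postgres.DisposeAsync();

    [Fact]
    public void ConnectionString_IsAvailable_AfterStartup()
    {
        // Real PostgreSQL: migrations, JSONB and FK constraints behave as in production.
        Assert.Contains("Host=", _postgres.GetConnectionString());
    }
}
```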
# ADR-008: SmartFormat.NET — CLDR Pluralization
> Selection of SmartFormat.NET for CLDR-compliant pluralization in the localization system
> **Date:** 2026-02-26 **Authors:** Jean-Francois Meyers **Scope:** granit-dotnet (Granit.Localization) ## Context [Section titled “Context”](#context) The `Granit.Localization` module provides a modular JSON localization system. Pluralization is a critical feature for multilingual applications: CLDR rules (Unicode Common Locale Data Repository) define pluralization categories that vary by language (French: singular/plural, Arabic: 6 forms, etc.). ## Decision [Section titled “Decision”](#decision) **SmartFormat.NET** for pluralization in the localization system. ## Alternatives considered [Section titled “Alternatives considered”](#alternatives-considered) ### Option 1: SmartFormat.NET (selected) [Section titled “Option 1: SmartFormat.NET (selected)”](#option-1-smartformatnet-selected) * **License**: MIT * **Advantage**: complete CLDR support (all languages), familiar syntax (`{0:plural:...}`), lightweight (\~50 KB), no native dependency (ICU), extensible via custom formatters * **Maturity**: 12+ years, active ### Option 2: ICU4N [Section titled “Option 2: ICU4N”](#option-2-icu4n) * **Advantage**: official ICU implementation for .NET, MessageFormat support * **Disadvantage**: native ICU dependency (cross-platform deployment issues, \~30 MB size), complex API, limited .NET documentation ### Option 3: MessageFormat.NET [Section titled “Option 3: MessageFormat.NET”](#option-3-messageformatnet) * **Advantage**: implementation of the ICU MessageFormat standard * **Disadvantage**: poorly maintained project, limited community, insufficient documentation, no complete CLDR support ### Option 4: Custom pluralization [Section titled “Option 4: Custom pluralization”](#option-4-custom-pluralization) * **Advantage**: full control, zero dependency * **Disadvantage**: reimplementing CLDR rules (200+ languages) is a considerable effort and error-prone, long-term maintenance burden ### Option 5: gettext (.po files) [Section titled “Option 5: gettext (.po 
files)”](#option-5-gettext-po-files) * **Advantage**: industry standard for localization, mature tooling * **Disadvantage**: .po file format incompatible with Granit’s JSON system, requires a complete overhaul of the localization pipeline, limited .NET ecosystem ## Justification [Section titled “Justification”](#justification) | Criterion | SmartFormat | ICU4N | MessageFormat | Custom | gettext | | ------------------ | ----------- | --------- | ------------- | ------ | ------------ | | License | MIT | MIT | MIT | N/A | GPL/LGPL | | Complete CLDR | Yes | Yes | Partial | No | Yes | | Native dependency | No | Yes (ICU) | No | No | No | | Size | \~50 KB | \~30 MB | \~20 KB | 0 | Variable | | .NET documentation | Good | Low | Low | N/A | Low | | JSON integration | Easy | Possible | Possible | N/A | Incompatible | ## Consequences [Section titled “Consequences”](#consequences) ### Positive [Section titled “Positive”](#positive) * Correct pluralization for all languages without native dependency * Concise and readable syntax in JSON translation files * Lightweight: no significant impact on package size * Extensible: custom formatters for specific business cases ### Negative [Section titled “Negative”](#negative) * SmartFormat has its own syntax (not a pure ICU MessageFormat standard) * If a switch to ICU MessageFormat becomes necessary, translation file migration required
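A minimal sketch of the `{0:plural:...}` syntax referenced above (strings and cultures are illustrative; depending on the SmartFormat version, the plural formatter ships registered by default or must be added explicitly):

```csharp
using System;
using System.Globalization;
using SmartFormat;

// The plural formatter picks a choice according to the culture's CLDR
// categories: two choices cover English/French (one|other); languages
// with more categories (e.g. Arabic) list more choices.
int count = 3;

string en = Smart.Format(CultureInfo.GetCultureInfo("en"),
    "{0} {0:plural:document|documents}", count);
Console.WriteLine(en);

string fr = Smart.Format(CultureInfo.GetCultureInfo("fr"),
    "{0} {0:plural:document|documents}", count);
Console.WriteLine(fr);
```

The same placeholder syntax is what lands in the JSON translation files, keeping them readable for translators.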
# ADR-009: Scalar.AspNetCore — Interactive API Documentation
> Adoption of Scalar as the interactive OpenAPI documentation UI replacing Swagger UI
> **Date:** 2026-02-26 **Authors:** Jean-Francois Meyers **Scope:** granit-dotnet (Granit.ApiDocumentation) ## Context [Section titled “Context”](#context) The platform REST APIs require an interactive documentation interface for developers and integrators. Since .NET 9, Microsoft removed Swashbuckle (Swagger UI) from the default template in favor of `Microsoft.AspNetCore.OpenApi` for OpenAPI spec generation. The documentation UI choice must natively integrate with the new .NET 10 OpenAPI pipeline without depending on Swashbuckle. ## Decision [Section titled “Decision”](#decision) **Scalar.AspNetCore** as the interactive OpenAPI documentation UI. ## Alternatives considered [Section titled “Alternatives considered”](#alternatives-considered) ### Option 1: Scalar (selected) [Section titled “Option 1: Scalar (selected)”](#option-1-scalar-selected) * **License**: MIT * **Advantage**: native `Microsoft.AspNetCore.OpenApi` integration (.NET 9+), modern and responsive UI, built-in “try it”, customizable themes, API search, OpenAPI 3.1 support * **Configuration**: `app.MapScalarApiReference()` — one line ### Option 2: Swagger UI (Swashbuckle) [Section titled “Option 2: Swagger UI (Swashbuckle)”](#option-2-swagger-ui-swashbuckle) * **Advantage**: historical standard, very wide adoption * **Disadvantage**: Swashbuckle is **abandoned** (last release 2023, removed from .NET 9 template by Microsoft), requires a separate spec generator (its own or NSwag) that overlaps with `Microsoft.AspNetCore.OpenApi`, dated UI ### Option 3: ReDoc [Section titled “Option 3: ReDoc”](#option-3-redoc) * **License**: MIT * **Advantage**: elegant static documentation, three-column layout * **Disadvantage**: no native “try it” (read-only), requires custom integration with the .NET OpenAPI pipeline, less interactive ### Option 4: RapiDoc [Section titled “Option 4: RapiDoc”](#option-4-rapidoc) * **License**: MIT * **Advantage**: lightweight, web component, themes * **Disadvantage**: smaller community, no official .NET 
integration, irregular maintenance ### Option 5: Stoplight Elements [Section titled “Option 5: Stoplight Elements”](#option-5-stoplight-elements) * **License**: Apache-2.0 * **Advantage**: modern UI, API design support * **Disadvantage**: SaaS-oriented (Stoplight Studio), non-official .NET integration, advanced features are paid ## Justification [Section titled “Justification”](#justification) | Criterion | Scalar | Swagger UI | ReDoc | RapiDoc | Elements | | ------------------- | ------ | ---------- | ------ | --------- | ---------- | | License | MIT | MIT | MIT | MIT | Apache-2.0 | | .NET 10 integration | Native | Abandoned | Custom | Custom | Custom | | Interactive try-it | Yes | Yes | No | Yes | Yes | | Modern UI | Yes | No | Yes | Yes | Yes | | Active maintenance | Yes | No (2023) | Yes | Irregular | Yes | | .NET configuration | 1 line | \~10 lines | Custom | Custom | Custom | | OpenAPI 3.1 | Yes | Partial | Yes | Yes | Yes | ## Consequences [Section titled “Consequences”](#consequences) ### Positive [Section titled “Positive”](#positive) * Native integration with `Microsoft.AspNetCore.OpenApi` (zero Swashbuckle) * Modern UI with try-it, search, and themes * Minimal configuration (`app.MapScalarApiReference()`) * Compatible with Asp.Versioning (multiple versions in the spec) * Active maintenance and regular releases ### Negative [Section titled “Negative”](#negative) * Scalar is less known than Swagger UI (familiarization curve for the team) * Relatively recent project (2023) — less track record than Swagger UI * Some advanced features (mocking, testing) are in Scalar Cloud (SaaS)
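The one-line configuration claimed in Option 1 looks like this in a minimal ASP.NET Core program (a sketch, not the actual `Granit.ApiDocumentation` wiring):

```csharp
using Scalar.AspNetCore;

var builder = WebApplication.CreateBuilder(args);

// Spec generation stays in Microsoft.AspNetCore.OpenApi;
// Scalar only adds the interactive UI on top of the generated document.
builder.Services.AddOpenApi();

var app = builder.Build();

app.MapOpenApi();            // serves /openapi/{documentName}.json
app.MapScalarApiReference(); // serves the Scalar UI (by default under /scalar)

app.Run();
```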
# ADR-010: Scriban — Text Template Engine
> Selection of Scriban as the sandboxed text template engine for document generation
> **Date:** 2026-02-27 **Authors:** Jean-Francois Meyers **Scope:** granit-dotnet (Granit.Templating.Scriban) ## Context [Section titled “Context”](#context) The `Granit.Templating` module provides a document generation pipeline: text template -> HTML rendering -> conversion to final format (PDF, Excel, etc.). The text template engine must: * **Security**: execute templates in a sandbox (no filesystem or network access) * **Extensibility**: custom functions, global variables (date, user, tenant) * **Performance**: template compilation, caching * **Syntax**: intuitive for non-developers (business operations) ## Decision [Section titled “Decision”](#decision) **Scriban** as the text template engine for document generation. ## Alternatives considered [Section titled “Alternatives considered”](#alternatives-considered) ### Option 1: Scriban (selected) [Section titled “Option 1: Scriban (selected)”](#option-1-scriban-selected) * **License**: BSD-2-Clause * **Advantage**: sandboxed by default (no system access), intuitive Liquid-like syntax, extensible (custom functions, `GlobalContext`), template compilation and caching, excellent performance (\~10x faster than Razor) * **Size**: lightweight (\~200 KB) ### Option 2: Razor (RazorLight) [Section titled “Option 2: Razor (RazorLight)”](#option-2-razor-razorlight) * **Advantage**: familiar C# syntax for .NET developers, powerful * **Disadvantage**: **not sandboxed** (full .NET runtime access — security risk if templates are user-editable), dependency on Roslyn compiler (heavy, \~20 MB), high compilation time, RazorLight is a less maintained third-party wrapper ### Option 3: Fluid (Liquid .NET) [Section titled “Option 3: Fluid (Liquid .NET)”](#option-3-fluid-liquid-net) * **License**: MIT * **Advantage**: .NET implementation of Liquid (Shopify standard), sandboxed, similar syntax to Scriban * **Disadvantage**: less performant than Scriban on benchmarks, more limited extensibility (no native `GlobalContext`), smaller .NET 
community ### Option 4: Handlebars.NET [Section titled “Option 4: Handlebars.NET”](#option-4-handlebarsnet) * **License**: MIT * **Advantage**: .NET port of Handlebars.js, logicless templates * **Disadvantage**: “logicless” too restrictive (no complex conditions, no advanced loops), extensibility via helpers only, lower performance than Scriban ### Option 5: Mustache (Stubble) [Section titled “Option 5: Mustache (Stubble)”](#option-5-mustache-stubble) * **License**: MIT * **Advantage**: multi-language standard, very simple * **Disadvantage**: too minimalistic (no filters, no functions, no expressions), unsuitable for complex document generation ## Justification [Section titled “Justification”](#justification) | Criterion | Scriban | Razor | Fluid | Handlebars | Mustache | | ---------------- | ----------------- | ------------------ | -------- | ---------- | ------------ | | License | BSD-2-Clause | MIT | MIT | MIT | MIT | | Sandbox | Yes (native) | No | Yes | Partial | Yes | | Extensibility | Excellent | Total (C#) | Good | Limited | Very limited | | Performance | Very fast | Slow (compilation) | Fast | Medium | Fast | | Intuitive syntax | Yes (Liquid-like) | C# (dev only) | Yes | Yes | Yes | | Size | \~200 KB | \~20 MB (Roslyn) | \~150 KB | \~100 KB | \~50 KB | | GlobalContext | Yes | No | No | No | No | ## Consequences [Section titled “Consequences”](#consequences) ### Positive [Section titled “Positive”](#positive) * Templates executed in a sandbox: no risk of arbitrary code execution * Liquid-like syntax accessible to business users * GlobalContext for enrichment variables (date, tenant, user) * Template caching and compilation for performance * Complete pipeline: Scriban -> HTML -> PDF (via PuppeteerSharp) ### Negative [Section titled “Negative”](#negative) * Scriban-specific syntax (not a standard like Liquid or Mustache) * BSD-2-Clause (very permissive, but less common than MIT/Apache-2.0) * Project maintained by an individual developer (Alexandre Mutel — 
also author of markdig, SharpDX, etc.)
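A minimal sketch of the template step in the pipeline described above (template string and variables are illustrative; Granit's actual `GlobalContext` enrichment and template caching are not reproduced here):

```csharp
using System;
using Scriban;
using Scriban.Runtime;

// Parse (and cache) the template, then render through a TemplateContext:
// only what is pushed into the context is visible to the template,
// which is the basis of the sandboxing this ADR relies on.
var template = Template.Parse("Hello {{ user.name }}, generated on {{ date }}.");

var globals = new ScriptObject();
globals.Import(new { user = new { name = "Marie" }, date = "2026-02-27" });

var context = new TemplateContext();
context.PushGlobal(globals);

Console.WriteLine(template.Render(context));
```

The rendered HTML then flows to PuppeteerSharp for PDF conversion, per the pipeline above.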
# ADR-011: ClosedXML — Excel Spreadsheet Generation
> Selection of ClosedXML for Excel file generation from templates with MIT licensing
> **Date:** 2026-02-27 **Authors:** Jean-Francois Meyers **Scope:** granit-dotnet (Granit.DocumentGeneration.Excel) ## Context [Section titled “Context”](#context) The `Granit.DocumentGeneration.Excel` module generates `.xlsx` spreadsheets from Excel templates. Use cases include: data exports, financial reports, dashboards, and regulatory documents. The library must support: * **Templates**: filling named cells in an existing `.xlsx` file * **Styling**: preserving styles, formulas and page layout from the template * **Performance**: generating large files (10,000+ rows) with streaming * **License**: compatible with commercial use without recurring costs ## Decision [Section titled “Decision”](#decision) **ClosedXML** for Excel file (.xlsx) generation. ## Alternatives considered [Section titled “Alternatives considered”](#alternatives-considered) ### Option 1: ClosedXML (selected) [Section titled “Option 1: ClosedXML (selected)”](#option-1-closedxml-selected) * **License**: MIT * **Advantage**: intuitive high-level API, .xlsx template support, styles/formulas/layout preserved, active maintenance, MIT * **Maturity**: 10+ years, large community ### Option 2: EPPlus [Section titled “Option 2: EPPlus”](#option-2-epplus) * **License**: **Polyform Noncommercial License** (v5+) — commercial use requires a **paid license** (\~$300/dev/year) * **Advantage**: very complete API, excellent performance, exhaustive documentation * **Disadvantage**: license change in 2020 (v5), recurring cost incompatible with the project’s OSS strategy ### Option 3: NPOI [Section titled “Option 3: NPOI”](#option-3-npoi) * **License**: Apache-2.0 * **Advantage**: .NET port of Apache POI, supports .xls and .xlsx * **Disadvantage**: low-level and verbose API (close to Java POI), insufficient .NET documentation, lower performance than ClosedXML and EPPlus, irregular maintenance ### Option 4: Open XML SDK (Microsoft) [Section titled “Option 4: Open XML SDK 
(Microsoft)”](#option-4-open-xml-sdk-microsoft) * **License**: MIT * **Advantage**: official Microsoft SDK, full access to the OOXML format * **Disadvantage**: **very low-level** API (direct OOXML XML manipulation), extreme verbosity for simple operations (filling a cell = \~20 lines), no native template support, no high-level style management ## Justification [Section titled “Justification”](#justification) | Criterion | ClosedXML | EPPlus | NPOI | Open XML SDK | | ---------------- | --------- | ---------------- | ---------- | ---------------- | | License | MIT | Commercial (v5+) | Apache-2.0 | MIT | | Cost | Free | \~$300/dev/year | Free | Free | | High-level API | Yes | Yes | No | No | | Template support | Yes | Yes | Partial | No | | Documentation | Good | Excellent | Low | Good (low-level) | | Performance | Good | Excellent | Medium | Good | | Maintenance | Active | Active | Irregular | Active (MS) | ## Consequences [Section titled “Consequences”](#consequences) ### Positive [Section titled “Positive”](#positive) * MIT license: no recurring cost, compatible with commercial use * Intuitive API: `worksheet.Cell("A1").Value = "Hello"` vs \~20 lines Open XML SDK * Template support: filling named cells in an existing .xlsx * Style, formula and layout preservation * Large community and documentation ### Negative [Section titled “Negative”](#negative) * Slightly lower performance than EPPlus on very large files * Some advanced features (charts, pivot tables) are less complete than in EPPlus * No native streaming support for very large files (in-memory loading — workaround possible with pagination)
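The high-level API contrast drawn above can be illustrated with a short sketch (file names, cell addresses and data are hypothetical):

```csharp
using System;
using ClosedXML.Excel;

// Open an .xlsx template, fill cells, save: styles, formulas and page
// layout defined in the template are preserved.
using var workbook = new XLWorkbook("invoice-template.xlsx");
var sheet = workbook.Worksheet(1);

sheet.Cell("B2").Value = "ACME SAS";
sheet.Cell("B3").Value = DateTime.Today;

// Bulk data below the header row.
var lines = new[] { ("Consulting", 1200.00), ("Hosting", 89.90) };
int row = 6;
foreach (var (label, amount) in lines)
{
    sheet.Cell(row, 1).Value = label;
    sheet.Cell(row, 2).Value = amount;
    row++;
}

workbook.SaveAs("invoice-2026-001.xlsx");
```

The equivalent in the Open XML SDK would require manual `SheetData`/`Row`/`Cell` XML manipulation for each value.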
# ADR-012: PuppeteerSharp — HTML to PDF Rendering
> Adoption of PuppeteerSharp with headless Chromium for pixel-perfect HTML to PDF conversion
> **Date:** 2026-02-28 **Authors:** Jean-Francois Meyers **Scope:** granit-dotnet (Granit.DocumentGeneration.Pdf) ## Context [Section titled “Context”](#context) The `Granit.DocumentGeneration.Pdf` module converts HTML (generated by Scriban) into PDF documents. Use cases include: invoices, medical reports, certificates, and regulatory documents. Requirements: * **CSS fidelity**: pixel-perfect HTML/CSS rendering (flexbox, grid, @media print) * **PDF/A-3b**: compliance for long-term archiving (ISO 27001) and Factur-X * **Headers/footers**: dynamic headers and footers (pagination, date) * **Performance**: generation in < 2 seconds for a standard document ## Decision [Section titled “Decision”](#decision) **PuppeteerSharp** (headless Chromium) for HTML to PDF conversion. ## Alternatives considered [Section titled “Alternatives considered”](#alternatives-considered) ### Option 1: PuppeteerSharp (selected) [Section titled “Option 1: PuppeteerSharp (selected)”](#option-1-puppeteersharp-selected) * **License**: MIT * **Advantage**: perfect CSS fidelity (Chromium Blink engine), PDF/A-3b support (via post-processing), dynamic headers/footers, landscape/portrait, configurable margins, native async .NET API * **Pipeline**: Scriban -> HTML -> PuppeteerSharp -> PDF ### Option 2: QuestPDF [Section titled “Option 2: QuestPDF”](#option-2-questpdf) * **License**: **Community License** (free < $1M revenue, otherwise commercial) * **Advantage**: fluent C# API (code-first), no Chromium, lightweight * **Disadvantage**: no HTML rendering (C# API only — incompatible with the Scriban -> HTML pipeline), restrictive license for enterprises, no native PDF/A, learning curve for non-developers ### Option 3: iText7 [Section titled “Option 3: iText7”](#option-3-itext7) * **License**: **AGPL-3.0** (commercial use requires a paid license) * **Advantage**: reference library for PDF manipulation, native PDF/A, very mature * **Disadvantage**: AGPL license incompatible with a distributed framework 
(source code publication obligation), expensive commercial license, limited HTML rendering (pdfHTML paid add-on) ### Option 4: wkhtmltopdf [Section titled “Option 4: wkhtmltopdf”](#option-4-wkhtmltopdf) * **Advantage**: lightweight, WebKit rendering * **Disadvantage**: **abandoned project** (last release 2020), obsolete WebKit engine (no flexbox, grid), uncorrected security issues, no native .NET support (CLI wrapper) ### Option 5: Playwright (via Microsoft.Playwright) [Section titled “Option 5: Playwright (via Microsoft.Playwright)”](#option-5-playwright-via-microsoftplaywright) * **License**: Apache-2.0 * **Advantage**: Chromium engine like PuppeteerSharp, modern API, maintained by Microsoft * **Disadvantage**: much heavier package (\~200 MB vs \~50 MB — includes Firefox and WebKit), testing-oriented (PDF generation is Chromium-only and secondary to its testing API), higher initialization overhead ## Justification [Section titled “Justification”](#justification) | Criterion | PuppeteerSharp | QuestPDF | iText7 | wkhtmltopdf | Playwright | | ---------------------- | ------------------ | ------------ | --------------- | ----------------- | --------------- | | License | MIT | Freemium | AGPL/Commercial | LGPL-3.0 | Apache-2.0 | | HTML rendering | Chromium (perfect) | No (C# only) | Limited (paid) | WebKit (obsolete) | Chromium | | Scriban->HTML pipeline | Yes | No | Partial | Yes | Yes | | PDF/A-3b | Post-processing | No | Native | No | Post-processing | | Package size | \~50 MB | \~5 MB | \~10 MB | \~40 MB | \~200 MB | | Maintenance | Active | Active | Active | Abandoned | Active (MS) | | Performance | Good | Excellent | Good | Medium | Good | ## Consequences [Section titled “Consequences”](#consequences) ### Positive [Section titled “Positive”](#positive) * Perfect CSS fidelity: PDF is identical to browser rendering * Unified pipeline: Scriban (template) -> HTML -> PuppeteerSharp (PDF) * PDF/A-3b support via post-processing for ISO 27001 archiving and Factur-X * MIT: no license 
constraints * Native async .NET API with Chromium lifecycle management ### Negative [Section titled “Negative”](#negative) * Chromium dependency: \~50 MB binary to download on first launch * Memory consumption: one Chromium process per rendering instance (browser pool recommended) * Chromium startup time: \~1-2 seconds on first call (mitigated by browser pool) * Chromium requires system dependencies in CI/production (libgbm, libatk, etc. — handled via Docker image)
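A minimal sketch of the HTML-to-PDF step (paths and HTML are illustrative; the browser pool and PDF/A-3b post-processing mentioned above are omitted):

```csharp
using PuppeteerSharp;
using PuppeteerSharp.Media;

// First run downloads the pinned Chromium build (~50 MB);
// in production a pooled browser instance avoids per-call startup cost.
await new BrowserFetcher().DownloadAsync();

await using var browser = await Puppeteer.LaunchAsync(new LaunchOptions { Headless = true });
await using var page = await browser.NewPageAsync();

// In the real pipeline, this HTML comes from the Scriban rendering step.
await page.SetContentAsync("<h1>Invoice 2026-001</h1><p>Rendered by Chromium.</p>");

await page.PdfAsync("invoice.pdf", new PdfOptions
{
    Format = PaperFormat.A4,
    DisplayHeaderFooter = true,
    FooterTemplate = "<span class='pageNumber'></span>/<span class='totalPages'></span>",
    MarginOptions = new MarginOptions { Top = "20mm", Bottom = "20mm" }
});
```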
# ADR-013: Magick.NET — Image Processing
> Selection of Magick.NET (ImageMagick wrapper) for cross-platform image processing with Apache-2.0 licensing
> **Date:** 2026-02-28 **Authors:** Jean-Francois Meyers **Scope:** granit-dotnet (Granit.Imaging.MagickNet) ## Context [Section titled “Context”](#context) The `Granit.Imaging` module provides an image processing pipeline for the platform: resizing, format conversion, compression, watermarking, and metadata extraction. Use cases include: profile photos, scanned documents, medical images (excluding DICOM), and compliance watermarks. Requirements: * **Formats**: JPEG, PNG, WebP, TIFF, BMP, GIF (minimum) * **Transformations**: resize, crop, rotate, watermark, format conversion * **Cross-platform**: Linux (production K8s) and Windows (development) * **License**: compatible with commercial use ## Decision [Section titled “Decision”](#decision) **Magick.NET** (.NET wrapper for ImageMagick, Q8 variant) for image processing. ## Alternatives considered [Section titled “Alternatives considered”](#alternatives-considered) ### Option 1: Magick.NET-Q8-AnyCPU (selected) [Section titled “Option 1: Magick.NET-Q8-AnyCPU (selected)”](#option-1-magicknet-q8-anycpu-selected) * **License**: Apache-2.0 * **Advantage**: 200+ supported formats, complete transformations (resize, crop, watermark, compression), cross-platform (native wrapper), Q8 variant (8 bits/channel — optimized memory), .NET fluent API, very mature (ImageMagick: 25+ years) ### Option 2: ImageSharp (SixLabors) [Section titled “Option 2: ImageSharp (SixLabors)”](#option-2-imagesharp-sixlabors) * **License**: **Six Labors Split License** (v3+) — commercial use requires a **paid license** (enterprise $2,500/year) * **Advantage**: 100% managed .NET (no native dependency), modern API, excellent performance, composable pipeline * **Disadvantage**: license change in 2023 (v3), recurring cost for production features, fewer formats than Magick.NET ### Option 3: SkiaSharp [Section titled “Option 3: SkiaSharp”](#option-3-skiasharp) * **License**: MIT * **Advantage**: Skia engine (Google), performant for 2D rendering, cross-platform 
* **Disadvantage**: rendering/drawing-oriented (not image processing), low-level API for common transformations (resize, watermark), native Skia dependency (\~10 MB), not all formats supported (no TIFF, partial WebP) ### Option 4: System.Drawing.Common [Section titled “Option 4: System.Drawing.Common”](#option-4-systemdrawingcommon) * **License**: MIT (Microsoft) * **Advantage**: native .NET Framework, familiar API * **Disadvantage**: **Windows-only** since .NET 6 (no Linux support — blocking for K8s production), deprecated by Microsoft, known memory leaks, not thread-safe ## Justification [Section titled “Justification”](#justification) | Criterion | Magick.NET | ImageSharp | SkiaSharp | System.Drawing | | -------------- | -------------- | ---------------- | ------------ | ---------------------- | | License | Apache-2.0 | Commercial (v3+) | MIT | MIT | | Cost | Free | $2,500/year | Free | Free | | Formats | 200+ | \~30 | \~15 | \~10 | | Cross-platform | Yes (native) | Yes (managed) | Yes (native) | Windows only | | Watermark | Native | Native | Manual | Manual | | Maturity | 25+ years (IM) | 7+ years | 10+ years | 20+ years (deprecated) | | Thread-safe | Yes | Yes | Partial | No | ## Consequences [Section titled “Consequences”](#consequences) ### Positive [Section titled “Positive”](#positive) * 200+ supported formats, covering all current and future use cases * Apache-2.0: no license cost (ImageSharp would be \~$2,500/year) * Cross-platform: works on Linux (K8s production) and Windows (dev) * Q8 variant: optimized memory (8 bits/channel sufficient for web) * Native watermark for compliance watermarks ### Negative [Section titled “Negative”](#negative) * Native ImageMagick dependency (libMagickWand) — handled by the AnyCPU package but may cause issues in some Docker environments (requires system libraries) * Less modern API than ImageSharp (C API wrapper) * Larger package size than managed alternatives (\~15 MB) * Historical ImageMagick vulnerabilities 
(ImageTragick 2016) — mitigated by built-in security policies and regular updates
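A short sketch of the transformations listed above using the Magick.NET API (file names are hypothetical; the resize call assumes ImageMagick's convention that a zero dimension preserves the aspect ratio):

```csharp
using ImageMagick;

// Resize, watermark and convert in one pass with the Q8 (8 bits/channel) build.
using var image = new MagickImage("scan.tiff");

// Width 1024; height 0 lets ImageMagick keep the aspect ratio.
image.Resize(1024, 0);

// Compliance watermark composited in the bottom-right corner.
using var watermark = new MagickImage("confidential.png");
image.Composite(watermark, Gravity.Southeast, CompositeOperator.Over);

image.Quality = 82;
image.Write("scan.webp"); // output format inferred from the extension
```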
# ADR-014: Migrate FluentAssertions to Shouldly
> Migration from FluentAssertions to Shouldly due to license change incompatible with commercial use
> **Date:** 2026-02-28 **Authors:** Jean-Francois Meyers **Scope:** granit-dotnet, consuming applications ## Context [Section titled “Context”](#context) FluentAssertions is the assertion library used in all test projects (`*.Tests`) of granit-dotnet and consuming applications. Starting with **version 8.0**, FluentAssertions was acquired by **Xceed Software** and changed its license: from **Apache-2.0** to the **Xceed Community License Agreement**. This new license **prohibits commercial use** without purchasing a paid license. The platform is a commercial product (healthcare, ISO 27001 certification). Using FluentAssertions 8.x in this context constitutes a **license non-compliance**. ## Decision [Section titled “Decision”](#decision) **Migrate to Shouldly** as the assertion library for all test projects. ## Alternatives considered [Section titled “Alternatives considered”](#alternatives-considered) ### Option 1: Shouldly (selected) [Section titled “Option 1: Shouldly (selected)”](#option-1-shouldly-selected) * **License**: Apache-2.0 (permissive, compatible with commercial use) * **Maturity**: stable project, actively maintained, large community * **Impact**: assertion replacement only, the xUnit test framework remains unchanged * **API**: concise and readable syntax (`actual.ShouldBe(expected)`) ### Option 2: Downgrade to FluentAssertions 7.x (Apache-2.0) [Section titled “Option 2: Downgrade to FluentAssertions 7.x (Apache-2.0)”](#option-2-downgrade-to-fluentassertions-7x-apache-20) * **License**: Apache-2.0 (last version under a permissive license) * **Advantage**: no code migration required * **Disadvantage**: end-of-life version, no more security updates or patches. 
Incompatible with a long-term maintenance strategy ### Option 3: TUnit (complete xUnit + FluentAssertions replacement) [Section titled “Option 3: TUnit (complete xUnit + FluentAssertions replacement)”](#option-3-tunit-complete-xunit--fluentassertions-replacement) * **License**: Apache-2.0 * **Advantage**: modern test framework with built-in assertions (`Assert.That(x).IsEqualTo(42)`), native parallelism via source generators (no runtime reflection), native DI in tests, attribute-based lifecycle (`[Before]`, `[After]`) without `IAsyncLifetime` * **Performance**: significantly faster than xUnit on large test suites thanks to source generators and default parallelism * **Disadvantage**: implies complete test framework replacement (xUnit -> TUnit), not just assertions. Migration: attributes (`[Fact]` -> `[Test]`, `[Theory]` -> `[Test]` + `[Arguments]`), lifecycle, DI, CI runner **Discussion (2026-02-28)**: since the Granit framework is not yet in production and the test volume is low, the migration cost to TUnit would currently be minimal. However: 1. **xUnit v3 just released** with substantial improvements (parallelism, native `CancellationToken`, better DI) that reduce the performance gap 2. **Asymmetric risk**: if TUnit stagnates (young project, single maintainer), the framework ends up on a niche framework without ecosystem or broad community support. The .NET ecosystem (Testcontainers, Verify, etc.) primarily targets xUnit/NUnit 3. **Separation of concerns**: decoupling the assertion choice (immediate license problem) from the framework choice (distinct architectural decision) allows addressing the urgency without speculative bets **Verdict**: TUnit remains an option to re-evaluate when the project reaches sufficient maturity (v2+, significant adoption, official Testcontainers documentation). 
A dedicated ADR can be opened at that point to evaluate a xUnit -> TUnit migration based on concrete data ### Option 4: Purchase an Xceed commercial license [Section titled “Option 4: Purchase an Xceed commercial license”](#option-4-purchase-an-xceed-commercial-license) * **Advantage**: no migration * **Disadvantage**: recurring cost, dependency on a third-party vendor for a test library, risk of further price increases ## Justification [Section titled “Justification”](#justification) | Criterion | Shouldly | FA 6.x | TUnit | Xceed License | | ------------------------- | ------------------ | ----------- | ------------ | ----------------- | | Permissive license | Apache-2.0 | MIT (EOL) | Apache-2.0 | Paid | | Migration effort | Medium | None | Very high | None | | Longevity | Active maintenance | End of life | Recent | Vendor dependency | | xUnit compatibility | Full | Full | Incompatible | Full | | GDPR/ISO 27001 compliance | Yes | Risk (EOL) | Yes | Yes | Shouldly offers the best compliance / migration effort / longevity ratio. > **Note**: TUnit presents real advantages in performance and modernity, but the risk associated with its youth (v1.x, limited ecosystem) does not justify coupling the license problem (urgent) to a framework change (strategic). Migration to TUnit can be re-evaluated independently via a future ADR. 
## Assertion mapping [Section titled “Assertion mapping”](#assertion-mapping) | FluentAssertions | Shouldly | | ------------------------------------- | ----------------------------------------------- | | `x.Should().Be(42)` | `x.ShouldBe(42)` | | `x.Should().NotBeNull()` | `x.ShouldNotBeNull()` | | `x.Should().BeTrue()` | `x.ShouldBeTrue()` | | `x.Should().BeFalse()` | `x.ShouldBeFalse()` | | `x.Should().BeNull()` | `x.ShouldBeNull()` | | `list.Should().HaveCount(3)` | `list.Count.ShouldBe(3)` | | `list.Should().BeEmpty()` | `list.ShouldBeEmpty()` | | `list.Should().Contain(item)` | `list.ShouldContain(item)` | | `list.Should().BeEquivalentTo(other)` | `list.ShouldBe(other, ignoreOrder: true)` | | `list.Should().BeInAscendingOrder()` | `list.ShouldBeInOrder(SortDirection.Ascending)` | | `x.Should().BeGreaterThan(0)` | `x.ShouldBeGreaterThan(0)` | | `act.Should().Throw()` | `Should.Throw(() => act())` | | `await act.Should().ThrowAsync()` | `await Should.ThrowAsync(() => act())` | | `.Because("reason")` | `customMessage: "reason"` | ## Consequences [Section titled “Consequences”](#consequences) ### Positive [Section titled “Positive”](#positive) * License compliance restored (Apache-2.0) * ISO 27001 audit risk eliminated * Actively maintained library ### Negative [Section titled “Negative”](#negative) * One-time migration effort on all `*.Tests` projects * Team training on Shouldly syntax (low learning curve) * Documentation update (`docs/testing/assertions.md`, `CLAUDE.md`) ## Execution plan [Section titled “Execution plan”](#execution-plan) 1. Add `Shouldly` to `Directory.Packages.props` (granit-dotnet + consuming applications) 2. Replace assertions in each `*.Tests` project 3. Remove `FluentAssertions` from `Directory.Packages.props` 4. Update `THIRD-PARTY-NOTICES.md` 5. Update `docs/testing/assertions.md` 6. Update `CLAUDE.md` (Tests section) 7. Validate: `dotnet test` passes without failures
# ADR-015: Sep — High-Performance CSV Parsing
> Selection of Sep for SIMD-vectorized zero-allocation CSV parsing in the DataExchange pipeline
> **Date:** 2026-03-01 **Authors:** Jean-Francois Meyers **Scope:** granit-dotnet (Granit.DataExchange.Csv) ## Context [Section titled “Context”](#context) The `Granit.DataExchange.Csv` module requires a CSV parser capable of processing files with 100,000+ rows in streaming, without loading the entire file into memory. Use cases include: patient data import, roundtrip reimport, initial reference data loading. The library must support: * **Streaming**: native `IAsyncEnumerable` for the DataExchange pipeline * **Performance**: large files without degradation * **Encodings**: UTF-8, UTF-8 BOM, Latin-1, Windows-1252 * **RFC 4180**: quoted fields, configurable separators * **License**: compatible with commercial use without recurring costs * **Target**: explicit .NET 10 ## Decision [Section titled “Decision”](#decision) **Sep** (nietras) for CSV parsing in `Granit.DataExchange.Csv`. ## Alternatives considered [Section titled “Alternatives considered”](#alternatives-considered) ### Option 1: Sep (selected) [Section titled “Option 1: Sep (selected)”](#option-1-sep-selected) * **License**: MIT * **Advantage**: zero-allocation after warmup, SIMD vectorization (AVX-512, SSE, ARM NEON), explicit net10.0 target, native `IAsyncEnumerable` (.NET 9+), `Span` / `ISpanParsable`, 9-35x faster than CsvHelper, AOT-compatible * **Maturity**: active releases in 2025 (0.9.0 to 0.12.2), extensive tests * **Disadvantage**: lower-level API (Span-oriented), 0.x version number (conservative versioning by the author, not a sign of instability) ### Option 2: CsvHelper [Section titled “Option 2: CsvHelper”](#option-2-csvhelper) * **License**: MS-PL / Apache-2.0 * **Advantage**: de facto standard (508M NuGet downloads), high-level API (`GetRecordsAsync()`, `ClassMap`, `TypeConverter`), native `IAsyncEnumerable`, excellent documentation, error callbacks (`BadDataFound`) * **Disadvantage**: one `string` allocation per column (significant at 100K+ rows), no SIMD vectorization, no explicit net10.0 
target (via netstandard2.0), 9-35x slower than Sep ### Option 3: Sylvan.Data.Csv [Section titled “Option 3: Sylvan.Data.Csv”](#option-3-sylvandatacsv) * **License**: MIT * **Advantage**: 2-3x faster than CsvHelper, familiar `DbDataReader` API, auto-detection of delimiter, Lax mode for malformed data * **Disadvantage**: no explicit net10.0 target, no SIMD, no `IAsyncEnumerable` (only `ReadAsync()`), intermediate performance without decisive advantage over Sep or CsvHelper ### Option 4: RecordParser [Section titled “Option 4: RecordParser”](#option-4-recordparser) * **License**: MIT * **Advantage**: near-zero allocation via expression trees, `Span` * **Disadvantage**: last release November 2023 (18+ months), 116K downloads, no .NET 8/9/10 target, no `IAsyncEnumerable`, minimal documentation, stagnant maintenance ## Justification [Section titled “Justification”](#justification) | Criterion | Sep | CsvHelper | Sylvan.Data.Csv | RecordParser | | ------------------------ | ----------------- | ---------------- | --------------- | ------------ | | License | MIT | MS-PL/Apache-2.0 | MIT | MIT | | Performance vs CsvHelper | **9-35x** | 1x (baseline) | 2-3x | \~2x | | Zero-allocation | **Yes** | No | Low-alloc | Near-zero | | SIMD (AVX-512/NEON) | **Yes** | No | No | No | | Target net10.0 | **Yes** | No (netstandard) | No (net6.0) | No | | IAsyncEnumerable | **Yes** (.NET 9+) | Yes | No | No | | Span/Memory API | **Yes** | No | Partial | Yes | | NuGet downloads | \~1.4M | \~508M | \~3.1M | \~116K | | Active maintenance | **Yes** (2025) | Yes | Yes | No (2023) | | AOT/Trimming | **Yes** | Partial | Partial | Unknown | | API ergonomics | Medium | Excellent | Good | Low | The decisive criterion is **streaming performance** for large files (100K+ rows). Sep is 9-35x faster than CsvHelper thanks to SIMD vectorization and zero allocations. The lower-level API is not a disadvantage as it is encapsulated behind the `IFileParser` interface — consumers never see the Sep API directly. 
## Consequences [Section titled “Consequences”](#consequences) ### Positive [Section titled “Positive”](#positive) * Fastest CSV parsing in the .NET ecosystem (SIMD vectorized) * Zero-allocation: no GC pressure on large imports * Explicit net10.0 target: runtime optimizations leveraged * Native `IAsyncEnumerable`: natural integration with the DataExchange pipeline * MIT: no cost, compatible with commercial use * AOT-compatible: no runtime reflection ### Negative [Section titled “Negative”](#negative) * Span-oriented API more verbose than CsvHelper for internal code * 0.x version (though stable and actively maintained) * Smaller community than CsvHelper (1.4M vs 508M downloads) * No native `ClassMap` — mapping is done by `IDataMapper` (by design) * No built-in error callbacks (`BadDataFound`) — handled via `try/catch` in the `SepCsvFileParser` implementation
# ADR-016: Sylvan.Data.Excel — Streaming Excel File Reading
> Selection of Sylvan.Data.Excel for zero-dependency streaming Excel file reading in the DataExchange pipeline
> **Date:** 2026-03-01 **Authors:** Jean-Francois Meyers **Scope:** granit-dotnet (Granit.DataExchange.Excel) ## Context [Section titled “Context”](#context) The `Granit.DataExchange.Excel` module requires an Excel parser capable of reading `.xlsx`, `.xlsb` and `.xls` files in streaming, without loading the entire workbook into memory (DOM model). Use cases include: patient data import, roundtrip reimport, initial loading from legacy `.xls` files. The framework already uses **ClosedXML** for Excel **generation** (`Granit.DocumentGeneration.Excel`). For **reading** (import), ClosedXML is unsuitable as it loads the complete DOM into memory (hundreds of MB for 100K+ rows). The library must support: * **Streaming**: forward-only reading without DOM loading * **Formats**: `.xlsx`, `.xlsb`, `.xls` (legacy files) * **Performance**: 100K+ rows with minimal memory footprint * **Async**: non-blocking support for the DataExchange pipeline * **License**: compatible with commercial use without recurring costs * **Dependencies**: minimal (avoid conflicts with ClosedXML) ## Decision [Section titled “Decision”](#decision) **Sylvan.Data.Excel** for Excel file reading in `Granit.DataExchange.Excel`. > ClosedXML remains for **generation** (`Granit.DocumentGeneration.Excel`). 
## Alternatives considered [Section titled “Alternatives considered”](#alternatives-considered) ### Option 1: Sylvan.Data.Excel (selected) [Section titled “Option 1: Sylvan.Data.Excel (selected)”](#option-1-sylvandataexcel-selected) * **License**: MIT * **Advantage**: zero transitive dependencies (pure managed), `DbDataReader` forward-only streaming, `.xlsx`/`.xlsb`/`.xls` support, lowest memory footprint in the ecosystem, native async (`CreateAsync`, `ReadAsync`) * **Maturity**: Sylvan ecosystem (Csv at 3.1M downloads), version 0.5.2 * **Disadvantage**: smaller community (867K downloads), no native `IAsyncEnumerable` (wrapping required) ### Option 2: ClosedXML (already used for generation) [Section titled “Option 2: ClosedXML (already used for generation)”](#option-2-closedxml-already-used-for-generation) * **License**: MIT * **Advantage**: rich API, already in the dependency graph, same library for reading and writing * **Disadvantage**: **DOM model** — loads the entire workbook into memory. For 100K rows, consumption of hundreds of MB (each `XLCell` has its own `XLStyle`). Unsuitable for large file imports. 
### Option 3: MiniExcel [Section titled “Option 3: MiniExcel”](#option-3-miniexcel) * **License**: Apache-2.0 * **Advantage**: very simple API (`Query()` in one line), SAX-like streaming (\~17 MB for 1M rows), native `IAsyncEnumerable` (v2 preview) * **Disadvantage**: transitive dependency on `DocumentFormat.OpenXml` (version conflict risk with ClosedXML which depends on the same package), no `.xls` or `.xlsb` support, typed access via dynamic/Dictionary (possible runtime errors) ### Option 4: ExcelDataReader [Section titled “Option 4: ExcelDataReader”](#option-4-exceldatareader) * **License**: MIT * **Advantage**: most popular (92M downloads), `IDataReader` forward-only, `.xls`/`.xlsx`/`.xlsb` support, battle-tested * **Disadvantage**: **no async support** (no `async`, no `ReadAsync`, no `IAsyncEnumerable`), targets only netstandard2.0 (no modern .NET optimizations), basic typed accessors ### Option 5: Open XML SDK (Microsoft) [Section titled “Option 5: Open XML SDK (Microsoft)”](#option-5-open-xml-sdk-microsoft) * **License**: MIT * **Advantage**: official SDK, SAX mode (`OpenXmlReader`) for ultimate streaming * **Disadvantage**: **very low-level** API — direct XML element manipulation, manual shared string table management, cell reference interpretation, style index management. Hundreds of lines for what other libraries accomplish in one line. 
## Justification [Section titled “Justification”](#justification) | Criterion | Sylvan.Data.Excel | ClosedXML | MiniExcel | ExcelDataReader | Open XML SDK | | ---------------- | ----------------- | ----------------- | ------------------ | ---------------- | ------------ | | License | MIT | MIT | Apache-2.0 | MIT | MIT | | Reading model | **Forward-only** | DOM (all in RAM) | SAX streaming | Forward-only | DOM or SAX | | Memory 100K rows | **Very low** | Hundreds MB | \~17 MB | Low-medium | SAX: low | | Formats | .xlsx/.xlsb/.xls | .xlsx | .xlsx/.csv | .xlsx/.xlsb/.xls | .xlsx/.xlsb | | Async | **Yes** | No | Yes | **No** | No | | Transitive deps | **Zero** | OpenXml | OpenXml | None | N/A | | API | DbDataReader | Rich object model | dynamic/Dictionary | IDataReader | XML nodes | | NuGet downloads | \~867K | \~45M | \~10.1M | \~92M | \~250M+ | The decisive criterion is the combination of **zero transitive dependencies** + **forward-only streaming** + **async support** + **legacy .xls support**. Sylvan.Data.Excel is the only one to check all four boxes. The **zero dependencies** point is critical: `Granit.DocumentGeneration.Excel` already pulls `ClosedXML` -> `DocumentFormat.OpenXml`. Adding MiniExcel would bring a second transitive dependency on `DocumentFormat.OpenXml` with a version conflict risk. Sylvan.Data.Excel avoids this problem entirely. 
## Consequences [Section titled “Consequences”](#consequences) ### Positive [Section titled “Positive”](#positive) * Lowest memory footprint for Excel file reading in .NET * Zero transitive dependencies (no conflict with ClosedXML/OpenXml) * Support for all 3 common formats: `.xlsx`, `.xlsb`, `.xls` (legacy) * Familiar and strongly-typed `DbDataReader` API * Native async (`CreateAsync`, `ReadAsync`) * MIT: no cost, compatible with commercial use ### Negative [Section titled “Negative”](#negative) * Smaller community than ExcelDataReader or MiniExcel * Version 0.5.x (stable Sylvan ecosystem but conservative versioning) * No native `IAsyncEnumerable` — requires a wrapper in `SylvanExcelFileParser` (trivial: `while ReadAsync yield return` loop) * No support for password-protected files (rare case for import)
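The wrapper mentioned above — turning a pull-based `ReadAsync` reader into an async stream — is a standard adaptation. Sketched here in TypeScript for illustration (the C# version inside `SylvanExcelFileParser` is the analogous `yield return` loop over `ReadAsync`; the `RowReader` shape below is a hypothetical stand-in for `DbDataReader`, not a real API):

```typescript
// Minimal pull-based reader shape, mirroring DbDataReader's ReadAsync surface.
interface RowReader {
  readAsync(): Promise<boolean>; // advances to the next row; false at end of data
  getValues(): readonly string[]; // cells of the current row
}

// The "while ReadAsync yield return" loop: adapts the reader into an async
// iterable that a streaming pipeline can consume row by row, without ever
// buffering the whole file in memory.
async function* readRows(reader: RowReader): AsyncGenerator<readonly string[]> {
  while (await reader.readAsync()) {
    yield reader.getValues();
  }
}
```

The pipeline then consumes rows with `for await`, one at a time, so memory stays flat regardless of file size.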
# Frontend Architecture
> Architecture overview of granit-front — monorepo structure, dependency layers, source-direct pattern, and .NET alignment.
## Overview [Section titled “Overview”](#overview) `granit-front` is a pnpm monorepo containing the `@granit/*` packages — the JavaScript/TypeScript counterpart of the .NET framework `granit-dotnet`. Both frameworks expose symmetric contracts: TypeScript types in `@granit/querying`, `@granit/data-exchange`, `@granit/workflow`, etc. are the direct mirror of C# types in `Granit.Querying`, `Granit.DataExchange`, `Granit.Workflow`, etc. ## Core principles [Section titled “Core principles”](#core-principles) | Principle | Description | | --------------------- | ----------------------------------------------------------------------------------------------- | | **Source-direct** | No build step — packages export `.ts` files consumed directly by Vite | | **Headless** | Packages expose only hooks, types, and providers — UI components live in consuming applications | | **App-agnostic** | No application-specific logic — only reusable abstractions | | **Peer dependencies** | External dependencies are declared as `peerDependencies`, never `dependencies` | ## Technical stack [Section titled “Technical stack”](#technical-stack) * **TypeScript** 5 (strict) / **React** 19 * **Vitest** 4 / **ESLint** 10 / **Prettier** 3 * **pnpm** workspace / **Node** 24+ * **Conventional Commits** via commitlint + husky ## Dependency graph [Section titled “Dependency graph”](#dependency-graph) ``` graph TD subgraph "Foundation layer" logger["@granit/logger"] utils["@granit/utils"] storage["@granit/storage"] cookies["@granit/cookies"] end subgraph "Infrastructure layer" api["@granit/api-client"] react-authn["@granit/react-authentication"] react-authz["@granit/react-authorization"] localization["@granit/localization"] logger-otlp["@granit/logger-otlp"] cookies-klaro["@granit/cookies-klaro"] tracing["@granit/tracing"] error-boundary["@granit/error-boundary"] end subgraph "Business layer" querying["@granit/querying"] data-exchange["@granit/data-exchange"] workflow["@granit/workflow"] 
timeline["@granit/timeline"] notifications["@granit/notifications"] end react-authn --> api localization --> storage logger-otlp --> logger cookies-klaro --> cookies error-boundary --> logger querying --> utils data-exchange --> utils timeline --> querying notifications --> querying utils -.-> clsx["clsx"] utils -.-> tw["tailwind-merge"] utils -.-> datefns["date-fns"] api -.-> axios["axios"] react-authn -.-> keycloak["keycloak-js"] querying -.-> tanstack["@tanstack/react-query"] data-exchange -.-> tanstack notifications -.-> signalr["@microsoft/signalr"] tracing -.-> otel["@opentelemetry/*"] logger-otlp -.-> otel cookies-klaro -.-> klaro["klaro"] localization -.-> i18next["i18next"] style logger fill:#e8f5e9 style utils fill:#e8f5e9 style storage fill:#e8f5e9 style cookies fill:#e8f5e9 style api fill:#e3f2fd style react-authn fill:#e3f2fd style react-authz fill:#e3f2fd style localization fill:#e3f2fd style logger-otlp fill:#e3f2fd style cookies-klaro fill:#e3f2fd style tracing fill:#e3f2fd style error-boundary fill:#e3f2fd style querying fill:#fff3e0 style data-exchange fill:#fff3e0 style workflow fill:#fff3e0 style timeline fill:#fff3e0 style notifications fill:#fff3e0 ``` **Legend:** * **Green** (foundation) — packages with no internal `@granit` dependency * **Blue** (infrastructure) — packages depending on a foundation package * **Orange** (business) — packages implementing business functionality ## Vertical slice pattern [Section titled “Vertical slice pattern”](#vertical-slice-pattern) Each business package follows a vertical slice structure: ```text packages/@granit//src/ types/ # TypeScript contracts (mirror of .NET types) api/ # HTTP call functions (axios) hooks/ # React hooks (business logic) providers/ # React context providers utils/ # Package-internal utilities __tests__/ # Unit tests (Vitest) index.ts # Single entry point (public re-exports) ``` This reflects the data flow: ```text types/ → api/ → hooks/ → providers/ ``` 1. 
**types/** defines the data contract (aligned with .NET backend) 2. **api/** encapsulates HTTP calls via `axios` 3. **hooks/** orchestrates data fetching (often via `@tanstack/react-query`) and exposes business logic 4. **providers/** supplies React context for consuming components Foundation packages (`logger`, `utils`, `storage`, `cookies`) are simpler and do not necessarily have all these layers. ## Source resolution flow [Section titled “Source resolution flow”](#source-resolution-flow) ```text import { useQueryEndpoint } from '@granit/querying' → Vite alias → packages/@granit/querying/src/index.ts → TypeScript source → transpiled on the fly by Vite → no dist/, no intermediate build ``` This **source-direct** approach provides: * **Instant HMR** on framework code changes * **No build watch** to maintain for the monorepo * **Direct source maps** to the original code * **Frictionless refactoring** across framework and application For Docker/CI builds, packages are compiled with `tsup` (ESM + `.d.ts`) and published to the [GitHub Packages npm registry](/operations/frontend-npm-registry/). ## Alignment with granit-dotnet [Section titled “Alignment with granit-dotnet”](#alignment-with-granit-dotnet) | granit-dotnet (.NET) | granit-front (TypeScript) | | ---------------------------- | ---------------------------------------- | | `Granit.Querying` | `@granit/querying` | | `Granit.DataExchange.Export` | `@granit/data-exchange` (export) | | `Granit.DataExchange.Import` | `@granit/data-exchange` (import) | | `Granit.Workflow` | `@granit/workflow` | | `Granit.Notifications` | `@granit/notifications` | | `Granit.Timeline` | `@granit/timeline` | | .NET controllers | Endpoints consumed by `api/` | | `ProblemDetails` | `ProblemDetails` in `@granit/api-client` | | `PagedResult` | `PagedResult` in `@granit/api-client` | TypeScript types in `types/` of each package are the faithful mirror of C# DTOs on the backend. 
Any change to a .NET contract must be propagated to the corresponding TypeScript type, and vice versa. ## See also [Section titled “See also”](#see-also) * [Frontend SDK Reference](/reference/frontend/) — package documentation * [Backend Architecture](/architecture/) — .NET architecture overview * [Frontend Patterns](/architecture/patterns-frontend/) — design patterns * [Frontend ADRs](/architecture/adr-frontend/) — architecture decision records
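The `types/` → `api/` layering can be sketched as follows. `PagedResult` mirrors the backend contract named in the alignment table, but the field names, URL, and injected transport here are illustrative assumptions, not the actual `@granit/api-client` API:

```typescript
// types/ — TypeScript mirror of the .NET PagedResult<T> contract (field names assumed).
export interface PagedResult<T> {
  items: T[];
  totalCount: number;
}

export interface PatientDto {
  id: string;
  name: string;
}

// api/ — the HTTP call is isolated behind a function; the transport is injected
// here so the layer stays testable and UI-free (the real packages wire axios).
export type FetchJson = (url: string) => Promise<unknown>;

export async function fetchPatientsPage(
  fetchJson: FetchJson,
  page: number,
  pageSize: number,
): Promise<PagedResult<PatientDto>> {
  const url = `/api/v1/patients?page=${page}&pageSize=${pageSize}`;
  return (await fetchJson(url)) as PagedResult<PatientDto>;
}
```

A `hooks/` layer would then wrap `fetchPatientsPage` in a `@tanstack/react-query` `useQuery`, keeping HTTP details out of components.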
# Pattern Library
> 51 design patterns and their implementation in Granit
A catalogue of design patterns used in the Granit framework, organized by category. Each pattern documents the general concept, how it is implemented in Granit, and references to the actual source files where the pattern is applied. ## Architecture patterns [Section titled “Architecture patterns”](#architecture-patterns) | Pattern | Description | | --------------------------------------------------- | ------------------------------------------------------- | | [Module System](./module-system/) | Topological loading with `[DependsOn]` | | [Hexagonal Architecture](./hexagonal-architecture/) | Ports and Adapters for infrastructure decoupling | | [Layered Architecture](./layered-architecture/) | Domain / Application / Infrastructure separation | | [Middleware Pipeline](./middleware-pipeline/) | Dual ASP.NET Core + Wolverine pipeline | | [Event-Driven](./event-driven/) | IDomainEvent (local) + IIntegrationEvent (durable) | | [REPR](./repr/) | Minimal API Request-Endpoint-Response | | [CQRS](./cqrs/) | IReader / IWriter separation, ArchUnitNET enforcement | | [Anti-Corruption Layer](./anti-corruption-layer/) | Isolation of Keycloak, S3, Brevo, FCM via internal DTOs | ## Cloud and SaaS patterns [Section titled “Cloud and SaaS patterns”](#cloud-and-saas-patterns) | Pattern | Description | | ----------------------------------------------------- | ---------------------------------------------------------- | | [Multi-Tenancy](./multi-tenancy/) | 3 isolation strategies, soft dependency, async propagation | | [Feature Flags](./feature-flags/) | Multi-level resolution Tenant to Plan to Default | | [Transactional Outbox](./transactional-outbox/) | Atomic event publishing via Wolverine Outbox | | [Idempotency](./idempotency/) | Stripe-style HTTP idempotency with state machine | | [Pre-Signed URL](./pre-signed-url/) | Direct-to-cloud S3 upload/download | | [Sidecar / Behavior](./sidecar-behavior/) | Context propagation via Wolverine Behaviors | | [Circuit Breaker and 
Retry](./circuit-breaker-retry/) | Standard resilience + Wolverine RetryWithCooldown | | [Cache-Aside](./cache-aside/) | Double-check locking + HybridCache L1/L2 | | [Rate Limiting](./rate-limiting/) | Per-tenant rate limiting with dynamic quotas | | [Saga / Process Manager](./saga-process-manager/) | GDPR export, import/export orchestrators | | [Fan-Out](./fan-out/) | Wolverine cascade for notifications and webhooks | | [Claim Check](./claim-check/) | Soft dependency IClaimCheckStore for large payloads | | [Bulkhead Isolation](./bulkhead-isolation/) | Queue isolation, parallelism, tenant quotas | ## GoF behavioral patterns [Section titled “GoF behavioral patterns”](#gof-behavioral-patterns) | Pattern | Description | | ----------------------------------------------------- | -------------------------------------------------------------------- | | [Strategy](./strategy/) | TenantIsolationStrategy, IBlobKeyStrategy, IStringEncryptionProvider | | [Chain of Responsibility](./chain-of-responsibility/) | TenantResolverPipeline, blob validation | | [Command](./command/) | SendWebhookCommand, RunMigrationBatchCommand | | [Template Method](./template-method/) | GranitModule lifecycle, GranitValidator | | [State Machine](./state-machine/) | IdempotencyState, BlobStatus | | [Observer / Event](./observer-event/) | Wolverine implicit event subscription | | [Mediator](./mediator/) | Wolverine message bus | | [Null Object](./null-object/) | NullTenantContext, NullCacheValueEncryptor | ## GoF creational patterns [Section titled “GoF creational patterns”](#gof-creational-patterns) | Pattern | Description | | ----------------------------------- | ------------------------------------------------- | | [Factory Method](./factory-method/) | VaultClientFactory, DbContext tenant factories | | [Singleton](./singleton/) | AsyncLocal singletons, NullTenantContext.Instance | | [Builder](./builder/) | Fluent `AddGranit*()` extensions | ## GoF structural patterns [Section titled “GoF structural 
patterns”](#gof-structural-patterns) | Pattern | Description | | ------------------------- | -------------------------------------------------------- | | [Adapter](./adapter/) | TypedKeyCacheServiceAdapter, S3BlobClient | | [Decorator](./decorator/) | DistributedCacheService, CachedLocalizationOverrideStore | | [Proxy](./proxy/) | FilterProxy for EF Core, Interceptors | | [Facade](./facade/) | DefaultBlobStorage, GranitExceptionHandler | | [Composite](./composite/) | Auditable entity hierarchy | ## Data patterns [Section titled “Data patterns”](#data-patterns) | Pattern | Description | | ----------------------------------- | ----------------------------------------------------- | | [Repository](./repository/) | Store interfaces + EF Core / InMemory implementations | | [Soft Delete](./soft-delete/) | ISoftDeletable + SoftDeleteInterceptor (GDPR) | | [Data Filtering](./data-filtering/) | IDataFilter with ImmutableDictionary AsyncLocal | | [Unit of Work](./unit-of-work/) | Implicit DbContext + interceptor chain | | [Specification](./specification/) | QueryDefinition whitelist-first, expression trees | ## Concurrency patterns [Section titled “Concurrency patterns”](#concurrency-patterns) | Pattern | Description | | --------------------------------------------------- | ----------------------------------------- | | [Scope / Context Manager](./scope-context-manager/) | `using` pattern for context restoration | | [Copy-on-Write](./copy-on-write/) | ImmutableDictionary for thread-safe state | | [Double-Check Locking](./double-check-locking/) | Anti-stampede on cache miss | ## .NET idiom patterns [Section titled “.NET idiom patterns”](#net-idiom-patterns) | Pattern | Description | | --------------------------------------- | ------------------------------------------ | | [Expression Trees](./expression-trees/) | Dynamic EF Core query filter construction | | [Marker Interface](./marker-interface/) | ISoftDeletable, IMultiTenant, IDomainEvent | | [Options 
Pattern](./options-pattern/) | 93 Options classes, ValidateOnStart | ## Security patterns [Section titled “Security patterns”](#security-patterns) | Pattern | Description | | ------------------------------------------------- | ----------------------------------------- | | [Claims-Based Identity](./claims-based-identity/) | JWT Keycloak + dynamic RBAC | | [Guard Clause](./guard-clause/) | Systematic fail-fast, semantic exceptions | ## Granit-specific variants [Section titled “Granit-specific variants”](#granit-specific-variants) | Pattern | Description | | ------------------------------------- | ----------------------------------- | | [Granit Variants](./granit-variants/) | 10 hybrid patterns unique to Granit |
# Frontend Pattern Library
> 8 design patterns and their implementation in the Granit TypeScript/React SDK
A catalogue of design patterns used in the Granit frontend SDK, organized by category. Each pattern documents the general concept, how it is implemented in the `@granit/*` packages, and concrete code examples from the SDK. ## Creation patterns [Section titled “Creation patterns”](#creation-patterns) | Pattern | Description | | --------------------------------------- | --------------------------------------------------------- | | [Factory](./factory/) | Hide instance creation complexity behind simple functions | | [Module Singleton](./module-singleton/) | Cross-package state sharing via ES module cache | ## Structural patterns [Section titled “Structural patterns”](#structural-patterns) | Pattern | Description | | --------------------- | ------------------------------------------------------- | | [Adapter](./adapter/) | Convert 3rd-party APIs into React-compatible interfaces | ## Behavioral patterns [Section titled “Behavioral patterns”](#behavioral-patterns) | Pattern | Description | | ----------------------------- | ----------------------------------------------------- | | [Interceptor](./interceptor/) | Transparent HTTP request/response pipeline processing | | [Strategy](./strategy/) | Pluggable implementations behind a common interface | | [Observer](./observer/) | Event notification without direct coupling | ## React patterns [Section titled “React patterns”](#react-patterns) | Pattern | Description | | --------------------------------------- | ----------------------------------------------------- | | [Provider](./provider/) | Context-based dependency injection with typed hooks | | [Hook Composition](./hook-composition/) | Layer framework + application logic via hook wrapping |
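To illustrate the Module Singleton row above: ES modules are evaluated once per module graph, so module-level state is shared by every importer. A simplified, hypothetical sketch in the spirit of the `setTokenGetter()` wiring mentioned in the Adapter page:

```typescript
// token-store.ts — evaluated once per module graph: every importer shares this state.
let tokenGetter: (() => string | null) | null = null;

// Called by the authentication package once a token source exists.
export function setTokenGetter(getter: () => string | null): void {
  tokenGetter = getter;
}

// Called by the HTTP client package when building the Authorization header.
export function getToken(): string | null {
  return tokenGetter ? tokenGetter() : null;
}
```

The authentication package calls `setTokenGetter` once at init; the HTTP client calls `getToken` per request — the two packages never import each other.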
# Adapter
> Convert third-party APIs into React-compatible interfaces with typed state
## Definition [Section titled “Definition”](#definition) The Adapter pattern converts a third-party library’s API into a React-compatible interface. It isolates application code from library details — transforming imperative callbacks and mutable state into reactive hooks with typed state. ## Diagram [Section titled “Diagram”](#diagram) ``` graph LR Keycloak["keycloak-js\n(imperative)"] --> Adapter["useKeycloakInit\n(adapter)"] Adapter --> React["React state\n(reactive)"] ``` ## Implementation in Granit [Section titled “Implementation in Granit”](#implementation-in-granit) | Source | Adapter | Target | | ------------------------- | ------------------------------------ | ------------------------------------------------- | | `keycloak-js` API | `useKeycloakInit` hook | Reactive `{ authenticated, loading, user }` state | | `klaro/dist/klaro-no-css` | `createKlaroCookieConsentProvider()` | `CookieConsentProvider` interface | ### Keycloak adaptation [Section titled “Keycloak adaptation”](#keycloak-adaptation) | keycloak-js native | Adapted interface | | --------------------------------------- | ---------------------------------------- | | `keycloak.init({ onLoad, pkceMethod })` | Single `useEffect` with init guard | | `keycloak.loadUserInfo()` → Promise | `user: KeycloakUserInfo \| null` (state) | | `keycloak.onTokenExpired = callback` | Automatic renewal every 60s | | `keycloak.token` (mutable string) | Transparent wiring to `setTokenGetter()` | | `keycloak.authenticated` (mutable bool) | `authenticated: boolean` (reactive) | ## Rationale [Section titled “Rationale”](#rationale) Keycloak-js uses callbacks, promises, and mutable properties — a paradigm mismatch with React’s declarative model. The adapter bridges this gap so that the rest of the application works with standard React state. 
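The bridge can be shown outside React: an adapter that converts a keycloak-js-style surface (mutable properties, assignable callbacks) into an immutable snapshot plus a subscription — the contract that `useSyncExternalStore`, or a `useState`/`useEffect` pair as in `useKeycloakInit`, consumes. A simplified, hypothetical sketch, not the actual implementation:

```typescript
// Imperative, keycloak-js-like surface: mutable properties, assignable callbacks.
interface ImperativeAuthClient {
  authenticated: boolean;
  onAuthSuccess?: () => void;
  onAuthLogout?: () => void;
}

export interface AuthSnapshot {
  readonly authenticated: boolean;
}

// Adapter: exposes an immutable snapshot and a subscribe function instead of
// mutable state + callbacks — the shape React's reactive model expects.
export function createAuthStore(client: ImperativeAuthClient) {
  let snapshot: AuthSnapshot = { authenticated: client.authenticated };
  const listeners = new Set<() => void>();

  const update = () => {
    snapshot = { authenticated: client.authenticated };
    listeners.forEach((l) => l());
  };
  client.onAuthSuccess = update;
  client.onAuthLogout = update;

  return {
    getSnapshot: () => snapshot,
    subscribe: (listener: () => void) => {
      listeners.add(listener);
      return () => {
        listeners.delete(listener);
      };
    },
  };
}
```

React re-renders on each new snapshot; the imperative client's mutations never leak into components.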
## Usage example [Section titled “Usage example”](#usage-example)

```tsx
import { useKeycloakInit } from '@granit/react-authentication';
import { AuthContext } from './auth-context'; // application-defined context (see the Provider pattern)

function AuthProvider({ children }: { children: React.ReactNode }) {
  const { authenticated, loading, user, login, logout } = useKeycloakInit({
    url: import.meta.env.VITE_KEYCLOAK_URL,
    realm: import.meta.env.VITE_KEYCLOAK_REALM,
    clientId: import.meta.env.VITE_KEYCLOAK_CLIENT_ID,
  });

  if (loading) return null; // or render a loading indicator

  if (!authenticated) {
    login();
    return null;
  }

  return (
    <AuthContext.Provider value={{ authenticated, loading, user, login, logout }}>
      {children}
    </AuthContext.Provider>
  );
}
```

## Further reading [Section titled “Further reading”](#further-reading) * [Adapter — refactoring.guru](https://refactoring.guru/design-patterns/adapter)
# Factory
> Encapsulate complex object creation behind simple functions across all @granit packages
## Definition [Section titled “Definition”](#definition) The Factory pattern encapsulates complex object creation logic behind simple functions. Callers provide minimal configuration and receive ready-to-use instances without knowing initialization details. This is the dominant pattern in granit-front — every package exposes at least one factory. ## Diagram [Section titled “Diagram”](#diagram) ``` graph LR Config["Config object"] --> Factory["createXxx()"] Factory --> Instance["Ready-to-use instance"] ``` ## Implementation in Granit [Section titled “Implementation in Granit”](#implementation-in-granit) | Factory | Package | Creates | | ------------------------------------------- | ------------------------------- | ------------------------------------------ | | `createLogger(prefix, options?)` | `@granit/logger` | Logger with configured transports | | `createApiClient(config)` | `@granit/api-client` | Axios instance with Bearer interceptor | | `createAuthContext()` | `@granit/react-authentication` | Generic, type-safe auth context + hook | | `createLocalization(config?)` | `@granit/localization` | Isolated i18next instance | | `createReactLocalization(config?)` | `@granit/react-localization` | i18next with `initReactI18next` plugin | | `createSignalRTransport(config)` | `@granit/notifications-signalr` | SignalR notification transport | | `createSseTransport(config)` | `@granit/notifications-sse` | SSE notification transport | | `createKlaroCookieConsentProvider(options)` | `@granit/cookies-klaro` | Klaro CMP adapter | | `createStorage(key, options?)` | `@granit/storage` | Typed localStorage/sessionStorage accessor | | `createMockProvider()` | `@granit/react-authentication` | Test provider using same context | ## Rationale [Section titled “Rationale”](#rationale) Factory functions keep the public API surface minimal while hiding initialization complexity (transport wiring, plugin injection, default configuration). 
They also enable tree-shaking — unused factories are eliminated at build time. ## Usage example [Section titled “Usage example”](#usage-example)

```ts
import { createLogger } from '@granit/logger';
import { createApiClient } from '@granit/api-client';
import { createAuthContext } from '@granit/react-authentication';
import type { BaseAuthContextType } from '@granit/authentication';

// Logger with console transport
const logger = createLogger('app');

// HTTP client with automatic Bearer injection
const api = createApiClient({ baseURL: import.meta.env.VITE_API_URL });

// Typed auth context for the application
interface AuthContextType extends BaseAuthContextType {
  register: () => void;
}

export const { AuthContext, useAuth } = createAuthContext<AuthContextType>();
```

## Further reading [Section titled “Further reading”](#further-reading) * [Factory Method — refactoring.guru](https://refactoring.guru/design-patterns/factory-method)
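To make the shape of these factories concrete, here is a minimal sketch in the style of `createLogger(prefix, options?)`. The `Transport` interface and the defaulting logic are assumptions for illustration, not the actual `@granit/logger` implementation:

```typescript
// Illustrative factory: callers pass minimal configuration and get a
// ready-to-use instance; transport wiring and defaults stay hidden.
interface Transport {
  send(line: string): void;
}

interface LoggerOptions {
  transports?: Transport[];
}

interface Logger {
  info(message: string): void;
}

const consoleTransport: Transport = { send: (line) => console.log(line) };

function createLogger(prefix: string, options: LoggerOptions = {}): Logger {
  // Default configuration is applied here, once, inside the factory.
  const transports = options.transports ?? [consoleTransport];
  return {
    info(message: string): void {
      // Fan out the formatted entry to every configured transport.
      transports.forEach((t) => t.send(`[${prefix}] ${message}`));
    },
  };
}
```

A caller never sees the wiring: `createLogger('app').info('started')` is the whole API surface.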
# Hook Composition
> Layer framework hooks with application logic via wrapper components
## Definition [Section titled “Definition”](#definition) Hook Composition assembles low-level framework hooks with application logic in a wrapper component. The framework hook provides the foundation (authentication, state, lifecycle); the application adds its business logic on top. ## Diagram [Section titled “Diagram”](#diagram) ``` graph TD Framework["useKeycloakInit()\n(framework layer)"] --> Composition["AuthProvider\n(composition point)"] AppLogic["Application logic\n(Capacitor, locale, roles)"] --> Composition Composition --> Context["AuthContext.Provider\n(consumed by app)"] ``` ## Implementation in Granit [Section titled “Implementation in Granit”](#implementation-in-granit) | Framework hook | Application layer adds | Composition point | | ------------------------ | ------------------------------------------- | ------------------------------ | | `useKeycloakInit` | Capacitor browser, locale, custom redirects | `AuthProvider` in consumer app | | `useNotificationContext` | Custom toast rendering, sound effects | Notification wrapper component | | `useQueryEndpoint` | Domain-specific column transforms | Data table component | ## Rationale [Section titled “Rationale”](#rationale) Framework hooks must remain generic — no Capacitor, Electron, or app-specific code. Hook Composition lets applications extend framework behavior without modifying the framework. The boundary is clear: framework hooks handle the universal lifecycle, application wrappers add the specific logic. 
## Usage example [Section titled “Usage example”](#usage-example) ### Minimal composition (web-only) [Section titled “Minimal composition (web-only)”](#minimal-composition-web-only)

```tsx
function AuthProvider({ children }: { children: React.ReactNode }) {
  const auth = useKeycloakInit({
    url: import.meta.env.VITE_KEYCLOAK_URL,
    realm: import.meta.env.VITE_KEYCLOAK_REALM,
    clientId: import.meta.env.VITE_KEYCLOAK_CLIENT_ID,
  });

  if (auth.loading) return null; // or render a loading indicator

  return (
    <AuthContext.Provider value={auth}>
      {children}
    </AuthContext.Provider>
  );
}
```

### Advanced composition (Capacitor + custom login) [Section titled “Advanced composition (Capacitor + custom login)”](#advanced-composition-capacitor--custom-login)

```tsx
function AuthProvider({ children }: { children: React.ReactNode }) {
  // Framework layer
  const {
    keycloak,
    keycloakRef,
    authenticated,
    loading,
    user,
    login: hookLogin,
    logout: hookLogout,
  } = useKeycloakInit(config);

  // Application layer — platform-specific logic
  const login = useCallback(async () => {
    if (isNative) {
      const url = keycloakRef.current?.createLoginUrl({ redirectUri });
      await Browser.open({ url }); // Capacitor: system browser
    } else {
      hookLogin({ locale });
    }
  }, [isNative, keycloakRef, hookLogin, redirectUri, locale]);

  // Stabilization for context consumers
  const value = useMemo(
    () => ({ keycloak, authenticated, loading, user, login, logout: hookLogout }),
    [keycloak, authenticated, loading, user, login, hookLogout],
  );

  return <AuthContext.Provider value={value}>{children}</AuthContext.Provider>;
}
```

The framework `useKeycloakInit` handles Keycloak init, PKCE, token refresh, and React state. The application wrapper adds Capacitor-specific login flow and `useMemo` stabilization — without any change to the framework hook. ## Further reading [Section titled “Further reading”](#further-reading) * [Custom Hooks — react.dev](https://react.dev/learn/reusing-logic-with-custom-hooks)
# Interceptor
> Transparent HTTP request/response pipeline processing for token injection and error handling
## Definition [Section titled “Definition”](#definition) The Interceptor pattern inserts transparent processing into a request/response pipeline. Calling code is unaware of the interceptor — it sends a request and receives a response as if nothing intervened. ## Diagram [Section titled “Diagram”](#diagram) ``` graph LR App["Application code"] --> Interceptor["Request interceptor\n(Bearer injection)"] Interceptor --> Server["HTTP server"] Server --> RespInterceptor["Response interceptor\n(error handling)"] RespInterceptor --> App ``` ## Implementation in Granit [Section titled “Implementation in Granit”](#implementation-in-granit) | Interceptor | Package | Purpose | | ------------------------- | --------------------- | ------------------------------------------------ | | Request — Bearer token | `@granit/api-client` | Automatic `Authorization` header injection | | Request — Idempotency key | `@granit/idempotency` | Automatic `Idempotency-Key` header on POST/PATCH | | Response — error handling | Application | 401/403 redirect, error normalization | The framework handles token injection. Error handling (401, 403, network failures) is the application’s responsibility — this separation is by design. ## Rationale [Section titled “Rationale”](#rationale) Bearer token injection must happen on every authenticated request. Making this an interceptor means no API call site needs to handle authentication manually. The pipeline is extensible — applications add their own interceptors for error handling without modifying framework code. 
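The pipeline mechanics can be modeled without axios. In this self-contained sketch, the `Idempotency-Key` behavior mirrors what the table above describes for `@granit/idempotency`; all names (`Request`, `applyInterceptors`, the two interceptors) are illustrative, not the actual package API:

```typescript
import { randomUUID } from 'node:crypto';

// Minimal model of a request-interceptor pipeline (axios-style).
interface Request {
  method: string;
  url: string;
  headers: Record<string, string>;
}

type RequestInterceptor = (req: Request) => Request;

// Bearer injection: every authenticated request gets the header,
// no call site has to think about it.
const bearerInterceptor = (getToken: () => string | null): RequestInterceptor =>
  (req) => {
    const token = getToken();
    return token
      ? { ...req, headers: { ...req.headers, Authorization: `Bearer ${token}` } }
      : req;
  };

// Idempotency key on mutating verbs only (POST/PATCH).
const idempotencyInterceptor: RequestInterceptor = (req) =>
  ['POST', 'PATCH'].includes(req.method)
    ? { ...req, headers: { ...req.headers, 'Idempotency-Key': randomUUID() } }
    : req;

// Calling code stays unaware: the pipeline threads the request
// through every interceptor in registration order.
function applyInterceptors(req: Request, chain: RequestInterceptor[]): Request {
  return chain.reduce((acc, interceptor) => interceptor(acc), req);
}
```

The extensibility property follows directly: adding an application interceptor means appending to the chain, never editing framework code.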
## Usage example [Section titled “Usage example”](#usage-example) ```ts import { createApiClient } from '@granit/api-client'; import axios from 'axios'; const api = createApiClient({ baseURL: import.meta.env.VITE_API_URL }); // Framework interceptor already in place — calls are transparent const { data } = await api.get('/patients'); // Application adds its own response interceptor api.interceptors.response.use( (response) => response, async (error: unknown) => { if (axios.isAxiosError(error) && error.response?.status === 401) { window.location.href = '/login'; } throw error; }, ); ``` ## Further reading [Section titled “Further reading”](#further-reading) * [Interceptor — Wikipedia](https://en.wikipedia.org/wiki/Interceptor_pattern)
# Module Singleton
> Cross-package state sharing via ES module cache for token injection and global log level
## Definition [Section titled “Definition”](#definition) The Module Singleton pattern exploits the ES module cache to maintain a unique global state. A private module variable is shared by all importers — no static class or global registry required. ## Diagram [Section titled “Diagram”](#diagram) ``` graph TD A["@granit/react-authentication"] -->|"setTokenGetter()"| Singleton["_tokenGetter\n(module variable)"] B["@granit/api-client\n(interceptor)"] -->|"reads"| Singleton ``` ## Implementation in Granit [Section titled “Implementation in Granit”](#implementation-in-granit) | Singleton variable | Package | Purpose | | ------------------ | -------------------- | ------------------------------------------------------- | | `_tokenGetter` | `@granit/api-client` | Bearer token getter shared between auth and HTTP client | | Global log level | `@granit/logger` | Shared log level across all logger instances | ## Rationale [Section titled “Rationale”](#rationale) Token management requires coordination between `@granit/react-authentication` (which obtains tokens from Keycloak) and `@granit/api-client` (which injects them into HTTP requests). A module singleton avoids direct package coupling — the auth package calls `setTokenGetter()` once during startup, and the interceptor reads it on every request. ## Usage example [Section titled “Usage example”](#usage-example)

```ts
// @granit/api-client — private module variable
let _tokenGetter: (() => Promise<string | null>) | null = null;

export function setTokenGetter(
  getter: () => Promise<string | null>
): void {
  _tokenGetter = getter;
}

// Axios interceptor reads _tokenGetter on every request
instance.interceptors.request.use(async (req) => {
  if (_tokenGetter) {
    const token = await _tokenGetter();
    if (token) req.headers.Authorization = `Bearer ${token}`;
  }
  return req;
});
```

`useKeycloakInit` calls `setTokenGetter()` internally — the application never wires this manually.
## Further reading [Section titled “Further reading”](#further-reading) * [Singleton — refactoring.guru](https://refactoring.guru/design-patterns/singleton)
# Observer
> Event notification for Keycloak lifecycle and notification transport state changes
## Definition [Section titled “Definition”](#definition) The Observer pattern allows a subject to notify subscribers when an event occurs, without direct coupling. Subscribers react to events according to their own logic. ## Diagram [Section titled “Diagram”](#diagram) ``` graph TD Subject["Keycloak (subject)"] -->|"onTokenExpired"| Framework["Framework\n(auto-renew token)"] Subject -->|"onTokenExpired"| App["Application callback\n(log warning)"] Subject -->|"all events"| Generic["Generic observer\n(centralized logging)"] ``` ## Implementation in Granit [Section titled “Implementation in Granit”](#implementation-in-granit) | Subject | Events | Package | | ----------------------- | -------------------------------------------------- | ------------------------------ | | `keycloak-js` | `onReady`, `onAuthSuccess`, `onTokenExpired`, etc. | `@granit/react-authentication` | | `NotificationTransport` | `onNotification`, `onStateChange` | `@granit/notifications` | | `CookieConsentProvider` | `onConsentChange` | `@granit/cookies` | ### Keycloak dual-callback design [Section titled “Keycloak dual-callback design”](#keycloak-dual-callback-design) The hook supports two callback types: * **Specific observers**: `onTokenExpired?`, `onAuthRefreshError?`, `onAuthLogout?` * **Generic observer**: `onEvent?(event, error?)` — receives all events The framework executes default behavior first (e.g. `updateToken(60)` on expiry), then propagates to application callbacks. ## Rationale [Section titled “Rationale”](#rationale) Keycloak emits lifecycle events that require both framework-level behavior (token renewal) and application-level behavior (logging, UI updates). The Observer pattern lets both coexist without coupling. 
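The dispatch order described above — framework default first, then the specific observer, then the generic one — can be sketched as a plain function. This is an illustrative model under assumed names (`createDispatcher`, `renewToken`), not the hook's actual source:

```typescript
// Dual-callback dispatch: default behavior, then specific observer,
// then generic observer.
type KeycloakEvent = 'onTokenExpired' | 'onAuthSuccess';

interface Callbacks {
  onTokenExpired?: () => void;
  onEvent?: (event: KeycloakEvent, error?: Error) => void;
}

function createDispatcher(callbacks: Callbacks, renewToken: () => void) {
  return (event: KeycloakEvent, error?: Error): void => {
    // 1. Framework default behavior (e.g. updateToken(60) on expiry)
    if (event === 'onTokenExpired') renewToken();
    // 2. Specific observer, if the application registered one
    if (event === 'onTokenExpired') callbacks.onTokenExpired?.();
    // 3. Generic observer always receives every event
    callbacks.onEvent?.(event, error);
  };
}
```

Both observer kinds are optional, so the framework's default behavior runs unchanged even when the application registers nothing.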
## Usage example [Section titled “Usage example”](#usage-example) ```ts type KeycloakEvent = | 'onReady' | 'onAuthSuccess' | 'onAuthError' | 'onAuthRefreshSuccess' | 'onAuthRefreshError' | 'onAuthLogout' | 'onTokenExpired'; const auth = useKeycloakInit({ url: import.meta.env.VITE_KEYCLOAK_URL, realm: import.meta.env.VITE_KEYCLOAK_REALM, clientId: import.meta.env.VITE_KEYCLOAK_CLIENT_ID, // Specific observer onTokenExpired: () => { logger.warn('Token expired — renewing'); }, // Generic observer — centralized logging onEvent: (event, error) => { logger.debug('Keycloak event', { event, error }); }, }); ``` ## Further reading [Section titled “Further reading”](#further-reading) * [Observer — refactoring.guru](https://refactoring.guru/design-patterns/observer)
# Provider (React Context)
> Context-based dependency injection with typed hooks and generic factory
## Definition [Section titled “Definition”](#definition) The Provider pattern uses React Context to inject shared state into the component tree. A high-level component provides a value; descendant components access it via a hook — no prop drilling. Combined with a generic factory, each application can type its own context. ## Diagram [Section titled “Diagram”](#diagram) ``` graph TD Provider["XxxProvider\n(context value)"] --> Hook["useXxx()\n(typed access)"] Hook --> ComponentA["Component A"] Hook --> ComponentB["Component B"] Hook --> ComponentC["Component C"] ``` ## Implementation in Granit [Section titled “Implementation in Granit”](#implementation-in-granit) Every `@granit/react-*` package follows this pattern: | Provider | Hook | Package | | ----------------------------------- | -------------------------- | ------------------------------ | | `WorkflowProvider` | `useWorkflowConfig()` | `@granit/react-workflow` | | `TimelineProvider` | `useTimelineConfig()` | `@granit/react-timeline` | | `NotificationProvider` | `useNotificationContext()` | `@granit/react-notifications` | | `QueryProvider` | `useQueryConfig()` | `@granit/react-querying` | | `SettingsProvider` | `useSettingsConfig()` | `@granit/react-settings` | | `TemplatingProvider` | `useTemplatingConfig()` | `@granit/templating` | | `ExportProvider` / `ImportProvider` | context hooks | `@granit/react-data-exchange` | | `ErrorContextProvider` | `useErrorContext()` | `@granit/react-error-boundary` | | `CookieConsentProvider` | `useCookieConsent()` | `@granit/react-cookies` | | `TracingProvider` | `useTracer()` | `@granit/react-tracing` | ### Generic factory for authentication [Section titled “Generic factory for authentication”](#generic-factory-for-authentication)

```ts
// createAuthContext() creates both the context and the hook
export const { AuthContext, useAuth } = createAuthContext<AuthContextType>();
```

Each application defines its own `AuthContextType` extending `BaseAuthContextType`.
## Rationale [Section titled “Rationale”](#rationale) React applications need to share configuration (API client, base path) and state (connection status, current user) across deeply nested components. Context avoids prop drilling while keeping the dependency explicit — the hook throws if called outside its Provider. ## Usage example [Section titled “Usage example”](#usage-example)

```tsx
// 1. Define typed context
interface AuthContextType extends BaseAuthContextType {
  register: () => void;
}
export const { AuthContext, useAuth } = createAuthContext<AuthContextType>();

// 2. Provide value at the top of the tree
function AuthProvider({ children }: { children: React.ReactNode }) {
  const value: AuthContextType = { /* ... */ };
  return <AuthContext.Provider value={value}>{children}</AuthContext.Provider>;
}

// 3. Consume anywhere below
function NavBar() {
  const { user, logout } = useAuth();
  return <button onClick={logout}>{user?.name}</button>;
}

// 4. Mock for testing / Storybook
import { createMockProvider } from '@granit/react-authentication';

const MockAuth = createMockProvider(AuthContext, {
  keycloak: null,
  authenticated: true,
  loading: false,
  user: { sub: '1', name: 'Test' },
  login: () => {},
  logout: () => {},
  register: () => {},
});
```

## Further reading [Section titled “Further reading”](#further-reading) * [React Context — react.dev](https://react.dev/learn/passing-data-deeply-with-context)
# Strategy
> Pluggable implementations behind a common interface for log transports and notification channels
## Definition [Section titled “Definition”](#definition) The Strategy pattern defines a family of interchangeable algorithms behind a common interface. The context delegates behavior to a strategy object without knowing its implementation. ## Diagram [Section titled “Diagram”](#diagram)

```
classDiagram
    class LogTransport {
        <<interface>>
        +send(entry: LogEntry): void
        +flush?(): Promise~void~
    }
    class ConsoleTransport
    class OtlpTransport
    class SentryTransport
    LogTransport <|.. ConsoleTransport
    LogTransport <|.. OtlpTransport
    LogTransport <|.. SentryTransport
    class NotificationTransport {
        <<interface>>
        +connect(): Promise~void~
        +disconnect(): Promise~void~
        +onNotification(cb): unsubscribe
    }
    class SignalRTransport
    class SseTransport
    NotificationTransport <|.. SignalRTransport
    NotificationTransport <|.. SseTransport
```

## Implementation in Granit [Section titled “Implementation in Granit”](#implementation-in-granit) | Strategy interface | Package | Implementations | | ----------------------- | ----------------------- | --------------------------------------------------- | | `LogTransport` | `@granit/logger` | `createConsoleTransport()`, `createOtlpTransport()` | | `NotificationTransport` | `@granit/notifications` | `createSignalRTransport()`, `createSseTransport()` | | `CookieConsentProvider` | `@granit/cookies` | `createKlaroCookieConsentProvider()` | ## Rationale [Section titled “Rationale”](#rationale) Log destinations and real-time notification channels vary by deployment. The Strategy pattern lets applications compose the exact set of transports they need without modifying framework code. A Sentry transport can be added alongside the console transport without either knowing about the other.
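Deployment-driven selection of a notification strategy can be sketched as follows. The `NotificationTransport` shape follows the class diagram above; the inline implementations and `pickTransport()` are illustrative stand-ins for `createSignalRTransport()` / `createSseTransport()`, not the package API:

```typescript
// The context depends only on the strategy interface.
interface NotificationTransport {
  readonly name: string;
  connect(): Promise<void>;
  disconnect(): Promise<void>;
}

// Stand-in strategies; the real ones come from
// @granit/notifications-signalr and @granit/notifications-sse.
const makeFakeTransport = (name: string): NotificationTransport => ({
  name,
  connect: async () => {},
  disconnect: async () => {},
});

// Which concrete strategy runs is decided once, from configuration —
// the consuming code never branches on the transport kind again.
function pickTransport(kind: 'signalr' | 'sse'): NotificationTransport {
  return kind === 'signalr'
    ? makeFakeTransport('signalr')
    : makeFakeTransport('sse');
}

async function startNotifications(transport: NotificationTransport): Promise<string> {
  await transport.connect();
  return `connected via ${transport.name}`;
}
```

Swapping SignalR for SSE (for example, behind a proxy that blocks WebSockets) is then a one-line configuration change.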
## Usage example [Section titled “Usage example”](#usage-example) ```ts import { createLogger, createConsoleTransport } from '@granit/logger'; import type { LogTransport, LogEntry } from '@granit/logger'; // Custom Sentry transport const sentryTransport: LogTransport = { send(entry: LogEntry) { if (entry.level === 'ERROR') { Sentry.captureMessage(entry.message, { level: 'error', extra: entry.context, }); } }, }; // Combine console + Sentry const logger = createLogger('app', { transports: [createConsoleTransport(), sentryTransport], }); logger.error('Critical failure', new Error('timeout')); // → Both console and Sentry receive the entry ``` Transports with a `flush()` method are automatically flushed on `beforeunload`. ## Further reading [Section titled “Further reading”](#further-reading) * [Strategy — refactoring.guru](https://refactoring.guru/design-patterns/strategy)
# Adapter
> Interface translation for typed cache keys, S3 storage, and SMTP transport in Granit
## Definition [Section titled “Definition”](#definition) The Adapter pattern converts the interface of an existing class into the interface expected by the client, allowing incompatible components to collaborate. The adapter wraps the existing class and translates calls. ## Diagram [Section titled “Diagram”](#diagram) ``` classDiagram class ICacheService_TCacheItem_TKey { +GetOrAddAsync(key: TKey) } class ICacheService_TCacheItem { +GetOrAddAsync(key: string) } class TypedKeyCacheServiceAdapter { -inner : ICacheService_TCacheItem +GetOrAddAsync(key: TKey) } ICacheService_TCacheItem_TKey <|.. TypedKeyCacheServiceAdapter TypedKeyCacheServiceAdapter --> ICacheService_TCacheItem : delegates via key.ToString() class IBlobStorageClient { +DeleteObjectAsync() +HeadObjectAsync() } class AmazonS3Client { +DeleteObjectAsync() +GetObjectMetadataAsync() } class S3BlobClient { -s3Client : AmazonS3Client } IBlobStorageClient <|.. S3BlobClient S3BlobClient --> AmazonS3Client : adapts ``` ## Implementation in Granit [Section titled “Implementation in Granit”](#implementation-in-granit) | Adapter | File | Target interface | Adapted class | | ----------------------------- | ------------------------------------------------------------- | --------------------------------- | ----------------------------------- | | `TypedKeyCacheServiceAdapter` | `src/Granit.Caching/TypedKeyCacheServiceAdapter.cs` | `ICacheService<TCacheItem, TKey>` | `ICacheService<TCacheItem>` (string keys) | | `S3BlobClient` | `src/Granit.BlobStorage.S3/Internal/S3BlobClient.cs` | `IBlobStorageClient` | `AmazonS3Client` (AWS SDK) | | `MailKitSmtpTransport` | `src/Granit.Notifications.Email.Smtp/MailKitSmtpTransport.cs` | `ISmtpTransport` | `SmtpClient` (MailKit, sealed) | ## Rationale [Section titled “Rationale”](#rationale) `TypedKeyCacheServiceAdapter` enables strongly-typed keys (Guid, int, composite) while delegating to the existing string-key-based cache service.
`S3BlobClient` isolates the framework from the AWS SDK, allowing provider changes (European hosting, MinIO) without touching the core. ## Usage example [Section titled “Usage example”](#usage-example)

```csharp
// Application uses typed keys -- the adapter converts to string
ICacheService<PatientDto, Guid> cache = serviceProvider
    .GetRequiredService<ICacheService<PatientDto, Guid>>();

PatientDto patient = await cache.GetOrAddAsync(
    patientId, // Guid -- converted to string by the adapter
    async ct => await db.Patients.FindAsync([patientId], ct),
    cancellationToken);
```

## Further reading [Section titled “Further reading”](#further-reading) * [Adapter — refactoring.guru](https://refactoring.guru/design-patterns/adapter)
# Anti-Corruption Layer
> Translation layer isolating domain models from external API models and SDKs
## Definition [Section titled “Definition”](#definition) The Anti-Corruption Layer (ACL) isolates the internal domain model from external models (third-party APIs, SDKs, legacy formats). A translation layer converts external DTOs into Granit domain types, preventing foreign concepts from “polluting” the framework core. ## Diagram [Section titled “Diagram”](#diagram) ``` flowchart LR subgraph External["External services"] KC["Keycloak Admin API"] S3["AWS S3 SDK"] BR["Brevo API"] FCM["Firebase FCM"] MK["MailKit / SMTP"] end subgraph ACL["Anti-Corruption Layer"] KC_DTO["KeycloakUserRepresentation"] KC_MAP["ToIdentityUser()"] S3_ADP["S3BlobClient"] BR_ADP["BrevoNotificationProvider"] FCM_ADP["GoogleFcmMobilePushSender"] MK_ADP["MailKitEmailSender"] end subgraph Domain["Granit domain model"] IU["IdentityUser"] BD["BlobDescriptor"] EM["EmailMessage"] PM["MobilePushMessage"] end KC --> KC_DTO --> KC_MAP --> IU S3 --> S3_ADP --> BD BR --> BR_ADP --> EM FCM --> FCM_ADP --> PM MK --> MK_ADP --> EM style External fill:#fef0f0,stroke:#c44e4e style ACL fill:#fef3e0,stroke:#e8a317 style Domain fill:#e8fde8,stroke:#2d8a4e ``` ## Implementation in Granit [Section titled “Implementation in Granit”](#implementation-in-granit) ### Keycloak — the most comprehensive case [Section titled “Keycloak — the most comprehensive case”](#keycloak--the-most-comprehensive-case) `Granit.Identity.Keycloak` is the framework’s canonical ACL. Keycloak responses are deserialized into `internal` DTOs (`KeycloakUserRepresentation`, `KeycloakSessionRepresentation`, etc.) then converted to domain models via `private static` methods: ```csharp // External DTO (internal, never exposed) internal sealed record KeycloakUserRepresentation( [property: JsonPropertyName("id")] string Id, [property: JsonPropertyName("username")] string Username, [property: JsonPropertyName("email")] string? Email, // ... 
); // Conversion to domain model private static IdentityUser ToIdentityUser(KeycloakUserRepresentation user) => new(user.Id, user.Username, user.Email, user.FirstName, user.LastName, user.Enabled, FlattenAttributes(user.Attributes)); ``` **Specifics**: * `FlattenAttributes()` — converts Keycloak `Dictionary<string, List<string>>` to Granit `Dictionary<string, string>` (multi-value attributes to single value) * `ToIdentitySession()` — converts Unix timestamps (milliseconds) to `DateTimeOffset` * `ToIdentityGroup()` — recursive mapping for subgroups ### Claims transformation (JWT) [Section titled “Claims transformation (JWT)”](#claims-transformation-jwt) `KeycloakClaimsTransformation` and `EntraIdClaimsTransformation` extract roles from proprietary JSON structures (`realm_access.roles`, `resource_access.{client}.roles`) and convert them to standard .NET `ClaimTypes.Role` claims. ### ACL inventory in Granit [Section titled “ACL inventory in Granit”](#acl-inventory-in-granit) | External service | ACL package | External DTO to internal model | | ------------------ | ------------------------------------------- | ------------------------------------------------------------------- | | Keycloak Admin API | `Granit.Identity.Keycloak` | `KeycloakUserRepresentation` to `IdentityUser` | | Keycloak JWT | `Granit.Authentication.Keycloak` | JSON `realm_access` to `ClaimTypes.Role` | | Entra ID JWT | `Granit.Authentication.EntraId` | JSON `roles` (v1.0/v2.0) to `ClaimTypes.Role` | | AWS S3 SDK | `Granit.BlobStorage.S3` | `GetPreSignedUrlRequest` from `BlobUploadRequest` | | MailKit SMTP | `Granit.Notifications.Email.Smtp` | `MimeMessage` from `EmailMessage` | | Brevo API | `Granit.Notifications.Brevo` | JSON payload from `EmailMessage` / `SmsMessage` / `WhatsAppMessage` | | Firebase FCM | `Granit.Notifications.MobilePush.GoogleFcm` | `FcmPayload` from `MobilePushMessage` | | ImageMagick | `Granit.Imaging.MagickNet` | `MagickFormat` to/from `ImageFormat` (bidirectional) | | Import
systems | `Granit.DataExchange.EntityFrameworkCore` | External ID (Odoo `__export__`) to internal Entity ID | ### Architectural principles [Section titled “Architectural principles”](#architectural-principles) 1. **External DTOs are `internal`** — never exposed outside the adapter package 2. **Conversion methods are `private static`** — isolated from the rest of the code 3. **Error transformation** — external errors are parsed and encapsulated in domain exceptions 4. **Graceful degradation** — reads log a warning and return `null`; writes propagate the exception 5. **`[ExcludeFromCodeCoverage]`** — on adapters requiring a live service (S3, SMTP) ### Reference files [Section titled “Reference files”](#reference-files) | File | Role | | ------------------------------------------------------------------------------------- | ---------------------------------- | | `src/Granit.Identity.Keycloak/Internal/KeycloakIdentityProvider.cs` | Primary ACL (9 conversion methods) | | `src/Granit.Identity.Keycloak/Internal/KeycloakUserRepresentation.cs` | Keycloak external DTO | | `src/Granit.Authentication.Keycloak/Authentication/KeycloakClaimsTransformation.cs` | JWT claims to Role | | `src/Granit.BlobStorage.S3/Internal/S3BlobClient.cs` | S3 adapter | | `src/Granit.Notifications.Email.Smtp/Internal/MailKitEmailSender.cs` | SMTP adapter | | `src/Granit.Notifications.Brevo/Internal/BrevoNotificationProvider.cs` | Multi-channel Brevo | | `src/Granit.Notifications.MobilePush.GoogleFcm/Internal/GoogleFcmMobilePushSender.cs` | FCM adapter | | `src/Granit.Imaging.MagickNet/Internal/MagickFormatMapper.cs` | Image format mapping | ## Rationale [Section titled “Rationale”](#rationale) | Problem | ACL solution | | ------------------------------------------------------------ | ---------------------------------------------------------- | | Keycloak models in the domain = tight coupling | `internal` DTOs + static conversion | | Keycloak API change = impact across the entire codebase | Only the 
adapter changes, the domain remains stable | | JWT claims format differs between Keycloak and Entra ID | Dedicated transformations, unified `ClaimTypes.Role` model | | Multi-value Keycloak attributes vs single-value Granit | `FlattenAttributes()` in the ACL | | Unix timestamps (ms) in Keycloak vs `DateTimeOffset` in .NET | Conversion in `ToIdentitySession()` | ## Usage example [Section titled “Usage example”](#usage-example)

```csharp
// The endpoint only knows the IdentityUser domain model,
// never the Keycloak DTOs
private static async Task<Results<Ok<IdentityUser>, NotFound>> GetUserAsync(
    string userId,
    IIdentityProvider identityProvider,
    CancellationToken cancellationToken)
{
    // IIdentityProvider is implemented by KeycloakIdentityProvider
    // which does: API call -> KeycloakUserRepresentation -> ToIdentityUser()
    IdentityUser? user = await identityProvider
        .FindByIdAsync(userId, cancellationToken)
        .ConfigureAwait(false);

    return user is not null
        ? TypedResults.Ok(user)
        : TypedResults.NotFound();
}
```

## Further reading [Section titled “Further reading”](#further-reading) * [Anti-Corruption Layer — Microsoft Cloud Design Patterns](https://learn.microsoft.com/en-us/azure/architecture/patterns/anti-corruption-layer)
# Builder
> Fluent AddGranit*() extension methods for composable module registration
## Definition [Section titled “Definition”](#definition) The Builder pattern separates the construction of a complex object from its representation, enabling different configurations via a fluent interface. In Granit, this pattern manifests in the `AddGranit*()` extension methods that configure services module by module. ## Diagram [Section titled “Diagram”](#diagram) ``` sequenceDiagram participant App as Program.cs participant Ext as AddGranitWolverine() participant Opts as WolverineMessagingOptions participant DI as IServiceCollection participant Val as ValidateOnStart App->>Ext: builder.AddGranitWolverine(configure?) Ext->>Opts: AddOptions().BindConfiguration("Wolverine") Ext->>Val: ValidateDataAnnotations().ValidateOnStart() Ext->>DI: AddScoped of ICurrentUserService Ext->>DI: AddSingleton of WolverineActivitySource opt configure provided Ext->>Opts: configure.Invoke(options) end Ext-->>App: IHostApplicationBuilder (fluent) ``` ## Implementation in Granit [Section titled “Implementation in Granit”](#implementation-in-granit) Each module exposes an `AddGranit*()` extension method: | Extension | File | Receiver | | --------------------------- | ---------------------------------------------------------------------------------------- | ------------------------- | | `AddGranit()` | `src/Granit.Core/Extensions/GranitHostBuilderExtensions.cs` | `IHostApplicationBuilder` | | `AddGranitWolverine()` | `src/Granit.Wolverine/Extensions/WolverineHostApplicationBuilderExtensions.cs` | `IHostApplicationBuilder` | | `AddGranitBackgroundJobs()` | `src/Granit.BackgroundJobs/Extensions/BackgroundJobsHostApplicationBuilderExtensions.cs` | `IHostApplicationBuilder` | | `AddGranitFeatures()` | `src/Granit.Features/ServiceCollectionExtensions.cs` | `IServiceCollection` | | `AddGranitLocalization()` | `src/Granit.Localization/Extensions/LocalizationServiceCollectionExtensions.cs` | `IServiceCollection` | **Audit note**: the signatures are not yet symmetric across modules (see finding 
C2 in the critical dashboard). The target is the `AddOptions().BindConfiguration().ValidateOnStart()` pattern. ## Rationale [Section titled “Rationale”](#rationale) The Builder allows each NuGet package to self-configure without the host application needing to know internal details. A single call replaces dozens of DI registration lines. ## Usage example [Section titled “Usage example”](#usage-example) ```csharp WebApplicationBuilder builder = WebApplication.CreateBuilder(args); // One call per module -- fluent and composable builder.AddGranit(); // Internally, the ModuleLoader calls AddGranitWolverine(), // AddGranitPersistence(), AddGranitFeatures(), etc. // in topological dependency order ``` ## Further reading [Section titled “Further reading”](#further-reading) * [Builder — refactoring.guru](https://refactoring.guru/design-patterns/builder)
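On the module side, the target `AddOptions().BindConfiguration().ValidateOnStart()` shape mentioned in the audit note can be sketched as follows. This is a minimal illustration, not the Granit source: `GranitExampleOptions`, the `"Example"` configuration section, and the extension name are hypothetical.

```csharp
using System;
using System.ComponentModel.DataAnnotations;
using Microsoft.Extensions.DependencyInjection;
using Microsoft.Extensions.Hosting;
using Microsoft.Extensions.Options;

public sealed class GranitExampleOptions
{
    [Range(1, 100)]
    public int MaxParallelDeliveries { get; set; } = 8;
}

public static class GranitExampleHostApplicationBuilderExtensions
{
    // Builder-style module registration: bind, validate eagerly at startup,
    // then return the receiver so calls can be chained fluently
    public static IHostApplicationBuilder AddGranitExample(
        this IHostApplicationBuilder builder,
        Action<GranitExampleOptions>? configure = null)
    {
        OptionsBuilder<GranitExampleOptions> options = builder.Services
            .AddOptions<GranitExampleOptions>()
            .BindConfiguration("Example")   // appsettings section "Example"
            .ValidateDataAnnotations()
            .ValidateOnStart();             // fail fast on invalid config

        if (configure is not null)
        {
            options.Configure(configure);   // explicit overrides win over configuration
        }

        return builder; // fluent: builder.AddGranitExample().AddGranitWolverine()...
    }
}
```

`ValidateOnStart()` is what makes misconfiguration a boot-time failure rather than a runtime surprise, which is why the audit targets this shape for all modules.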
# Bulkhead Isolation
> Resource compartmentalization preventing cascade failures across queues, tenants, and services
## Definition [Section titled “Definition”](#definition) The **Bulkhead** (watertight compartment) isolates system resources into independent compartments, so that a failure or overload in one compartment does not propagate to others. In a multi-tenant SaaS context, the pattern prevents a greedy tenant, a slow notification channel, or a failing external service from degrading the entire platform. Granit implements this pattern through a combination of mechanisms: Wolverine queue partitioning, per-pipeline parallelism limits, per-tenant quota isolation, and HTTP circuit breakers. ## Diagram [Section titled “Diagram”](#diagram) ``` flowchart TB subgraph Bulkheads direction LR B1[Queue domain-events
Parallelism: local] B2[Queue notification-delivery
MaxParallel: 8] B3[Queue webhook-delivery
MaxParallel: 20] B4[HttpClient Keycloak
Circuit Breaker] B5[HttpClient Brevo
Circuit Breaker] end REQ[Incoming requests] --> B1 REQ --> B2 REQ --> B3 REQ --> B4 REQ --> B5 B3 --x|Saturated| B3 B3 -.->|No impact| B1 B3 -.->|No impact| B2 ``` ``` sequenceDiagram participant T1 as Tenant A (high load) participant RL as RateLimiter per-tenant participant Q as Queue webhook-delivery participant T2 as Tenant B (normal load) T1->>RL: 500 webhooks/min RL-->>T1: 429 Too Many Requests (quota exceeded) Note over RL: Tenant A isolated by its quota T2->>RL: 10 webhooks/min RL-->>Q: Allowed Q->>Q: Processing (max 20 parallel) Note over T2,Q: Tenant B unaffected ``` ## Implementation in Granit [Section titled “Implementation in Granit”](#implementation-in-granit) Granit implements Bulkhead Isolation through six complementary mechanisms, each targeting a different isolation level: ### 1. Queue partitioning — isolation by message type (Wolverine) [Section titled “1. Queue partitioning — isolation by message type (Wolverine)”](#1-queue-partitioning--isolation-by-message-type-wolverine) Messages are routed to dedicated queues with controlled parallelism. Each queue operates as an independent compartment. | Queue | Messages | Isolation | | ----------------------- | ---------------------------- | ------------------------------------------------- | | `domain-events` | `IDomainEvent` | Local only, never routed to an external transport | | `notification-delivery` | `DeliverNotificationCommand` | Configurable parallelism (default: 8) | | `webhook-delivery` | `SendWebhookCommand` | Configurable parallelism (default: 20) | | Error queue (DLQ) | `ValidationException` | Deterministic failures, no retry | ```csharp // Explicit routing to local queue (AddGranitWolverine) opts.PublishMessage<IDomainEvent>() .ToLocalQueue("domain-events"); ``` ### 2. MaxParallelDeliveries — concurrency limits per pipeline [Section titled “2. 
MaxParallelDeliveries — concurrency limits per pipeline”](#2-maxparalleldeliveries--concurrency-limits-per-pipeline) Each delivery pipeline limits the number of simultaneous operations, preventing a saturated channel from consuming all pod resources. | Module | Parameter | Default | Range | | ---------------------- | ----------------------- | ------- | ----- | | `Granit.Notifications` | `MaxParallelDeliveries` | 8 | 1—100 | | `Granit.Webhooks` | `MaxParallelDeliveries` | 20 | 1—100 | ### 3. Per-tenant rate limiting — quota isolation by tenant [Section titled “3. Per-tenant rate limiting — quota isolation by tenant”](#3-per-tenant-rate-limiting--quota-isolation-by-tenant) `Granit.RateLimiting` partitions Redis counters by tenant. Each tenant has its own independent counters — a tenant can never consume another’s quota. | Element | Detail | | --------- | ---------------------------------------------------- | | Redis key | `{prefix}:{tenantId}:{policyName}` | | Hash tag | `{tenantId}` guarantees co-location in Redis Cluster | | Bypass | Configurable exempt roles | ### 4. Circuit breaker — external HTTP service isolation [Section titled “4. Circuit breaker — external HTTP service isolation”](#4-circuit-breaker--external-http-service-isolation) `AddStandardResilienceHandler()` (Microsoft.Extensions.Http.Resilience) is applied to each `HttpClient` targeting an external service. When an external service is down, the circuit opens and requests fail immediately without consuming resources. | Service | Circuit Breaker | Retry | Timeout | | ------------------ | --------------- | ------------------------------- | ---------- | | Keycloak Admin API | Yes | 3 attempts, exponential backoff | 30s / 2min | | Brevo API | Yes | 3 attempts, exponential backoff | 30s / 2min | ```csharp // Each external HttpClient is isolated by its own circuit breaker services.AddHttpClient("keycloak-admin") .AddStandardResilienceHandler(); ``` ### 5. 
SemaphoreSlim — anti-stampede per resource [Section titled “5. SemaphoreSlim — anti-stampede per resource”](#5-semaphoreslim--anti-stampede-per-resource) Shared resources (token cache, distributed cache) use `SemaphoreSlim(1, 1)` to serialize concurrent access during a cache miss. This mechanism prevents a request spike from generating N parallel calls to the same external service. | Component | Protected resource | Pattern | | --------------------------- | ------------------ | -------------------- | | `KeycloakAdminTokenService` | Keycloak token | Double-check locking | | `DistributedCacheService` | Distributed cache | Double-check locking | ### 6. Channel-based isolation — Webhooks [Section titled “6. Channel-based isolation — Webhooks”](#6-channel-based-isolation--webhooks) The `WebhookDispatchWorker` uses two separate `System.Threading.Channels.Channel<T>` instances to isolate the fan-out phase (trigger to commands) from the delivery phase (command to HTTP POST). Both phases execute in parallel via `Task.WhenAll()` without interference. 
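The two-phase channel isolation described for `WebhookDispatchWorker` can be sketched as a standalone model. This is illustrative only, not the Granit source; the channel capacities, `string` payloads, and function names are assumptions.

```csharp
using System;
using System.Threading.Channels;
using System.Threading.Tasks;

// Two bounded channels: a saturated delivery phase applies backpressure
// without blocking fan-out, and vice versa — each phase is its own compartment.
var triggerChannel = Channel.CreateBounded<string>(100); // raw webhook triggers
var commandChannel = Channel.CreateBounded<string>(100); // delivery commands

// Phase 1: fan-out — expand each trigger into delivery commands
async Task FanOutAsync()
{
    await foreach (string trigger in triggerChannel.Reader.ReadAllAsync())
    {
        await commandChannel.Writer.WriteAsync($"deliver:{trigger}");
    }
    commandChannel.Writer.Complete(); // no more commands once triggers are drained
}

// Phase 2: delivery — consume commands independently (HTTP POST in the real worker)
async Task DeliverAsync()
{
    await foreach (string command in commandChannel.Reader.ReadAllAsync())
    {
        Console.WriteLine(command); // stand-in for the outbound HTTP call
    }
}

// Both phases run concurrently; neither can starve the other
await triggerChannel.Writer.WriteAsync("webhook-42");
triggerChannel.Writer.Complete();
await Task.WhenAll(FanOutAsync(), DeliverAsync());
```

A slow consumer only fills `commandChannel` up to its bound; fan-out then waits on `WriteAsync` instead of growing an unbounded queue.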
### Reference files [Section titled “Reference files”](#reference-files) | File | Role | | ------------------------------------------------------------------------------ | ---------------------------------- | | `src/Granit.Wolverine/Extensions/WolverineHostApplicationBuilderExtensions.cs` | Queue routing for domain-events | | `src/Granit.RateLimiting/Internal/TenantPartitionedRateLimiter.cs` | Per-tenant isolation | | `src/Granit.Identity.Keycloak/Internal/KeycloakAdminTokenService.cs` | SemaphoreSlim anti-stampede | | `src/Granit.Caching/DistributedCacheService.cs` | SemaphoreSlim cache miss | | `src/Granit.Webhooks/Internal/WebhookDispatchWorker.cs` | Separate trigger/delivery channels | ## Rationale [Section titled “Rationale”](#rationale) | Problem | Solution | | ------------------------------------------------- | ---------------------------------------------- | | Webhook to a slow service blocks notifications | Separate queues with independent parallelism | | Greedy tenant saturates the API for everyone | Quota counters partitioned by tenant | | Down external service consumes threads | Circuit breaker cuts calls after threshold | | Simultaneous cache miss = N calls to the provider | SemaphoreSlim serializes, only one call passes | | Webhook fan-out blocks the delivery phase | Separate channels for trigger vs delivery | ## Usage example [Section titled “Usage example”](#usage-example) ```csharp // --- Isolation by Wolverine queue --- // IDomainEvent -> dedicated local queue, no interference with integration events opts.PublishMessage<IDomainEvent>() .ToLocalQueue("domain-events"); // --- Configurable parallelism per module --- // appsettings.json // { // "Webhooks": { "MaxParallelDeliveries": 20 }, // "Notifications": { "MaxParallelDeliveries": 8 } // } // --- Circuit breaker per external service --- services.AddHttpClient("keycloak-admin", client => client.BaseAddress = new Uri(keycloakUrl)) .AddStandardResilienceHandler(); // --- Per-tenant rate limiting (automatic 
isolation) --- app.MapGet("/api/v1/patients", GetPatientsAsync) .RequireGranitRateLimiting("api"); // Each tenant has its own Redis counters -- independent quota ``` ## Further reading [Section titled “Further reading”](#further-reading) * [Bulkhead pattern — Microsoft Cloud Design Patterns](https://learn.microsoft.com/en-us/azure/architecture/patterns/bulkhead)
# Cache-Aside
> Lazy-loading cache with HybridCache (L1 + L2 Redis) and anti-stampede protection
## Definition [Section titled “Definition”](#definition) The Cache-Aside pattern loads data into the cache on demand: on a miss, data is retrieved from the source (DB), stored in cache, then returned. Subsequent accesses are served from the cache. Granit uses a **HybridCache** (L1 in-process + L2 Redis) with anti-stampede protection via double-check locking. ## Diagram [Section titled “Diagram”](#diagram) ``` flowchart TD REQ[GetOrAddAsync] --> L1{L1 Memory Cache} L1 -->|hit| RET[Return value] L1 -->|miss| L2{L2 Redis Cache} L2 -->|hit| SET1[Store in L1] --> RET L2 -->|miss| LOCK[Acquire SemaphoreSlim] LOCK --> DC{Double-check L2} DC -->|hit| REL1[Release lock] --> SET1 DC -->|miss| FAC[Execute factory
= DB query] FAC --> SET2[Store in L1 + L2] SET2 --> REL2[Release lock] --> RET ``` ## Implementation in Granit [Section titled “Implementation in Granit”](#implementation-in-granit) | Component | File | Role | | --------------------------------- | ------------------------------------------------------------ | ------------------------------------------------------------- | | `DistributedCacheService` | `src/Granit.Caching/DistributedCacheService.cs` | Cache-aside with double-check locking and optional encryption | | `FeatureChecker` | `src/Granit.Features/Checker/FeatureChecker.cs` | HybridCache for feature resolution | | `CachedLocalizationOverrideStore` | `src/Granit.Localization/CachedLocalizationOverrideStore.cs` | In-memory cache for localization overrides | ### Anti-stampede [Section titled “Anti-stampede”](#anti-stampede) The `SemaphoreSlim` in `DistributedCacheService` prevents the “thundering herd” problem: when 100 simultaneous requests have a cache miss, only one executes the factory. The other 99 wait for the lock then find the value in cache (double-check). ### Per-tenant keys [Section titled “Per-tenant keys”](#per-tenant-keys) In `FeatureChecker`, cache keys include the tenant: `t:{tenantId}:{featureName}`. Invalidation targets only the affected tenant. 
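The anti-stampede double-check described above can be reduced to a minimal sketch. This is not the Granit `DistributedCacheService` (which adds L1/L2 tiers, expiry, and optional encryption); the class and member names here are illustrative.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

public sealed class AntiStampedeCache<TValue>
{
    private readonly ConcurrentDictionary<string, TValue> _cache = new();
    private readonly SemaphoreSlim _lock = new(1, 1); // serializes factory execution

    public async Task<TValue> GetOrAddAsync(
        string key,
        Func<CancellationToken, Task<TValue>> factory,
        CancellationToken cancellationToken = default)
    {
        // First check: fast path, no lock taken on a hit
        if (_cache.TryGetValue(key, out TValue? cached))
            return cached;

        await _lock.WaitAsync(cancellationToken);
        try
        {
            // Double-check: a concurrent caller may have populated the cache
            // while this one was waiting on the semaphore
            if (_cache.TryGetValue(key, out cached))
                return cached;

            // Only one caller reaches the factory (the DB query)
            TValue value = await factory(cancellationToken);
            _cache[key] = value;
            return value;
        }
        finally
        {
            _lock.Release();
        }
    }
}
```

With 100 simultaneous misses on the same key, one caller runs the factory and the other 99 find the value at the double-check, which is exactly the "thundering herd" behavior the section describes.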
## Rationale [Section titled “Rationale”](#rationale) | Problem | Solution | | ------------------------------------------------------- | ----------------------------------------------------- | | Feature resolution too slow (DB query on every request) | L1 cache (nanoseconds) + L2 Redis (microseconds) | | Stampede on cache miss (100 requests = 100 DB queries) | SemaphoreSlim + double-check locking | | Sensitive data in Redis cache | Conditional AES-256 encryption via `[CacheEncrypted]` | ## Usage example [Section titled “Usage example”](#usage-example) ```csharp // Cache-aside is transparent to the caller ICacheService cache = serviceProvider .GetRequiredService<ICacheService>(); PatientDto patient = await cache.GetOrAddAsync( $"patient:{patientId}", async ct => await LoadPatientFromDbAsync(patientId, ct), cancellationToken); // 1st call -> DB + stores in cache // 2nd call -> returned from cache (L1 or L2) ``` ## Further reading [Section titled “Further reading”](#further-reading) * [Cache-Aside pattern — Microsoft Cloud Design Patterns](https://learn.microsoft.com/en-us/azure/architecture/patterns/cache-aside)
# Chain of Responsibility
> Ordered handler pipeline for tenant resolution and blob validation in Granit
## Definition [Section titled “Definition”](#definition) The Chain of Responsibility pattern passes a request along an ordered chain of handlers. Each handler decides whether to process the request or forward it to the next one. The first capable handler short-circuits the chain. ## Diagram [Section titled “Diagram”](#diagram) ``` sequenceDiagram participant P as TenantResolverPipeline participant HR as HeaderTenantResolver (100) participant JR as JwtClaimTenantResolver (200) participant CR as CustomResolver (300) P->>HR: ResolveAsync(httpContext) alt X-Tenant-Id header present HR-->>P: TenantInfo (short-circuit) else Header absent HR-->>P: null P->>JR: ResolveAsync(httpContext) alt JWT claim present JR-->>P: TenantInfo (short-circuit) else Claim absent JR-->>P: null P->>CR: ResolveAsync(httpContext) CR-->>P: TenantInfo or null end end ``` ## Implementation in Granit [Section titled “Implementation in Granit”](#implementation-in-granit) ### Tenant resolution [Section titled “Tenant resolution”](#tenant-resolution) | Component | File | Role | | ------------------------ | ------------------------------------------------------------- | --------------------------------------------------------- | | `TenantResolverPipeline` | `src/Granit.MultiTenancy/Pipeline/TenantResolverPipeline.cs` | Iterates `ITenantResolver` instances by ascending `Order` | | `HeaderTenantResolver` | `src/Granit.MultiTenancy/Resolvers/HeaderTenantResolver.cs` | Order=100, reads `X-Tenant-Id` | | `JwtClaimTenantResolver` | `src/Granit.MultiTenancy/Resolvers/JwtClaimTenantResolver.cs` | Order=200, reads the JWT claim | ### Blob validation [Section titled “Blob validation”](#blob-validation) | Component | File | Order | Role | | --------------------- | ---------------------------------------------------------- | ----- | ----------------------------- | | `MagicBytesValidator` | `src/Granit.BlobStorage/Validators/MagicBytesValidator.cs` | 10 | Verifies the actual MIME type | | `MaxSizeValidator` | 
`src/Granit.BlobStorage/Validators/MaxSizeValidator.cs` | 20 | Verifies the file size | The blob validation pipeline is a special case: **all** validators are executed (no short-circuit), but the order determines error message priority. ## Rationale [Section titled “Rationale”](#rationale) The tenant resolution chain allows adding resolvers (query string, cookie, subdomain) without modifying existing code. The ordering via the `Order` property is configurable without recompilation. ## Usage example [Section titled “Usage example”](#usage-example) ```csharp // Add a custom resolver -- inserts into the chain by Order public sealed class SubdomainTenantResolver : ITenantResolver { public int Order => 50; // Before HeaderTenantResolver (100) public Task<TenantInfo?> ResolveAsync(HttpContext context, CancellationToken cancellationToken) { string host = context.Request.Host.Host; TenantInfo? tenantInfo = null; // Extract tenant from subdomain... return Task.FromResult(tenantInfo); } } // Registration services.AddSingleton<ITenantResolver, SubdomainTenantResolver>(); ``` ## Further reading [Section titled “Further reading”](#further-reading) * [Chain of Responsibility — refactoring.guru](https://refactoring.guru/design-patterns/chain-of-responsibility)
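The pipeline side of the chain can be sketched as follows, assuming the `ITenantResolver` and `TenantInfo` shapes shown above. This is an illustrative model, not the actual `TenantResolverPipeline` source.

```csharp
using System.Collections.Generic;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public sealed class SketchTenantResolverPipeline
{
    private readonly IReadOnlyList<ITenantResolver> _resolvers;

    // DI hands in all registered ITenantResolver instances;
    // they are sorted once by ascending Order
    public SketchTenantResolverPipeline(IEnumerable<ITenantResolver> resolvers)
        => _resolvers = resolvers.OrderBy(r => r.Order).ToList();

    public async Task<TenantInfo?> ResolveAsync(
        HttpContext context, CancellationToken cancellationToken)
    {
        foreach (ITenantResolver resolver in _resolvers)
        {
            TenantInfo? tenant = await resolver.ResolveAsync(context, cancellationToken);
            if (tenant is not null)
                return tenant; // first capable handler short-circuits the chain
        }
        return null; // no resolver matched: caller decides (reject, default tenant...)
    }
}
```

Because ordering happens in the pipeline, adding the `SubdomainTenantResolver` from the usage example is purely additive: register it in DI and its `Order => 50` slots it before the header resolver.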
# Circuit Breaker and Retry
> Resilience for HTTP clients and async messaging with exponential backoff and automatic recovery
## Definition [Section titled “Definition”](#definition) The **Circuit Breaker** cuts calls to a failing service to prevent cascade saturation. **Retry** automatically replays failed requests with exponential backoff. Granit combines both via `Microsoft.Extensions.Http.Resilience` for outgoing HTTP calls and Wolverine `RetryWithCooldown` for asynchronous messages. ## Diagram [Section titled “Diagram”](#diagram) ``` stateDiagram-v2 [*] --> Closed Closed --> Open : Failure rate > threshold Open --> HalfOpen : Timeout expired HalfOpen --> Closed : Test request succeeded HalfOpen --> Open : Test request failed state Closed { [*] --> Normal Normal --> Retry : Transient failure Retry --> Normal : Success Retry --> Retry : Exponential backoff } ``` ``` sequenceDiagram participant S as Granit Service participant R as Resilience Handler participant E as External Service S->>R: POST /api/send-email R->>E: Attempt 1 E-->>R: 503 Service Unavailable R->>R: Backoff 1s R->>E: Attempt 2 E-->>R: 503 Service Unavailable R->>R: Backoff 2s R->>E: Attempt 3 E-->>R: 200 OK R-->>S: 200 OK ``` ## Implementation in Granit [Section titled “Implementation in Granit”](#implementation-in-granit) ### HTTP — AddStandardResilienceHandler [Section titled “HTTP — AddStandardResilienceHandler”](#http--addstandardresiliencehandler) Each `HttpClient` targeting an external service is configured with the .NET standard resilience handler: | Service | Registration file | | -------------------------- | ------------------------------------------------------------------------------------------------------------ | | Keycloak Admin API | `src/Granit.Identity.Keycloak/Extensions/IdentityKeycloakServiceCollectionExtensions.cs` | | Microsoft Graph (Entra ID) | `src/Granit.Identity.EntraId/Extensions/IdentityEntraIdServiceCollectionExtensions.cs` | | Brevo (email/SMS/WhatsApp) | `src/Granit.Notifications.Brevo/Extensions/BrevoNotificationsServiceCollectionExtensions.cs` | | Zulip (chat) | 
`src/Granit.Notifications.Zulip/Extensions/ZulipNotificationsServiceCollectionExtensions.cs` | | Firebase FCM (push) | `src/Granit.Notifications.MobilePush.GoogleFcm/Extensions/GoogleFcmMobilePushServiceCollectionExtensions.cs` | ```csharp services.AddHttpClient("KeycloakAdmin", client => { client.BaseAddress = new Uri(options.BaseUrl); client.Timeout = TimeSpan.FromSeconds(options.TimeoutSeconds); }) .AddStandardResilienceHandler(); ``` `AddStandardResilienceHandler()` automatically adds: * **Retry** — 3 attempts, exponential backoff on transient errors (429, 5xx) * **Circuit Breaker** — opens after exceeding the failure threshold over 30s * **Timeout** — per request (30s) and total (2min) * **Rate Limiter** — concurrency control ### Messaging — Wolverine RetryWithCooldown [Section titled “Messaging — Wolverine RetryWithCooldown”](#messaging--wolverine-retrywithcooldown) For asynchronous messages, Wolverine provides retry with progressive cooldown. Example with webhooks (6 levels, 30s to 12h): ```csharp opts.OnException<HttpRequestException>() .RetryWithCooldown( TimeSpan.FromSeconds(30), // Level 1 TimeSpan.FromMinutes(2), // Level 2 TimeSpan.FromMinutes(10), // Level 3 TimeSpan.FromMinutes(30), // Level 4 TimeSpan.FromHours(2), // Level 5 TimeSpan.FromHours(12)); // Level 6 -> Dead-Letter Queue ``` ### Resilience matrix by service [Section titled “Resilience matrix by service”](#resilience-matrix-by-service) | Service | HTTP Resilience | Messaging retry | Special behavior | | -------------- | ---------------------- | --------------------- | ----------------------------- | | Keycloak Admin | Standard handler | — | Graceful degradation on reads | | Brevo | Standard handler | Wolverine retry | — | | SMTP | Configurable timeout | Wolverine retry | — | | Web Push | Standard handler | Wolverine retry | Auto-cleanup on HTTP 410 | | Webhooks | Timeout 5-120s | 6 levels (30s to 12h) | Auto-suspend on 401/403/410 | | Vault | — | Lease renewal | Auto-refresh credentials | | S3 | AWS SDK 
built-in retry | — | Native SDK backoff | | OTLP | — | Buffer batch export | — | ### Configurable timeouts [Section titled “Configurable timeouts”](#configurable-timeouts) Each external service exposes a timeout via the Options pattern: | Options | Property | Default | Range | | ---------------------- | -------------------- | ------- | ----- | | `KeycloakAdminOptions` | `TimeoutSeconds` | 30 | — | | `BrevoOptions` | `TimeoutSeconds` | 30 | 1—300 | | `SmtpOptions` | `TimeoutSeconds` | 30 | — | | `WebhooksOptions` | `HttpTimeoutSeconds` | 10 | 5—120 | ### Reference files [Section titled “Reference files”](#reference-files) | File | Role | | -------------------------------------------------------------------------------------------- | ------------------------------- | | `src/Granit.Identity.Keycloak/Extensions/IdentityKeycloakServiceCollectionExtensions.cs` | Standard resilience on Keycloak | | `src/Granit.Notifications.Brevo/Extensions/BrevoNotificationsServiceCollectionExtensions.cs` | Standard resilience on Brevo | | `src/Granit.Webhooks/Extensions/WebhooksHostApplicationBuilderExtensions.cs` | RetryWithCooldown 6 levels | ## Rationale [Section titled “Rationale”](#rationale) | Problem | Solution | | --------------------------------------------------- | ------------------------------------------------------------------ | | Temporarily down external service = cascade of 500s | Circuit Breaker cuts calls, prevents saturation | | Transient network error = data loss | Retry with backoff replays automatically | | Webhook endpoint down for hours | 6 progressive levels (30s to 12h) before dead-letter | | Expired token on an external service | Auto-refresh via Vault lease renewal | | `new HttpClient()` without resilience | `IHttpClientFactory` + systematic `AddStandardResilienceHandler()` | ## Usage example [Section titled “Usage example”](#usage-example) ```csharp // --- Registration with standard resilience --- services.AddHttpClient<GeoService>(client => { client.BaseAddress = new Uri("https://api.geo.example.com"); }) .AddStandardResilienceHandler(); // --- The service has no awareness of the resilience --- public sealed class GeoService(HttpClient httpClient) { public async Task<GeoResult?> GeocodeAsync( string address, CancellationToken cancellationToken = default) { // Retry + Circuit Breaker + Timeout are transparent return await httpClient .GetFromJsonAsync<GeoResult>( $"/geocode?q={Uri.EscapeDataString(address)}", cancellationToken) .ConfigureAwait(false); } } ``` ## Further reading [Section titled “Further reading”](#further-reading) * [Circuit Breaker pattern — Microsoft Cloud Design Patterns](https://learn.microsoft.com/en-us/azure/architecture/patterns/circuit-breaker) * [Retry pattern — Microsoft Cloud Design Patterns](https://learn.microsoft.com/en-us/azure/architecture/patterns/retry)
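When the defaults listed earlier need adjusting, `AddStandardResilienceHandler` also accepts an options delegate. The values below are illustrative, not Granit's configuration; `DelayBackoffType` comes from the `Polly` namespace, and the option property names are those of `HttpStandardResilienceOptions` in Microsoft.Extensions.Http.Resilience.

```csharp
using Polly; // DelayBackoffType

services.AddHttpClient("brevo")
    .AddStandardResilienceHandler(options =>
    {
        // Retry: 3 attempts with exponential backoff on transient errors (429, 5xx)
        options.Retry.MaxRetryAttempts = 3;
        options.Retry.BackoffType = DelayBackoffType.Exponential;

        // Circuit breaker: open when more than 10% of sampled calls fail
        options.CircuitBreaker.FailureRatio = 0.1;
        options.CircuitBreaker.SamplingDuration = TimeSpan.FromSeconds(60);

        // Per-attempt and total timeouts (total must cover all retry attempts)
        options.AttemptTimeout.Timeout = TimeSpan.FromSeconds(30);
        options.TotalRequestTimeout.Timeout = TimeSpan.FromMinutes(2);
    });
```

Note that the handler validates its options at startup (for instance, the total timeout must exceed the attempt timeout), so inconsistent values fail fast rather than silently misbehaving.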
# Claim Check
> Replace large message payloads with lightweight references to external storage
## Definition [Section titled “Definition”](#definition) The **Claim Check** (or Reference-Based Messaging) replaces large payloads in messages with a lightweight reference to external storage. The producer serializes the payload, stores it in a blob store or cache, and sends only a reference identifier. The consumer uses this reference to retrieve the full payload before processing. This pattern reduces message size, decreases pressure on the message bus, and avoids transport size limit overflows. ## Diagram [Section titled “Diagram”](#diagram) ``` sequenceDiagram participant P as Producer participant S as IClaimCheckStore participant Bus as Wolverine Bus participant C as Consumer P->>S: StorePayloadAsync(largePayload) S-->>P: ClaimCheckReference (Guid) P->>Bus: Command (Reference) Bus->>C: HandleAsync(command) C->>S: RetrievePayloadAsync(reference) S-->>C: largePayload C->>C: Processing C->>S: DeleteAsync(reference) [optional] ``` ``` flowchart LR A[Large payload] --> B{Size > threshold?} B -- Yes --> C[Store -> Reference] B -- No --> D[Direct message] C --> E[Lightweight message + reference] E --> F[Consumer retrieve] F --> G[Processing] ``` ## Implementation in Granit [Section titled “Implementation in Granit”](#implementation-in-granit) Granit provides an `IClaimCheckStore` abstraction in `Granit.Wolverine` with a **soft dependency**: if no implementation is registered in the DI container, the feature is simply unavailable. Handlers resolve the store via `IServiceProvider.GetService<IClaimCheckStore>()`. ### Abstraction [Section titled “Abstraction”](#abstraction) | Element | Detail | | --------------- | ------------------------------------------------------ | | Interface | `IClaimCheckStore` | | Package | `Granit.Wolverine` | | Methods | `StoreAsync`, `RetrieveAsync`, `DeleteAsync` | | Soft dependency | Resolved via `GetService<IClaimCheckStore>()`, no `[DependsOn]` required | ```csharp public interface IClaimCheckStore { Task<Guid> StoreAsync( ReadOnlyMemory<byte> data, string? contentType = null, TimeSpan? expiry = null, CancellationToken cancellationToken = default); Task<ReadOnlyMemory<byte>?> RetrieveAsync( Guid referenceId, CancellationToken cancellationToken = default); Task DeleteAsync( Guid referenceId, CancellationToken cancellationToken = default); } ``` ### Typed extensions [Section titled “Typed extensions”](#typed-extensions) `ClaimCheckExtensions` handle JSON serialization automatically: | Method | Role | | ------------------------- | --------------------------------------------------- | | `StorePayloadAsync<T>` | Serializes to UTF-8 JSON and stores | | `RetrievePayloadAsync<T>` | Retrieves and deserializes | | `ConsumePayloadAsync<T>` | Retrieves, deserializes, and deletes (consume-once) | ### Reference [Section titled “Reference”](#reference) `ClaimCheckReference` is an immutable record carrying the storage identifier, the payload type, and the content type: ```csharp public sealed record ClaimCheckReference( Guid ReferenceId, string PayloadType, string ContentType = "application/json"); ``` ### Available implementations [Section titled “Available implementations”](#available-implementations) | Implementation | Package | Usage | | --------------------------- | ------------------ | ---------------------------------- | | `InMemoryClaimCheckStore` | `Granit.Wolverine` | Development and tests | | BlobStorage-backed (custom) | Application | Production (S3, Azure Blob, Redis) | The in-memory implementation uses a `ConcurrentDictionary` and does not enforce expiry. In production, the application registers its own implementation via DI. ### Soft dependency pattern [Section titled “Soft dependency pattern”](#soft-dependency-pattern) The Claim Check follows the same pattern as `IFeatureChecker` in `Granit.RateLimiting`: 1. The interface is defined in the framework package (`Granit.Wolverine`) 2. No implementation is registered by default by `AddGranitWolverine()` 3. Handlers that need it resolve via `GetService<IClaimCheckStore>()` 4. 
If `Granit.BlobStorage` is installed, the application can register a store backed by blob storage 5. If no store is registered, the feature is disabled ### Reference files [Section titled “Reference files”](#reference-files) | File | Role | | -------------------------------------------------------------------------- | ------------------ | | `src/Granit.Wolverine/ClaimCheck/IClaimCheckStore.cs` | Store abstraction | | `src/Granit.Wolverine/ClaimCheck/ClaimCheckReference.cs` | Reference record | | `src/Granit.Wolverine/ClaimCheck/ClaimCheckExtensions.cs` | Typed JSON helpers | | `src/Granit.Wolverine/ClaimCheck/Internal/InMemoryClaimCheckStore.cs` | Dev/test store | | `src/Granit.Wolverine/ClaimCheck/ClaimCheckServiceCollectionExtensions.cs` | DI registration | ## Rationale [Section titled “Rationale”](#rationale) | Problem | Solution | | ------------------------------------------------ | ------------------------------------------------------------------- | | Wolverine message > 1 MB = transport pressure | Payload stored externally, message reduced to a Guid | | GDPR export with large data in saga state | Only the `BlobReferenceId` is stored (ISO 27001) | | Tight coupling between Wolverine and BlobStorage | Soft dependency — works without BlobStorage installed | | Temporary payload forgotten in the store | `ConsumePayloadAsync` (consume-once) + configurable TTL | | Manual serialization/deserialization | Typed extensions `StorePayloadAsync<T>` / `RetrievePayloadAsync<T>` | ## Usage example [Section titled “Usage example”](#usage-example) ```csharp // --- Producer: offload a large payload --- public sealed record ProcessMedicalRecordCommand( Guid PatientId, ClaimCheckReference RecordDataRef); // In a handler or service IClaimCheckStore claimCheckStore = serviceProvider .GetService<IClaimCheckStore>() ?? throw new InvalidOperationException("Claim check store not configured."); MedicalRecordData largePayload = await BuildLargePayloadAsync(patientId, cancellationToken) .ConfigureAwait(false); ClaimCheckReference reference = await claimCheckStore .StorePayloadAsync(largePayload, TimeSpan.FromHours(1), cancellationToken) .ConfigureAwait(false); await messageBus.PublishAsync( new ProcessMedicalRecordCommand(patientId, reference), cancellationToken).ConfigureAwait(false); // --- Consumer: retrieve and consume --- public static async Task HandleAsync( ProcessMedicalRecordCommand command, IClaimCheckStore claimCheckStore, CancellationToken cancellationToken) { MedicalRecordData? data = await claimCheckStore .ConsumePayloadAsync<MedicalRecordData>( command.RecordDataRef, cancellationToken) .ConfigureAwait(false) ?? throw new InvalidOperationException("Payload expired or already consumed."); // Process the medical record... } // --- DI registration (application) --- // Development: builder.Services.AddInMemoryClaimCheckStore(); // Production (custom implementation): builder.Services.AddSingleton<IClaimCheckStore, MyBlobClaimCheckStore>(); ``` ## Further reading [Section titled “Further reading”](#further-reading) * [Claim-Check pattern — Microsoft Cloud Design Patterns](https://learn.microsoft.com/en-us/azure/architecture/patterns/claim-check) * [Enterprise Integration Patterns — Claim Check](https://www.enterpriseintegrationpatterns.com/patterns/messaging/StoreInLibrary.html)
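To make the dev/test store concrete, a minimal in-memory implementation along the lines described above might look like this. It is a sketch, not the shipped `InMemoryClaimCheckStore`; like it, the expiry parameter is accepted but not enforced.

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading;
using System.Threading.Tasks;

// Dev/test store: ConcurrentDictionary-backed, no TTL enforcement
public sealed class SketchInMemoryClaimCheckStore : IClaimCheckStore
{
    private readonly ConcurrentDictionary<Guid, byte[]> _payloads = new();

    public Task<Guid> StoreAsync(
        ReadOnlyMemory<byte> data,
        string? contentType = null,
        TimeSpan? expiry = null, // accepted for signature parity, ignored here
        CancellationToken cancellationToken = default)
    {
        Guid id = Guid.NewGuid();
        _payloads[id] = data.ToArray(); // copy: caller may reuse its buffer
        return Task.FromResult(id);
    }

    public Task<ReadOnlyMemory<byte>?> RetrieveAsync(
        Guid referenceId, CancellationToken cancellationToken = default)
    {
        if (_payloads.TryGetValue(referenceId, out byte[]? data))
            return Task.FromResult<ReadOnlyMemory<byte>?>(data);
        return Task.FromResult<ReadOnlyMemory<byte>?>(null); // unknown or expired
    }

    public Task DeleteAsync(Guid referenceId, CancellationToken cancellationToken = default)
    {
        _payloads.TryRemove(referenceId, out _); // idempotent delete
        return Task.CompletedTask;
    }
}
```

A production store would replace the dictionary with blob storage or Redis and honor `expiry`, which is precisely why the framework leaves the registration to the application.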
# Claims-Based Identity / RBAC
> How Granit combines JWT Keycloak authentication with strict role-based access control
## Definition [Section titled “Definition”](#definition)

The Claims-Based Identity pattern represents a user’s identity as claims (key-value pairs) extracted from a JWT token. RBAC (Role-Based Access Control) restricts access by verifying that the user’s role holds the required permissions. Granit combines Keycloak-issued JWTs with strict RBAC (permissions on roles only, never per user) and a dynamic policy system.

## Diagram [Section titled “Diagram”](#diagram)

```
sequenceDiagram
    participant C as Client
    participant KC as Keycloak
    participant API as Granit API
    participant CT as ClaimsTransformation
    participant PC as PermissionChecker
    participant PS as PermissionGrantStore
    C->>KC: Authentication
    KC-->>C: JWT (access_token)
    C->>API: Request + Bearer token
    API->>CT: KeycloakClaimsTransformation
    CT->>CT: Extract realm_access.roles from JWT
    CT->>CT: Add roles as claims
    API->>PC: Check permission "Patients.Create"
    PC->>PS: GetGrantedPermissionsAsync(roles, tenantId)
    PS-->>PC: Permission list
    alt Permission granted
        PC-->>API: true
    else Permission denied
        PC-->>API: false (403 Forbidden)
    end
```

## Implementation in Granit [Section titled “Implementation in Granit”](#implementation-in-granit)

### Authentication layer [Section titled “Authentication layer”](#authentication-layer)

| Component | File | Role |
| --- | --- | --- |
| `ICurrentUserService` | `src/Granit.Security/ICurrentUserService.cs` | `UserId`, `UserName`, `Email`, `GetRoles()`, `IsInRole()` |
| `KeycloakClaimsTransformation` | `src/Granit.Authentication.Keycloak/Authentication/KeycloakClaimsTransformation.cs` | Extracts `realm_access.roles` from the Keycloak JWT |
| `WolverineCurrentUserService` | `src/Granit.Wolverine/Internal/WolverineCurrentUserService.cs` | `AsyncLocal` fallback for background handlers |

### Authorization layer [Section titled “Authorization layer”](#authorization-layer)

| Component | File | Role |
| --- | --- | --- |
| `DynamicPermissionPolicyProvider` | `src/Granit.Authorization/Authorization/DynamicPermissionPolicyProvider.cs` | Creates `AuthorizationPolicy` on the fly from permission names |
| `PermissionChecker` | `src/Granit.Authorization/Services/PermissionChecker.cs` | Evaluates permissions against role grants |
| `IPermissionDefinitionProvider` | `src/Granit.Authorization/Definitions/IPermissionDefinitionProvider.cs` | Permission declaration (code-first) |
| `EfCorePermissionGrantStore` | `src/Granit.Authorization.EntityFrameworkCore/Stores/EfCorePermissionGrantStore.cs` | EF Core grant persistence |

### Strict RBAC [Section titled “Strict RBAC”](#strict-rbac)

* Permissions are assigned to **roles**, never to individual users
* The cache is per **role** (not per user) for performance
* `AdminRoles` bootstraps roles with all permissions at startup

## Rationale [Section titled “Rationale”](#rationale)

| Problem | Solution |
| --- | --- |
| Keycloak returns roles in a custom format (`realm_access`) | `KeycloakClaimsTransformation` normalizes to standard claims |
| Background handlers have no `HttpContext` | `WolverineCurrentUserService` maintains user via `AsyncLocal` |
| Registering one static policy per permission would be unmanageable | `DynamicPermissionPolicyProvider` creates policies on demand |
| Per-user grants do not scale (10K users x 100 permissions) | Strict RBAC: grants on roles (10 roles x 100 permissions) |

## Usage example [Section titled “Usage example”](#usage-example)

```csharp
// Declare permissions (code-first)
public sealed class PatientPermissionDefinitionProvider : IPermissionDefinitionProvider
{
    public void DefinePermissions(IPermissionDefinitionContext context)
    {
        PermissionGroup group = context.AddGroup("Patients");
        group.AddPermission("Patients.Create");
        group.AddPermission("Patients.Read");
        group.AddPermission("Patients.Delete");
    }
}
```

> Localized `displayName` values are optional in this simplified example. See the full authorization documentation for adding `LocalizableString`.

```csharp
// Protect an endpoint
app.MapPost("/api/patients", CreatePatientEndpoint.Handle)
    .RequireAuthorization("Patients.Create");

// Programmatic check
public static class DischargePatientHandler
{
    public static async Task Handle(
        DischargePatientCommand cmd,
        IPermissionChecker permissionChecker,
        CancellationToken cancellationToken)
    {
        bool canDischarge = await permissionChecker.IsGrantedAsync(
            "Patients.Discharge", cancellationToken);
        if (!canDischarge)
            throw new ForbiddenException();
    }
}
```

## Further reading [Section titled “Further reading”](#further-reading)

* [Federated Identity pattern — Microsoft Cloud Design Patterns](https://learn.microsoft.com/en-us/azure/architecture/patterns/federated-identity)
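The strict-RBAC rationale above (grants held per role, cache keyed per role and tenant rather than per user) can be sketched in isolation. This is a hypothetical minimal model, not Granit’s actual `PermissionChecker` or `EfCorePermissionGrantStore` API — all type and member names here are illustrative:

```csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.Linq;

// Hypothetical grant store: permissions attach to roles, never to users.
public interface IPermissionGrantStore
{
    IReadOnlySet<string> GetGrantedPermissions(string role, Guid tenantId);
}

public sealed class InMemoryPermissionGrantStore : IPermissionGrantStore
{
    private readonly Dictionary<string, HashSet<string>> _grants;

    public InMemoryPermissionGrantStore(Dictionary<string, HashSet<string>> grants)
        => _grants = grants;

    public IReadOnlySet<string> GetGrantedPermissions(string role, Guid tenantId)
        => _grants.TryGetValue(role, out var set) ? set : new HashSet<string>();
}

public sealed class PermissionChecker
{
    private readonly IPermissionGrantStore _store;

    // Cache keyed per (role, tenant): ~10 roles x 100 permissions per tenant,
    // instead of 10K users x 100 permissions with per-user grants.
    private readonly ConcurrentDictionary<(string Role, Guid Tenant), IReadOnlySet<string>> _cache = new();

    public PermissionChecker(IPermissionGrantStore store) => _store = store;

    // A user is granted a permission if any of their roles holds it.
    public bool IsGranted(IEnumerable<string> userRoles, Guid tenantId, string permission)
        => userRoles.Any(role =>
            _cache.GetOrAdd((role, tenantId),
                    key => _store.GetGrantedPermissions(key.Role, key.Tenant))
                .Contains(permission));
}
```

Because the cache key is the role, a grant change invalidates at most one small entry per tenant, regardless of how many users hold that role.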
# Command
> Wolverine message-based commands with transactional outbox in Granit
## Definition [Section titled “Definition”](#definition)

The Command pattern encapsulates a request as an object, enabling parameterization, queuing, logging, and undo of operations. The command is a serializable DTO that decouples the sender from the executor. In Granit, commands are Wolverine messages processed by automatically discovered handlers.

## Diagram [Section titled “Diagram”](#diagram)

```
sequenceDiagram
    participant E as Sender
    participant BUS as Wolverine Bus
    participant OB as Outbox
    participant H as Handler
    E->>BUS: Publish SendWebhookCommand
    BUS->>OB: Persist in Outbox
    Note over OB: Atomic transaction
    OB->>H: Dispatch post-commit
    H->>H: SendWebhookHandler.Handle()
```

## Implementation in Granit [Section titled “Implementation in Granit”](#implementation-in-granit)

| Command | File | Handler |
| --- | --- | --- |
| `SendWebhookCommand` | `src/Granit.Webhooks/Messages/SendWebhookCommand.cs` | `SendWebhookHandler` |
| `RunMigrationBatchCommand` | `src/Granit.Persistence.Migrations/Messages/RunMigrationBatchCommand.cs` | `RunMigrationBatchHandler` |

Commands are plain C# classes (serializable DTOs). Wolverine discovers handlers by naming convention (`Handle()` method).

## Rationale [Section titled “Rationale”](#rationale)

Commands decouple the sender (HTTP handler) from the executor (background handler). Serialization via the Outbox guarantees delivery even in case of crash. Wolverine’s automatic retry handles transient failures.
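The delivery guarantee described above rests on the transactional-outbox idea: commands are staged inside the business transaction and dispatched only after commit. The following is a hypothetical minimal model of that mechanism — not Wolverine’s actual API; all names here are illustrative:

```csharp
using System;
using System.Collections.Generic;

// Hypothetical sketch of a transactional outbox: messages staged during a
// transaction become durable on commit and vanish on rollback, so a handler
// never sees a command whose originating transaction did not complete.
public sealed class Outbox
{
    private readonly List<object> _staged = new();
    private readonly List<object> _committed = new();

    // Called inside the business transaction instead of publishing directly.
    public void Stage(object message) => _staged.Add(message);

    // Transaction committed: staged messages become durable (in Granit's
    // case, persisted in the same database transaction as the entities).
    public void Commit()
    {
        _committed.AddRange(_staged);
        _staged.Clear();
    }

    // Transaction rolled back: staged messages are discarded with it.
    public void Rollback() => _staged.Clear();

    // A background relay drains durable messages and dispatches them to
    // handlers; after a crash, undrained messages are simply picked up again.
    public IReadOnlyList<object> Drain()
    {
        var batch = new List<object>(_committed);
        _committed.Clear();
        return batch;
    }
}
```

The key property is atomicity: because the message and the business data share one transaction, there is no window where the entity exists but its command was lost, or vice versa.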
## Usage example [Section titled “Usage example”](#usage-example)

```csharp
// Define a command
public sealed class SendInvoiceEmailCommand
{
    public required Guid InvoiceId { get; init; }
    public required string RecipientEmail { get; init; }
}

// The handler is discovered automatically by Wolverine
public static class SendInvoiceEmailHandler
{
    public static async Task Handle(
        SendInvoiceEmailCommand command,
        IEmailService emailService,
        CancellationToken cancellationToken)
    {
        await emailService.SendInvoiceAsync(
            command.InvoiceId, command.RecipientEmail, cancellationToken);
    }
}

// Emission from an HTTP handler
public static class CreateInvoiceHandler
{
    public static IEnumerable