AI Workspace

The AI Workspace pattern provides named, per-tenant AI provider configurations resolved at runtime through a central factory. It is the IHttpClientFactory pattern applied to AI clients: each named workspace encapsulates a provider (OpenAI, AzureOpenAI, Anthropic, Ollama), model selection, API credentials, and usage limits. Callers request a workspace by name; the factory resolves the correct IChatClient and IEmbeddingGenerator transparently.

This decouples application logic from the choice of provider: switching a tenant from GPT-4o to Claude requires only a configuration change, not a code change.

flowchart TD
    APP[Application handler] --> FAC[IAIChatClientFactory]

    subgraph Resolution["Factory resolution"]
        FAC --> WSCFG[WorkspaceConfiguration<br/>per-tenant DB row]
        WSCFG --> PROV{Provider?}
        PROV -- OpenAI --> OAI[Granit.AI.OpenAI<br/>IChatClient]
        PROV -- AzureOpenAI --> AOAI[Granit.AI.AzureOpenAI<br/>IChatClient]
        PROV -- Anthropic --> ANT[Granit.AI.Anthropic<br/>IChatClient]
        PROV -- Ollama --> OLL[Granit.AI.Ollama<br/>IChatClient]
    end

    OAI --> CHAT[IChatClient]
    AOAI --> CHAT
    ANT --> CHAT
    OLL --> CHAT

    CHAT --> RESULT[Typed response]

Granit.AI registers IAIChatClientFactory via AddGranitAI(). Provider packages register their factory implementations (e.g., AddGranitAIOpenAI()).
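Putting the two registration calls together, startup might look like the following sketch. Whether AddGranitAI accepts a configuration section, and the exact names of the Azure OpenAI registration method, are assumptions beyond what this page states:

```csharp
// Program.cs — minimal registration sketch (assumed overloads).
var builder = WebApplication.CreateBuilder(args);

// Bind the "AI" configuration section (shown below) and register
// IAIChatClientFactory from Granit.AI.
builder.Services.AddGranitAI(builder.Configuration.GetSection("AI"));

// Register the provider factories the configured workspaces need.
builder.Services.AddGranitAIOpenAI();
builder.Services.AddGranitAIAzureOpenAI();

var app = builder.Build();
app.Run();
```

Only providers that are actually registered can be resolved; a workspace pointing at an unregistered provider should fail at resolution time rather than at startup.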

Granit.AI
public interface IAIChatClientFactory
{
    Task<AIWorkspace> CreateAsync(
        string workspaceId,
        CancellationToken ct = default);
}

public sealed record AIWorkspace(
    IChatClient Chat,
    IEmbeddingGenerator<string, Embedding<float>>? Embeddings,
    AIWorkspaceOptions Options);

Workspaces are defined per application (or per tenant) in appsettings.json or in the database via Granit.AI.EntityFrameworkCore:

{
  "AI": {
    "Workspaces": {
      "default": {
        "Provider": "OpenAI",
        "Model": "gpt-4o",
        "EmbeddingModel": "text-embedding-3-small"
      },
      "compliance": {
        "Provider": "AzureOpenAI",
        "Endpoint": "https://my-eu.openai.azure.com/",
        "Deployment": "gpt-4o-eu",
        "EmbeddingModel": "text-embedding-3-large"
      },
      "local-dev": {
        "Provider": "Ollama",
        "Model": "llama3.2"
      }
    }
  }
}
public class SummaryService(IAIChatClientFactory factory)
{
    public async Task<string> SummarizeAsync(
        string text, string tenantWorkspaceId, CancellationToken ct)
    {
        var workspace = await factory.CreateAsync(tenantWorkspaceId, ct);
        var response = await workspace.Chat.CompleteAsync(
            [
                new ChatMessage(ChatRole.System, "Summarize the following text in 3 bullet points."),
                new ChatMessage(ChatRole.User, text),
            ], cancellationToken: ct);
        return response.Message.Text ?? string.Empty;
    }
}
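The resolved AIWorkspace also carries the optional embedding generator declared on the record above. A hedged sketch of using it, where the null check reflects workspaces (like local-dev above) that configure no EmbeddingModel; GenerateAsync and Embedding<float>.Vector come from Microsoft.Extensions.AI:

```csharp
public class SearchIndexService(IAIChatClientFactory factory)
{
    public async Task<float[]> EmbedAsync(
        string text, string workspaceId, CancellationToken ct)
    {
        var workspace = await factory.CreateAsync(workspaceId, ct);

        // Embeddings is nullable: not every workspace configures an EmbeddingModel.
        if (workspace.Embeddings is null)
            throw new InvalidOperationException(
                $"Workspace '{workspaceId}' has no EmbeddingModel configured.");

        var embeddings = await workspace.Embeddings.GenerateAsync(
            [text], cancellationToken: ct);
        return embeddings[0].Vector.ToArray();
    }
}
```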

When Granit.MultiTenancy is registered, IAIChatClientFactory resolves the workspace configuration for the current tenant automatically — overriding the application-level default with a tenant-specific provider or model.
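Because the override happens inside the factory, calling code stays tenant-agnostic. A sketch of what that looks like from a handler's perspective; the fallback behavior described in the comment is an assumption consistent with the paragraph above:

```csharp
public class TenantAwareHandler(IAIChatClientFactory factory)
{
    public async Task<string> HandleAsync(string prompt, CancellationToken ct)
    {
        // With Granit.MultiTenancy registered, "default" resolves to the
        // current tenant's override when one exists, and falls back to the
        // application-level "default" workspace otherwise.
        var workspace = await factory.CreateAsync("default", ct);

        var response = await workspace.Chat.CompleteAsync(
            [new ChatMessage(ChatRole.User, prompt)], cancellationToken: ct);
        return response.Message.Text ?? string.Empty;
    }
}
```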

[DependsOn(typeof(GranitAIModule))]
[DependsOn(typeof(GranitAIOpenAIModule))] // add providers as needed
[DependsOn(typeof(GranitAIAzureOpenAIModule))]
public class AppModule : GranitModule { }
File | Role
src/Granit.AI/IAIChatClientFactory.cs | Factory interface
src/Granit.AI/AIWorkspace.cs | Resolved workspace record
src/Granit.AI/GranitAIOptions.cs | Top-level options with workspace dictionary
src/Granit.AI.EntityFrameworkCore/ | DB-persisted workspace configurations
Problem | AI Workspace solution
Hard-coded provider in application code | Named workspaces resolved at runtime
Same provider for all tenants | Per-tenant workspace override
API key management scattered across services | Centralized in workspace config + Vault
Switching provider requires code change | Configuration-only change
Test isolation | Inject Ollama workspace in tests, OpenAI in production
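The test-isolation row can be sketched with a hand-rolled fake factory. This is an illustration only: the AIWorkspaceOptions constructor shape and the use of a stub IChatClient in place of a real Ollama-backed client are assumptions, not the library's testing API:

```csharp
// A fake factory that ignores the workspace name and always returns the
// injected chat client — enough to exercise SummaryService in a unit test.
public sealed class FakeWorkspaceFactory(IChatClient chat) : IAIChatClientFactory
{
    public Task<AIWorkspace> CreateAsync(
        string workspaceId, CancellationToken ct = default)
        => Task.FromResult(new AIWorkspace(
            chat,
            Embeddings: null,
            new AIWorkspaceOptions())); // options shape assumed

    // Usage in a test (hand-written stub or a local Ollama-backed client):
    // var service = new SummaryService(new FakeWorkspaceFactory(stubChatClient));
}
```

Production wiring stays untouched: the real factory keeps resolving OpenAI or Azure OpenAI workspaces from configuration.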