
Access Anomaly Detection — AI-Powered Auth

Traditional authorization checks are binary: allowed or denied. They cannot detect that a user who is allowed to read invoices is reading 800 of them in 5 minutes at 2am — which is suspicious even if each individual check passes.

Granit.Authorization.AI adds IAIAccessAnomalyDetector: an optional decorator on the permission check pipeline that scores behavioral risk without blocking access.

The detector is called after the permission check succeeds. It analyzes three signals:

| Signal | What’s passed to the LLM |
|---|---|
| `userId` | Pseudonymized (SHA-256 hash) — the real user ID never leaves the system |
| `permission` | The permission being checked (e.g. `invoices.read`) |
| `context` | Optional JSON: resource type, count, time, previous access patterns |

It returns an AccessRiskScore that your code can act on — log, alert, or throttle — without blocking the operation:

```csharp
public sealed record AccessRiskScore(
    double Score,                       // 0.0 (normal) – 1.0 (high risk), clamped
    string Reasoning,                   // "800 reads in 5 minutes at 2am"
    IReadOnlyList<string> RiskFactors)  // ["unusual time", "high volume", "bulk export"]
{
    public bool IsAdvisoryOnly => true; // MUST NOT be used as sole deny factor
}
```

Enable the detector by depending on the authorization AI module together with an AI provider module:

```csharp
[DependsOn(
    typeof(GranitAuthorizationAIModule),
    typeof(GranitAIOpenAIModule))]
public class AppModule : GranitModule { }
```

Note the timeout (`TimeoutSeconds`, default 5 seconds): access evaluation is in the request path. The design fails open: on timeout, the detector returns `Score = UnavailableRiskScore` (default 0.5) and access proceeds normally.
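The fail-open behavior described above can be pictured as a timeout wrapper around the LLM call. This is an illustrative sketch only, not the library's actual implementation; the `EvaluateWithFallbackAsync` name and `llmCall` delegate are assumptions for the example:

```csharp
// Illustrative sketch only — not the library's actual implementation.
public static async Task<AccessRiskScore> EvaluateWithFallbackAsync(
    Func<CancellationToken, Task<AccessRiskScore>> llmCall,
    double unavailableRiskScore, // from options, default 0.5
    TimeSpan timeout,            // from TimeoutSeconds, default 5s
    CancellationToken ct)
{
    using var cts = CancellationTokenSource.CreateLinkedTokenSource(ct);
    cts.CancelAfter(timeout);
    try
    {
        return await llmCall(cts.Token).ConfigureAwait(false);
    }
    catch (Exception) // timeout or provider error: never propagate to the caller
    {
        return new AccessRiskScore(
            Score: unavailableRiskScore,
            Reasoning: "AI evaluation unavailable — flagged for manual review",
            RiskFactors: Array.Empty<string>());
    }
}
```

The key property is that every code path returns a score, so an outage of the AI provider can never block an otherwise permitted request.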

Inject IAIAccessAnomalyDetector alongside your permission checks. The recommended pattern is a decorator on your data access layer:

```csharp
public class AnomalyAwareInvoiceRepository(
    IInvoiceRepository inner,
    IAIAccessAnomalyDetector detector,
    ICurrentUserService currentUser,
    ISecurityAlerter alerter) : IInvoiceRepository
{
    public async Task<PagedResult<Invoice>> SearchAsync(
        InvoiceQuery query,
        CancellationToken ct)
    {
        PagedResult<Invoice> result = await inner.SearchAsync(query, ct)
            .ConfigureAwait(false);

        // Evaluate after the query — never blocks the response
        AccessRiskScore risk = await detector.EvaluateAccessAsync(
            userId: currentUser.UserId,
            permission: "invoices.read",
            context: JsonSerializer.Serialize(new
            {
                ResultCount = result.TotalCount,
                HasExport = query.ExportRequested,
                Hour = DateTimeOffset.UtcNow.Hour,
            }),
            ct).ConfigureAwait(false);

        if (risk.Score > 0.7)
        {
            await alerter.RaiseAlertAsync(currentUser.UserId, risk, ct)
                .ConfigureAwait(false);
        }

        return result;
    }
}
```
| Score range | Suggested action |
|---|---|
| 0.0 – 0.3 | Normal — no action |
| 0.3 – 0.6 | Log and monitor |
| 0.6 – 0.8 | Alert security team |
| 0.8 – 1.0 | Alert + throttle or require re-authentication |

These are starting points — tune to your application’s access patterns.
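The thresholds above can be encoded as a simple policy function. The following is a sketch; the `AccessAction` enum and `MapRiskToAction` helper are illustrative names, not part of the library:

```csharp
// Illustrative policy mapping — tune the thresholds to your own access patterns.
public enum AccessAction { None, LogAndMonitor, AlertSecurity, AlertAndThrottle }

public static class RiskPolicy
{
    public static AccessAction MapRiskToAction(double score) => score switch
    {
        <= 0.3 => AccessAction.None,
        <= 0.6 => AccessAction.LogAndMonitor,
        <= 0.8 => AccessAction.AlertSecurity,
        _      => AccessAction.AlertAndThrottle,
    };
}
```

Keeping the mapping in one place makes it easy to adjust the cut-offs later without touching every call site.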

The LLM is not aware of what’s “normal” for your application by default. Use the context parameter to provide relevant behavioral signals:

```csharp
// Good context — gives the LLM useful signals
var context = JsonSerializer.Serialize(new
{
    ResourceType = "Invoice",
    Count = 847,            // How many records accessed
    IsExport = true,        // Bulk export flag
    LocalHour = 2,          // Hour of day (local time)
    DayOfWeek = "Sunday",
    IsFirstAccess = false,
    TenantPlan = "starter", // Expected volume for this plan
});
```

The more context you provide, the more accurate the scoring.

Flagged access events can be posted to the Timeline for audit trail compliance:

```csharp
if (risk.Score > 0.6)
{
    await timelineWriter.PostEntryAsync(
        entityType: "User",
        entityId: currentUser.UserId,
        entryType: TimelineEntryType.SystemLog,
        body: $"[Security] Suspicious access detected: {risk.Reasoning} " +
              $"(score: {risk.Score:F2}, factors: {string.Join(", ", risk.RiskFactors)})",
        parentEntryId: null,
        ct).ConfigureAwait(false);
}
```

Authorization must never be blocked by an AI timeout. When the LLM is unavailable (timeout, exception), the detector returns a configurable uncertainty score:

  • Score = UnavailableRiskScore (default 0.5 — “uncertain”)
  • Reasoning = "AI evaluation unavailable — flagged for manual review"
  • RiskFactors = []

The default 0.5 signals “uncertain — apply your policy”. Callers should decide:

| `UnavailableRiskScore` | Behavior | When to use |
|---|---|---|
| 0.0 | Fail-open — treat as normal | High-volume, non-sensitive APIs |
| 0.5 (default) | Uncertain — log and monitor | General-purpose (recommended) |
| 1.0 | Fail-closed — treat as suspicious | Financial, medical, or compliance-sensitive access |
The fallback score is set via configuration:

```json
{
  "AI": {
    "Authorization": {
      "UnavailableRiskScore": 0.5
    }
  }
}
```

A warning is logged on every unavailability event. The detector never becomes a denial-of-service vector — it always returns a score, never throws.

All user-controlled inputs are sanitized before being sent to the LLM:

  • userId — pseudonymized via LlmInputSanitizer.PseudonymizeUserId() (SHA-256 hash, GDPR Art. 5 data minimization). This utility lives in Granit.AI and is available to all AI modules that send user identifiers to an LLM.
  • All inputs — PromptBuilder strips Unicode control characters, zero-width characters, and bidirectional overrides, wraps user data in structured <data> delimiters, and neutralizes injection patterns. This prevents an attacker from manipulating the risk score via crafted context strings.
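The pseudonymization step amounts to a one-way hash of the identifier. A minimal sketch of what `LlmInputSanitizer.PseudonymizeUserId()` might do, based on the SHA-256 behavior described above (the actual utility in Granit.AI may differ in encoding or formatting):

```csharp
using System.Security.Cryptography;
using System.Text;

// Illustrative sketch — the real LlmInputSanitizer.PseudonymizeUserId may differ.
public static class UserIdPseudonymizer
{
    public static string PseudonymizeUserId(string userId)
    {
        // One-way SHA-256 hash: stable for a given user (so the LLM can still
        // correlate behavior), but irreversible, so the real identifier never
        // leaves the system (GDPR Art. 5 data minimization).
        byte[] hash = SHA256.HashData(Encoding.UTF8.GetBytes(userId));
        return Convert.ToHexString(hash); // 64-character hex digest
    }
}
```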
| Property | Type | Default | Description |
|---|---|---|---|
| `WorkspaceName` | `string` | `"default"` | AI workspace for access evaluation |
| `TimeoutSeconds` | `int` | `5` | Timeout in seconds (validated: 1–30) |
| `UnavailableRiskScore` | `double` | `0.5` | Risk score when LLM is unavailable (validated: 0.0–1.0) |