
Access Anomaly Detection

Traditional authorization checks are binary: allowed or denied. They cannot detect that a user who is allowed to read invoices is reading 800 of them in 5 minutes at 2am — which is suspicious even if each individual check passes.

Granit.Authorization.AI adds IAIAccessAnomalyDetector: an optional decorator on the permission check pipeline that scores behavioral risk without blocking access.

The detector is called after the permission check succeeds. It analyzes three signals:

| Signal | What's passed to the LLM |
|---|---|
| userId | The user performing the access |
| permission | The permission being checked (e.g. invoices.read) |
| context | Optional JSON: resource type, count, time, previous access patterns |
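
Taken together, these signals suggest a single async evaluation call. The signature below is inferred from the usage example later on this page and may not match the shipped interface exactly:

```csharp
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Same shape as the record documented on this page.
public sealed record AccessRiskScore(
    double Score,
    string Reasoning,
    IReadOnlyList<string> RiskFactors);

// Inferred sketch of the detector surface; not authoritative.
public interface IAIAccessAnomalyDetector
{
    Task<AccessRiskScore> EvaluateAccessAsync(
        string userId,
        string permission,
        string? context = null,
        CancellationToken ct = default);
}
```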

It returns an AccessRiskScore that your code can act on — log, alert, or throttle — without blocking the operation:

```csharp
public sealed record AccessRiskScore(
    double Score,                        // 0.0 (normal) – 1.0 (high risk)
    string Reasoning,                    // "800 reads in 5 minutes at 2am"
    IReadOnlyList<string> RiskFactors);  // ["unusual time", "high volume", "bulk export"]
```
Enable the detector by depending on the authorization AI module and an LLM provider module:

```csharp
[DependsOn(
    typeof(GranitAuthorizationAIModule),
    typeof(GranitAIOpenAIModule))]
public class AppModule : GranitModule { }
```

Note the 2-second default timeout (see the configuration table below): access evaluation runs in the request path. The design fails open: a timeout returns Score = 0.0 and access proceeds normally.

Inject IAIAccessAnomalyDetector alongside your permission checks. The recommended pattern is a decorator on your data access layer:

```csharp
public class AnomalyAwareInvoiceRepository(
    IInvoiceRepository inner,
    IAIAccessAnomalyDetector detector,
    ICurrentUserService currentUser,
    ISecurityAlerter alerter) : IInvoiceRepository
{
    public async Task<PagedResult<Invoice>> SearchAsync(
        InvoiceQuery query,
        CancellationToken ct)
    {
        PagedResult<Invoice> result = await inner.SearchAsync(query, ct)
            .ConfigureAwait(false);

        // Evaluate after the query; the short fail-open timeout bounds added latency
        AccessRiskScore risk = await detector.EvaluateAccessAsync(
            userId: currentUser.UserId,
            permission: "invoices.read",
            context: JsonSerializer.Serialize(new
            {
                ResultCount = result.TotalCount,
                HasExport = query.ExportRequested,
                Hour = DateTimeOffset.UtcNow.Hour,
            }),
            ct).ConfigureAwait(false);

        if (risk.Score > 0.7)
        {
            await alerter.RaiseAlertAsync(currentUser.UserId, risk, ct)
                .ConfigureAwait(false);
        }

        return result;
    }
}
```
| Score range | Suggested action |
|---|---|
| 0.0 – 0.3 | Normal — no action |
| 0.3 – 0.6 | Log and monitor |
| 0.6 – 0.8 | Alert security team |
| 0.8 – 1.0 | Alert + throttle or require re-authentication |

These are starting points — tune to your application’s access patterns.
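
One way to keep the tuning in a single place is to centralize the thresholds in a small helper. This is a sketch, not part of the library; the names are mine:

```csharp
public static class RiskResponse
{
    public enum Action { None, LogAndMonitor, AlertSecurity, AlertAndThrottle }

    // Maps a score to the suggested action from the table above.
    // Thresholds are starting points; tune them per application.
    public static Action ForScore(double score) => score switch
    {
        < 0.3 => Action.None,
        < 0.6 => Action.LogAndMonitor,
        < 0.8 => Action.AlertSecurity,
        _     => Action.AlertAndThrottle,
    };
}
```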

The LLM is not aware of what’s “normal” for your application by default. Use the context parameter to provide relevant behavioral signals:

```csharp
// Good context — gives the LLM useful signals
var context = JsonSerializer.Serialize(new
{
    ResourceType = "Invoice",
    Count = 847,             // How many records accessed
    IsExport = true,         // Bulk export flag
    LocalHour = 2,           // Hour of day (local time)
    DayOfWeek = "Sunday",
    IsFirstAccess = false,
    TenantPlan = "starter",  // Expected volume for this plan
});
```

The more context you provide, the more accurate the scoring.

Flagged access events can be posted to the Timeline for audit trail compliance:

```csharp
if (risk.Score > 0.6)
{
    await timelineWriter.PostEntryAsync(
        entityType: "User",
        entityId: currentUser.UserId,
        entryType: TimelineEntryType.SystemLog,
        body: $"[Security] Suspicious access detected: {risk.Reasoning} " +
              $"(score: {risk.Score:F2}, factors: {string.Join(", ", risk.RiskFactors)})",
        parentEntryId: null,
        ct).ConfigureAwait(false);
}
```

Authorization must never be blocked by an AI timeout. When the LLM is unavailable:

  • Score = 0.0 (no risk assumed)
  • Reasoning = "Evaluation unavailable"
  • RiskFactors = []

Access proceeds normally. A warning is logged. The AI decorator never becomes a denial-of-service vector for your authorization layer.
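
A minimal sketch of this fail-open pattern, assuming nothing about the library's internals: wrap the evaluation delegate with a linked timeout token and degrade to the documented fallback score on any failure.

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;

// Same shape as the record documented on this page.
public sealed record AccessRiskScore(
    double Score,
    string Reasoning,
    IReadOnlyList<string> RiskFactors);

public static class FailOpenEvaluation
{
    // Runs any risk evaluation under a timeout. On timeout or LLM failure
    // the caller gets the documented fallback score instead of an exception.
    public static async Task<AccessRiskScore> EvaluateAsync(
        Func<CancellationToken, Task<AccessRiskScore>> evaluate,
        TimeSpan timeout,
        CancellationToken ct = default)
    {
        using var cts = CancellationTokenSource.CreateLinkedTokenSource(ct);
        cts.CancelAfter(timeout);
        try
        {
            return await evaluate(cts.Token).ConfigureAwait(false);
        }
        catch (Exception) when (!ct.IsCancellationRequested)
        {
            // Fail open: the AI decorator must never deny access.
            return new AccessRiskScore(0.0, "Evaluation unavailable", Array.Empty<string>());
        }
    }
}
```

The `when (!ct.IsCancellationRequested)` filter keeps the fallback out of the path where the caller itself cancelled the request, so genuine request aborts still propagate.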

| Property | Type | Default | Description |
|---|---|---|---|
| WorkspaceName | string | "default" | AI workspace for access evaluation |
| TimeoutSeconds | int | 2 | Timeout — keep low, access evaluation is in the request path |
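
Bound as an options class, the defaults above would look like the following sketch (the class name AccessAnomalyDetectionOptions is an assumption; the real type is not shown on this page):

```csharp
// Hypothetical options class mirroring the configuration table above.
public sealed class AccessAnomalyDetectionOptions
{
    public string WorkspaceName { get; set; } = "default";

    // Keep low: access evaluation is in the request path.
    public int TimeoutSeconds { get; set; } = 2;
}
```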