
# Natural Language Query

Data grids with 50 filter dropdowns are powerful but intimidating. Users don’t want to learn which field is “Status”, which operator is “Greater than”, and how to combine date ranges with text filters. They want to type “unpaid invoices from last week for Belgian clients” and get results.

Granit.Querying.AI makes this possible by translating natural language into the structured QueryRequest format that Granit’s query engine already understands.

```mermaid
sequenceDiagram
    participant U as User
    participant API as Endpoint
    participant NLQ as INaturalLanguageQueryTranslator
    participant LLM as IChatClient
    participant QE as IQueryEngine

    U->>API: "unpaid invoices from last week"
    API->>NLQ: TranslateAsync(phrase, metadata)
    NLQ->>NLQ: Build prompt with column metadata
    NLQ->>LLM: "Translate to QueryRequest JSON"
    LLM-->>NLQ: { filter: {"status.eq":"Unpaid","date.gte":"2026-03-09"}, sort: "-date" }
    NLQ-->>API: QueryRequest
    API->>QE: ExecuteAsync(source, request)
    QE-->>API: PagedResult<Invoice>
    API-->>U: Filtered results
```

The key insight: the LLM never sees your data. It only sees the query metadata (column names, filter types, sort options) and translates user intent into filters. The actual query runs through the same type-safe IQueryEngine as manual filters.

The LLM output is a QueryRequest JSON — a structured object with typed filters. It’s not SQL. The query engine validates every filter against the QueryDefinition:

  • Only declared columns can be filtered
  • Only valid operators for each column type are accepted
  • Values are parameterized (no string concatenation)
```text
User types:   "DROP TABLE invoices; --"
LLM returns:  { "filter": { "search.contains": "DROP TABLE invoices" } }
              ↑ just a text search, not SQL
Query engine: WHERE SearchField LIKE @p0
              ↑ parameterized, harmless
```

Even if the LLM were compromised, the worst it could do is generate a valid filter that returns unexpected results — never execute arbitrary SQL.
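To make that validation contract concrete, here is a minimal Python sketch (conceptual only; the real engine is C# and validates against the QueryDefinition) of checking LLM-produced filter keys against a declared field/operator whitelist before any SQL is built. The whitelist mirrors the invoice example used on this page:

```python
# Hypothetical metadata: declared fields and the operators valid for each.
ALLOWED = {
    "status":    {"eq", "contains", "in"},
    "amount":    {"eq", "gt", "gte", "lt", "lte", "between"},
    "createdAt": {"eq", "gt", "gte", "lt", "lte", "between"},
    "country":   {"eq", "contains", "in"},
}

def validate_filter(filter_dict: dict) -> list[str]:
    """Return a list of validation errors; empty means the filter is safe."""
    errors = []
    for key in filter_dict:
        # Keys use the "field.operator" format, e.g. "status.eq".
        field, _, op = key.partition(".")
        if field not in ALLOWED:
            errors.append(f"unknown field: {field}")
        elif op not in ALLOWED[field]:
            errors.append(f"operator {op!r} not valid for {field}")
    return errors

# A well-formed LLM response passes...
assert validate_filter({"status.eq": "Unpaid", "createdAt.gte": "2026-03-09"}) == []
# ...while an undeclared field is rejected before any SQL exists.
assert validate_filter({"password.eq": "x"}) == ["unknown field: password"]
```

Anything that fails validation is rejected outright, so a hallucinated or malicious filter never reaches the database.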

The LLM sees only query metadata: the schema of what is filterable, sortable, and searchable:

```text
You are a query translator. Convert natural language to a JSON QueryRequest.
Available filterable fields:
| Field     | Type           | Operators                     |
|-----------|----------------|-------------------------------|
| status    | String         | eq, contains, in              |
| amount    | Decimal        | eq, gt, gte, lt, lte, between |
| createdAt | DateTimeOffset | eq, gt, gte, lt, lte, between |
| country   | String         | eq, contains, in              |
Available sortable fields: status, amount, createdAt, country
Available quick filters:
- overdue: "Overdue invoices"
- highValue: "Amount > 10000"
Today's date: 2026-03-16
Filter format: { "field.operator": "value" }
Sort format: "-field" for descending, "field" for ascending
```

The LLM never sees: row data, customer names, amounts, or any business information.
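As a sketch of how such a prompt can be assembled from metadata alone (Python, conceptual; the field list follows the example above, and note that no row data is ever interpolated):

```python
from datetime import date

# Hypothetical metadata: (name, type, operators) per filterable column.
FIELDS = [
    ("status",    "String",         "eq, contains, in"),
    ("amount",    "Decimal",        "eq, gt, gte, lt, lte, between"),
    ("createdAt", "DateTimeOffset", "eq, gt, gte, lt, lte, between"),
    ("country",   "String",         "eq, contains, in"),
]

def build_prompt(fields, today: date) -> str:
    """Render the translator's system prompt from metadata only."""
    lines = [
        "You are a query translator. Convert natural language to a JSON QueryRequest.",
        "Available filterable fields:",
        "| Field | Type | Operators |",
        "|-------|------|-----------|",
    ]
    lines += [f"| {name} | {typ} | {ops} |" for name, typ, ops in fields]
    lines += [
        f"Today's date: {today.isoformat()}",
        'Filter format: { "field.operator": "value" }',
        'Sort format: "-field" for descending, "field" for ascending',
    ]
    return "\n".join(lines)

prompt = build_prompt(FIELDS, date(2026, 3, 16))
```

Because the prompt depends only on the query definition, it can be built once and cached; only the current date changes between requests.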

| User says | Generated QueryRequest |
|-----------|------------------------|
| "unpaid invoices from last week" | Filter: `{ "status.eq": "Unpaid", "createdAt.gte": "2026-03-09" }` |
| "top 10 clients by revenue" | Sort: `"-revenue"`, PageSize: 10 |
| "Belgian customers created this month" | Filter: `{ "country.eq": "BE", "createdAt.gte": "2026-03-01" }` |
| "overdue high-value invoices" | QuickFilters: `["overdue", "highValue"]` |
| "doctors sorted by name" | Filter: `{ "role.eq": "Doctor" }`, Sort: `"lastName"` |
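The date filters in these examples depend on the current date injected into the prompt. A small Python sketch of the arithmetic the model is expected to perform for "last week" and "this month" (assuming weeks start on Monday, as the example dates imply):

```python
from datetime import date, timedelta

def start_of_last_week(today: date) -> date:
    """Monday of the previous ISO week."""
    monday_this_week = today - timedelta(days=today.weekday())
    return monday_this_week - timedelta(days=7)

def start_of_month(today: date) -> date:
    """First day of the current month."""
    return today.replace(day=1)

# With the prompt's "Today's date: 2026-03-16" (a Monday):
assert start_of_last_week(date(2026, 3, 16)) == date(2026, 3, 9)  # "last week"
assert start_of_month(date(2026, 3, 16)) == date(2026, 3, 1)      # "this month"
```

This is why the prompt must carry today's date: without it, the model has no anchor for relative phrases and would have to guess.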
Enable the feature by registering the AI services:

```csharp
builder.AddGranitAI();
builder.AddGranitAIOllama();
builder.AddGranitQueryingAI();
```
```csharp
public static class InvoiceSearchEndpoints
{
    public static void MapInvoiceSearch(this IEndpointRouteBuilder routes)
    {
        routes.MapGet("/api/invoices/search", async (
            string q,
            INaturalLanguageQueryTranslator nlq,
            IQueryEngine<Invoice> engine,
            InvoiceDbContext db,
            CancellationToken ct) =>
        {
            QueryMetadata metadata = engine.GetMetadata();

            QueryRequest? request = await nlq
                .TranslateAsync(q, metadata, ct)
                .ConfigureAwait(false);

            // Fallback: if NLQ can't translate, use full-text search
            request ??= new QueryRequest { Search = q };

            PagedResult<Invoice> results = await engine
                .ExecuteAsync(db.Invoices, request, ct)
                .ConfigureAwait(false);

            return TypedResults.Ok(results);
        });
    }
}
```

NLQ returns null when it can’t translate the phrase (gibberish input, LLM timeout, service unavailable). The calling code decides what to do — typically falls back to the existing full-text search (QueryRequest.Search).

```csharp
QueryRequest? request = await nlq.TranslateAsync(userQuery, metadata, ct);
request ??= new QueryRequest { Search = userQuery }; // graceful fallback
```

This means NLQ is always safe to deploy — worst case, it’s a no-op and the user gets full-text search results instead.

NLQ is one of the few AI features that runs synchronously in the request path, because the user is waiting for search results.

| Metric | Value |
|--------|-------|
| Typical latency | 200-500 ms |
| Timeout | 5 s (configurable) |
| Fallback | Return null → full-text search |
| Token cost | ~200 input + ~100 output tokens per query |

For most use cases, the added latency is acceptable because it replaces the user manually configuring 3-5 filter dropdowns (which takes 10-30 seconds).
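A minimal Python sketch of that latency contract: a hard deadline around the translator call, with a null result on any failure so the caller degrades to full-text search (names are illustrative, not the Granit API):

```python
from concurrent.futures import ThreadPoolExecutor

def translate_with_timeout(translate, phrase, timeout_s=5.0):
    """Run the translator with a hard deadline; return None on any failure."""
    pool = ThreadPoolExecutor(max_workers=1)
    try:
        return pool.submit(translate, phrase).result(timeout=timeout_s)
    except Exception:  # timeout, LLM error, service unavailable
        return None
    finally:
        pool.shutdown(wait=False)

def search(phrase, translate):
    request = translate_with_timeout(translate, phrase)
    # Graceful degradation: worst case is plain full-text search.
    return request if request is not None else {"search": phrase}

def failing_translate(phrase):
    raise RuntimeError("service unavailable")  # simulated outage
```

Returning None instead of raising keeps the request path simple: the endpoint never has to distinguish timeout from outage, it just falls back.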

  • Intuitive UX — users search in their own words, no filter training needed
  • Zero data exposure — only metadata sent to the LLM
  • Type-safe — output is a validated QueryRequest, not raw SQL
  • Progressive enhancement — works alongside existing filters, not a replacement
  • Multilingual — users can query in any language the LLM supports
| Risk | Mitigation |
|------|------------|
| Ambiguous queries | LLM returns null; fallback to full-text search |
| Wrong filter | User sees results and can refine; no destructive action possible |
| Latency spike | 5 s timeout, null return on timeout |
| LLM cost | ~300 tokens per query; cached metadata prompt |
| Date misinterpretation | Current date injected in prompt for "last week", "this month" |
| Property | Type | Default | Description |
|----------|------|---------|-------------|
| WorkspaceName | string | "default" | AI workspace for NLQ translation |
| TimeoutSeconds | int | 5 | Short timeout; NLQ must be fast |
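Assuming these options bind through the standard .NET options pattern, configuration might look like the following sketch (the section name `GranitQueryingAI` is a guess, not confirmed by this page):

```json
{
  "GranitQueryingAI": {
    "WorkspaceName": "default",
    "TimeoutSeconds": 5
  }
}
```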