# Natural Language Query
Data grids with 50 filter dropdowns are powerful but intimidating. Users don’t want to learn which field is “Status”, which operator is “Greater than”, and how to combine date ranges with text filters. They want to type “unpaid invoices from last week for Belgian clients” and get results.
Granit.Querying.AI makes this possible by translating natural language into the
structured QueryRequest format that Granit’s query engine already understands.
## How it works

```mermaid
sequenceDiagram
    participant U as User
    participant API as Endpoint
    participant NLQ as INaturalLanguageQueryTranslator
    participant LLM as IChatClient
    participant QE as IQueryEngine
    U->>API: "unpaid invoices from last week"
    API->>NLQ: TranslateAsync(phrase, metadata)
    NLQ->>NLQ: Build prompt with column metadata
    NLQ->>LLM: "Translate to QueryRequest JSON"
    LLM-->>NLQ: { filter: {"status.eq":"Unpaid","date.gte":"2026-03-09"}, sort: "-date" }
    NLQ-->>API: QueryRequest
    API->>QE: ExecuteAsync(source, request)
    QE-->>API: PagedResult<Invoice>
    API-->>U: Filtered results
```
The key insight: the LLM never sees your data. It only sees the query metadata
(column names, filter types, sort options) and translates user intent into filters.
The actual query runs through the same type-safe IQueryEngine as manual filters.
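To make the translation target concrete, here is a minimal sketch of the kind of structured request the translator produces. The property names are assumptions for illustration; the actual `QueryRequest` shape in Granit may differ.

```csharp
// Hypothetical sketch of a QueryRequest — property names are assumptions.
public sealed record QueryRequest
{
    public Dictionary<string, string>? Filter { get; init; } // e.g. { "status.eq": "Unpaid" }
    public string? Sort { get; init; }                       // "-date" = descending by date
    public string? Search { get; init; }                     // full-text fallback
    public int? PageSize { get; init; }
}
```

The LLM emits this JSON shape; the query engine then interprets it against the declared column metadata.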
## Why it’s safe (no SQL injection)

The LLM output is a QueryRequest JSON — a structured object with typed filters.
It’s not SQL. The query engine validates every filter against the QueryDefinition:
- Only declared columns can be filtered
- Only valid operators for each column type are accepted
- Values are parameterized (no string concatenation)
```text
User types:   "DROP TABLE invoices; --"
LLM returns:  { "filter": { "search.contains": "DROP TABLE invoices" } }
              ↑ just a text search, not SQL
Query engine: WHERE SearchField LIKE @p0
              ↑ parameterized, harmless
```

Even if the LLM were compromised, the worst it could do is generate a valid filter that returns unexpected results — never execute arbitrary SQL.
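The validation rules above boil down to an allowlist check: every `field.operator` key must match a declared column and one of that column’s permitted operators. A minimal sketch of the idea (not the actual Granit implementation):

```csharp
// Sketch: reject any filter key whose field or operator is not declared.
// "declared" maps column name -> allowed operators, e.g. "status" -> ["eq","contains","in"].
static bool IsValidFilterKey(string key, IReadOnlyDictionary<string, string[]> declared)
{
    int dot = key.LastIndexOf('.');
    if (dot <= 0 || dot == key.Length - 1) return false; // must be "field.operator"

    string field = key[..dot];
    string op = key[(dot + 1)..];

    return declared.TryGetValue(field, out string[]? ops)
        && Array.IndexOf(ops, op) >= 0;
}
```

Anything that fails this check is dropped before the query runs, so an unexpected LLM output degrades to a narrower (or empty) result set, never to arbitrary SQL.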
## What the LLM receives

Only query metadata — the schema of what’s filterable, sortable, and searchable:
```text
You are a query translator. Convert natural language to a JSON QueryRequest.

Available filterable fields:

| Field     | Type           | Operators                     |
|-----------|----------------|-------------------------------|
| status    | String         | eq, contains, in              |
| amount    | Decimal        | eq, gt, gte, lt, lte, between |
| createdAt | DateTimeOffset | eq, gt, gte, lt, lte, between |
| country   | String         | eq, contains, in              |

Available sortable fields: status, amount, createdAt, country

Available quick filters:
- overdue: "Overdue invoices"
- highValue: "Amount > 10000"

Today's date: 2026-03-16

Filter format: { "field.operator": "value" }
Sort format: "-field" for descending, "field" for ascending
```

The LLM never sees: row data, customer names, amounts, or any business information.
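A prompt like the one above can be assembled directly from the query metadata. The following is an illustrative sketch; the method name, parameter shape, and formatting are assumptions, not the library’s actual prompt builder.

```csharp
using System.Text;

// Sketch: render column metadata into the translator's system prompt.
static string BuildPrompt(
    IEnumerable<(string Field, string Type, string[] Ops)> columns, DateOnly today)
{
    var sb = new StringBuilder();
    sb.AppendLine("You are a query translator. Convert natural language to a JSON QueryRequest.");
    sb.AppendLine("Available filterable fields:");
    foreach (var (field, type, ops) in columns)
        sb.AppendLine($"| {field} | {type} | {string.Join(", ", ops)} |");
    // Injecting today's date lets the LLM resolve "last week" / "this month".
    sb.AppendLine($"Today's date: {today:yyyy-MM-dd}");
    return sb.ToString();
}
```

Because the prompt depends only on the query definition, it is the same for every request and can be built once and cached.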
## Translation examples

| User says | Generated QueryRequest |
|---|---|
| "unpaid invoices from last week" | Filter: `{ "status.eq": "Unpaid", "createdAt.gte": "2026-03-09" }` |
| "top 10 clients by revenue" | Sort: `"-revenue"`, PageSize: `10` |
| "Belgian customers created this month" | Filter: `{ "country.eq": "BE", "createdAt.gte": "2026-03-01" }` |
| "overdue high-value invoices" | QuickFilters: `["overdue", "highValue"]` |
| "doctors sorted by name" | Filter: `{ "role.eq": "Doctor" }`, Sort: `"lastName"` |
Register the AI services, then expose a search endpoint:

```csharp
builder.AddGranitAI();
builder.AddGranitAIOllama();
builder.AddGranitQueryingAI();
```

```csharp
public static class InvoiceSearchEndpoints
{
    public static void MapInvoiceSearch(this IEndpointRouteBuilder routes)
    {
        routes.MapGet("/api/invoices/search", async (
            string q,
            INaturalLanguageQueryTranslator nlq,
            IQueryEngine<Invoice> engine,
            InvoiceDbContext db,
            CancellationToken ct) =>
        {
            QueryMetadata metadata = engine.GetMetadata();

            QueryRequest? request = await nlq
                .TranslateAsync(q, metadata, ct)
                .ConfigureAwait(false);

            // Fallback: if NLQ can't translate, use full-text search
            request ??= new QueryRequest { Search = q };

            PagedResult<Invoice> results = await engine
                .ExecuteAsync(db.Invoices, request, ct)
                .ConfigureAwait(false);

            return TypedResults.Ok(results);
        });
    }
}
```

## The fallback pattern

NLQ returns `null` when it can’t translate the phrase (gibberish input, LLM timeout,
service unavailable). The calling code decides what to do — typically falling back to
the existing full-text search (`QueryRequest.Search`).
```csharp
QueryRequest? request = await nlq.TranslateAsync(userQuery, metadata, ct);
request ??= new QueryRequest { Search = userQuery }; // graceful fallback
```

This means NLQ is always safe to deploy — worst case, it’s a no-op and the user gets full-text search results instead.
## Performance

NLQ is one of the few AI features that runs synchronously in the request path, because the user is waiting for search results.
| Metric | Value |
|---|---|
| Typical latency | 200-500ms |
| Timeout | 5s (configurable) |
| Fallback | Return null → full-text search |
| Token cost | ~200 input + ~100 output tokens per query |
For most use cases, the added latency is acceptable because it replaces the user manually configuring 3-5 filter dropdowns (which takes 10-30 seconds).
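The timeout-plus-null behavior described above can be enforced at the call site. This wrapper is an illustrative sketch: `TranslateAsync` and the 5-second default come from this page, but the helper itself is an assumption, not part of the library.

```csharp
// Sketch: bound NLQ latency so a slow LLM degrades to full-text search.
static async Task<QueryRequest?> TranslateWithTimeoutAsync(
    INaturalLanguageQueryTranslator nlq,
    string phrase,
    QueryMetadata metadata,
    CancellationToken ct,
    TimeSpan? timeout = null)
{
    using var cts = CancellationTokenSource.CreateLinkedTokenSource(ct);
    cts.CancelAfter(timeout ?? TimeSpan.FromSeconds(5)); // page default: 5 s

    try
    {
        return await nlq.TranslateAsync(phrase, metadata, cts.Token).ConfigureAwait(false);
    }
    catch (OperationCanceledException) when (!ct.IsCancellationRequested)
    {
        return null; // timed out → caller falls back to QueryRequest.Search
    }
}
```

Note the `when` filter: a cancellation that came from the caller still propagates, while a pure timeout is converted into the documented `null` fallback.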
## Advantages and risks

### Advantages

- Intuitive UX — users search in their own words, no filter training needed
- Zero data exposure — only metadata sent to the LLM
- Type-safe — output is a validated `QueryRequest`, not raw SQL
- Progressive enhancement — works alongside existing filters, not a replacement
- Multilingual — users can query in any language the LLM supports
### Risks and mitigations

| Risk | Mitigation |
|---|---|
| Ambiguous queries | LLM returns null, fallback to full-text search |
| Wrong filter | User sees results and can refine; no destructive action possible |
| Latency spike | 5s timeout, null return on timeout |
| LLM cost | ~300 tokens per query, cached metadata prompt |
| Date misinterpretation | Current date injected in prompt for “last week”, “this month” |
## Configuration reference

| Property | Type | Default | Description |
|---|---|---|---|
| `WorkspaceName` | `string` | `"default"` | AI workspace for NLQ translation |
| `TimeoutSeconds` | `int` | `5` | Short timeout — NLQ must be fast |
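As a configuration sketch, the properties above might be bound like this. The options type name `NaturalLanguageQueryOptions` is an assumption derived from the table; check the package for the actual type.

```csharp
// Hypothetical options binding — type name is an assumption.
builder.Services.Configure<NaturalLanguageQueryOptions>(options =>
{
    options.WorkspaceName = "default";
    options.TimeoutSeconds = 5; // keep NLQ fast; it runs in the request path
});
```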
## See also

- Granit.AI overview — core module, providers, workspaces
- AI: Import Mapping — AI for CSV/Excel column mapping
- AI: Semantic Search — vector-based similarity search