5 changes: 4 additions & 1 deletion apl/tabular-operators/join-operator.mdx
@@ -32,7 +32,10 @@ The kinds of join and their typical use cases are the following:
- `rightsemi`: Returns rows from the right dataset that have at least one match in the left dataset. Only columns from the right dataset are included. Filters rows in the right dataset based on existence in the left dataset.

<Note>
The preview of the `join` operator currently only supports `inner` join. Support for other kinds of join is coming soon.
The preview of the `join` operator currently only supports the following types of join:
- `inner`
- `innerunique`
- `leftouter`
</Note>
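
For instance, an `inner` join keeps only rows with a match on the join key in both datasets. A minimal sketch (the dataset and field names here are hypothetical):

```kusto
['http-logs']
| join kind=inner (['user-sessions']) on session_id
```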

### Summary of kinds of join
6 changes: 5 additions & 1 deletion apl/tabular-operators/limit-operator.mdx
@@ -56,7 +56,11 @@ SELECT * FROM sample_http_logs LIMIT 10;

### Parameters

- `N`: The maximum number of rows to return. This must be a non-negative integer.
- `N`: The maximum number of rows to return. This must be a non-negative integer up to 50,000.

<Note>
The maximum value for `N` is 50,000 rows. If you need to export or process more than 50,000 rows, consider using pagination with time-based filtering or the API endpoints with cursor-based pagination.
</Note>
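
The time-based pagination mentioned in the note can be sketched as follows (the dataset name and time window are illustrative):

```kusto
['sample-http-logs']
| where _time >= datetime(2024-01-01T00:00:00Z) and _time < datetime(2024-01-01T06:00:00Z)
| limit 50000
```

Advance the time window and rerun the query to page through more than 50,000 rows in total.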

### Returns

6 changes: 5 additions & 1 deletion apl/tabular-operators/take-operator.mdx
@@ -54,7 +54,11 @@ SELECT * FROM sample_http_logs LIMIT 10;

### Parameters

- `N`: The number of rows to take from the dataset. `N` must be a positive integer.
- `N`: The number of rows to take from the dataset. `N` must be a positive integer up to 50,000.

<Note>
The maximum value for `N` is 50,000 rows. If you need to export or process more than 50,000 rows, consider using pagination with time-based filtering or the API endpoints with cursor-based pagination.
</Note>
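
For example, to preview a handful of rows (the dataset name is illustrative):

```kusto
['sample-http-logs']
| take 10
```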

### Returns

12 changes: 11 additions & 1 deletion monitor-data/anomaly-monitors.mdx
@@ -26,7 +26,17 @@
1. To define your query, use one of the following options:
- To use the visual query builder, click **Simple query builder**. Click **Visualize** to select an aggregation method, and then click **Run query** to preview the results.
- To use Axiom Processing Language (APL), click **Advanced query language**. Write a query where the final clause uses the `summarize` operator and bins the results by `_time`, and then click **Run query** to preview the results. For more information, see [Introduction to APL](/apl/introduction).
In the preview, the boundary where the monitor triggers is displayed as a dashed line. Where there isn’t enough data to compute a boundary, the chart is grayed out. If the monitor preview shows that it alerts when you don’t want it to, try increasing the tolerance. Inversely, try decreasing the tolerance if the monitor preview shows that it doesn’t alert when you want it to.

<Warning>
Anomaly monitors require a time series query. Your query must end with a `summarize` operator that bins data by `_time` using `bin(_time, interval)` or `bin_auto(_time)`. Without time binning, the monitor can't detect anomalies over time. For example:

```kusto
['your-dataset']
| summarize count() by bin(_time, 5m)
```
</Warning>

In the preview, the boundary where the monitor triggers is displayed as a dashed line. Where there isn’t enough data to compute a boundary, the chart is grayed out. If the monitor preview shows that it alerts when you don’t want it to, try increasing the tolerance. Inversely, try decreasing the tolerance if the monitor preview shows that it doesn’t alert when you want it to.

1. Click **Create**.

You have created an anomaly monitor. Axiom alerts you when the results from your query are too high or too low compared to what’s expected based on the event history.
48 changes: 48 additions & 0 deletions monitor-data/match-monitors.mdx
@@ -41,6 +41,54 @@ If you define your query using APL, you can use the following limited set of tabular operators:
This restriction only applies to tabular operators.
</Info>

## Handle ingestion delays

If your data experiences ingestion delays (the gap between when an event occurs and when Axiom receives it), you may need to configure your match monitor to account for them. Otherwise, the monitor may miss events that arrive late.

<Steps>
<Step title="Identify your ingestion delay">
Use the `ingestion_time()` function to measure the delay between event time (`_time`) and ingestion time:

```kusto
['your-dataset']
| extend ingest_time = ingestion_time()
| extend delay_seconds = datetime_diff('second', ingest_time, _time)
| summarize avg(delay_seconds), max(delay_seconds)
```

This query shows you the average and maximum ingestion delay for your dataset.
</Step>

<Step title="Configure the secondDelay parameter">
When creating or updating your monitor through the API, add the `secondDelay` parameter to account for ingestion delays. This parameter tells the monitor to wait before evaluating events, ensuring late-arriving data is included.

For example, if your ingestion delay is 45 minutes (2,700 seconds), set `secondDelay` to 2700 or higher:

```json
{
"name": "My Match Monitor",
"type": "MatchEvent",
"aplQuery": "['your-dataset'] | where severity == 'error'",
"intervalMinutes": 1,
"secondDelay": 2700,
"notifierIds": ["notifier_id"]
}
```

<Note>
The `secondDelay` parameter is currently only available through the API and not in the UI. For more information, see the [Monitors API documentation](/restapi/endpoints/createMonitor).
</Note>
</Step>

<Step title="Verify the configuration">
After configuring `secondDelay`, monitor the alerts to ensure events are being captured correctly. You may need to adjust the value based on your actual ingestion patterns.
</Step>
</Steps>

<Tip>
Match monitors evaluate events based on their `_time` field (when the event occurred), not when Axiom received them. If you have a 45-minute ingestion delay and your monitor runs every minute, it checks for events that occurred roughly one minute ago according to `_time`. Without `secondDelay`, these events may not have arrived yet.
</Tip>
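
The timing described in this Tip can be sketched numerically. This is a minimal illustration with hypothetical values, not Axiom's actual evaluation logic:

```python
from datetime import datetime, timedelta

# Hypothetical values mirroring the scenario in the Tip above.
ingestion_delay = timedelta(minutes=45)      # lag between an event's _time and its arrival in Axiom
second_delay = timedelta(seconds=2700)       # monitor's secondDelay setting (45 minutes)

event_time = datetime(2024, 1, 1, 12, 0)     # the event's _time
arrival_time = event_time + ingestion_delay  # when Axiom actually receives the event

# Without secondDelay: a run shortly after 12:00 evaluates events with _time
# around 12:00, but this event only arrives at 12:45, so the run misses it.
naive_run = event_time + timedelta(minutes=1)
missed_without_delay = arrival_time > naive_run

# With secondDelay: evaluation shifts back by 45 minutes, so the run that
# covers _time around 12:00 happens after the event has arrived.
delayed_run = naive_run + second_delay
captured_with_delay = arrival_time <= delayed_run

print(missed_without_delay, captured_with_delay)  # → True True
```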

## Examples

For real-world use cases, see [Monitor examples](/monitor-data/monitor-examples).