diff --git a/apl/apl-features.mdx b/apl/apl-features.mdx
index 40aa89e7..d45b958f 100644
--- a/apl/apl-features.mdx
+++ b/apl/apl-features.mdx
@@ -322,10 +322,22 @@ keywords: ['axiom documentation', 'documentation', 'axiom', 'APL', 'axiom proces
| Time series function | [series_greater](/apl/scalar-functions/time-series/series-greater) | Returns the elements of a series that are greater than a specified value. |
| Time series function | [series_greater_equals](/apl/scalar-functions/time-series/series-greater-equals) | Returns the elements of a series that are greater than or equal to a specified value. |
| Time series function | [series_ifft](/apl/scalar-functions/time-series/series-ifft) | Performs an Inverse Fast Fourier Transform on a series, converting frequency-domain data back into time-domain representation. |
+| Time series function | [series_iir](/apl/scalar-functions/time-series/series-iir) | Applies an Infinite Impulse Response filter to a series. |
| Time series function | [series_less](/apl/scalar-functions/time-series/series-less) | Returns the elements of a series that are less than a specified value. |
| Time series function | [series_less_equals](/apl/scalar-functions/time-series/series-less-equals) | Returns the elements of a series that are less than or equal to a specified value. |
+| Time series function | [series_log](/apl/scalar-functions/time-series/series-log) | Returns the natural logarithm of each element in a series. |
+| Time series function | [series_magnitude](/apl/scalar-functions/time-series/series-magnitude) | Calculates the Euclidean norm (magnitude) of a series. |
+| Time series function | [series_max](/apl/scalar-functions/time-series/series-max) | Calculates the element-wise maximum of two numeric series. |
+| Time series function | [series_min](/apl/scalar-functions/time-series/series-min) | Calculates the element-wise minimum of two numeric series. |
+| Time series function | [series_multiply](/apl/scalar-functions/time-series/series-multiply) | Performs element-wise multiplication of two series. |
| Time series function | [series_not_equals](/apl/scalar-functions/time-series/series-not-equals) | Returns the elements of a series that aren’t equal to a specified value. |
+| Time series function | [series_pearson_correlation](/apl/scalar-functions/time-series/series-pearson-correlation) | Calculates the Pearson correlation coefficient between two series. |
+| Time series function | [series_pow](/apl/scalar-functions/time-series/series-pow) | Raises each element in a series to a specified power. |
+| Time series function | [series_sign](/apl/scalar-functions/time-series/series-sign) | Returns the sign of each element in a series. |
| Time series function | [series_sin](/apl/scalar-functions/time-series/series-sin) | Returns the sine of a series. |
+| Time series function | [series_stats](/apl/scalar-functions/time-series/series-stats) | Computes comprehensive statistical measures for a series. |
+| Time series function | [series_stats_dynamic](/apl/scalar-functions/time-series/series-stats-dynamic) | Computes statistical measures and returns them in a dynamic object format. |
+| Time series function | [series_subtract](/apl/scalar-functions/time-series/series-subtract) | Performs element-wise subtraction between two series. |
| Time series function | [series_sum](/apl/scalar-functions/time-series/series-sum) | Returns the sum of a series. |
| Time series function | [series_tan](/apl/scalar-functions/time-series/series-tan) | Returns the tangent of a series. |
| Type function | [iscc](/apl/scalar-functions/type-functions/iscc) | Checks whether a value is a valid credit card (CC) number. |
diff --git a/apl/scalar-functions/time-series/overview.mdx b/apl/scalar-functions/time-series/overview.mdx
index 29255821..0881144d 100644
--- a/apl/scalar-functions/time-series/overview.mdx
+++ b/apl/scalar-functions/time-series/overview.mdx
@@ -30,10 +30,22 @@ The table summarizes the time series functions available in APL.
| [series_greater](/apl/scalar-functions/time-series/series-greater) | Returns the elements of a series that are greater than a specified value. |
| [series_greater_equals](/apl/scalar-functions/time-series/series-greater-equals) | Returns the elements of a series that are greater than or equal to a specified value. |
| [series_ifft](/apl/scalar-functions/time-series/series-ifft) | Performs an Inverse Fast Fourier Transform on a series, converting frequency-domain data back into time-domain representation. |
+| [series_iir](/apl/scalar-functions/time-series/series-iir) | Applies an Infinite Impulse Response filter to a series. |
| [series_less](/apl/scalar-functions/time-series/series-less) | Returns the elements of a series that are less than a specified value. |
| [series_less_equals](/apl/scalar-functions/time-series/series-less-equals) | Returns the elements of a series that are less than or equal to a specified value. |
+| [series_log](/apl/scalar-functions/time-series/series-log) | Returns the natural logarithm of each element in a series. |
+| [series_magnitude](/apl/scalar-functions/time-series/series-magnitude) | Calculates the Euclidean norm (magnitude) of a series. |
+| [series_max](/apl/scalar-functions/time-series/series-max) | Calculates the element-wise maximum of two numeric series. |
+| [series_min](/apl/scalar-functions/time-series/series-min) | Calculates the element-wise minimum of two numeric series. |
+| [series_multiply](/apl/scalar-functions/time-series/series-multiply) | Performs element-wise multiplication of two series. |
| [series_not_equals](/apl/scalar-functions/time-series/series-not-equals) | Returns the elements of a series that aren’t equal to a specified value. |
+| [series_pearson_correlation](/apl/scalar-functions/time-series/series-pearson-correlation) | Calculates the Pearson correlation coefficient between two series. |
+| [series_pow](/apl/scalar-functions/time-series/series-pow) | Raises each element in a series to a specified power. |
+| [series_sign](/apl/scalar-functions/time-series/series-sign) | Returns the sign of each element in a series. |
| [series_sin](/apl/scalar-functions/time-series/series-sin) | Returns the sine of a series. |
+| [series_stats](/apl/scalar-functions/time-series/series-stats) | Computes comprehensive statistical measures for a series. |
+| [series_stats_dynamic](/apl/scalar-functions/time-series/series-stats-dynamic) | Computes statistical measures and returns them in a dynamic object format. |
+| [series_subtract](/apl/scalar-functions/time-series/series-subtract) | Performs element-wise subtraction between two series. |
| [series_sum](/apl/scalar-functions/time-series/series-sum) | Returns the sum of a series. |
| [series_tan](/apl/scalar-functions/time-series/series-tan) | Returns the tangent of a series. |
diff --git a/apl/scalar-functions/time-series/series-iir.mdx b/apl/scalar-functions/time-series/series-iir.mdx
new file mode 100644
index 00000000..383aa57c
--- /dev/null
+++ b/apl/scalar-functions/time-series/series-iir.mdx
@@ -0,0 +1,165 @@
+---
+title: series_iir
+description: 'This page explains how to use the series_iir function in APL.'
+---
+
+The `series_iir` function applies an Infinite Impulse Response (IIR) filter to a numeric dynamic array (series). This filter processes the input series using coefficients for both the numerator (feedforward) and denominator (feedback) components, creating a filtered output series that incorporates both current and past values.
+
+You can use `series_iir` when you need to apply digital signal processing techniques to time-series data. This is particularly useful for smoothing noisy data, removing high-frequency components, implementing custom filters, or applying frequency-selective transformations to time-series measurements.
+
+## For users of other query languages
+
+If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
+
+
+
+
+In Splunk SPL, signal processing typically requires external tools or complex manual calculations with `streamstats`. In APL, `series_iir` provides built-in digital filtering capabilities for array data.
+
+
+```sql Splunk example
+... | streamstats window=5 avg(value) as smoothed_value
+... (limited to basic moving averages)
+```
+
+```kusto APL equivalent
+datatable(values: dynamic)
+[
+ dynamic([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
+]
+| extend filtered = series_iir(values, dynamic([0.25, 0.5, 0.25]), dynamic([1.0, -0.5]))
+```
+
+
+
+
+
+In SQL, implementing IIR filters requires complex recursive queries or user-defined functions. In APL, `series_iir` provides this functionality as a built-in operation on array data.
+
+
+```sql SQL example
+-- Complex recursive CTE required for IIR filtering
+WITH RECURSIVE filtered AS (...)
+SELECT * FROM filtered;
+```
+
+```kusto APL equivalent
+datatable(values: dynamic)
+[
+ dynamic([1, 2, 3, 4, 5, 6, 7, 8, 9, 10])
+]
+| extend filtered = series_iir(values, dynamic([0.25, 0.5, 0.25]), dynamic([1.0, -0.5]))
+```
+
+
+
+
+
+## Usage
+
+### Syntax
+
+```kusto
+series_iir(array, numerator, denominator)
+```
+
+### Parameters
+
+| Parameter | Type | Description |
+| ------------- | ------- | -------------------------------------------------------------- |
+| `array` | dynamic | A dynamic array of numeric values (input series). |
+| `numerator` | dynamic | A dynamic array of numerator (feedforward) coefficients. |
+| `denominator` | dynamic | A dynamic array of denominator (feedback) coefficients. |
+
+### Returns
+
+A dynamic array containing the filtered output series after applying the IIR filter defined by the numerator and denominator coefficients.
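+The filter follows the standard IIR difference equation: each output value combines weighted current and past inputs (the numerator coefficients) with weighted past outputs (the denominator coefficients). As an illustration outside APL, here is a minimal Python sketch of that computation, assuming the first denominator coefficient normalizes the output as in the usual IIR convention:
+
+```python
+def iir_filter(x, b, a):
+    # y[n] = (sum_k b[k]*x[n-k] - sum_{k>=1} a[k]*y[n-k]) / a[0]
+    y = []
+    for n in range(len(x)):
+        acc = sum(b[k] * x[n - k] for k in range(len(b)) if n - k >= 0)
+        acc -= sum(a[k] * y[n - k] for k in range(1, len(a)) if n - k >= 0)
+        y.append(acc / a[0])
+    return y
+
+# b = [0.25, 0.5, 0.25] smooths the inputs; a = [1.0, -0.5] feeds back half of each previous output
+print(iir_filter([1, 2, 3, 4], [0.25, 0.5, 0.25], [1.0, -0.5]))  # → [0.25, 1.125, 2.5625, 4.28125]
+```
+
+With `denominator = dynamic([1.0])` the feedback terms vanish and the filter reduces to a finite impulse response (moving-average style) filter.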
+
+## Use case examples
+
+
+
+
+In log analysis, you can use `series_iir` to smooth noisy request duration measurements, making trends and patterns more visible.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| summarize durations = make_list(req_duration_ms) by id
+| extend smoothed = series_iir(durations, dynamic([0.2, 0.6, 0.2]), dynamic([1.0]))
+| take 5
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20durations%20%3D%20make_list(req_duration_ms)%20by%20id%20%7C%20extend%20smoothed%20%3D%20series_iir(durations%2C%20dynamic(%5B0.2%2C%200.6%2C%200.2%5D)%2C%20dynamic(%5B1.0%5D))%20%7C%20take%205%22%7D)
+
+**Output**
+
+| id | durations | smoothed |
+| ---- | -------------------------- | -------------------------- |
+| u123 | [50, 120, 45, 200, 60] | [50, 91, 62, 128, 88] |
+| u456 | [30, 35, 80, 40, 45] | [30, 33, 54, 46, 45] |
+
+This query applies an IIR filter to smooth request duration measurements, reducing noise while preserving the underlying trend.
+
+
+
+
+In OpenTelemetry traces, you can use `series_iir` to filter span duration data, removing high-frequency noise to better identify sustained performance trends.
+
+**Query**
+
+```kusto
+['otel-demo-traces']
+| extend duration_ms = duration / 1ms
+| summarize durations = make_list(duration_ms) by ['service.name']
+| extend filtered = series_iir(durations, dynamic([0.1, 0.8, 0.1]), dynamic([1.0, -0.3]))
+| take 5
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20duration_ms%20%3D%20duration%20%2F%201ms%20%7C%20summarize%20durations%20%3D%20make_list(duration_ms)%20by%20%5B'service.name'%5D%20%7C%20extend%20filtered%20%3D%20series_iir(durations%2C%20dynamic(%5B0.1%2C%200.8%2C%200.1%5D)%2C%20dynamic(%5B1.0%2C%20-0.3%5D))%20%7C%20take%205%22%7D)
+
+**Output**
+
+| service.name | durations | filtered |
+| ------------ | --------------------------- | --------------------------- |
+| frontend | [100, 150, 95, 200, 120] | [100, 130, 108, 152, 133] |
+| checkout | [200, 250, 180, 300, 220] | [200, 230, 202, 248, 232] |
+
+This query applies an IIR filter with feedback to span durations, smoothing out transient spikes while maintaining sensitivity to sustained changes.
+
+
+
+
+In security logs, you can use `series_iir` to filter request rate data, separating sustained traffic changes from brief anomalies.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| summarize request_counts = make_list(req_duration_ms) by status
+| extend filtered = series_iir(request_counts, dynamic([0.15, 0.7, 0.15]), dynamic([1.0, -0.4]))
+| take 5
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20request_counts%20%3D%20make_list(req_duration_ms)%20by%20status%20%7C%20extend%20filtered%20%3D%20series_iir(request_counts%2C%20dynamic(%5B0.15%2C%200.7%2C%200.15%5D)%2C%20dynamic(%5B1.0%2C%20-0.4%5D))%20%7C%20take%205%22%7D)
+
+**Output**
+
+| status | request_counts | filtered |
+| ------ | -------------------------- | -------------------------- |
+| 200 | [100, 105, 300, 110, 95] | [100, 103, 180, 142, 120] |
+| 401 | [10, 12, 50, 15, 11] | [10, 11, 27, 20, 16] |
+
+This query uses IIR filtering to smooth security event patterns, helping distinguish between brief anomalies and sustained attack patterns.
+
+
+
+
+## List of related functions
+
+- [series_sum](/apl/scalar-functions/time-series/series-sum): Returns the sum of series elements. Use for simple aggregation instead of filtering.
+- [series_stats](/apl/scalar-functions/time-series/series-stats): Returns statistical measures. Use for statistical analysis instead of signal processing.
+- [series_abs](/apl/scalar-functions/time-series/series-abs): Returns absolute values. Often used after IIR filtering to analyze magnitude.
+- [make_series](/apl/tabular-operators/make-series): Creates time-series from tabular data. Often used before applying `series_iir` for signal processing.
+
diff --git a/apl/scalar-functions/time-series/series-log.mdx b/apl/scalar-functions/time-series/series-log.mdx
new file mode 100644
index 00000000..442bdf90
--- /dev/null
+++ b/apl/scalar-functions/time-series/series-log.mdx
@@ -0,0 +1,161 @@
+---
+title: series_log
+description: 'This page explains how to use the series_log function in APL.'
+---
+
+The `series_log` function computes the natural logarithm (base e) of each element in a numeric dynamic array (series). This performs element-wise logarithmic transformation across the entire series.
+
+You can use `series_log` when you need to apply logarithmic transformations to time-series data. This is particularly useful for normalizing exponentially distributed data, linearizing exponential growth patterns, compressing wide value ranges, or preparing data for analysis that assumes log-normal distributions.
+
+## For users of other query languages
+
+If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
+
+
+
+
+In Splunk SPL, you typically use the `log()` function within an `eval` command to calculate logarithms. In APL, `series_log` applies the logarithm operation to every element in an array simultaneously.
+
+
+```sql Splunk example
+... | eval log_value=log(value)
+```
+
+```kusto APL equivalent
+datatable(x: dynamic)
+[
+ dynamic([1, 10, 100, 1000])
+]
+| extend log_values = series_log(x)
+```
+
+
+
+
+
+In SQL, you use the `LOG()` or `LN()` function to calculate natural logarithms on individual rows. In APL, `series_log` operates on entire arrays, applying the logarithm operation element-wise.
+
+
+```sql SQL example
+SELECT LN(value) AS log_value
+FROM measurements;
+```
+
+```kusto APL equivalent
+datatable(x: dynamic)
+[
+ dynamic([1, 10, 100, 1000])
+]
+| extend log_values = series_log(x)
+```
+
+
+
+
+
+## Usage
+
+### Syntax
+
+```kusto
+series_log(array)
+```
+
+### Parameters
+
+| Parameter | Type | Description |
+| --------- | ------- | ----------------------------------------------- |
+| `array` | dynamic | A dynamic array of numeric values. Non-positive values produce `null` in the output. |
+
+### Returns
+
+A dynamic array where each element is the natural logarithm of the corresponding input element. Returns `null` for non-positive values.
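+As an illustration outside APL, the element-wise transformation can be sketched in Python (using `None` where APL returns `null`):
+
+```python
+import math
+
+def series_log(values):
+    # element-wise natural logarithm; non-positive values yield None
+    return [math.log(v) if v > 0 else None for v in values]
+
+print(series_log([1, 10, 100, 1000, 0]))  # last element is None
+```
+
+Because the logarithm turns multiplication into addition, values spanning several orders of magnitude (1 to 1000 above) map onto a compact, evenly spaced range (0 to about 6.9).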
+
+## Use case examples
+
+
+
+
+In log analysis, you can use `series_log` to normalize request durations that follow an exponential distribution, making patterns easier to visualize and analyze.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| summarize durations = make_list(req_duration_ms) by id
+| extend log_durations = series_log(durations)
+| take 5
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20durations%20%3D%20make_list(req_duration_ms)%20by%20id%20%7C%20extend%20log_durations%20%3D%20series_log(durations)%20%7C%20take%205%22%7D)
+
+**Output**
+
+| id | durations | log_durations |
+| ---- | ---------------------- | -------------------------- |
+| u123 | [50, 100, 500, 1000] | [3.91, 4.61, 6.21, 6.91] |
+| u456 | [25, 75, 200, 800] | [3.22, 4.32, 5.30, 6.68] |
+
+This query applies logarithmic transformation to request durations, compressing the range and making it easier to compare values across different scales.
+
+
+
+
+In OpenTelemetry traces, you can use `series_log` to linearize exponentially growing span durations, making trends more apparent in visualization.
+
+**Query**
+
+```kusto
+['otel-demo-traces']
+| extend duration_ms = duration / 1ms
+| summarize durations = make_list(duration_ms) by ['service.name']
+| extend log_durations = series_log(durations)
+| take 5
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20duration_ms%20%3D%20duration%20%2F%201ms%20%7C%20summarize%20durations%20%3D%20make_list(duration_ms)%20by%20%5B'service.name'%5D%20%7C%20extend%20log_durations%20%3D%20series_log(durations)%20%7C%20take%205%22%7D)
+
+**Output**
+
+| service.name | durations | log_durations |
+| ------------ | ------------------------ | -------------------------- |
+| frontend | [10, 50, 250, 1000] | [2.30, 3.91, 5.52, 6.91] |
+| checkout | [20, 100, 500, 2000] | [3.00, 4.61, 6.21, 7.60] |
+
+This query applies logarithmic transformation to span durations, making exponential growth patterns appear linear for easier analysis.
+
+
+
+
+In security logs, you can use `series_log` to normalize request volumes that follow exponential patterns, making anomaly detection more effective.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| summarize request_counts = make_list(req_duration_ms) by status
+| extend log_counts = series_log(request_counts)
+| take 5
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20request_counts%20%3D%20make_list(req_duration_ms)%20by%20status%20%7C%20extend%20log_counts%20%3D%20series_log(request_counts)%20%7C%20take%205%22%7D)
+
+**Output**
+
+| status | request_counts | log_counts |
+| ------ | ---------------------- | -------------------------- |
+| 200 | [100, 500, 1000, 5000] | [4.61, 6.21, 6.91, 8.52] |
+| 401 | [10, 50, 100, 500] | [2.30, 3.91, 4.61, 6.21] |
+
+This query applies logarithmic transformation to request counts, making it easier to detect unusual patterns in security events across different scales.
+
+
+
+
+## List of related functions
+
+- [series_pow](/apl/scalar-functions/time-series/series-pow): Raises series elements to a power. Use as the inverse operation to logarithms when working with exponentials.
+- [series_abs](/apl/scalar-functions/time-series/series-abs): Returns the absolute value of each element. Use before `series_log` to ensure positive values.
+- [series_magnitude](/apl/scalar-functions/time-series/series-magnitude): Computes the magnitude of a series. Use when you need Euclidean norm instead of logarithmic transformation.
+- [log](/apl/scalar-functions/mathematical-functions#log): Scalar function for single values. Use for individual calculations instead of array operations.
+
diff --git a/apl/scalar-functions/time-series/series-magnitude.mdx b/apl/scalar-functions/time-series/series-magnitude.mdx
new file mode 100644
index 00000000..4d6f5667
--- /dev/null
+++ b/apl/scalar-functions/time-series/series-magnitude.mdx
@@ -0,0 +1,169 @@
+---
+title: series_magnitude
+description: 'This page explains how to use the series_magnitude function in APL.'
+---
+
+The `series_magnitude` function calculates the Euclidean norm (magnitude) of a numeric dynamic array (series). This computes the square root of the sum of squared elements, representing the length or magnitude of the vector.
+
+You can use `series_magnitude` when you need to measure the overall magnitude of a series, compare vector lengths, normalize data, or calculate distances in multi-dimensional space. This is particularly useful in signal processing, similarity analysis, and feature scaling for machine learning applications.
+
+## For users of other query languages
+
+If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
+
+
+
+
+In Splunk SPL, you would typically implement magnitude calculation manually using `eval` with square root and sum operations. In APL, `series_magnitude` provides this calculation as a built-in function.
+
+
+```sql Splunk example
+... | eval squared_sum=pow(val1,2)+pow(val2,2)+pow(val3,2)
+| eval magnitude=sqrt(squared_sum)
+```
+
+```kusto APL equivalent
+datatable(values: dynamic)
+[
+ dynamic([3, 4, 5])
+]
+| extend magnitude = series_magnitude(values)
+```
+
+
+
+
+
+In SQL, you would need to manually compute the magnitude using square root and sum of squares. In APL, `series_magnitude` provides this calculation in a single function for array data.
+
+
+```sql SQL example
+SELECT SQRT(SUM(value * value)) AS magnitude
+FROM measurements
+GROUP BY group_id;
+```
+
+```kusto APL equivalent
+datatable(values: dynamic)
+[
+ dynamic([3, 4, 5])
+]
+| extend magnitude = series_magnitude(values)
+```
+
+
+
+
+
+## Usage
+
+### Syntax
+
+```kusto
+series_magnitude(array)
+```
+
+### Parameters
+
+| Parameter | Type | Description |
+| --------- | ------- | ---------------------------------- |
+| `array` | dynamic | A dynamic array of numeric values. |
+
+### Returns
+
+A numeric scalar representing the Euclidean norm (magnitude) of the series, calculated as the square root of the sum of squared elements.
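+As an illustration outside APL, the norm computed by `series_magnitude` can be sketched in Python:
+
+```python
+import math
+
+def series_magnitude(values):
+    # Euclidean norm: square root of the sum of squared elements
+    return math.sqrt(sum(v * v for v in values))
+
+print(series_magnitude([3, 4]))     # → 5.0 (the classic 3-4-5 triangle)
+print(series_magnitude([3, 4, 5]))  # sqrt(50)
+```
+
+Note that, unlike most `series_` functions, the result is a single scalar rather than an array.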
+
+## Use case examples
+
+
+
+
+In log analysis, you can use `series_magnitude` to calculate the overall load magnitude from multiple request duration measurements, creating a single metric representing total system stress.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| summarize durations = make_list(req_duration_ms) by ['geo.city']
+| extend load_magnitude = series_magnitude(durations)
+| project ['geo.city'], load_magnitude
+| order by load_magnitude desc
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20durations%20%3D%20make_list(req_duration_ms)%20by%20%5B'geo.city'%5D%20%7C%20extend%20load_magnitude%20%3D%20series_magnitude(durations)%20%7C%20project%20%5B'geo.city'%5D%2C%20load_magnitude%20%7C%20order%20by%20load_magnitude%20desc%22%7D)
+
+**Output**
+
+| geo.city | load_magnitude |
+| ---------- | -------------- |
+| Seattle | 325.5 ms |
+| Portland | 285.2 ms |
+| Denver | 245.8 ms |
+
+This query calculates the magnitude of request duration vectors for each city, providing a single metric that represents the overall load intensity.
+
+
+
+
+In OpenTelemetry traces, you can use `series_magnitude` to compute a composite performance metric that captures the overall latency footprint of each service.
+
+**Query**
+
+```kusto
+['otel-demo-traces']
+| extend duration_ms = duration / 1ms
+| summarize durations = make_list(duration_ms) by ['service.name']
+| extend performance_magnitude = series_magnitude(durations)
+| project ['service.name'], performance_magnitude
+| order by performance_magnitude desc
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20duration_ms%20%3D%20duration%20%2F%201ms%20%7C%20summarize%20durations%20%3D%20make_list(duration_ms)%20by%20%5B'service.name'%5D%20%7C%20extend%20performance_magnitude%20%3D%20series_magnitude(durations)%20%7C%20project%20%5B'service.name'%5D%2C%20performance_magnitude%20%7C%20order%20by%20performance_magnitude%20desc%22%7D)
+
+**Output**
+
+| service.name | performance_magnitude |
+| ------------ | --------------------- |
+| checkout | 1250.5 |
+| frontend | 895.3 |
+| cart | 650.2 |
+
+This query computes a magnitude metric for each service's latency profile, helping prioritize optimization efforts for services with the highest overall latency impact.
+
+
+
+
+In security logs, you can use `series_magnitude` to calculate an overall threat intensity score based on multiple security metrics, creating a composite risk indicator.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| summarize request_metrics = make_list(req_duration_ms) by status
+| extend threat_magnitude = series_magnitude(request_metrics)
+| project status, threat_magnitude
+| order by threat_magnitude desc
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20request_metrics%20%3D%20make_list(req_duration_ms)%20by%20status%20%7C%20extend%20threat_magnitude%20%3D%20series_magnitude(request_metrics)%20%7C%20project%20status%2C%20threat_magnitude%20%7C%20order%20by%20threat_magnitude%20desc%22%7D)
+
+**Output**
+
+| status | threat_magnitude |
+| ------ | ---------------- |
+| 401 | 2850.5 ms |
+| 500 | 1250.3 ms |
+| 200 | 425.8 ms |
+
+This query calculates the magnitude of request patterns for each HTTP status code, providing a single metric that represents the overall intensity of potentially concerning traffic.
+
+
+
+
+## List of related functions
+
+- [series_sum](/apl/scalar-functions/time-series/series-sum): Returns the sum of all values. Use when you need simple addition instead of Euclidean norm.
+- [series_abs](/apl/scalar-functions/time-series/series-abs): Returns absolute values of elements. Often used before magnitude calculation to handle negative values.
+- [series_pearson_correlation](/apl/scalar-functions/time-series/series-pearson-correlation): Computes correlation between series. Use when measuring similarity instead of magnitude.
+- [series_stats](/apl/scalar-functions/time-series/series-stats): Returns comprehensive statistics. Use when you need multiple measures instead of just magnitude.
+
diff --git a/apl/scalar-functions/time-series/series-max.mdx b/apl/scalar-functions/time-series/series-max.mdx
new file mode 100644
index 00000000..5489f922
--- /dev/null
+++ b/apl/scalar-functions/time-series/series-max.mdx
@@ -0,0 +1,139 @@
+---
+title: series_max
+description: 'This page explains how to use the series_max function in APL.'
+---
+
+The `series_max` function compares two numeric arrays element by element and returns a new array. Each position in the result contains the maximum value between the corresponding elements from the two input arrays.
+
+You use `series_max` when you want to create an envelope or upper bound from multiple series, combine baseline metrics with actual values, or merge data from different sources by keeping the higher value at each point. For example, you can compare response times across different servers and keep the higher value at each time point, or combine SLA thresholds with actual measurements.
+
+## For users of other query languages
+
+If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
+
+
+
+
+In Splunk SPL, element-wise maximum comparisons typically require custom logic with `eval` or `foreach`. In contrast, APL provides the specialized `series_max` function to directly compare arrays element by element and return the maximum values.
+
+
+```sql Splunk example
+... | timechart avg(cpu_usage) as cpu1, avg(cpu_usage_backup) as cpu2
+| eval max_cpu = if(cpu1 > cpu2, cpu1, cpu2)
+```
+
+```kusto APL equivalent
+['sample-http-logs']
+| make-series primary = avg(req_duration_ms), backup = avg(req_duration_ms) on _time step 1m
+| extend max_values = series_max(primary, backup)
+```
+
+
+
+
+
+
+In ANSI SQL, you use the `GREATEST()` function to compare scalar values. To compare sequences element-wise, you need window functions or complex joins. In APL, `series_max` simplifies this by applying the maximum operation across arrays in a single step.
+
+
+```sql SQL example
+SELECT _time,
+ GREATEST(t1.req_duration_ms, t2.req_duration_ms) AS max_duration
+FROM logs t1
+JOIN logs t2
+ ON t1._time = t2._time
+```
+
+```kusto APL equivalent
+['sample-http-logs']
+| make-series series1 = avg(req_duration_ms), series2 = avg(req_duration_ms) on _time step 1m
+| extend max_series = series_max(series1, series2)
+```
+
+
+
+
+
+
+## Usage
+
+### Syntax
+
+```kusto
+series_max(array1, array2)
+```
+
+### Parameters
+
+| Parameter | Type | Description |
+| --------- | ----- | -------------------------------------------------------------------------- |
+| `array1` | array | The first array of numeric values. |
+| `array2` | array | The second array of numeric values. Must have the same length as `array1`. |
+
+### Returns
+
+An array of numeric values. Each element is the maximum of the corresponding elements from `array1` and `array2`.
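+As an illustration outside APL, the element-wise comparison can be sketched in Python:
+
+```python
+def series_max(a, b):
+    # element-wise maximum of two equal-length arrays
+    return [max(x, y) for x, y in zip(a, b)]
+
+print(series_max([120, 150, 100], [180, 130, 190]))  # → [180, 150, 190]
+```
+
+This is the same envelope computation shown in the log analysis example below: at each position the higher of the two values wins.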
+
+## Use case examples
+
+
+
+
+You want to create an upper bound by comparing request durations across two different cities and keeping the higher value at each time point.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| take 50
+| make-series london_avg = avgif(req_duration_ms, ['geo.city'] == 'London'),
+ paris_avg = avgif(req_duration_ms, ['geo.city'] == 'Paris')
+ on _time step 1h
+| extend max_duration = series_max(london_avg, paris_avg)
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20take%2050%20%7C%20make-series%20london_avg%20%3D%20avgif(req_duration_ms%2C%20%5B'geo.city'%5D%20%3D%3D%20'London')%2C%20paris_avg%20%3D%20avgif(req_duration_ms%2C%20%5B'geo.city'%5D%20%3D%3D%20'Paris')%20on%20_time%20step%201h%20%7C%20extend%20max_duration%20%3D%20series_max(london_avg%2C%20paris_avg)%22%7D)
+
+**Output**
+
+| london_avg | paris_avg | max_duration |
+| --------------- | --------------- | --------------- |
+| [120, 150, 100] | [180, 130, 190] | [180, 150, 190] |
+
+This query compares response times between two cities and creates a series containing the higher value at each time point.
+
+
+
+
+You want to track the maximum count between successful and failed requests at each time point to identify the dominant request type.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| take 50
+| make-series success_count = countif(status == '200'),
+ failure_count = countif(status != '200')
+ on _time step 1h
+| extend max_count = series_max(success_count, failure_count)
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20take%2050%20%7C%20make-series%20success_count%20%3D%20countif(status%20%3D%3D%20'200')%2C%20failure_count%20%3D%20countif(status%20!%3D%20'200')%20on%20_time%20step%201h%20%7C%20extend%20max_count%20%3D%20series_max(success_count%2C%20failure_count)%22%7D)
+
+**Output**
+
+| success_count | failure_count | max_count |
+| -------------- | ------------- | -------------- |
+| [300, 280, 310] | [10, 290, 15] | [300, 290, 310] |
+
+This query compares success and failure counts and returns the higher value at each time point, helping you understand the dominant traffic pattern.
+
+
+
+
+## List of related functions
+
+- [series_min](/apl/scalar-functions/time-series/series-min): Compares two arrays and returns the minimum value at each position.
+- [series_less](/apl/scalar-functions/time-series/series-less): Compares two arrays and returns `true` where elements in the first array are less than the second.
+- [series_greater](/apl/scalar-functions/time-series/series-greater): Compares two arrays and returns `true` where the first array element is greater than the second.
+- [max](/apl/aggregation-function/max): Aggregation function that returns the maximum value across grouped records.
diff --git a/apl/scalar-functions/time-series/series-min.mdx b/apl/scalar-functions/time-series/series-min.mdx
new file mode 100644
index 00000000..0e940ec2
--- /dev/null
+++ b/apl/scalar-functions/time-series/series-min.mdx
@@ -0,0 +1,139 @@
+---
+title: series_min
+description: 'This page explains how to use the series_min function in APL.'
+---
+
+The `series_min` function compares two numeric arrays element by element and returns a new array. Each position in the result contains the minimum value between the corresponding elements from the two input arrays.
+
+You use `series_min` when you want to create a lower bound from multiple series, combine baseline metrics with actual values while keeping the smaller value, or merge data from different sources by selecting the lower value at each point. For example, you can compare response times across different servers and keep the lower value at each time point, or create minimum thresholds from multiple sources.
+
+## For users of other query languages
+
+If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
+
+
+
+
+In Splunk SPL, element-wise minimum comparisons typically require custom logic with `eval` or `foreach`. In contrast, APL provides the specialized `series_min` function to directly compare arrays element by element and return the minimum values.
+
+
+```sql Splunk example
+... | timechart avg(latency) as latency1, avg(latency_backup) as latency2
+| eval min_latency = if(latency1 < latency2, latency1, latency2)
+```
+
+```kusto APL equivalent
+['sample-http-logs']
+| make-series primary = avg(req_duration_ms), backup = avg(req_duration_ms) on _time step 1m
+| extend min_values = series_min(primary, backup)
+```
+
+
+
+
+
+
+In ANSI SQL, you use the `LEAST()` function to compare scalar values. To compare sequences element-wise, you need window functions or complex joins. In APL, `series_min` simplifies this by applying the minimum operation across arrays in a single step.
+
+
+```sql SQL example
+SELECT t1._time,
+ LEAST(t1.req_duration_ms, t2.req_duration_ms) AS min_duration
+FROM logs t1
+JOIN logs t2
+ ON t1._time = t2._time
+```
+
+```kusto APL equivalent
+['sample-http-logs']
+| make-series series1 = avg(req_duration_ms), series2 = avg(req_duration_ms) on _time step 1m
+| extend min_series = series_min(series1, series2)
+```
+
+
+
+
+
+
+## Usage
+
+### Syntax
+
+```kusto
+series_min(array1, array2)
+```
+
+### Parameters
+
+| Parameter | Type | Description |
+| --------- | ----- | -------------------------------------------------------------------------- |
+| `array1` | array | The first array of numeric values. |
+| `array2` | array | The second array of numeric values. Must have the same length as `array1`. |
+
+### Returns
+
+An array of numeric values. Each element is the minimum of the corresponding elements from `array1` and `array2`.
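
To make the element-wise behavior concrete, here is a minimal Python sketch of the same semantics. This is illustrative only, not Axiom's implementation:

```python
# Illustrative sketch of series_min semantics: element-wise minimum
# of two equal-length numeric arrays.
def series_min(a, b):
    return [min(x, y) for x, y in zip(a, b)]

# Mirrors the two-city example below.
print(series_min([120, 150, 100], [180, 130, 190]))  # [120, 130, 100]
```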
+
+## Use case examples
+
+
+
+
+You want to create a lower bound by comparing request durations across two different cities and keeping the lower value at each time point.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| take 50
+| make-series london_avg = avgif(req_duration_ms, ['geo.city'] == 'London'),
+ paris_avg = avgif(req_duration_ms, ['geo.city'] == 'Paris')
+ on _time step 1h
+| extend min_duration = series_min(london_avg, paris_avg)
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20take%2050%20%7C%20make-series%20london_avg%20%3D%20avgif(req_duration_ms%2C%20%5B'geo.city'%5D%20%3D%3D%20'London')%2C%20paris_avg%20%3D%20avgif(req_duration_ms%2C%20%5B'geo.city'%5D%20%3D%3D%20'Paris')%20on%20_time%20step%201h%20%7C%20extend%20min_duration%20%3D%20series_min(london_avg%2C%20paris_avg)%22%7D)
+
+**Output**
+
+| london_avg | paris_avg | min_duration |
+| --------------- | --------------- | --------------- |
+| [120, 150, 100] | [180, 130, 190] | [120, 130, 100] |
+
+This query compares response times between two cities and creates a series containing the lower value at each time point.
+
+
+
+
+You want to track the minimum count between successful and failed requests at each time point to identify which type has less traffic.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| take 50
+| make-series success_count = countif(status == '200'),
+ failure_count = countif(status != '200')
+ on _time step 1h
+| extend min_count = series_min(success_count, failure_count)
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20take%2050%20%7C%20make-series%20success_count%20%3D%20countif(status%20%3D%3D%20'200')%2C%20failure_count%20%3D%20countif(status%20!%3D%20'200')%20on%20_time%20step%201h%20%7C%20extend%20min_count%20%3D%20series_min(success_count%2C%20failure_count)%22%7D)
+
+**Output**
+
+| success_count | failure_count | min_count |
+| -------------- | ------------- | ------------ |
+| [300, 280, 310] | [10, 290, 15] | [10, 280, 15] |
+
+This query compares success and failure counts and returns the lower value at each time point, helping you identify the minority traffic pattern.
+
+
+
+
+## List of related functions
+
+- [series_max](/apl/scalar-functions/time-series/series-max): Compares two arrays and returns the maximum value at each position.
+- [series_less](/apl/scalar-functions/time-series/series-less): Compares two arrays and returns `true` where elements in the first array are less than the second.
+- [series_greater](/apl/scalar-functions/time-series/series-greater): Compares two arrays and returns `true` where the first array element is greater than the second.
+- [min](/apl/aggregation-function/min): Aggregation function that returns the minimum value across grouped records.
diff --git a/apl/scalar-functions/time-series/series-multiply.mdx b/apl/scalar-functions/time-series/series-multiply.mdx
new file mode 100644
index 00000000..70a519c4
--- /dev/null
+++ b/apl/scalar-functions/time-series/series-multiply.mdx
@@ -0,0 +1,165 @@
+---
+title: series_multiply
+description: 'This page explains how to use the series_multiply function in APL.'
+---
+
+The `series_multiply` function performs element-wise multiplication between two numeric dynamic arrays (series). Each element in the first series is multiplied by the corresponding element at the same position in the second series.
+
+You can use `series_multiply` when you need to scale time-series data, apply weights, or combine multiple metrics through multiplication. This is particularly useful for calculating weighted scores, applying normalization factors, or computing products of related measurements.
+
+## For users of other query languages
+
+If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
+
+
+
+
+In Splunk SPL, you typically use the `eval` command with the multiplication operator to calculate products between fields. In APL, `series_multiply` operates on entire arrays at once, performing element-wise multiplication efficiently.
+
+
+```sql Splunk example
+... | eval product=value1 * value2
+```
+
+```kusto APL equivalent
+datatable(series1: dynamic, series2: dynamic)
+[
+ dynamic([10, 20, 30]), dynamic([2, 3, 4])
+]
+| extend product = series_multiply(series1, series2)
+```
+
+
+
+
+
+In SQL, you multiply values using the `*` operator on individual columns. In APL, `series_multiply` performs element-wise multiplication across entire arrays stored in single columns.
+
+
+```sql SQL example
+SELECT value1 * value2 AS product
+FROM measurements;
+```
+
+```kusto APL equivalent
+datatable(series1: dynamic, series2: dynamic)
+[
+ dynamic([10, 20, 30]), dynamic([2, 3, 4])
+]
+| extend product = series_multiply(series1, series2)
+```
+
+
+
+
+
+## Usage
+
+### Syntax
+
+```kusto
+series_multiply(series1, series2)
+```
+
+### Parameters
+
+| Parameter | Type | Description |
+| ---------- | ------- | ---------------------------------------------------- |
+| `series1` | dynamic | A dynamic array of numeric values. |
+| `series2` | dynamic | A dynamic array of numeric values. |
+
+### Returns
+
+A dynamic array where each element is the result of multiplying the corresponding elements of `series1` and `series2`. If the arrays have different lengths, the shorter array is extended with `null` values.
+
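The element-wise semantics, including the length-mismatch behavior described above, can be sketched in Python. This is illustrative only; padding with `None` mirrors the documented `null` extension:

```python
from itertools import zip_longest

# Illustrative sketch of series_multiply semantics: element-wise product.
# When lengths differ, missing positions yield None, mirroring the
# null-extension behavior described above.
def series_multiply(a, b):
    return [x * y if x is not None and y is not None else None
            for x, y in zip_longest(a, b)]

print(series_multiply([10, 20, 30], [2, 3, 4]))  # [20, 60, 120]
print(series_multiply([10, 20, 30], [2, 3]))     # [20, 60, None]
```
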
+## Use case examples
+
+
+
+
+In log analysis, you can use `series_multiply` to apply weighting factors to request durations, calculating weighted performance metrics.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| summarize durations = make_list(req_duration_ms) by ['geo.city']
+| extend weights = dynamic([1.0, 1.2, 0.8, 1.1, 0.9])
+| extend weighted_durations = series_multiply(durations, weights)
+| take 5
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20durations%20%3D%20make_list(req_duration_ms)%20by%20%5B'geo.city'%5D%20%7C%20extend%20weights%20%3D%20dynamic(%5B1.0%2C%201.2%2C%200.8%2C%201.1%2C%200.9%5D)%20%7C%20extend%20weighted_durations%20%3D%20series_multiply(durations%2C%20weights)%20%7C%20take%205%22%7D)
+
+**Output**
+
+| geo.city | durations | weights | weighted_durations |
+| ---------- | ------------------ | ---------------------- | --------------------- |
+| Seattle | [50, 60, 55, 58, 52] | [1.0, 1.2, 0.8, 1.1, 0.9] | [50, 72, 44, 63.8, 46.8] |
+| Portland | [45, 50, 48, 52, 47] | [1.0, 1.2, 0.8, 1.1, 0.9] | [45, 60, 38.4, 57.2, 42.3] |
+
+This query applies priority weights to request durations, emphasizing certain time periods or request types in performance analysis.
+
+
+
+
+In OpenTelemetry traces, you can use `series_multiply` to calculate resource cost estimates by multiplying span durations with cost factors.
+
+**Query**
+
+```kusto
+['otel-demo-traces']
+| extend duration_ms = duration / 1ms
+| summarize durations = make_list(duration_ms) by ['service.name']
+| extend cost_factor = dynamic([0.001, 0.001, 0.001, 0.001, 0.001])
+| extend estimated_cost = series_multiply(durations, cost_factor)
+| take 5
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20duration_ms%20%3D%20duration%20%2F%201ms%20%7C%20summarize%20durations%20%3D%20make_list(duration_ms)%20by%20%5B'service.name'%5D%20%7C%20extend%20cost_factor%20%3D%20dynamic(%5B0.001%2C%200.001%2C%200.001%2C%200.001%2C%200.001%5D)%20%7C%20extend%20estimated_cost%20%3D%20series_multiply(durations%2C%20cost_factor)%20%7C%20take%205%22%7D)
+
+**Output**
+
+| service.name | durations | cost_factor | estimated_cost |
+| ------------ | ---------------------- | -------------------------------- | ------------------------ |
+| frontend | [100, 120, 95, 110, 105] | [0.001, 0.001, 0.001, 0.001, 0.001] | [0.1, 0.12, 0.095, 0.11, 0.105] |
+| checkout | [200, 220, 195, 210, 205] | [0.001, 0.001, 0.001, 0.001, 0.001] | [0.2, 0.22, 0.195, 0.21, 0.205] |
+
+This query multiplies span durations by a cost factor to estimate resource costs, useful for cost optimization analysis.
+
+
+
+
+In security logs, you can use `series_multiply` to calculate risk scores by multiplying request frequencies with severity factors.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| summarize request_counts = make_list(req_duration_ms) by status
+| extend severity = dynamic([1.0, 3.0, 2.0, 5.0, 4.0])
+| extend risk_scores = series_multiply(request_counts, severity)
+| take 5
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20request_counts%20%3D%20make_list(req_duration_ms)%20by%20status%20%7C%20extend%20severity%20%3D%20dynamic(%5B1.0%2C%203.0%2C%202.0%2C%205.0%2C%204.0%5D)%20%7C%20extend%20risk_scores%20%3D%20series_multiply(request_counts%2C%20severity)%20%7C%20take%205%22%7D)
+
+**Output**
+
+| status | request_counts | severity | risk_scores |
+| ------ | ------------------ | ------------------- | --------------------- |
+| 200 | [50, 55, 48, 52, 49] | [1.0, 3.0, 2.0, 5.0, 4.0] | [50, 165, 96, 260, 196] |
+| 401 | [10, 12, 8, 15, 11] | [1.0, 3.0, 2.0, 5.0, 4.0] | [10, 36, 16, 75, 44] |
+
+This query multiplies request metrics by severity factors to calculate weighted risk scores for security analysis.
+
+
+
+
+## List of related functions
+
+- [series_subtract](/apl/scalar-functions/time-series/series-subtract): Performs element-wise subtraction of two series. Use when you need to subtract values instead of multiplying them.
+- [series_pow](/apl/scalar-functions/time-series/series-pow): Raises series elements to a power. Use when you need exponentiation instead of multiplication.
+- [series_sum](/apl/scalar-functions/time-series/series-sum): Returns the sum of all values in a series. Use to aggregate the results after multiplication.
+- [series_abs](/apl/scalar-functions/time-series/series-abs): Returns absolute values of elements. Use when you need magnitude without direction.
+
diff --git a/apl/scalar-functions/time-series/series-pearson-correlation.mdx b/apl/scalar-functions/time-series/series-pearson-correlation.mdx
new file mode 100644
index 00000000..03f0ebab
--- /dev/null
+++ b/apl/scalar-functions/time-series/series-pearson-correlation.mdx
@@ -0,0 +1,170 @@
+---
+title: series_pearson_correlation
+description: 'This page explains how to use the series_pearson_correlation function in APL.'
+---
+
+The `series_pearson_correlation` function calculates the Pearson correlation coefficient between two numeric dynamic arrays (series). This measures the linear relationship between the two series, returning a value between -1 and 1, where 1 indicates perfect positive correlation, -1 indicates perfect negative correlation, and 0 indicates no linear correlation.
+
+You can use `series_pearson_correlation` when you need to measure the strength and direction of linear relationships between time-series datasets. This is particularly useful for identifying related metrics, validating hypotheses about system behavior, or finding leading indicators of performance issues. Keep in mind that correlation measures association, not causation.
+
+## For users of other query languages
+
+If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
+
+
+
+
+In Splunk SPL, you would typically need to export data and use external statistical tools to calculate correlation. In APL, `series_pearson_correlation` provides built-in correlation analysis for array data.
+
+
+```sql Splunk example
+... | stats list(metric1) as m1, list(metric2) as m2 by group
+... (manual correlation calculation or external tool)
+```
+
+```kusto APL equivalent
+datatable(series1: dynamic, series2: dynamic)
+[
+ dynamic([1, 2, 3, 4, 5]), dynamic([2, 4, 6, 8, 10])
+]
+| extend correlation = series_pearson_correlation(series1, series2)
+```
+
+
+
+
+
+In SQL, correlation functions exist but typically operate on row-based data. In APL, `series_pearson_correlation` works directly on array columns, making time-series correlation analysis more straightforward.
+
+
+```sql SQL example
+SELECT CORR(metric1, metric2) AS correlation
+FROM measurements
+GROUP BY group_id;
+```
+
+```kusto APL equivalent
+datatable(series1: dynamic, series2: dynamic)
+[
+ dynamic([1, 2, 3, 4, 5]), dynamic([2, 4, 6, 8, 10])
+]
+| extend correlation = series_pearson_correlation(series1, series2)
+```
+
+
+
+
+
+## Usage
+
+### Syntax
+
+```kusto
+series_pearson_correlation(series1, series2)
+```
+
+### Parameters
+
+| Parameter | Type | Description |
+| ---------- | ------- | ---------------------------------------------------- |
+| `series1` | dynamic | A dynamic array of numeric values. |
+| `series2` | dynamic | A dynamic array of numeric values. |
+
+### Returns
+
+A numeric value between -1 and 1 representing the Pearson correlation coefficient:
+- `1`: Perfect positive linear correlation
+- `0`: No linear correlation
+- `-1`: Perfect negative linear correlation
+
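The returned value is the standard Pearson coefficient: the covariance of the two series divided by the product of their standard deviations. A minimal Python sketch of the computation (illustrative only, not Axiom's implementation):

```python
import math

# Illustrative sketch of the Pearson correlation coefficient:
# r = cov(x, y) / (stdev(x) * stdev(y))
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A series that is an exact positive multiple of another correlates perfectly.
print(pearson([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]))  # 1.0
```
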
+## Use case examples
+
+
+
+
+In log analysis, you can use `series_pearson_correlation` to identify relationships between request durations across different geographic regions, helping you understand whether performance issues are correlated.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| extend city1 = iff(['geo.city'] == 'Tokyo', req_duration_ms, 0)
+| extend city2 = iff(['geo.city'] == 'Nagasaki', req_duration_ms, 0)
+| summarize tokyo_times = make_list(city1), nagasaki_times = make_list(city2)
+| extend correlation = series_pearson_correlation(tokyo_times, nagasaki_times)
+| project correlation
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20city1%20%3D%20iff(%5B'geo.city'%5D%20%3D%3D%20'Tokyo'%2C%20req_duration_ms%2C%200)%20%7C%20extend%20city2%20%3D%20iff(%5B'geo.city'%5D%20%3D%3D%20'Nagasaki'%2C%20req_duration_ms%2C%200)%20%7C%20summarize%20tokyo_times%20%3D%20make_list(city1)%2C%20nagasaki_times%20%3D%20make_list(city2)%20%7C%20extend%20correlation%20%3D%20series_pearson_correlation(tokyo_times%2C%20nagasaki_times)%20%7C%20project%20correlation%22%7D)
+
+**Output**
+
+| correlation |
+| ----------- |
+| 0.87 |
+
+This query calculates the correlation between request durations in Tokyo and Nagasaki, revealing if performance issues in one region tend to coincide with issues in another.
+
+
+
+
+In OpenTelemetry traces, you can use `series_pearson_correlation` to analyze relationships between service latencies, identifying dependencies and bottlenecks.
+
+**Query**
+
+```kusto
+['otel-demo-traces']
+| extend duration_ms = duration / 1ms
+| extend frontend_dur = iff(['service.name'] == 'frontend', duration_ms, 0)
+| extend checkout_dur = iff(['service.name'] == 'checkout', duration_ms, 0)
+| summarize frontend = make_list(frontend_dur), checkout = make_list(checkout_dur)
+| extend correlation = series_pearson_correlation(frontend, checkout)
+| project correlation
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20duration_ms%20%3D%20duration%20%2F%201ms%20%7C%20extend%20frontend_dur%20%3D%20iff(%5B'service.name'%5D%20%3D%3D%20'frontend'%2C%20duration_ms%2C%200)%20%7C%20extend%20checkout_dur%20%3D%20iff(%5B'service.name'%5D%20%3D%3D%20'checkout'%2C%20duration_ms%2C%200)%20%7C%20summarize%20frontend%20%3D%20make_list(frontend_dur)%2C%20checkout%20%3D%20make_list(checkout_dur)%20%7C%20extend%20correlation%20%3D%20series_pearson_correlation(frontend%2C%20checkout)%20%7C%20project%20correlation%22%7D)
+
+**Output**
+
+| correlation |
+| ----------- |
+| 0.65 |
+
+This query measures the correlation between frontend and checkout service latencies, helping you understand whether the performance of one service tracks the other.
+
+
+
+
+In security logs, you can use `series_pearson_correlation` to identify relationships between failed authentication attempts and successful requests, detecting potential attack patterns.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| extend success_count = iff(status == '200', 1, 0)
+| extend failure_count = iff(status == '500', 1, 0)
+| summarize successes = make_list(success_count), failures = make_list(failure_count) by bin(_time, 1h)
+| extend correlation = series_pearson_correlation(successes, failures)
+| project correlation
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20extend%20success_count%20%3D%20iff(status%20%3D%3D%20'200'%2C%201%2C%200)%20%7C%20extend%20failure_count%20%3D%20iff(status%20%3D%3D%20'500'%2C%201%2C%200)%20%7C%20summarize%20successes%20%3D%20make_list(success_count)%2C%20failures%20%3D%20make_list(failure_count)%20by%20bin(_time%2C%201h)%20%7C%20extend%20correlation%20%3D%20series_pearson_correlation(successes%2C%20failures)%20%7C%20project%20correlation%22%7D)
+
+**Output**
+
+| correlation |
+| ----------- |
+| -0.45 |
+
+This query analyzes the correlation between successful and failed requests, where a negative correlation might indicate that high failure rates suppress successful requests, potentially signaling an attack.
+
+
+
+
+## List of related functions
+
+- [series_magnitude](/apl/scalar-functions/time-series/series-magnitude): Calculates the magnitude of a series. Use when you need vector length instead of correlation.
+- [series_stats](/apl/scalar-functions/time-series/series-stats): Returns summary statistics for a series. Use when you need measures such as average and variance rather than a correlation coefficient.
+- [series_subtract](/apl/scalar-functions/time-series/series-subtract): Performs element-wise subtraction. Often used to compute deviations before correlation analysis.
+- [series_multiply](/apl/scalar-functions/time-series/series-multiply): Performs element-wise multiplication. Use for weighted combinations instead of correlation.
+
diff --git a/apl/scalar-functions/time-series/series-pow.mdx b/apl/scalar-functions/time-series/series-pow.mdx
new file mode 100644
index 00000000..b0d5b6c0
--- /dev/null
+++ b/apl/scalar-functions/time-series/series-pow.mdx
@@ -0,0 +1,162 @@
+---
+title: series_pow
+description: 'This page explains how to use the series_pow function in APL.'
+---
+
+The `series_pow` function raises each element in a numeric dynamic array (series) to a specified power. This performs element-wise exponentiation across the entire series.
+
+You can use `series_pow` when you need to apply power transformations to time-series data. This is particularly useful for non-linear data transformations, calculating exponential growth patterns, applying polynomial features in analysis, or emphasizing larger values in your data.
+
+## For users of other query languages
+
+If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
+
+
+
+
+In Splunk SPL, you typically use the `pow()` function within an `eval` command to calculate powers. In APL, `series_pow` applies the power operation to every element in an array simultaneously.
+
+
+```sql Splunk example
+... | eval squared=pow(value, 2)
+```
+
+```kusto APL equivalent
+datatable(x: dynamic)
+[
+ dynamic([2, 3, 4, 5])
+]
+| extend squared = series_pow(x, 2)
+```
+
+
+
+
+
+In SQL, you use the `POWER()` function to raise values to a power on individual rows. In APL, `series_pow` operates on entire arrays, applying the exponentiation operation element-wise.
+
+
+```sql SQL example
+SELECT POWER(value, 2) AS squared
+FROM measurements;
+```
+
+```kusto APL equivalent
+datatable(x: dynamic)
+[
+ dynamic([2, 3, 4, 5])
+]
+| extend squared = series_pow(x, 2)
+```
+
+
+
+
+
+## Usage
+
+### Syntax
+
+```kusto
+series_pow(array, power)
+```
+
+### Parameters
+
+| Parameter | Type | Description |
+| --------- | ------- | ----------------------------------------------- |
+| `array` | dynamic | A dynamic array of numeric values (base). |
+| `power` | real | The exponent to which to raise each element. |
+
+### Returns
+
+A dynamic array where each element is the result of raising the corresponding input element to the specified power.
+
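A minimal Python sketch of the element-wise semantics (illustrative only, not Axiom's implementation):

```python
# Illustrative sketch of series_pow semantics: raise each element
# to the given (possibly fractional) power.
def series_pow(arr, power):
    return [x ** power for x in arr]

print(series_pow([2, 3, 4, 5], 2))  # [4, 9, 16, 25]
```
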
+## Use case examples
+
+
+
+
+In log analysis, you can use `series_pow` to emphasize outliers by squaring request durations, making larger values more prominent in analysis.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| summarize durations = make_list(req_duration_ms) by id
+| extend squared_durations = series_pow(durations, 2)
+| take 5
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20durations%20%3D%20make_list(req_duration_ms)%20by%20id%20%7C%20extend%20squared_durations%20%3D%20series_pow(durations%2C%202)%20%7C%20take%205%22%7D)
+
+**Output**
+
+| id | durations | squared_durations |
+| ---- | ------------------- | ---------------------- |
+| u123 | [50, 100, 75, 200] | [2500, 10000, 5625, 40000] |
+| u456 | [30, 45, 60, 90] | [900, 2025, 3600, 8100] |
+
+This query squares request durations to amplify the differences, making performance anomalies more visible for analysis.
+
+
+
+
+In OpenTelemetry traces, you can use `series_pow` to calculate exponential penalty scores based on span durations, emphasizing longer spans.
+
+**Query**
+
+```kusto
+['otel-demo-traces']
+| extend duration_ms = duration / 1ms
+| summarize durations = make_list(duration_ms) by ['service.name']
+| extend penalty_score = series_pow(durations, 1.5)
+| take 5
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20duration_ms%20%3D%20duration%20%2F%201ms%20%7C%20summarize%20durations%20%3D%20make_list(duration_ms)%20by%20%5B'service.name'%5D%20%7C%20extend%20penalty_score%20%3D%20series_pow(durations%2C%201.5)%20%7C%20take%205%22%7D)
+
+**Output**
+
+| service.name | durations | penalty_score |
+| ------------ | -------------------- | ------------------------- |
+| frontend | [100, 200, 150, 250] | [1000, 2828, 1837, 3952] |
+| checkout | [50, 75, 60, 100] | [353, 649, 464, 1000] |
+
+This query applies a power transformation to span durations, creating a penalty score that disproportionately penalizes longer spans.
+
+
+
+
+In security logs, you can use `series_pow` to calculate non-linear risk scores based on request counts, where higher volumes represent exponentially greater risk.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| summarize request_counts = make_list(req_duration_ms) by status
+| extend risk_factor = series_pow(request_counts, 1.8)
+| take 5
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20request_counts%20%3D%20make_list(req_duration_ms)%20by%20status%20%7C%20extend%20risk_factor%20%3D%20series_pow(request_counts%2C%201.8)%20%7C%20take%205%22%7D)
+
+**Output**
+
+| status | request_counts | risk_factor |
+| ------ | ------------------ | ---------------------------- |
+| 200 | [50, 60, 55, 58] | [1143, 1587, 1357, 1493] |
+| 401 | [100, 120, 110, 115] | [3981, 5528, 4726, 5120] |
+
+This query applies an exponential transformation to request counts, creating risk scores where high-volume patterns receive disproportionately higher scores.
+
+
+
+
+## List of related functions
+
+- [series_multiply](/apl/scalar-functions/time-series/series-multiply): Performs element-wise multiplication of two series. Use when you need multiplication between two series instead of raising to a power.
+- [series_log](/apl/scalar-functions/time-series/series-log): Computes the natural logarithm of each element. Use as the inverse operation to exponentials.
+- [series_abs](/apl/scalar-functions/time-series/series-abs): Returns the absolute value of each element. Use when you need magnitude without power transformations.
+- [series_sign](/apl/scalar-functions/time-series/series-sign): Returns the sign of each element. Useful before applying power operations to handle negative values.
+
diff --git a/apl/scalar-functions/time-series/series-sign.mdx b/apl/scalar-functions/time-series/series-sign.mdx
new file mode 100644
index 00000000..29931ae9
--- /dev/null
+++ b/apl/scalar-functions/time-series/series-sign.mdx
@@ -0,0 +1,170 @@
+---
+title: series_sign
+description: 'This page explains how to use the series_sign function in APL.'
+---
+
+The `series_sign` function returns the sign of each element in a numeric dynamic array (series). The function returns -1 for negative numbers, 0 for zero, and 1 for positive numbers.
+
+You can use `series_sign` when you need to identify the direction or polarity of values in time-series data. This is particularly useful for detecting changes in trends, classifying values by their sign, or preparing data for further analysis where only the direction matters, not the magnitude.
+
+## For users of other query languages
+
+If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
+
+
+
+
+In Splunk SPL, you typically implement sign detection using conditional statements with `eval`. In APL, `series_sign` provides a built-in function that operates on entire arrays efficiently.
+
+
+```sql Splunk example
+... | eval sign=case(value>0, 1, value<0, -1, true(), 0)
+```
+
+```kusto APL equivalent
+datatable(x: dynamic)
+[
+ dynamic([-5, -2, 0, 3, 7])
+]
+| extend signs = series_sign(x)
+```
+
+
+
+
+
+In SQL, you use the `SIGN()` function to determine the sign of individual values. In APL, `series_sign` applies this operation element-wise across entire arrays.
+
+
+```sql SQL example
+SELECT SIGN(value) AS sign_value
+FROM measurements;
+```
+
+```kusto APL equivalent
+datatable(x: dynamic)
+[
+ dynamic([-5, -2, 0, 3, 7])
+]
+| extend signs = series_sign(x)
+```
+
+
+
+
+
+## Usage
+
+### Syntax
+
+```kusto
+series_sign(array)
+```
+
+### Parameters
+
+| Parameter | Type | Description |
+| --------- | ------- | ---------------------------------- |
+| `array` | dynamic | A dynamic array of numeric values. |
+
+### Returns
+
+A dynamic array where each element is:
+- `-1` if the corresponding input element is negative
+- `0` if the corresponding input element is zero
+- `1` if the corresponding input element is positive
+
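A minimal Python sketch of the element-wise semantics (illustrative only, not Axiom's implementation):

```python
# Illustrative sketch of series_sign semantics: map each element
# to -1 (negative), 0 (zero), or 1 (positive).
def series_sign(arr):
    return [(x > 0) - (x < 0) for x in arr]

print(series_sign([-5, -2, 0, 3, 7]))  # [-1, -1, 0, 1, 1]
```
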
+## Use case examples
+
+
+
+
+In log analysis, you can use `series_sign` to detect whether request durations are above or below a baseline by first subtracting the baseline, then examining the sign.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| summarize durations = make_list(req_duration_ms) by id
+| extend baseline = 100
+| extend deviations = series_subtract(durations, dynamic([100, 100, 100, 100, 100]))
+| extend trend = series_sign(deviations)
+| take 5
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20durations%20%3D%20make_list(req_duration_ms)%20by%20id%20%7C%20extend%20baseline%20%3D%20100%20%7C%20extend%20deviations%20%3D%20series_subtract(durations%2C%20dynamic(%5B100%2C%20100%2C%20100%2C%20100%2C%20100%5D))%20%7C%20extend%20trend%20%3D%20series_sign(deviations)%20%7C%20take%205%22%7D)
+
+**Output**
+
+| id | durations | deviations | trend |
+| ---- | ---------------------- | --------------------- | -------------- |
+| u123 | [120, 95, 105, 80, 110] | [20, -5, 5, -20, 10] | [1, -1, 1, -1, 1] |
+| u456 | [85, 100, 90, 105, 95] | [-15, 0, -10, 5, -5] | [-1, 0, -1, 1, -1] |
+
+This query calculates deviations from a baseline and uses `series_sign` to classify whether each request was slower (1), faster (-1), or equal (0) to the baseline.
+
+
+
+
+In OpenTelemetry traces, you can use `series_sign` to identify performance improvements or degradations by comparing current spans against previous measurements.
+
+**Query**
+
+```kusto
+['otel-demo-traces']
+| extend duration_ms = duration / 1ms
+| summarize current = make_list(duration_ms) by ['service.name']
+| extend previous = dynamic([100, 120, 95, 110, 105])
+| extend change = series_subtract(current, previous)
+| extend direction = series_sign(change)
+| take 5
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20duration_ms%20%3D%20duration%20%2F%201ms%20%7C%20summarize%20current%20%3D%20make_list(duration_ms)%20by%20%5B'service.name'%5D%20%7C%20extend%20previous%20%3D%20dynamic(%5B100%2C%20120%2C%2095%2C%20110%2C%20105%5D)%20%7C%20extend%20change%20%3D%20series_subtract(current%2C%20previous)%20%7C%20extend%20direction%20%3D%20series_sign(change)%20%7C%20take%205%22%7D)
+
+**Output**
+
+| service.name | current | change | direction |
+| ------------ | ---------------------- | ---------------------- | ----------------- |
+| frontend | [95, 115, 100, 105, 110] | [-5, -5, 5, -5, 5] | [-1, -1, 1, -1, 1] |
+| checkout | [105, 125, 90, 115, 100] | [5, 5, -5, 5, -5] | [1, 1, -1, 1, -1] |
+
+This query compares current and previous span durations, using `series_sign` to classify each change as improvement (-1), degradation (1), or no change (0).
+
+
+
+
+In security logs, you can use `series_sign` to classify request patterns as above or below normal thresholds, helping identify potential security anomalies.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| summarize counts = make_list(req_duration_ms) by status
+| extend threshold = dynamic([50, 50, 50, 50, 50])
+| extend difference = series_subtract(counts, threshold)
+| extend alert_flag = series_sign(difference)
+| take 5
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20counts%20%3D%20make_list(req_duration_ms)%20by%20status%20%7C%20extend%20threshold%20%3D%20dynamic(%5B50%2C%2050%2C%2050%2C%2050%2C%2050%5D)%20%7C%20extend%20difference%20%3D%20series_subtract(counts%2C%20threshold)%20%7C%20extend%20alert_flag%20%3D%20series_sign(difference)%20%7C%20take%205%22%7D)
+
+**Output**
+
+| status | counts | difference | alert_flag |
+| ------ | ------------------- | ------------------ | ---------------- |
+| 200 | [45, 52, 48, 55, 50] | [-5, 2, -2, 5, 0] | [-1, 1, -1, 1, 0] |
+| 401 | [60, 75, 55, 80, 70] | [10, 25, 5, 30, 20] | [1, 1, 1, 1, 1] |
+
+This query compares request metrics against thresholds and uses `series_sign` to create alert flags, where 1 indicates above-threshold activity that might warrant investigation.
+
+
+
+
+## List of related functions
+
+- [series_abs](/apl/scalar-functions/time-series/series-abs): Returns the absolute value of each element. Use when you need magnitude without direction information.
+- [series_subtract](/apl/scalar-functions/time-series/series-subtract): Performs element-wise subtraction. Often used before `series_sign` to compute deviations from baselines.
+- [series_greater](/apl/scalar-functions/time-series/series-greater): Returns boolean comparison results. Use when you need explicit comparison against a threshold.
+- [series_less](/apl/scalar-functions/time-series/series-less): Returns boolean comparison results. Use for direct comparison instead of sign-based classification.
+
diff --git a/apl/scalar-functions/time-series/series-stats-dynamic.mdx b/apl/scalar-functions/time-series/series-stats-dynamic.mdx
new file mode 100644
index 00000000..0ba22cfb
--- /dev/null
+++ b/apl/scalar-functions/time-series/series-stats-dynamic.mdx
@@ -0,0 +1,164 @@
+---
+title: series_stats_dynamic
+description: 'This page explains how to use the series_stats_dynamic function in APL.'
+---
+
+The `series_stats_dynamic` function computes comprehensive statistical measures for a numeric dynamic array (series), returning the results as a dynamic object with named properties. It provides the same statistics as `series_stats` but with more convenient access through property names instead of array indices.
+
+You can use `series_stats_dynamic` when you need statistical summaries with easier property-based access, better code readability, or when integrating with other dynamic data structures. This is particularly useful in complex analytical workflows where referring to statistics by name (`stats.min`, `stats.avg`) is clearer than using array indices.
+
+## For users of other query languages
+
+If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
+
+
+
+
+In Splunk SPL, you typically use multiple `stats` functions and store results as separate fields. In APL, `series_stats_dynamic` provides all statistics in a dynamic object that you can access by property names.
+
+
+```sql Splunk example
+... | stats min(value) as min_val, max(value) as max_val,
+ avg(value) as avg_val, stdev(value) as stdev_val by user
+```
+
+```kusto APL equivalent
+['sample-http-logs']
+| summarize values = make_list(req_duration_ms) by id
+| extend stats = series_stats_dynamic(values)
+| extend min_val = stats.min, max_val = stats.max, avg_val = stats.avg
+```
+
+
+
+
+
+In SQL, you calculate statistics separately and work with individual columns. In APL, `series_stats_dynamic` provides all statistics in a single dynamic object with named properties that you can query and transform.
+
+
+```sql SQL example
+SELECT
+ user_id,
+ MIN(value) as min_val,
+ MAX(value) as max_val,
+ AVG(value) as avg_val,
+ STDDEV(value) as std_val
+FROM measurements
+GROUP BY user_id;
+```
+
+```kusto APL equivalent
+['sample-http-logs']
+| summarize values = make_list(req_duration_ms) by id
+| extend stats = series_stats_dynamic(values)
+| extend min_val = stats.min, max_val = stats.max
+```
+
+
+
+
+
+## Usage
+
+### Syntax
+
+```kusto
+series_stats_dynamic(array)
+```
+
+### Parameters
+
+| Parameter | Type | Description |
+| --------- | ------- | ---------------------------------- |
+| `array` | dynamic | A dynamic array of numeric values. |
+
+### Returns
+
+A dynamic object containing the following statistical properties:
+
+| Property | Description |
+| ---------- | --------------------------------------------------------- |
+| `min` | The minimum value in the input array. |
+| `min_idx` | The first position of the minimum value in the array. |
+| `max` | The maximum value in the input array. |
+| `max_idx` | The first position of the maximum value in the array. |
+| `avg` | The average value of the input array. |
+| `variance` | The sample variance of the input array. |
+| `stdev` | The sample standard deviation of the input array. |
+
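+To illustrate the property names on a small literal array, the following minimal sketch uses an inline `datatable`. The array `[1, 3, 5]` is chosen for illustration; the variance and standard deviation follow the sample (n-1) definitions above.
+
+```kusto
+datatable(values: dynamic)
+[
+    dynamic([1, 3, 5])
+]
+| extend stats = series_stats_dynamic(values)
+| extend spread = stats.max - stats.min
+// stats.min = 1, stats.max = 5, stats.avg = 3,
+// stats.variance = 4, stats.stdev = 2, spread = 4
+```
+
+Because the result is a single dynamic object, you can pass `stats` through subsequent operators and access any property where you need it, without tracking array positions.
+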
+## Use case examples
+
+
+
+
+In log analysis, you can use `series_stats_dynamic` to generate comprehensive statistical reports with readable property names.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| summarize durations = make_list(req_duration_ms) by id
+| extend stats = series_stats_dynamic(durations)
+| extend performance_score = 100 - (stats.stdev / stats.avg * 100)
+| project id,
+ min = stats.min,
+ max = stats.max,
+ avg = stats.avg,
+ performance_score
+| take 5
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20durations%20%3D%20make_list(req_duration_ms)%20by%20id%20%7C%20extend%20stats%20%3D%20series_stats_dynamic(durations)%20%7C%20extend%20performance_score%20%3D%20100%20-%20(stats.stdev%20%2F%20stats.avg%20*%20100)%20%7C%20project%20id%2C%20min%20%3D%20stats.min%2C%20max%20%3D%20stats.max%2C%20avg%20%3D%20stats.avg%2C%20performance_score%20%7C%20take%205%22%7D)
+
+**Output**
+
+| id | min | max | avg | performance_score |
+| ---- | --- | --- | --- | ----------------- |
+| u123 | 15 | 245 | 95 | 52.4 |
+| u456 | 8 | 189 | 78 | 50.4 |
+
+This query uses property names to access statistics and calculate a custom performance score based on the coefficient of variation.
+
+
+
+
+In security logs, you can use `series_stats_dynamic` to build adaptive anomaly detection thresholds with clear, self-documenting property access.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| summarize durations = make_list(req_duration_ms) by status
+| extend stats = series_stats_dynamic(durations)
+| extend lower_bound = stats.avg - (2 * stats.stdev)
+| extend upper_bound = stats.avg + (2 * stats.stdev)
+| extend range_ratio = (stats.max - stats.min) / stats.avg
+| project status,
+ avg_duration = stats.avg,
+ stdev = stats.stdev,
+ lower_bound,
+ upper_bound,
+ range_ratio
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20durations%20%3D%20make_list(req_duration_ms)%20by%20status%20%7C%20extend%20stats%20%3D%20series_stats_dynamic(durations)%20%7C%20extend%20lower_bound%20%3D%20stats.avg%20-%20(2%20*%20stats.stdev)%20%7C%20extend%20upper_bound%20%3D%20stats.avg%20%2B%20(2%20*%20stats.stdev)%20%7C%20extend%20range_ratio%20%3D%20(stats.max%20-%20stats.min)%20%2F%20stats.avg%20%7C%20project%20status%2C%20avg_duration%20%3D%20stats.avg%2C%20stdev%20%3D%20stats.stdev%2C%20lower_bound%2C%20upper_bound%2C%20range_ratio%22%7D)
+
+**Output**
+
+| status | avg_duration | stdev | lower_bound | upper_bound | range_ratio |
+| ------ | ------------ | ----- | ----------- | ----------- | ----------- |
+| 200 | 52 | 12.5 | 27 | 77 | 6.44 |
+| 401 | 450 | 850.2 | -1250.4 | 2150.4 | 19.68 |
+| 500 | 125 | 95.3 | -65.6 | 315.6 | 4.36 |
+
+This query uses named properties to calculate confidence intervals and assess the relative range of values for adaptive security monitoring.
+
+
+
+
+## List of related functions
+
+- [series_stats](/apl/scalar-functions/time-series/series-stats): Returns the same statistics as a 7-element array instead of a dynamic object with named properties.
+- [series_max](/apl/scalar-functions/time-series/series-max): Returns the maximum value from a series. Use when you only need the maximum instead of the full statistical summary.
+- [series_min](/apl/scalar-functions/time-series/series-min): Returns the minimum value from a series. Use when you only need the minimum instead of the full statistical summary.
+- [todynamic](/apl/scalar-functions/conversion-functions/todynamic): Converts values to dynamic type for custom object construction.
diff --git a/apl/scalar-functions/time-series/series-stats.mdx b/apl/scalar-functions/time-series/series-stats.mdx
new file mode 100644
index 00000000..ead1632c
--- /dev/null
+++ b/apl/scalar-functions/time-series/series-stats.mdx
@@ -0,0 +1,160 @@
+---
+title: series_stats
+description: 'This page explains how to use the series_stats function in APL.'
+---
+
+The `series_stats` function computes comprehensive statistical measures for a numeric dynamic array (series), returning a seven-element array that contains the minimum and its position, the maximum and its position, the average, the sample variance, and the sample standard deviation.
+
+You can use `series_stats` when you need a complete statistical summary of time-series data in a single operation. This is particularly useful for understanding data distribution, identifying outliers, calculating confidence intervals, or performing comprehensive data quality assessments without running multiple separate aggregations.
+
+## For users of other query languages
+
+If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
+
+
+
+
+In Splunk SPL, you typically use multiple `stats` functions to calculate different statistics. In APL, `series_stats` provides all common statistics in a single operation on array data, returning them as a 7-element array.
+
+
+```sql Splunk example
+... | stats min(value) as min_val, max(value) as max_val,
+ avg(value) as avg_val, stdev(value) as stdev_val by user
+```
+
+```kusto APL equivalent
+['sample-http-logs']
+| summarize values = make_list(req_duration_ms) by id
+| extend stats = series_stats(values)
+| extend min_val = stats[0], max_val = stats[2], avg_val = stats[4]
+```
+
+
+
+
+
+In SQL, you calculate multiple aggregate functions separately. In APL, `series_stats` provides all these statistics in a single function call on array data, returned as a 7-element array.
+
+
+```sql SQL example
+SELECT
+ MIN(value) as min_val,
+ MAX(value) as max_val,
+ AVG(value) as avg_val,
+ STDDEV(value) as std_val
+FROM measurements
+GROUP BY user_id;
+```
+
+```kusto APL equivalent
+['sample-http-logs']
+| summarize values = make_list(req_duration_ms) by id
+| extend stats = series_stats(values)
+| extend min_val = stats[0], max_val = stats[2], avg_val = stats[4]
+```
+
+
+
+
+
+## Usage
+
+### Syntax
+
+```kusto
+series_stats(array)
+```
+
+### Parameters
+
+| Parameter | Type | Description |
+| --------- | ------- | ---------------------------------- |
+| `array` | dynamic | A dynamic array of numeric values. |
+
+### Returns
+
+An array with seven numeric elements in the following order:
+
+| Index | Statistic | Description |
+| ----- | ---------- | --------------------------------------------------------- |
+| 0 | min | The minimum value in the input array. |
+| 1 | min_idx | The first position of the minimum value in the array. |
+| 2 | max | The maximum value in the input array. |
+| 3 | max_idx | The first position of the maximum value in the array. |
+| 4 | avg | The average value of the input array. |
+| 5 | variance | The sample variance of the input array. |
+| 6 | stdev | The sample standard deviation of the input array. |
+
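+To make the element order concrete, the following minimal sketch applies `series_stats` to a small literal array. The array `[1, 3, 5]` is chosen for illustration; the variance and standard deviation follow the sample (n-1) definitions above.
+
+```kusto
+datatable(values: dynamic)
+[
+    dynamic([1, 3, 5])
+]
+| extend stats = series_stats(values)
+// stats is [1, 0, 5, 2, 3, 4, 2]:
+// min = 1, min_idx = 0, max = 5, max_idx = 2,
+// avg = 3, variance = 4, stdev = 2
+```
+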
+## Use case examples
+
+
+
+
+In log analysis, you can use `series_stats` to get a comprehensive statistical summary of request durations for each user, helping identify performance patterns and outliers.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| summarize durations = make_list(req_duration_ms) by id
+| extend stats_array = series_stats(durations)
+| project id,
+ min_duration = stats_array[0],
+ max_duration = stats_array[2],
+ avg_duration = stats_array[4],
+ stdev_duration = stats_array[6]
+| take 5
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20durations%20%3D%20make_list(req_duration_ms)%20by%20id%20%7C%20extend%20stats_array%20%3D%20series_stats(durations)%20%7C%20project%20id%2C%20min_duration%20%3D%20stats_array%5B0%5D%2C%20max_duration%20%3D%20stats_array%5B2%5D%2C%20avg_duration%20%3D%20stats_array%5B4%5D%2C%20stdev_duration%20%3D%20stats_array%5B6%5D%20%7C%20take%205%22%7D)
+
+**Output**
+
+| id | min_duration | max_duration | avg_duration | stdev_duration |
+| ---- | ------------ | ------------ | ------------ | -------------- |
+| u123 | 15 | 245 | 95 | 45.2 |
+| u456 | 8 | 189 | 78 | 38.7 |
+
+This query calculates comprehensive statistics for each user's request durations by extracting specific elements from the 7-element stats array.
+
+
+
+
+In security logs, you can use `series_stats` to establish behavioral baselines and calculate anomaly detection thresholds based on variance.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| summarize durations = make_list(req_duration_ms) by status
+| extend stats_array = series_stats(durations)
+| project status,
+ typical_duration = stats_array[4],
+ variance = stats_array[5],
+ stdev = stats_array[6],
+ max_observed = stats_array[2]
+| extend anomaly_threshold = typical_duration + (3 * stdev)
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20durations%20%3D%20make_list(req_duration_ms)%20by%20status%20%7C%20extend%20stats_array%20%3D%20series_stats(durations)%20%7C%20project%20status%2C%20typical_duration%20%3D%20stats_array%5B4%5D%2C%20variance%20%3D%20stats_array%5B5%5D%2C%20stdev%20%3D%20stats_array%5B6%5D%2C%20max_observed%20%3D%20stats_array%5B2%5D%20%7C%20extend%20anomaly_threshold%20%3D%20typical_duration%20%2B%20(3%20*%20stdev)%22%7D)
+
+**Output**
+
+| status | typical_duration | variance | stdev | max_observed | anomaly_threshold |
+| ------ | ---------------- | -------- | ----- | ------------ | ----------------- |
+| 200 | 52 | 156.25 | 12.5 | 340 | 89.5 |
+| 401 | 450 | 722840 | 850.2 | 8900 | 3000.6 |
+| 500 | 125 | 9082 | 95.3 | 550 | 410.9 |
+
+This query uses statistical analysis to establish normal behavior patterns and calculate anomaly detection thresholds based on standard deviations.
+
+
+
+
+## List of related functions
+
+- [series_stats_dynamic](/apl/scalar-functions/time-series/series-stats-dynamic): Returns the same statistics as a dynamic object with named properties instead of an array.
+- [series_max](/apl/scalar-functions/time-series/series-max): Returns the maximum value from a series. Use when you only need the maximum instead of the full statistical summary.
+- [series_min](/apl/scalar-functions/time-series/series-min): Returns the minimum value from a series. Use when you only need the minimum instead of the full statistical summary.
+- [avg](/apl/aggregation-function/avg): Aggregation function for calculating averages across rows.
+- [stdev](/apl/aggregation-function/stdev): Aggregation function for standard deviation across rows.
diff --git a/apl/scalar-functions/time-series/series-subtract.mdx b/apl/scalar-functions/time-series/series-subtract.mdx
new file mode 100644
index 00000000..8b874086
--- /dev/null
+++ b/apl/scalar-functions/time-series/series-subtract.mdx
@@ -0,0 +1,165 @@
+---
+title: series_subtract
+description: 'This page explains how to use the series_subtract function in APL.'
+---
+
+The `series_subtract` function performs element-wise subtraction between two numeric dynamic arrays (series). Each element of the second series is subtracted from the element at the same position in the first series.
+
+You can use `series_subtract` when you need to compute differences between two time-series datasets. This is particularly useful for calculating deltas, deviations from baselines, changes over time, or comparing metrics between different groups or time periods.
+
+## For users of other query languages
+
+If you come from other query languages, this section explains how to adjust your existing queries to achieve the same results in APL.
+
+
+
+
+In Splunk SPL, you typically use the `eval` command with the subtraction operator to calculate differences between fields. In APL, `series_subtract` operates on entire arrays at once, performing element-wise subtraction efficiently.
+
+
+```sql Splunk example
+... | eval difference=value1 - value2
+```
+
+```kusto APL equivalent
+datatable(series1: dynamic, series2: dynamic)
+[
+ dynamic([10, 20, 30]), dynamic([5, 8, 12])
+]
+| extend difference = series_subtract(series1, series2)
+```
+
+
+
+
+
+In SQL, you subtract values using the `-` operator on individual columns. In APL, `series_subtract` performs element-wise subtraction across entire arrays stored in single columns.
+
+
+```sql SQL example
+SELECT value1 - value2 AS difference
+FROM measurements;
+```
+
+```kusto APL equivalent
+datatable(series1: dynamic, series2: dynamic)
+[
+ dynamic([10, 20, 30]), dynamic([5, 8, 12])
+]
+| extend difference = series_subtract(series1, series2)
+```
+
+
+
+
+
+## Usage
+
+### Syntax
+
+```kusto
+series_subtract(series1, series2)
+```
+
+### Parameters
+
+| Parameter | Type | Description |
+| ---------- | ------- | ---------------------------------------------------- |
+| `series1` | dynamic | A dynamic array of numeric values (minuend). |
+| `series2` | dynamic | A dynamic array of numeric values (subtrahend). |
+
+### Returns
+
+A dynamic array where each element is the result of subtracting the corresponding element of `series2` from `series1`. If the arrays have different lengths, the shorter array is extended with `null` values.
+
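+As a quick check of the element-wise semantics, this minimal sketch subtracts two literal arrays and then takes the absolute deviations with `series_abs`. The arrays are chosen for illustration.
+
+```kusto
+datatable(observed: dynamic, expected: dynamic)
+[
+    dynamic([10, 20, 30]), dynamic([12, 18, 30])
+]
+| extend delta = series_subtract(observed, expected)
+| extend magnitude = series_abs(delta)
+// delta = [-2, 2, 0], magnitude = [2, 2, 0]
+```
+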
+## Use case examples
+
+
+
+
+In log analysis, you can use `series_subtract` to calculate the difference between current and baseline request durations, helping identify performance degradations.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| summarize current = make_list(req_duration_ms) by ['geo.city']
+| extend baseline = dynamic([50, 55, 48, 52, 49])
+| extend delta = series_subtract(current, baseline)
+| take 5
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20current%20%3D%20make_list(req_duration_ms)%20by%20%5B'geo.city'%5D%20%7C%20extend%20baseline%20%3D%20dynamic(%5B50%2C%2055%2C%2048%2C%2052%2C%2049%5D)%20%7C%20extend%20delta%20%3D%20series_subtract(current%2C%20baseline)%20%7C%20take%205%22%7D)
+
+**Output**
+
+| geo.city | current | baseline | delta |
+| ---------- | ------------------ | ------------------ | ----------------- |
+| Seattle | [60, 65, 58, 62, 59] | [50, 55, 48, 52, 49] | [10, 10, 10, 10, 10] |
+| Portland | [45, 50, 43, 47, 44] | [50, 55, 48, 52, 49] | [-5, -5, -5, -5, -5] |
+
+This query calculates the difference between current request durations and baseline values, showing performance changes per city.
+
+
+
+
+In OpenTelemetry traces, you can use `series_subtract` to compare span durations between different service versions or time periods.
+
+**Query**
+
+```kusto
+['otel-demo-traces']
+| extend duration_ms = duration / 1ms
+| summarize current = make_list(duration_ms) by ['service.name']
+| extend previous = dynamic([100, 120, 95, 110, 105])
+| extend improvement = series_subtract(previous, current)
+| take 5
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'otel-demo-traces'%5D%20%7C%20extend%20duration_ms%20%3D%20duration%20%2F%201ms%20%7C%20summarize%20current%20%3D%20make_list(duration_ms)%20by%20%5B'service.name'%5D%20%7C%20extend%20previous%20%3D%20dynamic(%5B100%2C%20120%2C%2095%2C%20110%2C%20105%5D)%20%7C%20extend%20improvement%20%3D%20series_subtract(previous%2C%20current)%20%7C%20take%205%22%7D)
+
+**Output**
+
+| service.name | current | previous | improvement |
+| ------------ | ---------------------- | ---------------------- | -------------------- |
+| frontend | [80, 95, 75, 90, 85] | [100, 120, 95, 110, 105] | [20, 25, 20, 20, 20] |
+| checkout | [110, 125, 105, 120, 115] | [100, 120, 95, 110, 105] | [-10, -5, -10, -10, -10] |
+
+This query compares current span durations with previous measurements, calculating performance improvements (positive values) or degradations (negative values).
+
+
+
+
+In security logs, you can use `series_subtract` to detect anomalous behavior by comparing request patterns against expected baselines.
+
+**Query**
+
+```kusto
+['sample-http-logs']
+| summarize observed = make_list(req_duration_ms) by status
+| extend expected = dynamic([45, 50, 48, 49, 47])
+| extend anomaly_score = series_subtract(observed, expected)
+| take 5
+```
+
+[Run in Playground](https://play.axiom.co/axiom-play-qf1k/query?initForm=%7B%22apl%22%3A%22%5B'sample-http-logs'%5D%20%7C%20summarize%20observed%20%3D%20make_list(req_duration_ms)%20by%20status%20%7C%20extend%20expected%20%3D%20dynamic(%5B45%2C%2050%2C%2048%2C%2049%2C%2047%5D)%20%7C%20extend%20anomaly_score%20%3D%20series_subtract(observed%2C%20expected)%20%7C%20take%205%22%7D)
+
+**Output**
+
+| status | observed | expected | anomaly_score |
+| ------ | ------------------- | ------------------ | ------------------ |
+| 200 | [46, 51, 49, 50, 48] | [45, 50, 48, 49, 47] | [1, 1, 1, 1, 1] |
+| 500 | [145, 150, 148, 149, 147] | [45, 50, 48, 49, 47] | [100, 100, 100, 100, 100] |
+
+This query calculates anomaly scores by comparing observed request durations against expected baselines, with large positive values indicating potential issues.
+
+
+
+
+## List of related functions
+
+- [series_multiply](/apl/scalar-functions/time-series/series-multiply): Performs element-wise multiplication of two series. Use when you need to multiply rather than subtract.
+- [series_abs](/apl/scalar-functions/time-series/series-abs): Returns the absolute value of each element. Use after subtraction to get magnitude of differences.
+- [series_stats](/apl/scalar-functions/time-series/series-stats): Returns statistical summary of a series. Use to analyze the result of subtraction operations.
+- [series_sign](/apl/scalar-functions/time-series/series-sign): Returns the sign of each element. Use after subtraction to determine direction of changes.
+
diff --git a/docs.json b/docs.json
index 5c4a1a1b..8f344fe0 100644
--- a/docs.json
+++ b/docs.json
@@ -408,10 +408,22 @@
"apl/scalar-functions/time-series/series-greater",
"apl/scalar-functions/time-series/series-greater-equals",
"apl/scalar-functions/time-series/series-ifft",
+ "apl/scalar-functions/time-series/series-iir",
"apl/scalar-functions/time-series/series-less",
"apl/scalar-functions/time-series/series-less-equals",
+ "apl/scalar-functions/time-series/series-log",
+ "apl/scalar-functions/time-series/series-magnitude",
+ "apl/scalar-functions/time-series/series-max",
+ "apl/scalar-functions/time-series/series-min",
+ "apl/scalar-functions/time-series/series-multiply",
"apl/scalar-functions/time-series/series-not-equals",
+ "apl/scalar-functions/time-series/series-pearson-correlation",
+ "apl/scalar-functions/time-series/series-pow",
+ "apl/scalar-functions/time-series/series-sign",
"apl/scalar-functions/time-series/series-sin",
+ "apl/scalar-functions/time-series/series-stats",
+ "apl/scalar-functions/time-series/series-stats-dynamic",
+ "apl/scalar-functions/time-series/series-subtract",
"apl/scalar-functions/time-series/series-sum",
"apl/scalar-functions/time-series/series-tan"
]