57 changes: 57 additions & 0 deletions hadoop-hdds/docs/content/feature/multi-raft-support.md

@@ -69,6 +69,63 @@ Ratis handles concurrent logs per node.
This property is effective only when the previous property is set to 0.
The value of this property must be greater than 0.

### Calculating Ratis Pipeline Limits

The number of `ReplicationFactor.THREE` Ratis pipelines created by SCM is restricted by three
configuration properties, which impose limits at the cluster-wide level and at the datanode level
(a sample `ozone-site.xml` sketch follows the list).

1. **Cluster-wide Limit (`ozone.scm.ratis.pipeline.limit`)**
* **Description**: An absolute, global limit on the total number of open Ratis pipelines
across the entire cluster, acting as a final cap.
* **Default Value**: `0` (which means no global limit by default).

2. **Datanode-level Fixed Limit (`ozone.scm.datanode.pipeline.limit`)**
* **Description**: When set to a positive number, this property defines a fixed maximum number of pipelines for
every datanode.
* **Default Value**: `2`
> **Contributor comment:** This is tricky since we have a bug; see HDDS-14369.

* **Cluster-wide Limit Calculation**: If this property is set to a positive number,
the total number of pipelines in the cluster is additionally limited to
`(<this value> * <number of healthy datanodes>) / 3`.

3. **Datanode-level Dynamic Limit (`ozone.scm.pipeline.per.metadata.disk`)**
* **Description**: This property takes effect only when `ozone.scm.datanode.pipeline.limit` is not set to a positive number.
SCM then derives a dynamic per-datanode limit from the number of metadata disks available on each datanode.
* **Default Value**: `2`
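
For reference, a minimal `ozone-site.xml` sketch covering all three properties might look like this
(the values shown are the defaults documented above):

```xml
<!-- Illustrative snippet; the values are the documented defaults. -->
<property>
  <name>ozone.scm.ratis.pipeline.limit</name>
  <value>0</value> <!-- 0 means no cluster-wide cap -->
</property>
<property>
  <name>ozone.scm.datanode.pipeline.limit</name>
  <value>2</value> <!-- fixed per-datanode pipeline limit -->
</property>
<property>
  <name>ozone.scm.pipeline.per.metadata.disk</name>
  <value>2</value> <!-- only used when the fixed limit above is not positive -->
</property>
```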

#### How Limits are Applied

SCM first calculates a target number of pipelines from either the **Datanode-level Fixed Limit** or the
**Datanode-level Dynamic Limit**. It then compares this calculated target with the **Cluster-wide Limit**, and the
**lowest value** becomes the final target: effectively `min(<cluster-wide limit>, <datanode-derived target>)`,
with the cluster-wide limit ignored when it is `0`.

**Example (Dynamic Limit):**

Consider a cluster with **10 healthy datanodes**.
* **8 datanodes** have 4 metadata disks each.
* **2 datanodes** have 2 metadata disks each.

And the configuration is (see the `ozone-site.xml` sketch after this list):
* `ozone.scm.ratis.pipeline.limit` = **30** (A global cap is set)
* `ozone.scm.datanode.pipeline.limit` = **0** (Use dynamic calculation)
* `ozone.scm.pipeline.per.metadata.disk` = **2** (Default)
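
Expressed as a sketch, the example configuration above would correspond to these `ozone-site.xml` entries:

```xml
<property>
  <name>ozone.scm.ratis.pipeline.limit</name>
  <value>30</value> <!-- global cap from the example -->
</property>
<property>
  <name>ozone.scm.datanode.pipeline.limit</name>
  <value>0</value> <!-- 0 enables the dynamic per-disk calculation -->
</property>
<property>
  <name>ozone.scm.pipeline.per.metadata.disk</name>
  <value>2</value> <!-- default -->
</property>
```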

**Calculation Steps:**
1. Calculate the limit for the first group of datanodes: `8 datanodes * (2 pipelines/disk * 4 disks/datanode) = 64 pipelines`
2. Calculate the limit for the second group of datanodes: `2 datanodes * (2 pipelines/disk * 2 disks/datanode) = 8 pipelines`
3. Calculate the total raw target from the dynamic limit: `(64 + 8) / 3 = 24` (divided by 3 because each FACTOR_THREE pipeline occupies a slot on three datanodes)
4. Compare with the global limit: `min(24, 30) = 24`

SCM will attempt to create and maintain approximately **24** open FACTOR_THREE Ratis pipelines.

**Production Recommendation:**

For most production deployments, using the dynamic per-disk limit (`ozone.scm.datanode.pipeline.limit=0`) is
recommended, as it allows pipeline capacity to scale naturally with the cluster's resources. You can use the
global limit (`ozone.scm.ratis.pipeline.limit`) as a safety cap if needed. A good starting value for
`ozone.scm.pipeline.per.metadata.disk` is **2**. Monitor the **Pipeline Statistics** section in the SCM web UI,
or run `ozone admin pipeline list`, to check whether the actual number of pipelines matches your configured targets.
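
A sketch of such a starting configuration, assuming you also want a global safety cap (the cap value `100`
is purely illustrative, not a default, and should be sized for your cluster):

```xml
<property>
  <name>ozone.scm.datanode.pipeline.limit</name>
  <value>0</value> <!-- use the dynamic per-disk limit -->
</property>
<property>
  <name>ozone.scm.pipeline.per.metadata.disk</name>
  <value>2</value> <!-- recommended starting value -->
</property>
<property>
  <name>ozone.scm.ratis.pipeline.limit</name>
  <value>100</value> <!-- optional safety cap; illustrative value, not a default -->
</property>
```
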
> **@ivandika3 (Contributor) commented on lines +123 to +127 (Jan 15, 2026):**
>
> I think there is a tradeoff to having a lot of concurrent pipelines. This might be worth documenting.
>
> Here are a few I can think of:
>
> * Each Ratis group takes some resources (e.g. 8 MB for the write buffer, IIRC)
> * A larger number of pipelines increases the load on the metadata volume, which might cause contention
> * The higher the number of pipelines, the higher the number of concurrent storage containers. If one DN is down and all the pipelines are closed, we might end up with a lot of small containers, which might have long-term overhead


## How to Use
1. Configure Datanode metadata directories:
```xml