content/documentation/_index.md (+1 −1)
@@ -11,7 +11,7 @@ aliases = [
 
 # Documentation
 
-<div class="tipbox tip">Updated to version <a href="https://github.com/ipfs-cluster/ipfs-cluster/blob/master/CHANGELOG.md">1.1.1 (see the Changelog)</a>.</div>
+<div class="tipbox tip">Updated to version <a href="https://github.com/ipfs-cluster/ipfs-cluster/blob/master/CHANGELOG.md">1.1.2 (see the Changelog)</a>.</div>
 
 Welcome to IPFS Cluster documentation. The different sections of the documentation will explain how to set up, start, and operate a Cluster. Operating a production IPFS Cluster can be a daunting task if you are not familiar with concepts around [IPFS](https://ipfs.io) and peer-to-peer networking ([libp2p](https://libp2p.io) in particular). We aim to provide comprehensive documentation and guides, but we are always open to improvements: documentation issues can be submitted to the [ipfs-cluster-website repository](https://github.com/ipfs-cluster/ipfs-cluster-website).
content/documentation/reference/api.md (+1 −1)
@@ -44,7 +44,7 @@ There are considerations to take into account here:
 
 Currently IPFS Cluster supports adding with two DAG formats (`?format=` query parameter):
 
-* By default it uses the `unixfs` format. In this mode, the request body is expected to be a multipart just like the one described in the [`/api/v0/add` documentation](https://docs.ipfs.io/reference/http/api/#api-v0-add). The `/add` endpoint supports the same optional parameters as IPFS does and produces exactly the same DAG as go-ipfs when adding files. In UnixFS, files uploaded in the request are chunked and a DAG is built replicating the desired folder layout. This is done by the cluster peer.
+* By default it uses the `unixfs` format. In this mode, the request body is expected to be a multipart just like the one described in the [`/api/v0/add` documentation](https://docs.ipfs.tech/reference/kubo/rpc/#api-v0-add). The `/add` endpoint supports the same optional parameters as IPFS does and produces exactly the same DAG as go-ipfs when adding files. In UnixFS, files uploaded in the request are chunked and a DAG is built replicating the desired folder layout. This is done by the cluster peer.
 * Alternatively, the `/add` endpoint also accepts a CAR file with `?format=car`. In this case, the CAR file already includes the blocks that need to be added to IPFS, and Cluster does not do any further processing (similarly to `ipfs dag import`). At the moment, the `/add` endpoint will process only a single CAR file, and this file must have only one root (the one that will be pinned). CAR files allow adding arbitrary IPLD DAGs through the Cluster API.
 
 <div class="tipbox warning">Using the <code>/add</code> endpoint with Nginx in front as a reverse proxy may cause problems. Make sure to add <code>?stream-channels=false</code> to every Add request to avoid them.<br /><br />The problems manifest themselves as "connection reset by peer while reading upstream" errors in the logs. They are caused by a read after write on the HTTP request/response cycle: Nginx refuses to let any application that has started sending the response body read further from the request body (<a href="https://trac.nginx.org/nginx/ticket/1293" target="_blank">see bug report</a>). IPFS and IPFS Cluster send object updates while adding files, therefore triggering the situation, which is otherwise legal per the HTTP specs. The issue depends on Nginx internal buffering and may appear very sporadically or not at all, but it exists.</div>
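As a sketch of the two add modes described above (assuming a cluster peer with the REST API listening on the default `127.0.0.1:9094`; `myfile.txt` and `mycar.car` are placeholder names):

```shell
# UnixFS add: multipart upload; the cluster peer chunks the files and builds the DAG.
# stream-channels=false avoids the Nginx read-after-write issue mentioned in the warning above.
curl -X POST -F file=@myfile.txt \
  "http://127.0.0.1:9094/add?format=unixfs&stream-channels=false"

# CAR import: blocks are added as-is (like `ipfs dag import`);
# the CAR file must contain a single root, which is the CID that gets pinned.
curl -X POST -F file=@mycar.car \
  "http://127.0.0.1:9094/add?format=car&stream-channels=false"
```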
content/documentation/reference/configuration.md (+9 −2)
@@ -135,11 +135,18 @@ The `leave_on_shutdown` option allows a peer to remove itself from the *peerset*
 | `memory_limit_bytes`|`0`| Controls the maximum amount of RAM memory that the libp2p host should use. When set to `0`, the amount will be set to 25% of the machine's memory or a minimum of 1GiB. Note that this affects only the libp2p resources and not the overall memory of the cluster node. |
 | `file_descriptors_limit`|`0`| Controls the maximum number of file descriptors to use. When set to `0`, the limit will be set to 50% of the total amount of file descriptors available to the process. |
 |`}`|||
+|`pubsub {`|| A libp2p pubsub configuration object. It allows configuring pubsub internals. Defaults are optimized for the ipfs-cluster pubsub use case, where metrics and CRDT heads are broadcast and it is not generally fatal if a message gets lost. Deviations from the pubsub defaults aim to reduce unnecessary chatter in standard clusters, as pubsub ends up using a lot of bandwidth. |
+| `seen_messages_ttl`|`"30m0s"`| How long before a seen pubsub message can be forgotten. A seen pubsub message is ignored and not rebroadcast to peers. This should be high enough that a pubsub message has time to reach all peers in the cluster, and then some. |
+| `heartbeat_interval`|`"10s"`| Time between heartbeats. A heartbeat triggers mesh maintenance and emits gossip with our entries. Increasing it reduces chatter. Reducing it increases speed. |
+| `d_factor`|`4`| A factor to multiply the default mesh values by (Dlo-5, D-6, Dhigh-12, DLazy-6). It generally controls how many peers we are meshed with, which influences how much it costs us to broadcast a message. Few peers means little bandwidth on this peer, at the expense of other peers having to re-broadcast the message more often to reach full distribution, so more chatter in the end. A higher number means less chatter, but more effort per message. The default of 4 makes D=16. That means we will send every broadcast to 16 peers or so. |
+| `history_gossip`|`2`| How many of our heartbeats should include IHAVE entries for each known message. Increasing it makes chatter consume more bandwidth, but can improve message delivery. |
+| `history_length`|`6`| For how many heartbeats message requests from other peers are honored. This means that 6 heartbeats after receiving a pubsub message, we will no longer send it to anyone that requests it. Increasing it improves message delivery, but also network churn, since slow peers might be requesting all messages and we would provide them. |
+| `flood_publish`|`false`| When enabled, the first pubsub message hop is flooded to all subscribed peers (not just ~D). Improves delivery, but might be overkill for peers that are not very well connected or are more limited. |
+|`}`|||
 |`dial_peer_timeout`|`"3s"`| How long to wait when dialing a cluster peer before giving up. |
 |`state_sync_interval`|`"10m0s"`| Interval between automatic triggers of [`StateSync`](https://godoc.org/github.com/ipfs-cluster/ipfs-cluster#Cluster.StateSync). |
 |`pin_recover_interval`|`"1h0m0s"`| Interval between automatic triggers of [`RecoverAllLocal`](https://godoc.org/github.com/ipfs-cluster/ipfs-cluster#Cluster.RecoverAllLocal). This will automatically re-try pin and unpin operations that failed. |
-|`replication_factor_min`|`-1`| Specifies the default minimum number of peers that should be pinning an item. -1 == all. |
-|`replication_factor_max`|`-1`| Specifies the default maximum number of peers that should be pinning an item. -1 == all. |
+|`replication_factor_min`|`-1`| Specifies the default minimum number of peers that should be pinning an item. -1 == all. ||`replication_factor_max`|`-1`| Specifies the default maximum number of peers that should be pinning an item. -1 == all. |
 |`monitor_ping_interval`|`"15s"`| Interval for sending a `ping` (used to detect downtimes). |
 |`peer_watch_interval`|`"5s"`| Interval for checking the current cluster peerset and detecting whether this peer was removed from the cluster (and shutting down). |
 |`mdns_interval`|`"10s"`| Setting it to `"0"` disables mDNS. Setting it to a larger value enables mDNS but no longer controls anything. |
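Put together, the default values documented in the table above would sit in the `cluster` section of `service.json` roughly like this (a sketch built only from the table's defaults; field ordering is illustrative):

```json
"cluster": {
  "pubsub": {
    "seen_messages_ttl": "30m0s",
    "heartbeat_interval": "10s",
    "d_factor": 4,
    "history_gossip": 2,
    "history_length": 6,
    "flood_publish": false
  },
  "dial_peer_timeout": "3s",
  "state_sync_interval": "10m0s",
  "pin_recover_interval": "1h0m0s",
  "replication_factor_min": -1,
  "replication_factor_max": -1,
  "monitor_ping_interval": "15s",
  "peer_watch_interval": "5s",
  "mdns_interval": "10s"
}
```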