Kubo stopped listing pins after some time #10596
I can confirm we are seeing the same behaviour while testing. Logs from ipfs-cluster:
No errors or other logs observed in Kubo. Also, calling the API endpoint or running …
Hello, per the details you provided, I just think leveldb exploded in some way. How many files are there in the …? Do you have monitoring for disk reads? Is it trying to read/write a lot from disk even when nothing is being added? I would recommend switching leveldb to pebble (so, flatfs + pebble). You will lose the pinset (not the blocks, just the list of things you have pinned), but cluster will add the pins again, in time.
Yes, when switching you will need to edit … Do you use MFS for anything? Also, regarding the ipfs-pins graph, it goes to 0 because of ipfs-cluster/ipfs-cluster#2122. From now on it will stay at the last reported amount when … Even if it won't need to download data, it will need to add 16M items to the datastore, and pinning will make it traverse everything it has.
Thank you, I'm going to try that today on one of our nodes.
No, we don't use MFS; we only add new pins via the cluster API, and when needed we access our data via Kubo's gateway using the CIDs. As far as I understand, this doesn't involve the MFS subsystem.
Good to know, thank you 👍
I have switched one of our nodes to the pebble datastore; right now it is slowly adding the whole pinset back to pebble.
hi
@ehsan6sha still too soon to tell. Our node is still adding the data back into the pebble store; it has only caught up to 50% of the previous pins right now.
Oops, seems like we needed more information for this issue, please comment with more details or this issue will be closed in 7 days.
@Mayeu how does it look now? (assuming it finished re-pinning)
Oops, seems like we needed more information for this issue, please comment with more details or this issue will be closed in 7 days.
Sadly, I can't find anything, because the amount of logs produced meant that the logs from before the spike were purged 🤦🏻‍♀️ I'm updating our log retention config for that machine to keep much (much) more logs, and "hope" to see that drop again. On the bright side, this node is now fully caught up with the cluster state, so we'll see if this issue shows up again. But since the 4th of December, our first node (which is still using LevelDB) hasn't experienced that issue.
As mentioned, you cannot trust that graph much due to the bug I pointed out above... you are better off tracking the "pending" items (queued, pinning, error) and comparing that to the total items in the cluster pinset, rather than using this metric right now.
(and happy new year!)
@Mayeu We will wait for another week, and assume the issue is resolved if we do not hear from you.
@hsanjuan right, I forgot about that. We do gather those as well. Here they are for the past 60 days: pin queued, error, and pinning (graphs). Just a reminder of the timeline:
For comparison, here are the graphs for our first node (still using LevelDB), which didn't experience as many issues (it still encountered that listing problem, but for some reason it stabilized pretty quickly after we realized there was an issue): error and pinning, plus queued (graphs).
Hello @Mayeu. The logs should print info messages of the sort: "Full pinset listing finished". How long does it take to list the pinset when it works? The ipfs-cluster config has a … If it is not that, and you can reproduce this by calling …
Sorry for the delay here, I finally got to check the data that was gathered last week.
We have set this timeout to 20 minutes in our case. Here is one profile that was triggered while a … At the longest, some requests took 14 minutes (after the log entry) to respond. But at that point the partition was full of profiles, so I don't have a profile matching those requests. I'll try again if you want.
There isn't any log at the start of the listing process, right? The earliest I can find is when the process is already around 500k pins in. On our node using pebble it seems to take around 1m:
On our node still using leveldb, this takes around 45s:
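For reference, here is a minimal sketch (not from this thread) of one way to time a full pinset listing against Kubo's HTTP RPC API. It assumes the default API address `127.0.0.1:5001` and the streaming form of `pin/ls`; adjust both if your setup differs.

```go
// pinls-timer: times a full streaming pin/ls against Kubo's RPC API.
package main

import (
	"bufio"
	"fmt"
	"net/http"
	"time"
)

func main() {
	start := time.Now()

	// Kubo's RPC API expects POST. stream=true makes it emit one JSON object
	// per pin as the pinset is enumerated, instead of buffering the full list.
	resp, err := http.Post(
		"http://127.0.0.1:5001/api/v0/pin/ls?type=recursive&stream=true",
		"application/json", nil)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// Count newline-delimited entries as they arrive.
	count := 0
	scanner := bufio.NewScanner(resp.Body)
	scanner.Buffer(make([]byte, 1<<20), 1<<20)
	for scanner.Scan() {
		count++
	}
	if err := scanner.Err(); err != nil {
		panic(err)
	}

	fmt.Printf("listed %d pins in %s\n", count, time.Since(start))
}
```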
I requested access to the profile. If you manage to get another manual …
Ha, sorry, I thought I had shared a public link; I updated the sharing rights so that anyone with the link can access it. I'll start the script again to try to get a profile for a request that hangs for longer.
@Mayeu the … There is a goroutine listing pins at the same time, and there are 5 waiting for a write lock to pin something, which they cannot get while the listing is ongoing. I don't see anything weird at the pebble level; it seems to be reading as requested by pinLs. Do you think the ongoing pin/ls request is returning any data, or just hanging? Any logs showing on cluster?
Oops, seems like we needed more information for this issue, please comment with more details or this issue will be closed in 7 days.
That's surprising: when that log entry triggers, the cluster has already been waiting for its 20-minute timeout. Or maybe I'm missing something?
The few times I witnessed the issue directly, no data was returned for a while; it was just hanging, waiting for a response from Kubo. "A while" spanned between 30s and a few hours with the Pebble store, and more than 24h when we initially had the issue with LevelDB.
At the time this happens, the cluster only shows the "context cancelled" error:
I restarted the script to get a profile when this error shows up in the log. I improved it a bit, because previously multiple profiles could have been running at the same time, and maybe that could have influenced the running time of … Sidenote: both nodes have been upgraded to …
Any news, @Mayeu?
Hello, for the past few days the lock-ups were shorter than during my previous data-gathering attempt. The longest I got was this profile, during which a manual call to …
I think I have something... The … It is holding a read lock on the dspinner. However, there are also some writers waiting to write on the same lock (doPinRecursive). This prevents more read locks from being granted (judging by what it says here https://stackoverflow.com/questions/46786900/the-implementions-of-rwmutex-lock-in-golang, which matches what the stacks show). As a result, everything hangs. I will study how we can address this, given that you already have accelerated DHT providing enabled (I assume you need DHT providing for every block, otherwise it is best to disable it or change the strategy to "roots"). At your pinset size, DHT providing IS a bottleneck.
Also explained here: https://pkg.go.dev/sync#RWMutex
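To make the failure mode concrete, here is a small self-contained demo (illustrative only, not code from Kubo or the pinner) of the `sync.RWMutex` behaviour described above: once a writer is queued behind a long-lived reader, new readers block as well, which matches the hanging `pin/ls` calls.

```go
// rwmutex-starvation: a pending writer blocks new readers on sync.RWMutex.
package main

import (
	"fmt"
	"sync"
	"time"
)

func main() {
	var mu sync.RWMutex
	var wg sync.WaitGroup

	// Long-running reader, like the reprovider iterating the whole pinset.
	wg.Add(1)
	go func() {
		defer wg.Done()
		mu.RLock()
		defer mu.RUnlock()
		time.Sleep(3 * time.Second) // pretend to enumerate millions of pins
	}()
	time.Sleep(50 * time.Millisecond) // let the reader grab its RLock

	// A writer arrives, like a pin-add request, and queues behind the reader.
	wg.Add(1)
	go func() {
		defer wg.Done()
		mu.Lock() // blocks until the long reader releases its RLock
		mu.Unlock()
	}()
	time.Sleep(50 * time.Millisecond) // let the writer start waiting

	// A second reader, like a pin/ls call, now also blocks: RLock is not
	// granted while a writer is waiting (see the sync.RWMutex docs).
	done := make(chan struct{})
	go func() {
		mu.RLock()
		mu.RUnlock()
		close(done)
	}()

	select {
	case <-done:
		fmt.Println("second reader proceeded immediately (unexpected)")
	case <-time.After(time.Second):
		fmt.Println("second reader is blocked behind the pending writer")
	}
	wg.Wait()
}
```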
Thank you for spending your time on this 🙏
Good insight. I don't think we wish to do the reproviding at the block level. For CIDs that point to a file, we definitely don't care about reproviding at the block level; the roots will be enough. But how would "roots" behave for CIDs that point to a folder? If somebody requests only one file in that folder, will our server still answer that request, or will it only answer requests for the whole folder?
It could be that, if you are only announcing the folder's CID and people request a file by the file's CID, no routing to that file is discovered. If people request … If the reprovider does not announce a particular CID because it is set to …
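As a concrete illustration of switching the strategy, the relevant knob is the `Reprovider.Strategy` key in Kubo's JSON config (normally changed by editing the config or via the `ipfs config` CLI); the sketch below rewrites it in place and is only an example, assuming the config path is passed on the command line.

```go
// set-reprovider-strategy: rewrites Reprovider.Strategy in a Kubo config file.
// Assumes the config lives at the given path (usually $IPFS_PATH/config) and
// that the daemon is restarted afterwards for the change to take effect.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: set-reprovider-strategy <path-to-kubo-config>")
		os.Exit(1)
	}
	path := os.Args[1]

	raw, err := os.ReadFile(path)
	if err != nil {
		panic(err)
	}

	// Decode into a generic map so the rest of the config is preserved as-is.
	var cfg map[string]any
	if err := json.Unmarshal(raw, &cfg); err != nil {
		panic(err)
	}

	rep, ok := cfg["Reprovider"].(map[string]any)
	if !ok {
		rep = map[string]any{}
	}
	rep["Strategy"] = "roots" // announce only the roots of recursive pins
	cfg["Reprovider"] = rep

	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	if err := os.WriteFile(path, out, 0600); err != nil {
		panic(err)
	}
	fmt.Println(`Reprovider.Strategy set to "roots"; restart the daemon to apply`)
}
```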
This provider helper allows to buffer all the results from a keyChanFunc in memory. The purpose is to fix issues with slow re-providing. In the case of ipfs/kubo#10596, the slow re-providing process causes starvation of any operations trying to read/write to the pinset. With the new buffered KeyChanFunc, we can read everything we need to announce into memory first, and free any locks as soon as possible. Given the compact size of CIDs (<50 bytes), I don't think any more complex approach (batch reading and lock/unlock handling) is justified. People with pinsets of 20 million items that want to announce everything can well spare an extra GB of RAM. For smaller repos the impact becomes negligible. The test targets precisely this use case and ensures we don't starve operations while interacting with dspinner.
Fixes #10596. The reproviding process can take a long time. Currently, each CID to be provided is obtained by making a query to the pinner and reading one by one as the CIDs get provided. While this query is ongoing, the pinner holds a read mutex on the pinset. If a pin-add request arrives, a goroutine will start waiting for a write mutex on the pinset. From that point, no new read mutexes can be taken until the writer can proceed and finishes. However, no one can proceed because the read mutex is still held while the reproviding is ongoing. The fix is mostly in Boxo, where we add a "buffered" provider which reads the CIDs into memory so that they can be provided at their own pace without making everyone wait. The consequence is that we will need more RAM. The rule of thumb is 1 GiB extra per 20M CIDs to be reprovided.
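For illustration, here is a rough sketch of that buffering idea; the wrapper below is illustrative rather than boxo's actual implementation, and only the shape of the key-channel callback is assumed.

```go
// Package bufferedkeys sketches a "buffered" key source: the wrapped source
// is drained into memory up front (so any lock it holds is released quickly),
// and the reprovider is then fed from the in-memory slice at its own pace.
package bufferedkeys

import (
	"context"

	"github.com/ipfs/go-cid"
)

// KeyChanFunc mirrors the shape of a provider key source: a function that
// returns a channel of CIDs to announce.
type KeyChanFunc func(context.Context) (<-chan cid.Cid, error)

// Buffered wraps a KeyChanFunc so that the inner source is read to completion
// immediately, and callers consume the buffered CIDs afterwards.
func Buffered(inner KeyChanFunc) KeyChanFunc {
	return func(ctx context.Context) (<-chan cid.Cid, error) {
		src, err := inner(ctx)
		if err != nil {
			return nil, err
		}

		// Drain everything into memory. At <50 bytes per CID, ~20M pins cost
		// on the order of 1 GiB of RAM, as noted above.
		var all []cid.Cid
		for c := range src {
			all = append(all, c)
		}

		out := make(chan cid.Cid)
		go func() {
			defer close(out)
			for _, c := range all {
				select {
				case out <- c:
				case <-ctx.Done():
					return
				}
			}
		}()
		return out, nil
	}
}
```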
There's a fix at #10746, and it would be good if someone could confirm improvements. I am testing it myself as well on our clusters.
Fix now on master
@hsanjuan Hello, sorry I was on vacation last week and I'm only checking this issue right now. I'll try the master branch on our second server and report back any improvement in the number of reported errors 👍
Fixes #10596 (same description as the pull request above; cherry picked from commit ba22102).
Checklist
Installation method
built from source
Version
Config
Description
Hello,
We started to experience an issue with Kubo in our 2-node cluster where Kubo doesn't list pins anymore.
We have 2 nodes that both pin the whole pinset we keep track of, which is around 16.39 million pins right now.
These last weeks (while we were still using 0.29), Kubo stopped responding to the `/pin/ls` queries sent by the cluster; those requests were hanging "indefinitely" (as in, when using curl I stopped the command after ~16h without a response). Our `ipfs-cluster` process returns the following in the log when this happens:

This started out of the blue; there was no change on the server. The issue remained after upgrading to 0.32.1.
At that time, we had the bloom filter activated; deactivating it did improve the situation for a while (maybe 24h), and then the issue started to show up again. In retrospect, I think it may not be related to the bloom filter at all.
These are the typical metrics reported by `ipfs-cluster` which show when Kubo stops responding to `/pin/ls`:

The graph on top is the number of pins the cluster is keeping track of, and the one on the bottom is the number of pins reported by Kubo. When restarting Kubo, it generally jumps to the expected amount, and after a while it drops to 0. At that point any attempt to list pins from Kubo fails.
We only have the metrics reported by ipfs-cluster because of this Kubo bug.
The server's CPU, RAM, and disk utilization are fairly low when this issue shows up, so it doesn't look like a performance issue. The only metric that started to go out of bounds is the number of open file descriptors, which grew and reached the 128k limit we set. I bumped it to 1.28 million, but it still reaches it (with or without the bloom filter):

The FD limit is set both at the systemd unit level and via `IPFS_FD_MAX`.

Restarting Kubo makes it work again most of the time, but sometimes it doesn't change anything and it instantly starts to fail.
Here is some profiling data from one of our nodes:
More info about the system:
`logs` and `cache` for ZFS

Kubo also emits a lot of:
But `ipfs swarm resources` doesn't return anything above 5-15%, so I think this error is actually on the remote node's side and not related to our issue, right?

Anything else we could gather to help solve this issue?
Right now I'm out of ideas for getting our cluster back into a working state (besides restarting Kubo every 2h, but that's not a solution since it will prevent us from reproviding the pins to the rest of the network).
Edit with additional info:

Kubo runs with the `--enable-gc` flag, as prescribed by the ipfs-cluster docs.