Retention timer not working #50
Comments
I am seeing the same problem. I have a fairly big database of around 3 GB that holds non-crucial data. As such, I have set the retention to 2 days. However, the backup folder for the database has now grown to 21 GB. When looking into the folders I can indeed see that some old versions are not being deleted.
I have almost worked out what is going on. The Docker container logs show that the deletion is being attempted, but there is a problem with the file path:
{"time":"2024-10-17T15:10:00.014281114+03:00","level":"ERROR","msg":"error soft deleting expired executions","error":"failed to delete file /backups/backups/billing////
It remains to understand why this happened.
UPD: I deleted this file myself; it cannot be found because it is simply not there anymore.
I also have a backup in the database whose file does not exist anymore (because I changed the file structure). When the cron job runs, it seems to fail on that one file and stop, so backups from other databases are not cleaned up either. I also cannot delete the backup manually, since I get an error that the file does not exist anymore. There seems to be no way to remove the database entry. Why not simply delete the database entry when a delete is attempted but the file no longer exists?
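A minimal sketch of the behavior suggested above, in Go and using a hypothetical deleteBackupFile helper (not the project's actual code): if the file is already gone, the cleanup treats the deletion as successful so the database entry can still be removed.

```go
package main

import (
	"errors"
	"fmt"
	"os"
)

// deleteBackupFile removes the backup file at path, but treats an
// already-missing file as a successful deletion so the caller can still
// remove the corresponding database entry.
func deleteBackupFile(path string) error {
	if err := os.Remove(path); err != nil && !errors.Is(err, os.ErrNotExist) {
		return fmt.Errorf("failed to delete file %s: %w", path, err)
	}
	return nil // file removed, or it was never there
}

func main() {
	// Hypothetical path, for illustration only.
	if err := deleteBackupFile("/backups/example/backup_name.zip"); err != nil {
		fmt.Println("cleanup error:", err)
		return
	}
	fmt.Println("file gone; safe to delete the database entry")
}
```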
Try creating a text file at the path specified in the error, for example backup_name.zip. In theory this is enough for the system to see the file and delete it; naturally, the name must match the name of the deleted file. Unfortunately, the situation is more complicated for me: I made a mistake during installation, and my first phantom backup is stored on a path with characters that need escaping. Such characters should be escaped with the \ character, and then everything works correctly. I hope the developer notices this error and can tell us what to do.
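If it helps, a minimal sketch of that placeholder workaround in Go, using a made-up path (replace it with the exact path from your own error log):

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// createPlaceholder writes an empty file at the path reported in the error
// log, so the retention job finds something it can actually delete.
func createPlaceholder(path string) error {
	if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
		return err
	}
	f, err := os.Create(path) // creates an empty file with the expected name
	if err != nil {
		return err
	}
	return f.Close()
}

func main() {
	// Hypothetical path, for illustration only.
	if err := createPlaceholder("/backups/billing/backup_name.zip"); err != nil {
		fmt.Println("could not create placeholder:", err)
	}
}
```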
I have solved this problem, and here is the solution.
It looks like this for me: PROFIT! There is no need to restart anything; wait 10 minutes and open the folder, and the old versions have been deleted.
But retention is still not working for me. I have dumps that are 3 months old under a 7-day rule.
I am also still seeing delete errors, despite having gone through and manually deleted the files that were giving errors recently:
I also saw a permission issue when deleting the files, but I haven't seen it recently, so I'm not quite sure what is going on.
Let me take a look.
@Stitch10925 Can you please verify in your file system whether the files in the logs exist?
@eduardolat |
Maybe to give a bit more context, at least in my case: I use an NFS volume mounted from my NAS for the directory where the backups are stored. Writing always seems to work, but removing seems to be an issue, even though manually deleting the files has never been a problem. Hmm, I just realized that maybe something is changing the permissions of the files and causing issues with the deletion. I will have to check that this weekend 🤔
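In case it is useful for that check, a small sketch (assuming Go on Linux and a /backups mount point inside the container, both assumptions) that walks the backup directory and prints the mode, UID and GID of each file, so unexpected permission or ownership changes on the NFS share become visible:

```go
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"syscall"
)

func main() {
	root := "/backups" // assumed mount point inside the container
	err := filepath.WalkDir(root, func(path string, d fs.DirEntry, err error) error {
		if err != nil {
			return err
		}
		info, err := d.Info()
		if err != nil {
			return err
		}
		// Linux-specific: read the underlying stat structure for uid/gid.
		if st, ok := info.Sys().(*syscall.Stat_t); ok {
			fmt.Printf("%-60s mode=%s uid=%d gid=%d\n", path, info.Mode(), st.Uid, st.Gid)
		}
		return nil
	})
	if err != nil {
		fmt.Println("walk error:", err)
	}
}
```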
Hi, I'm having some problems with auto-deleting old archives.
During the setup process I experimented with the auto-delete retention value, but it seems that this does not affect anything.
At the moment, I have the following parameters set:
cloud backup - 30 days
local backup - 3 days
Despite this, local backups have been kept since 19.09, i.e. for 19 days now.
I periodically delete them myself, but this is not the experience I would like to have.
I would be glad of any advice. Thanks.
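For reference, a minimal sketch in Go (not the project's actual implementation) of how a day-based retention rule would normally classify those backups; a local backup from 19 days ago should be expired under a 3-day rule:

```go
package main

import (
	"fmt"
	"time"
)

// isExpired reports whether a backup created at createdAt is older than the
// configured retention period in days, relative to now.
func isExpired(createdAt time.Time, retentionDays int, now time.Time) bool {
	cutoff := now.AddDate(0, 0, -retentionDays)
	return createdAt.Before(cutoff)
}

func main() {
	now := time.Now()
	created := now.AddDate(0, 0, -19) // a local backup made 19 days ago

	fmt.Println("expired under the 3-day local rule: ", isExpired(created, 3, now))  // true
	fmt.Println("expired under the 30-day cloud rule:", isExpired(created, 30, now)) // false
}
```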