Buttersink runs into S3's 5GB copy limit when 'trashing' files on S3. I have a set of old snapshots on S3 that hover around 5GB - all of the ones larger than 5GB fail to 'trash', while the ones below 5GB move into /trash as expected.
I'm using buttersink in a backup script for Prometheus, and I get errors like these when it runs:
InvalidRequest: The specified copy source is larger than the maximum allowable size for a copy source: 5368709120
Trash: 0539...104e //prometheus_backup from None (17.45 GiB)
InvalidRequest: The specified copy source is larger than the maximum allowable size for a copy source: 5368709120
Trash: 60f2...6410 //prometheus_backup from None (5.341 GiB)
InvalidRequest: The specified copy source is larger than the maximum allowable size for a copy source: 5368709120
Trash: 73de...22cb //prometheus_backup from None (5.528 GiB)
InvalidRequest: The specified copy source is larger than the maximum allowable size for a copy source: 5368709120
Trash: 81f7...d2ff //prometheus_backup from None (5.082 GiB)
InvalidRequest: The specified copy source is larger than the maximum allowable size for a copy source: 5368709120
Trash: 4f96...b6a7 //prometheus_backup from None (5.406 GiB)
S3 requires any copy of an object larger than 5GB to be broken into chunks of 5GB or less (i.e. a multipart copy). Note that I have no problem syncing the original large files from the server to S3 - the issue is only with moving a snapshot to the /trash folder. A rough sketch of the chunked approach is below.
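For what it's worth, here's a minimal sketch of a size-aware server-side move using boto3. I don't know buttersink's internals, so treat the bucket/key names and the `trash_object` helper as hypothetical - the point is just that objects over 5GB need `create_multipart_upload` + `upload_part_copy` with a `CopySourceRange`, rather than a single `copy_object`:

```python
import boto3

s3 = boto3.client('s3')

# 5 GiB: S3's maximum size for a single-request copy source
CHUNK = 5 * 1024 ** 3

def trash_object(bucket, key, trash_key):
    """Server-side 'move' of an S3 object that works at any size.

    Objects <= 5 GiB can use a plain CopyObject; larger objects must
    be copied as a multipart upload, one UploadPartCopy per 5 GiB range.
    """
    size = s3.head_object(Bucket=bucket, Key=key)['ContentLength']
    source = {'Bucket': bucket, 'Key': key}

    if size <= CHUNK:
        # Small enough for a single server-side copy
        s3.copy_object(Bucket=bucket, Key=trash_key, CopySource=source)
    else:
        # Multipart copy: each part is a byte range of the source object
        upload = s3.create_multipart_upload(Bucket=bucket, Key=trash_key)
        parts = []
        for part_number, start in enumerate(range(0, size, CHUNK), 1):
            end = min(start + CHUNK, size) - 1  # ranges are inclusive
            resp = s3.upload_part_copy(
                Bucket=bucket, Key=trash_key,
                UploadId=upload['UploadId'], PartNumber=part_number,
                CopySource=source,
                CopySourceRange='bytes=%d-%d' % (start, end))
            parts.append({'PartNumber': part_number,
                          'ETag': resp['CopyPartResult']['ETag']})
        s3.complete_multipart_upload(
            Bucket=bucket, Key=trash_key, UploadId=upload['UploadId'],
            MultipartUpload={'Parts': parts})

    # Only delete the original once the copy has fully succeeded
    s3.delete_object(Bucket=bucket, Key=key)
```

If I remember right, the older boto 2 library exposes the same operation via `MultiPartUpload.copy_part_from_key`, so the equivalent fix should be possible there as well.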
Cheers,
vacri
Version: buttersink 0.6.9, installed via pip