When downloading something large, like an 80GB file, it's a massive pain when, 70% of the way through the download, the network stutters and you lose everything.
libcurl supports partial/resumable downloads (see https://curl.se/libcurl/c/CURLOPT_RESUME_FROM_LARGE.html), so it would be nice if Downloads could support writing the partial download to a file/buffer and then continuing, either from a manually specified point or from the current size of the file/buffer.
Ideally, Downloads would also support automatically retrying up to N times.
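For reference, something close to this can already be cobbled together at the user level by asking the server for the remaining bytes with an HTTP Range header (which is roughly what libcurl's resume option does for HTTP). A rough sketch, assuming the server honors range requests; `resume_download` is just an illustrative name, not part of Downloads:

```julia
using Downloads

# Illustrative workaround: append the missing bytes of `url` to `path`.
# Assumes the server supports HTTP Range requests; a real implementation
# should also verify that the response is 206 Partial Content.
function resume_download(url::AbstractString, path::AbstractString)
    offset = isfile(path) ? filesize(path) : 0
    headers = offset > 0 ? ["Range" => "bytes=$(offset)-"] : Pair{String,String}[]
    open(path, "a") do io                 # append to the partial file
        Downloads.download(url, io; headers = headers)
    end
    return path
end
```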
I think retrying would be the way to go. Otherwise you need to introduce a whole API for providing a potentially partially downloaded file to continue from. If the retrying is internal, it can be managed transparently.
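A minimal sketch of what that could look like (the helper name and `retries` count are illustrative, not an existing Downloads API):

```julia
using Downloads

# Illustrative only: retry the whole download up to `retries` times.
# Without resume support, every attempt starts again from byte zero.
function download_with_retries(url, output; retries::Int = 3)
    for attempt in 1:retries
        try
            return Downloads.download(url, output)
        catch err
            attempt == retries && rethrow()
            @warn "Download failed, retrying" url attempt exception = err
        end
    end
end
```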
Retrying is certainly the most "simple" way in many respects. However, I'm beginning to use Julia to manage large data sets (as I type this, I'm downloading a 750GB file, but via curl so I can resume it if something goes wrong), and I'd imagine that a plain "retry" would fall short in cases like this.
Would the following sound viable?
Add an option to dump partial output to a file
Add an option to resume the download from a certain point
I'd think that would be sufficient to enable partially downloaded files; a rough sketch of what it could look like is below.
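Purely to illustrate the two options above (neither keyword exists in Downloads today; the names are hypothetical):

```julia
using Downloads

url = "https://example.com/data.bin"

# Hypothetical keywords, shown only to illustrate the proposal:
#   keep_partial — don't discard the partially written file on failure
#   resume_from  — byte offset to continue from, e.g. the partial file's size
Downloads.download(url, "data.bin";
    keep_partial = true,
    resume_from  = isfile("data.bin") ? filesize("data.bin") : 0,
)
```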