download fails with FDWatcher: bad file descriptor (EBADF) #197
Comments
I don't think that AWS#552 would explain this if the failures are happening on different pods, since each pod should have its own downloader object, so they aren't sharing any connection pools between them. But maybe it could be something like: AWS flips out at some point, sends back garbage to each pod independently at the same time, that corrupts the downloader object in each pod's AWS module somehow, and then because we don't retry with fresh downloaders (which is what AWS#552 is about) they each fail on the retried attempts as well, so they all fail at the same time. (?)
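A minimal sketch of what "retry with fresh downloaders" could look like (illustrative only, not the actual AWS#552 change; `download_with_fresh_downloader` is a made-up helper):

```julia
using Downloads

# Retry with a *fresh* Downloader per attempt, so a connection pool that was
# corrupted on a failed attempt is never reused on the retry.
function download_with_fresh_downloader(url; attempts = 3)
    for attempt in 1:attempts
        try
            # A new Downloader means a new libcurl multi handle and connection pool.
            return Downloads.download(url; downloader = Downloads.Downloader())
        catch
            attempt == attempts && rethrow()
            sleep(2.0 ^ attempt)  # crude backoff between attempts
        end
    end
end
```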
Yup, the latter was my hunch.
@vtjnash: is there something we could do here? I'm guessing it's the …
Isn't this a duplicate of #186?
Possibly. @kleinschmidt, @ericphanson, can you try using a newer version of …?
Will do, although without a clear reproducer I'm not sure how far we'll get! I take it the fix for #186 didn't get backported to 1.7.3?
Not yet, but it could be.
Well, actually at this point it probably doesn't make sense because there's unlikely to be a 1.7.4 release, so I think the fix will be to go to 1.8 for this.
I get the same error, but I get it on my local computer, and I get it pretty much every time I install or update something. This happens both on 1.7.3 and 1.8.0-rc1.

EDIT: I think I might not have restarted my computer in a while, and now after restarting I haven't seen it again so far.
I'm still getting the same error with Julia 1.8.1...
Note that even if #187 should have fixed this issue, it didn't make it into Julia v1.8, because the commit of …
I am seeing this in CI logs on 1.8.0 as well:

```
Unhandled Task ERROR: IOError: FDWatcher: bad file descriptor (EBADF)
Stacktrace:
[1] try_yieldto(undo::typeof(Base.ensure_rescheduled))
@ Base ./task.jl:871
[2] wait()
@ Base ./task.jl:931
[3] wait(c::Base.GenericCondition{Base.Threads.SpinLock})
@ Base ./condition.jl:124
[4] _wait(fdw::FileWatching._FDWatcher, mask::FileWatching.FDEvent)
@ FileWatching /opt/hostedtoolcache/julia/1.8.0/x64/share/julia/stdlib/v1.8/FileWatching/src/FileWatching.jl:535
[5] wait(fdw::FileWatching.FDWatcher)
@ FileWatching /opt/hostedtoolcache/julia/1.8.0/x64/share/julia/stdlib/v1.8/FileWatching/src/FileWatching.jl:563
[6] macro expansion
@ /opt/hostedtoolcache/julia/1.8.0/x64/share/julia/stdlib/v1.8/Downloads/src/Curl/Multi.jl:166 [inlined]
[7] (::Downloads.Curl.var"#40#46"{Int32, FileWatching.FDWatcher, Downloads.Curl.Multi})()
@ Downloads.Curl ./task.jl:484
```

My tests didn't actually fail though; maybe it happened somewhere where I have retries. I see Julia v1.8.2 is on the same commit as 1.8.1, so it sounds like that won't be a fix either. Maybe we can backport it for 1.8.3?
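(My guess at why the tests still pass even though this shows up in the logs: an error on a task nothing ever waits on is only logged as "Unhandled Task ERROR", never rethrown into the calling code. A minimal illustration:)

```julia
# A failing task that nothing wait()s on or fetch()es only *logs*
# "Unhandled Task ERROR: ..."; it doesn't propagate, so the surrounding
# code (e.g. a test suite) can still succeed.
t = @async error("boom in a background task")
sleep(0.1)  # give the task a chance to run and die
println("main code keeps going")  # still reached despite the task error
```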
I don't think we've added any features to Downloads since then, so we can probably just bump Downloads on 1.8 to latest.
On Julia 1.7.3, I've found that downloads sometimes fail with the `IOError: FDWatcher: bad file descriptor (EBADF)` error shown in the stack trace above.
The line this points to in the Downloads.jl source is `src/Curl/Multi.jl`, line 166 (at commit `b0f23d0`).
"Sometimes" here means "after millions of S3 requests in the span of multiple days of runtime with
retry
around the actual request-making code". (retry
using the default settings, so with the defaultExponentialBackOff
schedule with a single retry). When this error occurred, it occurred multiple times, on multiple different pods (which by design are accessing different s3 URIs but still in the same region), so I'm wondering if it is somehow related to the "connection pool corruption" issue w/ AWS.jl. Another possibly relevant bit of context is that the code that actually is making the requests is actually doing anasyncmap
over dozens (<100) of small s3 GET requests.I'm afraid this happened in a long-running job that I can't interact with directly and don't have a reprex that I can share, but wanted to open an issue in case someone else has seen this or has advice on how to debug or what other information would be useful!
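A hedged reconstruction of the shape of that code (the URIs and names here are made-up stand-ins, not the real job):

```julia
using Downloads

# Hypothetical list of small S3 objects; the real job uses different S3 URIs
# per pod, all in the same region.
uris = ["https://example-bucket.s3.amazonaws.com/object-$i" for i in 1:50]

# Base.retry with default settings wraps each request in the default
# ExponentialBackOff schedule.
get_object = retry(uri -> Downloads.download(uri, IOBuffer()))

# Dozens of small GETs issued concurrently.
buffers = asyncmap(get_object, uris)
```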