I usually don't use more than 1024^2 file descriptors, but have set `ulimit -n unlimited` in the past to overcome some problems on computers with lots of CPUs.

If `ulimit -n unlimited` is set on Mac, the call to `get_max_fd()` returns `INT_MAX`, and the process then fails at the check in `reproc/src/process.posix.c` (line 266 at 3eabeb3).

This appears to happen only on Mac. I'm guessing the problem is that Mac doesn't report the actual limit, while on Linux it actually returns 1024^2.

Original issue: mamba-org/mamba#1758

I don't really know what the best solution to this would be, as I'm not a Mac expert... :/
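To illustrate the failure mode, here is a minimal standalone sketch, not reproc's actual code: `get_max_fd()` and the `1024 * 1024` cap below are stand-ins for whatever reproc does internally. It shows how an `RLIMIT_NOFILE` query turns `unlimited` into `INT_MAX`, which an internal sanity check then rejects:

```c
#include <limits.h>
#include <stdio.h>
#include <sys/resource.h>

/* Hypothetical sketch of the failure mode: query the RLIMIT_NOFILE hard
   limit the way an fd-closing loop would need to. */
static int get_max_fd(void)
{
  struct rlimit limit;
  if (getrlimit(RLIMIT_NOFILE, &limit) < 0) {
    return -1;
  }

  rlim_t hard = limit.rlim_max;

  /* With `ulimit -n unlimited`, macOS reports RLIM_INFINITY here, so the
     value gets clamped to INT_MAX instead of a usable descriptor count. */
  if (hard == RLIM_INFINITY || hard > INT_MAX) {
    return INT_MAX;
  }

  return (int) hard;
}

int main(void)
{
  int max_fd = get_max_fd();
  printf("max fd: %d\n", max_fd);

  /* A guard like this then rejects the spawn outright, which is why the
     failure looks like the child process failing rather than reproc. */
  if (max_fd > 1024 * 1024) {
    fprintf(stderr, "fd limit %d exceeds internal cap\n", max_fd);
    return 1;
  }

  return 0;
}
```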
We ran into this as well. The issue with the current handling is that it is not at all clear that it is reproc failing rather than the called process.

We currently work around it by setting a lower rlimit whenever we detect a limit reproc does not want to use. This is far from ideal. I wonder if the code here could be restructured to remove that arbitrary limit inside reproc, e.g. by using `close_range(2)`.
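To make the two ideas in this comment concrete, here is a minimal sketch, not a patch against reproc: `clamp_nofile()` is the lower-rlimit workaround (the 4096 cap is an arbitrary illustrative value), and `close_from()` shows the `close_range(2)` route. `close_range` exists on Linux 5.9+ (with a glibc 2.34+ wrapper) and FreeBSD, but not on macOS, so a fallback loop is still needed there:

```c
#define _GNU_SOURCE /* for the close_range() wrapper in glibc >= 2.34 */
#include <sys/resource.h>
#include <unistd.h>

/* Workaround from the comment above: clamp the soft limit before spawning
   so a get_max_fd()-style query sees a sane value instead of INT_MAX. */
static int clamp_nofile(rlim_t cap)
{
  struct rlimit limit;
  if (getrlimit(RLIMIT_NOFILE, &limit) < 0) {
    return -1;
  }
  if (limit.rlim_cur == RLIM_INFINITY || limit.rlim_cur > cap) {
    limit.rlim_cur = cap; /* lower only the soft limit; keep the hard limit */
    return setrlimit(RLIMIT_NOFILE, &limit);
  }
  return 0;
}

/* Sketch of the close_range(2) suggestion: close every descriptor from
   `minfd` upward in one call, with a loop fallback for other platforms. */
static int close_from(int minfd, int maxfd)
{
#ifdef __linux__
  (void) maxfd; /* close_range needs no upper bound from getrlimit */
  return close_range((unsigned int) minfd, ~0U, 0);
#else
  for (int fd = minfd; fd <= maxfd; fd++) {
    close(fd); /* ignore EBADF; most of these fds are not open */
  }
  return 0;
#endif
}

int main(void)
{
  /* Cap the soft limit so an fd-limit query sees 4096, not INT_MAX. */
  if (clamp_nofile(4096) < 0) {
    return 1;
  }

  /* What a spawn path could do instead of looping up to get_max_fd():
     close everything above stderr in one shot. */
  return close_from(3, 4096);
}
```

The appeal of `close_range` is that it removes the need to know the maximum descriptor number at all, which is exactly the value macOS misreports.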