reading sysfs files - reopen each time vs caching open filehandles #12
This is not a bug, but a general observation:
It looks like most implementations of ev3dev language bindings choose to reopen sysfs files anew on each read, rather than caching an open filehandle (open once, seek to the beginning before each read).
While the reopen approach is much "safer", it carries a serious performance penalty.
Unfortunately my primary experience is with language bindings not in this repo, but I believe the above applies to a lesser or greater degree to all implementations.
In a small Python test, using a cached filehandle was over 3.5x faster, and a Golang test with 3 concurrent readers showed a 10x performance improvement.
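To make the two strategies concrete, here is a minimal Python sketch of both; the attribute path is a hypothetical example and not taken from any particular binding:

```python
SYSFS_ATTR = "/sys/class/tacho-motor/motor0/position"  # hypothetical example path

def read_reopen(path=SYSFS_ATTR):
    """Reopen approach: open, read, and close the file on every access."""
    with open(path) as f:
        return f.read().strip()

class CachedReader:
    """Cached approach: open once, then seek(0) and re-read on each access."""
    def __init__(self, path=SYSFS_ATTR):
        self._f = open(path)

    def read(self):
        self._f.seek(0)  # sysfs values must be re-read from offset 0
        return self._f.read().strip()

    def close(self):
        self._f.close()
```

The seek(0) is what lets the cached handle return fresh values; without it, a second read() on the same handle would return an empty string rather than the current attribute value.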
@fuzzycow Sorry for the late response; I have been trying to juggle a lot of different tasks over the last few weeks, so I am not as fast to respond as I would like to be. At this time, the internal handling of file handles is left up to the implementation, and not specified in the spec, as it can be language-dependent. As you said, the tradeoff is generally being safe and slow(er) vs. possibly unsafe and fast(er). But the way I see it, there is not always a clear benefit to doing it either way.
So in the end, I see your point, but I don't think we would be able to draw the clear-cut conclusion that every binding must maintain an open file handle. Here's my proposal: we put a note in our spec that asks maintainers to be cognisant of the tradeoffs between the two options, and advises that they experiment with both to decide how they implement it. Does that sound good? @fdetro You are more knowledgeable on the subject of performance maximization than I am; jump in if you have anything to add.
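To illustrate the "possibly unsafe" side of caching: a held handle can go stale when a device disconnects. A minimal Python sketch (hypothetical, not from any binding) of a cached reader that falls back to reopening:

```python
class ResilientReader:
    """Cached-handle reader that reopens the file if the cached handle
    goes stale (for example after the device is unplugged and replugged)."""

    def __init__(self, path):
        self._path = path
        self._f = open(path)

    def read(self):
        try:
            self._f.seek(0)
            return self._f.read().strip()
        except OSError:
            # Stale handle: drop it, reopen the path, and retry once.
            try:
                self._f.close()
            except OSError:
                pass
            self._f = open(self._path)  # still raises if the device is really gone
            return self._f.read().strip()
```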
@WasabiFan I agree with your comments regarding CPU vs. resource / memory usage optimization. As one normally does not read sensor / motor values / attributes continuously from different threads, and typically also at a fairly low frequency (some 10 Hz?), the small overhead of re-opening the file every time shouldn't really hurt. On the other hand, with a larger number of sensors and motors connected, caching can leave quite a lot of open file handles, which consume memory and OS resources even if each is used only once. So I would vote for not caching the handles in the standard case. If someone has a use case where read performance really matters, they can easily optimize that case with a custom implementation (one possible shape is sketched below).
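One way such a custom optimization could avoid holding handles open long-term is to scope the cache to a tight read loop. A Python sketch (hypothetical path; not part of any binding's API):

```python
from contextlib import contextmanager

@contextmanager
def cached_attribute(path):
    """Hold one open handle only for the duration of a tight read loop,
    so OS resources are released as soon as the fast path is done."""
    f = open(path)
    try:
        def read():
            f.seek(0)
            return f.read().strip()
        yield read
    finally:
        f.close()

# Usage: cache the handle only inside the hot loop.
# with cached_attribute("/sys/class/tacho-motor/motor0/position") as read_pos:
#     for _ in range(1000):
#         position = int(read_pos())
```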
Thanks for responding! I agree with your comments - it's often best to default to a pragmatic approach, and to advise developers on the potential impact and on ways to optimize further. I would still like to share some thoughts on the issue, for those who may want to do a lot of sysfs file I/O on the EV3. (EDIT: corrected performance numbers for Python.) As for the Golang-specific notes and the details of the performance impact, I will post my test code snippets and benchmark results as soon as I get a bit of time. Lastly, I would like to re-emphasize that I'm not suggesting that existing code be rewritten to suit my particular needs or opinions. This information is provided as food for thought, in the hope that it will assist others implementing heavy sysfs file I/O.
Benchmarks: Golang (reopen file, 5 reader goroutines, 1 consumer goroutine) - ~520
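For readers who want to reproduce a single-threaded version of this comparison on their own hardware, a minimal Python harness might look like the following (not the author's benchmark code; the path is hypothetical):

```python
import timeit

PATH = "/sys/class/tacho-motor/motor0/position"  # hypothetical attribute path

def bench_reopen(n=10_000):
    # Reopen the file for every read.
    for _ in range(n):
        with open(PATH) as f:
            f.read()

def bench_cached(n=10_000):
    # Open once, seek back to the start before each read.
    f = open(PATH)
    try:
        for _ in range(n):
            f.seek(0)
            f.read()
    finally:
        f.close()

if __name__ == "__main__":
    print("reopen:", timeit.timeit(bench_reopen, number=1))
    print("cached:", timeit.timeit(bench_cached, number=1))
```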
Looks like this was addressed in #25.
That PR only addresses this for C++; the other languages will need their own solutions.
The Python binding is based on the C++ one, so that leaves js, lua, and R.