Flush improvements #185
Conversation
- Purpose is provisional flushing: clone, flush to measure output, then drop and revert to the original instance before the flush
- Make the code not output a block if there is no data to output
- Add new flush modes that don't add a sync sequence if the stream is already on a byte boundary
- Add a "no-sync" flush mode that flushes without a sync
- Test correct behaviour of all flush modes
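The second bullet can be sketched as a small decision function. This is an illustrative mock, not the crate's actual API: the names `sync_bytes_needed`, `pending_bits`, and `allow_skip_on_boundary` are hypothetical, and the constant shows only the well-known `00 00 FF FF` portion of the deflate sync marker.

```rust
/// The recognisable tail of a deflate sync flush: the LEN/NLEN pair
/// (0x0000 / 0xFFFF) of an empty stored block.
const SYNC_SEQUENCE: [u8; 4] = [0x00, 0x00, 0xFF, 0xFF];

/// Hypothetical sketch: bytes a sync-style flush would append, given how
/// many bits of the current output byte are already occupied. With the
/// proposed flush modes, a stream already on a byte boundary can skip
/// the sync sequence entirely.
fn sync_bytes_needed(pending_bits: u32, allow_skip_on_boundary: bool) -> Vec<u8> {
    if pending_bits == 0 && allow_skip_on_boundary {
        // Already byte-aligned: nothing needs to be emitted.
        Vec::new()
    } else {
        SYNC_SEQUENCE.to_vec()
    }
}
```

The point is only the control flow: the new modes make the sync marker conditional on alignment rather than unconditional.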
This looks fine, though adding new variants to TDEFLFlush is technically an API break since the enum isn't tagged `#[non_exhaustive]`. Given that, I think I want to try to get a new release out with the current pending changes before merging this, and maybe also look into whether there are any other minor API changes worth making while we're at it, since we have to bump the version anyhow.
Actually, scrap that: it would probably be better to pair the minor version bump (if needed) with the current unreleased changes, since that would mean any flate2 update would pull in the new version and thus require users to update.
Thanks. I'm happy to wait and/or update the PR as necessary to fit your release schedule.
Is it possible to re-work the test to be a bit simpler? It takes waaaay too long to run right now.
I'll have a look at making the test run quicker tomorrow.
Regarding the test slowness: it is a bad interaction with the other recent change, "simplify stored compression". Either I'm using the API wrong, or that change introduced a regression. In detail: the compression that the flush test does appears to be storing rather than compressing. The partial output length is the input length + 6, even though the data is highly compressible, so this shouldn't happen. Previously the test completed in 59 iterations; now I'm way past 80K iterations and it is still only doing a "store". If I revert the stored-compression patch, I'm back to 59 iterations. I will change the flush test to bail out and fail earlier in this case.
Hm, must be me messing up then and the test not catching it, will have to look into it |
I needed much finer control over the flush behaviour because I am packet-streaming at low bandwidth and every byte counts. Typically I need to scan ahead and check how much compressed data is available to send without committing to any flush or sync at that point (so I need clone), and then, once downstream packet-handling code has requested data, avoid committing to a sync until I'm sure I need it.
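The clone-then-measure pattern described here can be sketched with a stand-in type. `MockCompressor` and `provisional_flush_size` are purely illustrative (they are not the crate's types); the point is that flushing a clone reveals the output size without committing the original stream to a flush or sync.

```rust
/// Hypothetical stand-in for a cloneable compressor state.
#[derive(Clone)]
struct MockCompressor {
    buffered: Vec<u8>, // uncompressed input not yet emitted
}

impl MockCompressor {
    fn new() -> Self {
        MockCompressor { buffered: Vec::new() }
    }

    fn write(&mut self, data: &[u8]) {
        self.buffered.extend_from_slice(data);
    }

    /// Flush everything buffered; returns how many bytes were output.
    /// (Pretends 2:1 compression plus a 4-byte sync marker.)
    fn flush(&mut self) -> usize {
        let out = self.buffered.len() / 2 + 4;
        self.buffered.clear();
        out
    }
}

/// Measure what a flush *would* emit, without mutating `c`:
/// flush a clone, read the size, and let the clone drop.
fn provisional_flush_size(c: &MockCompressor) -> usize {
    let mut probe = c.clone();
    probe.flush()
}
```

Usage: call `provisional_flush_size` while scanning ahead; only flush the real instance once the downstream packet code has actually asked for the data.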
This change may result in smaller compressed output for some existing
code using the existing API (since an empty block generated by a flush
call with no data buffered will be skipped, unless it is required to
carry the "finish" flag).
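The skip rule above can be reduced to one branch. This is a hypothetical sketch, not the crate's code: `flush_output_len` and the 5-byte block overhead are made-up illustrative values.

```rust
/// Hypothetical sketch of the "skip empty block" rule: a flush with
/// nothing buffered emits no output at all, unless the flush carries
/// the stream-terminating "finish" flag (which must still be written).
fn flush_output_len(buffered: usize, is_finish: bool) -> usize {
    if buffered == 0 && !is_finish {
        0 // nothing to say: emit no block
    } else {
        buffered + 5 // payload plus an assumed 5-byte block overhead
    }
}
```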
I've tested on Rust stable and 1.60.