
Eliminate need for explicit configuration of request/response buffer sizes #48

Closed · cemerick opened this issue Jan 31, 2023 · 4 comments

@cemerick

Addressing aantron/dream#214 required modifying dream to accept an Httpaf.Config.t so that a much higher read_buffer_size could be specified to avoid parsing problems when consuming/receiving larger websocket messages.
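For concreteness, here is a minimal sketch of that workaround, assuming httpaf's public `Config` API (the buffer size shown is purely illustrative, not a recommendation):

```ocaml
(* Over-provision httpaf's read buffer so that larger websocket frames can
   be parsed without error. 0x10000 is an arbitrary illustrative value. *)
let config : Httpaf.Config.t =
  { Httpaf.Config.default with read_buffer_size = 0x10000 }
```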

While provisioning larger buffers does guard against failures involving messages smaller than that buffer size, it's clearly (a) a waste of resources for the vast majority of (usually small) ws messages, and (b) does nothing to protect against failures with the rare message that does exceed even the over-provisioned buffer size.

In private communications with @anmonteiro discussing this problem in general, he indicated:

> that does indicate a bug in the parser for websocketaf IMO
> we should probably commit the parser more often

Opening an issue here now as a bookmark for future improvement. Looking at a couple of other websocket implementations (e.g. https://github.com/websockets/ws), none seems to have explicit buffer configuration options; instead they allocate dynamically to accommodate arbitrary message sizes. This is presumably the direction websocketaf should move in.
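For illustration only, a sketch of what dynamic growth could look like on top of Bigstringaf (the buffer type angstrom and httpaf already use); `ensure_capacity` is a hypothetical helper, not part of websocketaf:

```ocaml
(* Hypothetical helper: grow the buffer (at least doubling) instead of
   failing when an incoming frame exceeds the current capacity. *)
let ensure_capacity (buf : Bigstringaf.t) ~len =
  let cap = Bigstringaf.length buf in
  if len <= cap then buf
  else begin
    let buf' = Bigstringaf.create (max len (cap * 2)) in
    (* Preserve any bytes already buffered. *)
    Bigstringaf.blit buf ~src_off:0 buf' ~dst_off:0 ~len:cap;
    buf'
  end
```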

@jdf-id-au

Netty has maxContentLength and maxFrameSize and some corresponding exceptions (the first of which was possibly used misleadingly?)

@cemerick (Author)

An update: a recent commit (44c5429) was, AFAIK, intended to address this, but does not. The same behaviour noted in aantron/dream#214 remains as of aantron/dream@a39c938.

@anmonteiro (Owner)

Buffer sizes are quite unrelated to garbled data. The design of this library follows http/af and h2, where we perform unbuffered parsing with angstrom.

I believe I've addressed the source of the garbled data, and this issue can be solved separately by committing the parser more often (thereby freeing space in the buffer).
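As a rough illustration of what "committing the parser more often" means in Angstrom terms (the parser shape below is hypothetical, not websocketaf's actual frame parser):

```ocaml
open Angstrom

(* After a payload chunk is fully consumed, [commit] promises Angstrom that
   no backtracking before this point will occur, so the buffered input can
   be discarded and its space reused. A fixed-size read buffer then suffices
   even for frames larger than the buffer, consumed chunk by chunk. *)
let payload_chunk len =
  take len >>= fun chunk ->
  commit >>| fun () -> chunk
```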

@anmonteiro (Owner)

We fixed the garbled data issue in #68. I don't plan on removing the need for low-level configuration of buffer sizes, but will reopen this issue if I change my mind.
