
Enforce worker limits in Compression Streams API #2502

Merged
fhanau merged 1 commit into main from felix/cpu-limit-compression on Aug 11, 2024

Conversation

fhanau
Contributor

@fhanau fhanau commented Aug 9, 2024

No description provided.

@fhanau fhanau requested a review from jasnell August 9, 2024 01:28
@fhanau fhanau requested review from a team as code owners August 9, 2024 01:28
@fhanau fhanau requested a review from harrishancock August 9, 2024 01:28
@fhanau
Contributor Author

fhanau commented Aug 9, 2024

This is modeled after the limiting approach recently added in dh.c++ and should avoid CPU utilization issues when compressing a large amount of data at once.
Some things I'm not sure about:

  1. This assumes that querying the limiter is somewhat expensive, so we skip the query whenever the output fits into the output buffer. That way, compressing small amounts of data or compressing data in small chunks, which should already get limited properly, does not query the limiter on every write; see the sketch below. Is there actually such overhead, for example from having to acquire locks?
  2. Is this the right spot for the check so that exception handling applies correctly?
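
For illustration, a minimal standalone sketch of the gating idea in item 1. This is not the workerd implementation; LimitEnforcer, compressSome, and the buffer size below are hypothetical stand-ins for the real limiter and the zlib loop.

    // Sketch: consult the limit enforcer only when a single write spills past
    // one fixed-size output buffer, so small writes never pay for the query.
    #include <cstddef>
    #include <optional>
    #include <stdexcept>
    #include <vector>

    enum class EventOutcome { EXCEEDED_CPU, EXCEEDED_MEMORY };

    struct LimitEnforcer {
      // Hypothetical stand-in: non-empty once the worker has exceeded a limit.
      std::optional<EventOutcome> getLimitsExceeded() const { return std::nullopt; }
    };

    constexpr size_t kOutputBufferSize = 16384;

    // Hypothetical one-round "compressor": consumes up to kOutputBufferSize input
    // bytes, writes them to `out`, and returns how many bytes it produced.
    size_t compressSome(const unsigned char*& in, size_t& inLen, unsigned char* out) {
      size_t n = inLen < kOutputBufferSize ? inLen : kOutputBufferSize;
      for (size_t i = 0; i < n; i++) out[i] = in[i];
      in += n;
      inLen -= n;
      return n;
    }

    void writeChunk(const unsigned char* in, size_t inLen,
                    std::vector<unsigned char>& output,
                    const LimitEnforcer& limiter) {
      unsigned char buf[kOutputBufferSize];
      bool firstRound = true;
      while (inLen > 0) {
        size_t produced = compressSome(in, inLen, buf);
        output.insert(output.end(), buf, buf + produced);

        // Writes that fit in one round never reach the (possibly lock-protected)
        // limiter query; only multi-round writes check whether limits were hit.
        if (!firstRound) {
          if (auto outcome = limiter.getLimitsExceeded()) {
            throw std::runtime_error(*outcome == EventOutcome::EXCEEDED_CPU
                ? "Compression Stream write failed: Exceeded CPU limit"
                : "Compression Stream write failed: Exceeded memory limit");
          }
        }
        firstRound = false;
      }
    }

    int main() {
      std::vector<unsigned char> in(100000, 'a');  // large enough to need several rounds
      std::vector<unsigned char> out;
      writeChunk(in.data(), in.size(), out, LimitEnforcer{});
      return out.size() == in.size() ? 0 : 1;
    }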

Member

@anonrig anonrig left a comment

Is this easily testable? I feel this should have a test that validates the behavior.

@jasnell
Member

jasnell commented Aug 9, 2024

Testing this in workerd is likely difficult since we do not actually implement the CPU limiting functionality (it's all a no-op in workerd). We'll likely need to add a test to the internal repo separately.

@jasnell
Member

jasnell commented Aug 9, 2024

Labeling as "needs-internal-pr" so that we can get a test added internally to verify this.

KJ_IF_SOME(outcome, IoContext::current().getLimitEnforcer().getLimitsExceeded()) {
  if (outcome == EventOutcome::EXCEEDED_CPU) {
    JSG_FAIL_REQUIRE(Error, "Compression Stream write failed: Exceeded CPU limit");
  } else if (outcome == EventOutcome::EXCEEDED_MEMORY) {
Member

So one key bit here is that I don't believe we are yet reporting the size of the buffer here to v8, so v8 itself won't really have any insight into the memory consumed by the decompression itself -- that is, unless these chunks are immediately being moved into a v8::ArrayBuffer. So I'm not sure if the memory check here will be effective? Have you been able to verify this appropriately accounts for the memory used during decompression?
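
For context, a minimal sketch of the two ways a native buffer can become visible to V8's memory accounting, using the plain V8 embedder API rather than workerd's jsg wrappers; the function names here are hypothetical.

    #include <cstddef>
    #include <cstdlib>
    #include <memory>
    #include <v8.h>

    // Option 1: keep the buffer outside V8 but report its size to the isolate, so
    // heap heuristics (and anything layered on them) can see the external memory.
    void reportExternalBuffer(v8::Isolate* isolate, size_t bytes) {
      isolate->AdjustAmountOfExternalAllocatedMemory(static_cast<int64_t>(bytes));
      // ...and subtract the same amount again when the buffer is freed.
    }

    // Option 2: hand the chunk to a v8::ArrayBuffer backing store; V8 then owns
    // the allocation and accounts for it directly.
    v8::Local<v8::ArrayBuffer> wrapChunk(v8::Isolate* isolate, void* data, size_t len) {
      auto backing = v8::ArrayBuffer::NewBackingStore(
          data, len,
          [](void* p, size_t, void*) { std::free(p); },  // assumes malloc'd data
          nullptr);
      return v8::ArrayBuffer::New(isolate, std::move(backing));
    }

Until something along these lines (or workerd's own accounting) covers the decompression buffers, the EXCEEDED_MEMORY branch above seems unlikely to trigger from this code path alone.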

Contributor Author

This change was designed with CPU time limiting in mind; it does not take any additional steps to accurately measure memory consumption, but once that is implemented properly it should be able to limit that too. I can add a comment indicating this.

@fhanau
Contributor Author

fhanau commented Aug 9, 2024

Downstream PR and test have been added.

@fhanau fhanau force-pushed the felix/cpu-limit-compression branch from eefa115 to 3a91020 on August 10, 2024 18:00
@fhanau fhanau merged commit 992c357 into main Aug 11, 2024
8 of 9 checks passed
@fhanau fhanau deleted the felix/cpu-limit-compression branch August 11, 2024 17:01