wasmparser: 15-30% performance regressions from v228 -> v229 #2180
Comments
Would you be able to help profile and track down where this is coming from? Perhaps via bisection?
Yes, I can do that.
@alexcrichton The 30% parse and validation regression first appears here
The commit right before it does not have the regression:
I have also run benchmarks on the newly released version. cc @keithw
I can replicate this in-repo with
I did some measurements on parsing (without validating) a single huge module on an unloaded AMD Ryzen 7 PRO 4750U, comparing two release builds built with rustc 1.86.0 (05f9846f8 2025-03-31). For the "before", I tested the commit just before that PR (90c156f), backporting the "full parse" logic from https://github.com/bytecodealliance/wasm-tools/blob/main/src/lib.rs#L299 (with the exception of the `ops.finish()?;` line, since the "before" parser doesn't have a `finish` function to check anything at the end of a function). For the "after", I used that PR (0354dde). In both cases I tested, I was surprised to find that in this "single huge module" test, I didn't see a regression:
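For concreteness, here is a minimal sketch of what such a "full parse" loop looks like against `wasmparser`'s public API; this is my approximation of the linked logic, not the exact harness behind the numbers here, and the function name is purely illustrative:

```rust
use wasmparser::{Parser, Payload};

/// Full parse without validation: walk every payload in the module and
/// decode every operator of every function body.
fn parse_only(wasm: &[u8]) -> wasmparser::Result<()> {
    for payload in Parser::new(0).parse_all(wasm) {
        if let Payload::CodeSectionEntry(body) = payload? {
            let mut ops = body.get_operators_reader()?;
            while !ops.eof() {
                ops.read()?;
            }
            // The "after" parser also exposes `ops.finish()?` to check the
            // end-of-function state; it's omitted here so the same loop
            // builds against the "before" commit as well.
        }
    }
    Ok(())
}
```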
For parsing+validation, again with this single huge module, I see a roughly 3% slowdown. On the other hand, `cargo bench` on the spec and local testsuites (which have thousands of mostly tiny modules) definitely documents a regression. So I'm wondering... are you seeing a big slowdown on parsing or parse+validating individual (large) modules, and if so, are you able to share an example module? Or should we be looking at a slowdown related to parser creation/startup time on lots of small modules, and that's getting washed out in the noise on my "single huge module" test?
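The parse+validate measurement can be approximated with `Validator::validate_all`; this sketch assumes a fresh `Validator` is constructed per module, which is exactly the kind of per-module startup cost that would matter on thousands of tiny modules but wash out on one huge one:

```rust
use wasmparser::Validator;

/// Parse and validate a single module in one pass. The Validator is
/// built fresh for each module, so any fixed setup cost is paid per
/// module: negligible for one huge input, potentially significant
/// across thousands of tiny ones.
fn parse_and_validate(wasm: &[u8]) -> wasmparser::Result<()> {
    let mut validator = Validator::new();
    validator.validate_all(wasm)?;
    Ok(())
}
```

Comparing this against the parse-only loop over the same corpus should help separate per-module setup cost from per-operator decode cost.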
I just finished updating Wasmi to the new `wasm-tools` v229 coming from v228. `wasmparser` parsing and validation made up a larger chunk of the work, so that's probably the real regression in `wasmparser`.

Wasmi PR: wasmi-labs/wasmi#1501
Benchmarking before and after revealed the following performance regressions:
