test: improve performance on our slowest tests #4321
Comments
Hi @terriko, this looks like an interesting issue! I'd love to help speed up the tests. I'll start by checking the longest-running ones and see if we can reduce the number of product lookups while keeping the coverage solid. I'll also take a look at the language scanner to see if there are any performance tweaks we can make. Let me know if there are any specific things I should keep in mind. Excited to contribute!
@Gyan-max thanks! I think reducing the product lookups is going to make the biggest difference even if we make other performance tweaks, so probably start there.
@terriko To improve the performance of our test suite, I propose the following solutions:

1. Use `pytest --durations=10 --profile` to identify performance bottlenecks.

These steps should help reduce execution time while maintaining test coverage.
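For reference, a minimal sketch of making the durations report a default for every run. This assumes the project configures pytest via a `[tool.pytest.ini_options]` section in `pyproject.toml` (the exact config file used by this repo is an assumption); note that `--durations` is built into pytest, while `--profile` assumes the separate `pytest-profiling` plugin is installed.

```toml
[tool.pytest.ini_options]
# Print the 10 slowest test durations at the end of every run.
# (--profile would additionally require the pytest-profiling plugin.)
addopts = "--durations=10"
```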
Hi @terriko, I'd like to work on this issue as part of my GSoC preparation. |
In #4319 I'm switching pytest to print our longest-duration tests so we can see about improving the performance of our test suite. On a random local run, here's what I saw
It looks like our language scanner tests are noticeably slower on my machine. If I had to guess, the primary problem is the sheer number of products and vulnerabilities those tests look up, so I would start by reducing the test files to look up a minimal number of products and making sure that the products they do look up have a minimal number of vulnerabilities. Exactly how many products you should keep will depend on what's needed to test different parsing and to conform to however a full lock file with dependencies should look for the language, but if you can get enough test coverage with 1 product that has 1 vulnerability, go for it!
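To illustrate the "1 product, 1 vulnerability" idea, here is a minimal sketch of a pared-down test fixture. Everything here is hypothetical: the lock-file shape is a generic npm `package-lock.json`, and `node-tar` is just a stand-in for whichever single product the real test would pin; the actual test files and products in the repo will differ.

```python
import json
from pathlib import Path

# Hypothetical minimal npm lock file for a parsing test. Keeping a
# single pinned dependency keeps the number of product/CVE lookups
# (the slow part) as small as possible while still exercising parsing.
MINIMAL_LOCK = {
    "name": "fake_test_package",
    "lockfileVersion": 2,
    "dependencies": {
        # One stand-in product; a real fixture would pick a product
        # known to have exactly one vulnerability in the test data.
        "node-tar": {"version": "6.1.0"},
    },
}


def write_minimal_lock(directory: Path) -> Path:
    """Write the minimal lock file into a test directory (e.g. pytest's tmp_path)."""
    lock_path = directory / "package-lock.json"
    lock_path.write_text(json.dumps(MINIMAL_LOCK, indent=2))
    return lock_path
```

A test would then point the language scanner at `lock_path` instead of a large real-world lock file, so coverage of the parsing logic is preserved while lookups shrink to one product.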
It's entirely possible that there are also performance gains to be had in the language scanner code itself, if you want to do a deeper dive there too!
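For that deeper dive, a small stdlib-only helper for profiling a single call can be handy alongside the test-level `--durations` report. This is a generic sketch, not project code; `profile_call` is a name I made up, and you would pass it whichever scanner entry point you are investigating.

```python
import cProfile
import io
import pstats


def profile_call(func, *args, **kwargs):
    """Run func under cProfile and return (result, top-10 cumulative-time report)."""
    profiler = cProfile.Profile()
    result = profiler.runcall(func, *args, **kwargs)
    stream = io.StringIO()
    # Sort by cumulative time so the hot call paths float to the top.
    pstats.Stats(profiler, stream=stream).sort_stats("cumulative").print_stats(10)
    return result, stream.getvalue()
```

Usage would look like `result, report = profile_call(scanner.scan, path)` for some hypothetical scanner entry point, then print or log `report` to see where the time goes.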