fix: return logs for validator tests #109
Open
taco-paco wants to merge 1 commit into main from fix/return-logs-for-tests-pr
Can we please simplify this by removing the whole `if matches!()` logic and just writing:

And also remove the `msg!` line from `slow_process_instruction()`, so it isn't logged twice.

However, this change will cause 4 tests to fail in this repo, since they ensure that the CU stays within specific limits ... but this innocent-looking change increases the CU by 413 units. 😄

So maybe we can introduce a separate feature flag, like:

… and enable it only when we actually need discriminator logs. 🤔
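For illustration only, a minimal sketch of what such a flag could look like, assuming a hypothetical Cargo feature named `discriminator-logs` (the actual flag name and log text were not settled in this thread):

```rust
// Sketch only: `discriminator-logs` is a hypothetical feature, declared in
// Cargo.toml as `discriminator-logs = []` and off by default.
use solana_program::msg;

#[inline(always)]
pub fn log_discriminator(discriminator: u8) {
    // Compiled in only when the feature is enabled, so the extra ~413 CU of
    // formatting cost is not paid in the default build.
    #[cfg(feature = "discriminator-logs")]
    msg!("Instruction discriminator: {}", discriminator);

    // Avoid an unused-variable warning when the feature is off.
    #[cfg(not(feature = "discriminator-logs"))]
    let _ = discriminator;
}
```

Validator tests would then build the program with `--features discriminator-logs`, while the default (production) build keeps the current CU usage.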
If we need this in production code, maybe we can write a more optimized log utility just to print a preformatted discriminator and pay `114 CU` instead of `413 CU`.
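A rough sketch of that idea, assuming the discriminator is a small index and the log strings can be fixed at compile time (the strings and array size below are placeholders, not this program's actual instruction set):

```rust
use solana_program::log::sol_log;

// Preformatted strings avoid core::fmt entirely; each log is a single
// sol_log syscall over a static &str.
const DISCRIMINATOR_LOGS: [&str; 4] = [
    "Instruction: 0",
    "Instruction: 1",
    "Instruction: 2",
    "Instruction: 3",
];

#[inline(always)]
pub fn log_discriminator_cheap(discriminator: u8) {
    if let Some(line) = DISCRIMINATOR_LOGS.get(discriminator as usize) {
        sol_log(line);
    }
}
```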
This shall be only in test mode, or do we deploy dlp compiled with `unit_test_config`? I would think not. @GabrielePicco

I think if we can, it would be better to avoid adding an extra feature. `unit_test_config` already relates to tests and affects CU usage, for example in `load_program_upgrade_authority`. Since this is needed in validator tests only, I think using `unit_test_config` makes sense here as well, or we can rename it to `test_config` to avoid confusion.
Then we would lose that log in release, while the old dlp had it. Not sure if anyone relies on those logs though, @GabrielePicco.

The reason I made it this way:
I suggested introducing a new flag because otherwise the tests in this repo will continue to fail, as they also ensure the CUs stay within specific limits. Please check why the CI is failing!

Please see my previous comment again for the details.
Performance measurements are part of the testing as well. Testing is not only about correctness; it is also about how quickly something works, which is why I have added this:

You don't call it a test?

To get better and more accurate CU numbers, we can build in release mode with `unit_test_config` (that is also why this flag should have minimal code under it). If not that, then some other flag; either way we introduce a flag.
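For context, a CU-ceiling assertion with `solana-program-test` typically looks something like the sketch below; the program entrypoint, the `build_delegate_ix` helper, and the 3_000 CU ceiling are placeholders, not this repo's actual test code:

```rust
// Generic sketch of a CU-ceiling test, not the test referenced above.
use solana_program_test::{processor, ProgramTest};
use solana_sdk::{signature::Signer, transaction::Transaction};

#[tokio::test]
async fn instruction_stays_within_cu_budget() {
    let program_test =
        ProgramTest::new("dlp", dlp::id(), processor!(dlp::process_instruction));
    let (mut banks, payer, recent_blockhash) = program_test.start().await;

    // `build_delegate_ix` is a hypothetical helper standing in for the
    // repo's own instruction builders.
    let ix = build_delegate_ix(&payer.pubkey());
    let tx = Transaction::new_signed_with_payer(
        &[ix],
        Some(&payer.pubkey()),
        &[&payer],
        recent_blockhash,
    );

    // Metadata includes the compute units the transaction consumed.
    let meta = banks
        .process_transaction_with_metadata(tx)
        .await
        .unwrap()
        .metadata
        .unwrap();
    assert!(
        meta.compute_units_consumed <= 3_000,
        "consumed {} CU",
        meta.compute_units_consumed
    );
}
```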
No need to make a big deal out of such a small detail.
We don’t need logs in production. We can feature-flag all logs with a “logging” feature to have more granular control. This would work for both the validator and CU tests.
I agree that the CU tests aren’t fully accurate; we can switch to something like Mollusk benchmarking
in the future, but it’s good enough for now.
@GabrielePicco The thing is that we tie CU measurements to the `unit_test_config` flag, while it's purely a test flag. CU measurement asserts/tests, or whatever we call them, should happen on the version that runs in production. There's no need for any extra flags when `unit_test_config` does the job, as long as it's used in the proper context.

While I agree, @snawaz, that asserting a CU measurement is a test, using the flag in this context is incorrect imo.

This is non-blocking at the moment, and to do things properly I can create a separate PR that will fix the issue with CUs.
We end up associating `unit_test_config` with CU measurement, so any consequential changes under `unit_test_config` may affect the CU measurements, and we may have to spawn new flags just for the sake of keeping the CU measurements as expected.
In case we still disagree on this matter, I'd propose to discuss it at the meeting.