chore: Add memory reservation debug logging and visualization #2521
base: main
Conversation
native/core/src/execution/jni_api.rs (outdated)

```rust
    debug_native: jboolean,
    explain_native: jboolean,
    tracing_enabled: jboolean,
```
Rather than adding yet another flag to this API call, I am now using the already-available Spark config map in native code.
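The idea of reading a flag from the config map rather than threading a new JNI parameter through can be sketched as follows. This is a minimal, dependency-free illustration: the helper name and the exact lookup semantics are assumptions, not the actual Comet internals.

```rust
use std::collections::HashMap;

/// Hypothetical helper: read a boolean flag from the Spark config map
/// that is already passed from the JVM to native code. The key handling
/// here (case-insensitive "true") is illustrative, not Comet's exact logic.
fn config_flag(configs: &HashMap<String, String>, key: &str) -> bool {
    configs
        .get(key)
        .map(|v| v.eq_ignore_ascii_case("true"))
        .unwrap_or(false)
}

fn main() {
    let mut configs = HashMap::new();
    configs.insert(
        "spark.comet.debug.memory".to_string(),
        "true".to_string(),
    );
    // Flag present and set: enabled.
    assert!(config_flag(&configs, "spark.comet.debug.memory"));
    // Flag absent: defaults to disabled.
    assert!(!config_flag(&configs, "spark.comet.explain.native"));
}
```

With this shape, adding a new debug feature needs no JNI signature change, only a new key in the existing map.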
+1. The config map should be the preferred method.
Codecov Report: ✅ All modified and coverable lines are covered by tests.

```
@@             Coverage Diff              @@
##               main    #2521       +/-  ##
============================================
+ Coverage     56.12%   58.93%    +2.80%
- Complexity      976     1449      +473
============================================
  Files           119      147       +28
  Lines         11743    13649     +1906
  Branches       2251     2369      +118
============================================
+ Hits           6591     8044     +1453
- Misses         4012     4382      +370
- Partials       1140     1223       +83
```
```rust
impl MemoryPool for LoggingPool {
    fn grow(&self, reservation: &MemoryReservation, additional: usize) {
        println!(
```
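The diff excerpt above is truncated, but the pattern is a decorator: a pool that logs each call and delegates to an inner pool. Here is a self-contained sketch of that idea; the `Pool` trait below is a simplified stand-in for DataFusion's `MemoryPool` trait (the real trait has a different signature), and the consumer names and sizes are invented.

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

// Simplified stand-in for DataFusion's MemoryPool trait, just enough
// to show the decorator shape. Not the real trait signature.
trait Pool {
    fn grow(&self, consumer: &str, additional: usize);
    fn shrink(&self, consumer: &str, amount: usize);
    fn reserved(&self) -> usize;
}

/// A trivial inner pool that only tracks total reserved bytes.
struct SimplePool {
    used: AtomicUsize,
}

impl Pool for SimplePool {
    fn grow(&self, _consumer: &str, additional: usize) {
        self.used.fetch_add(additional, Ordering::Relaxed);
    }
    fn shrink(&self, _consumer: &str, amount: usize) {
        self.used.fetch_sub(amount, Ordering::Relaxed);
    }
    fn reserved(&self) -> usize {
        self.used.load(Ordering::Relaxed)
    }
}

/// Decorator that logs every call before delegating to the inner pool.
struct LoggingPool<P: Pool> {
    inner: P,
}

impl<P: Pool> Pool for LoggingPool<P> {
    fn grow(&self, consumer: &str, additional: usize) {
        // The PR moved this to info! from the log crate; println! is
        // used here only to keep the sketch dependency-free.
        println!("grow: consumer={consumer} additional={additional}");
        self.inner.grow(consumer, additional);
    }
    fn shrink(&self, consumer: &str, amount: usize) {
        println!("shrink: consumer={consumer} amount={amount}");
        self.inner.shrink(consumer, amount);
    }
    fn reserved(&self) -> usize {
        self.inner.reserved()
    }
}

fn main() {
    let pool = LoggingPool {
        inner: SimplePool { used: AtomicUsize::new(0) },
    };
    pool.grow("SortExec", 1024);
    pool.shrink("SortExec", 512);
    assert_eq!(pool.reserved(), 512);
}
```

Because the wrapper only observes and delegates, it can be installed conditionally (when the debug config is set) without affecting accounting behavior.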
Should this println! be info! or trace!?
I guess info! would be OK. I pushed that change. If we used trace!, then we would have to set spark.comet.debug.memory=true and also configure trace-level logging for this one file, which seems like overkill for a debug feature.
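The trade-off above comes down to level filtering: messages below the configured maximum level are dropped, so trace! output needs an extra per-module logging change that info! does not. A dependency-free sketch of that mechanism (the `Level` enum and `Logger` here are stand-ins, not the real log crate API):

```rust
// Stand-in for log-level filtering: declaration order gives
// Trace < Info via the derived ordering.
#[derive(PartialEq, PartialOrd)]
enum Level {
    Trace,
    Info,
}

struct Logger {
    max: Level, // most verbose level that is still emitted
}

impl Logger {
    /// Returns true if the message was actually emitted.
    fn log(&self, level: Level, msg: &str) -> bool {
        if level >= self.max {
            println!("{msg}");
            true
        } else {
            false
        }
    }
}

fn main() {
    // Typical setup: only Info and above are emitted.
    let logger = Logger { max: Level::Info };
    assert!(logger.log(Level::Info, "grow: ..."));    // emitted
    assert!(!logger.log(Level::Trace, "grow: ...")); // filtered out
}
```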
Moving to draft while I work on the Python scripts.
Which issue does this PR close?
Closes #.
Rationale for this change
Debugging.
From this, we can make pretty charts to help with comprehension:
What changes are included in this PR?
- A spark.comet.debug.memory config
- A LoggingPool that is enabled when the new config is set

How are these changes tested?
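To turn the logged reservations into charts, the Python scripts would need to parse the emitted lines into numeric series. As a hedged illustration (shown in Rust for consistency with the rest of the PR; the log-line format below is assumed, not taken from the PR), a parser for one such line might look like this:

```rust
/// Hypothetical: parse a memory-log line like the ones LoggingPool
/// emits into a (consumer, bytes) pair suitable for plotting.
/// Expects the assumed format "grow: consumer=<name> additional=<bytes>".
fn parse_line(line: &str) -> Option<(String, u64)> {
    let rest = line.strip_prefix("grow: consumer=")?;
    let (name, tail) = rest.split_once(" additional=")?;
    Some((name.to_string(), tail.trim().parse().ok()?))
}

fn main() {
    let parsed = parse_line("grow: consumer=SortExec additional=1024");
    assert_eq!(parsed, Some(("SortExec".to_string(), 1024)));
    // Lines that don't match the expected format are skipped.
    assert_eq!(parse_line("shrink: consumer=SortExec amount=512"), None);
}
```

Accumulating these pairs over time per consumer gives the series behind the "pretty charts" mentioned above.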