
WIP: [GPU] Logging cleanup #2446

Open · wants to merge 3 commits into main
Conversation

echeresh (Contributor)

This is related to https://jira.devtools.intel.com/browse/MFDNN-11400.

It'd be nice to simplify and unify logging. This is the initial set of changes.

I plan to look into:

  • Simplifying log levels, e.g. drop perf-specific levels as they are not really used AFAIK (?)
  • Documenting the usage scenarios
  • Moving the existing functionality to src/gpu
  • Reusing spdlog (probably not much value besides prefix printing; a sketch follows this list)
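
For reference, a minimal sketch (assuming spdlog's public API; the prefix string and messages are illustrative, not code from this PR) of how spdlog's pattern strings could cover the prefix-printing case:

```cpp
#include <spdlog/spdlog.h>

int main() {
    // Emulate a oneDNN-verbose-style prefix via spdlog's pattern string:
    // "%l" expands to the level name, "%v" to the message text.
    spdlog::set_pattern("onednn_verbose,%l,%v");
    spdlog::set_level(spdlog::level::trace);

    spdlog::info("gpu,jit,kernel created");    // prints: onednn_verbose,info,gpu,jit,kernel created
    spdlog::trace("gpu,ir,pass: loop_unroll"); // prints: onednn_verbose,trace,gpu,ir,pass: loop_unroll
    return 0;
}
```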

@echeresh echeresh added the platform:gpu-intel Codeowner: @oneapi-src/onednn-gpu-intel label Jan 18, 2025
@echeresh echeresh requested a review from a team as a code owner January 18, 2025 01:10
rjoursler (Contributor) commented Jan 18, 2025

Simplifying log levels, e.g. drop perf-specific levels as they are not really used AFAIK (?)

I have often used this when trying to optimize IR creation time. The main issue is that the trace level introduces overhead (~30% from what I remember), and this overhead was not evenly distributed. This makes it so that performance in trace mode only loosely corresponds to performance in release mode when prioritizing optimizations. Could we add perf information (perhaps in a more compressed format) to the info level, analogous to how oneDNN verbose maps into spdlog levels? A sketch of one option follows below.
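
For illustration, a minimal sketch of the "compressed perf record at the info level" idea; `timed_pass` and the record format are hypothetical, not existing oneDNN or spdlog APIs:

```cpp
#include <chrono>
#include <utility>

#include <spdlog/spdlog.h>

// Hypothetical helper: time a compilation pass and emit a single
// compressed perf record at the info level, so no dedicated perf
// level (and none of trace mode's overhead) is needed.
template <typename Fn>
auto timed_pass(const char *name, Fn &&fn) {
    auto t0 = std::chrono::steady_clock::now();
    auto ret = std::forward<Fn>(fn)();
    auto us = std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now() - t0).count();
    spdlog::info("perf,{},{}us", name, us); // e.g. perf,ir_create,1234us
    return ret;
}

int main() {
    int ir_nodes = timed_pass("ir_create", [] { return 42; });
    (void)ir_nodes;
    return 0;
}
```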
