
lcov error detection #264

Open
sdarwin opened this issue Feb 10, 2025 · 12 comments

Comments

@sdarwin
Collaborator

sdarwin commented Feb 10, 2025

In connection with PR 263, this is a feature idea.

Newer lcov releases report more warnings and errors than before. Find a way to surface those errors instead of always suppressing them.

  1. In codecov.sh, include an optional section enabled by a variable LCOV_ERROR_DETECTION. When it is enabled, an extra run of lcov proceeds without the ignore-errors flag, so all warnings and errors are reported.
    Use set +e so bash doesn't exit when the command fails.
    The result is that warnings and errors are shown in the CI logs.
    lcov should likely run twice: the first time without ignore-errors, the second time as normal, completing successfully.

  2. Since most repositories will ignore the above feature and not enable it, add a scheduled GitHub Actions cron job in boost-ci.
    Once per month, it will check out the entire superproject and run lcov on all subprojects.
    There will certainly be lcov errors, especially when setting the lcov version to the latest (2.3).
    All the errors will appear in the boost-ci CI log, once per month.
    Parse the results and display a table of the number of errors per Boost library.
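Item 1 could be sketched roughly as follows. This is a hypothetical sketch, not the real codecov.sh: the function name and the lcov arguments are placeholders; only the LCOV_ERROR_DETECTION variable name comes from the proposal above.

```shell
# Hypothetical sketch of the opt-in diagnostic pass for codecov.sh.
# The lcov arguments below are placeholders; the real capture step may differ.
lcov_error_detection() {
  [ "${LCOV_ERROR_DETECTION:-0}" = "1" ] || return 0
  # Run in a subshell with 'set +e' so a crash here never fails the CI job.
  (
    set +e
    # No --ignore-errors flag: let lcov report every warning and error.
    lcov --capture --directory . --output-file coverage_diagnostic.info
    echo "lcov diagnostic pass exited with status $?"
  )
  return 0
}
```

The normal capture with ignore-errors would then follow as usual, so the job still completes and uploads successfully.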

@Flamefire
Collaborator

Sounds good. 3 questions:

  1. Which errors may lcov report?
  2. Which lcov call do you want to run again? We already execute it a couple times.
  3. For the Boost.CI run: do you intend to build the whole project with all libraries in coverage mode and run all tests for all libraries? That would need to happen on Drone, as it will likely exceed the GHA timeouts. We also need to make sure to use a recent enough C++ standard so that all libraries actually get built, which they usually silently do not. Similarly for some dependencies like ICU; I'm not sure if any library hard-requires such a dependency.

@sdarwin
Collaborator Author

sdarwin commented Feb 10, 2025

  1. Which errors may lcov report?

Any errors that happen without ignore-errors, including "inconsistent", "mismatch", and "unused".

  2. Which lcov call do you want to run again?

Good question. Perhaps all of them: run the whole test suite twice, skipping the upload to codecov.

  3. will likely exceed the GHA timeouts.

Hmm, that is a problem.

@Flamefire
Collaborator

  1. Due to the LCOV 2.1 PR, I've come to the conclusion that it might be a good idea to have those errors on by default and rather provide an opt-out. Those seem to be real errors that should be addressed, don't they?
  2. Hm, instead of running it twice it might be better to run it once with the errors flag set. Or is there a reason to run it first without?

@sdarwin
Collaborator Author

sdarwin commented Feb 11, 2025

  1. "Errors on by default": see the comments I just added to the pull request. There is at least a possibility that this could cause a sort of "sudden outage", where half of the Boost libraries stop getting codecov reports and their CI turns red.

I think it would be better to follow a more gradual approach: learn how widespread the issues are (generate reports), then open GitHub issues.

  2. Do you mean "run it once with the ignore-errors flag set"? If reports run once with the ignore-errors flag set, then warnings and errors are ignored, suppressed by ignore-errors. That would not achieve the proposed goal of detecting/discovering errors as a separate step.

@Flamefire
Collaborator

  1. Agreed
  2. Sorry, wrong way round. I meant: With errors enabled. I.e. do not run first with ignore-errors set.

@sdarwin
Collaborator Author

sdarwin commented Feb 11, 2025

There really are errors generated by the standard library, clang, and g++ when testing with lcov 2.3. Since those are out of our control, they probably need to be set as ignore-errors in codecov.sh all the time, so codecov.sh can keep functioning and sending reports to codecov.io.
You mentioned b2 libs/*/test, or maybe b2 all-tests. Great. You are more knowledgeable about the topic than me; if you have time, could you create such a Drone test? I have just increased the timeout at https://drone.cpp.al/boostorg/boost-ci/settings to 48 hours. One method could be to use another git branch that is not named "feature/*" but just plain "lcov-reports", so Drone doesn't run it automatically, and manually kick off jobs from drone.cpp.al.

@Flamefire
Collaborator

Flamefire commented Feb 11, 2025

I can do that when I have some more time. But it really is just:

  • recursively clone boostorg/boost to boost-root
  • Set some B2_* variables
  • Source common_install.sh and codecov.sh setup
  • Run build.sh with appropriate B2_TARGETS (not sure how to run all tests from all libs; I just guess that libs/*/test might work)
  • Run codecov.sh collect
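The steps above could be sketched as a dry-run script. Only the repository URL, the script names, and the libs/*/test target come from this thread; the ci/ paths, the B2_* values, and the RUN toggle are assumptions for illustration. By default it only prints the plan; set RUN=1 to actually execute it.

```shell
# Dry-run sketch of the proposed Drone job. Paths and B2_* values are guesses.
run() {
  # Print the command unless RUN=1, in which case execute it.
  if [ "${RUN:-0}" = "1" ]; then "$@"; else echo "+ $*"; fi
}

plan() {
  run git clone --recursive --depth 1 https://github.com/boostorg/boost.git boost-root
  run export B2_CI_VERSION=1 B2_TOOLSET=gcc          # placeholder B2_* settings
  run source ci/common_install.sh                    # path is a guess
  run bash ci/codecov.sh setup
  run env B2_TARGETS='libs/*/test' bash ci/build.sh  # 'libs/*/test' per the thread
  run bash ci/codecov.sh collect
}

plan
```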

@sdarwin
Collaborator Author

sdarwin commented Feb 11, 2025

This may be terribly inefficient, but how about a "git submodule foreach ..."? Run codecov.sh on each submodule separately, i.e. on each Boost library.
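As a rough illustration of the idea, here is a plain loop over libs/ (equivalent in spirit to git submodule foreach). Everything here is hypothetical: run_codecov is a stand-in for whatever actually invokes codecov.sh per library, and the bin.v2 cleanup anticipates the disk-space point discussed below.

```shell
# Sketch: iterate over every library and collect coverage separately.
# 'run_codecov' is a hypothetical hook for the real codecov.sh invocation.
collect_per_library() {
  root="${1:-boost-root}"
  for lib in "$root"/libs/*/; do
    name=$(basename "$lib")
    echo "--- $name ---"
    ( cd "$lib" && run_codecov ) || echo "collection failed for $name"
    rm -rf "$root/bin.v2"   # drop the build tree between libraries to save disk
  done
}
```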

@Flamefire
Collaborator

I'm not sure I understand your intention for that.
Also: Do you want to do a build of each test from a new build directory and run codecov.sh collect after each build, then remove the build folder before starting the next?

@sdarwin
Collaborator Author

sdarwin commented Feb 11, 2025

I'm not sure I understand your intention for that.

You had said "... not sure how to run all tests from all libs."

git submodule foreach is a way to iterate over all Boost libraries in a loop and then take any action, such as running lcov or codecov.sh. That is the intention. But when it comes to the actual details of running this test, I don't have a clear picture, and you probably do, so perhaps never mind; I'm just throwing out random ideas.

Remove each build folder

Yes, that could help avoid a disk-space issue; there are a lot of libraries.

@sdarwin
Collaborator Author

sdarwin commented Feb 11, 2025

It may be necessary to run the report with LCOV_IGNORE_ERRORS_LEVEL=all so the test doesn't crash early.

Hopefully lcov continues to display the errors as warnings in that situation. We will have to see what it does when an error is ignored.
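For reference, my understanding of lcov 2.x (worth verifying against the man page of the exact version in use) is that listing an error type once with --ignore-errors downgrades it to a warning that still appears in the log, while listing it twice (e.g. mismatch,mismatch) suppresses the message entirely. A hedged sketch, with the function name and capture arguments invented for illustration:

```shell
# Sketch based on my reading of the lcov 2.x man page; verify locally.
capture_with_warnings() {
  # These error types are downgraded to warnings, so they still show in CI logs.
  lcov --capture --directory . --output-file coverage.info \
       --ignore-errors inconsistent,mismatch,unused
}
```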

@Flamefire
Collaborator

I don't have a clear picture, and you probably do, so perhaps nevermind, just throwing out random ideas.

I'm not sure either.

Our workflow is basically: source codecov.sh setup && build.sh && codecov.sh collect.

It looks like collecting coverage is done on the boost-root folder and, in a second step, filtered to the current library, i.e. $SELF.

We could (a) build and run tests for all libraries before collecting coverage, or (b) build, test, and collect coverage for each library in a clean state. Option (b) takes much longer, as libraries will be built multiple times (dependencies...), but might provide better errors since it isn't all-or-nothing. Still, it might fail a library because a dependency is faulty.

So my comment is about the second step, build.sh, which ultimately calls b2. Usually this is ./b2 libs/foo/test (and some parameters), but it might be possible to run the tests for all libraries at once. If that isn't possible, then (a) isn't an option anyway. Although we could still reuse the build directory, which might save a few rebuilds, i.e. do: . codecov.setup; ./b2 libs/foo/test; ./b2 libs/bar/test; codecov.sh collect
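The shared-build-directory variant at the end of that comment could look like the following sketch. The b2 and codecov.sh calls match the commands quoted in this thread; the function name and the B2/CODECOV_SH override variables are purely illustrative.

```shell
# Option (a) sketch: one reused build tree, one final coverage collect.
# B2 and CODECOV_SH are hypothetical overrides for the tool locations.
test_all_then_collect() {
  b2cmd="${B2:-./b2}"
  for lib in "$@"; do
    # Keep going on failure so one broken library doesn't hide the rest.
    "$b2cmd" "libs/$lib/test" || echo "tests failed for $lib"
  done
  bash "${CODECOV_SH:-codecov.sh}" collect
}
```

Usage would be something like: test_all_then_collect foo bar.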
