[BUG] at32f437-mini/systemview: builds fail with error: implicit declaration of function 'SEGGER_RTT_LOCK' #15451
CI in its current state misses too many build errors :( |
Yes, could we bring back more boards into CI? @lupyuen ? |
here is the fix: #15441 |
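For context on the error itself: `SEGGER_RTT_LOCK()` and `SEGGER_RTT_UNLOCK()` are normally macros supplied by `SEGGER_RTT_Conf.h`, so when that header (or the port's definition of the macros) is not visible, GCC reports an implicit function declaration at the first call site. Below is a minimal sketch of the usual shape of such a guard, not the actual patch (see #15441 for that); the fallback macro bodies are placeholders:

```c
/* Hedged sketch only -- see PR #15441 for the real NuttX fix.
 * The SEGGER RTT sources expect SEGGER_RTT_LOCK/UNLOCK to be macros
 * defined by SEGGER_RTT_Conf.h. If they are not visible, each call
 * compiles as an implicit function declaration and the build fails.
 */

#ifndef SEGGER_RTT_LOCK
#  define SEGGER_RTT_LOCK()    /* placeholder: enter critical section */
#endif

#ifndef SEGGER_RTT_UNLOCK
#  define SEGGER_RTT_UNLOCK()  /* placeholder: leave critical section */
#endif

int main(void)
{
  SEGGER_RTT_LOCK();    /* expands to the macro above: no implicit decl */
  SEGGER_RTT_UNLOCK();
  return 0;
}
```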
@simbit18 @raiden00pl @xiaoxiang781216 Yeah, I reported this earlier based on the NuttX Dashboard, and this one also. I'm kinda exhausted from monitoring our CI Jobs every day since November, and it's not sustainable (in case something happens to me). Let's do this:
|
Could we monitor the load in real time and postpone CI for a new PR when the load approaches the budget? In most cases, contributors don't care whether CI finishes in one hour or stretches to two or three. |
@xiaoxiang781216 Sorry, are you suggesting that we monitor the GitHub Runners in real time, and if the load exceeds 25 GitHub Runners, we delay the CI Jobs for Complex PRs? Right now we have this GitHub Runners Monitoring by Day: https://lupyuen.github.io/nuttx-metrics/github-fulltime-runners.png (explained here). We could use it to figure out the Current CI Load. However, our CI Load tends to spike high and low, often exceeding 25 runners, even though the Daily Average Load is under 25 runners. So it's hard to know for sure whether it's OK to run a CI Job right now. ASF said that they are OK so long as the Average Weekly Load is under 25 runners. So I think we are safe to run at 100% CI Load. Let's monitor and see what happens :-) |
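To make the budget criterion concrete: instantaneous load can spike well past 25 runners while the weekly average stays under budget, and the average is what ASF cares about. A small self-contained sketch (the sample values are hypothetical; only the 25-runner budget comes from the thread):

```c
#include <stdio.h>

/* ASF budget from the discussion above: 25 full-time GitHub Runners */
#define BUDGET_RUNNERS 25.0

/* Average a series of runner-count samples */
static double average(const double *samples, int n)
{
  double sum = 0.0;
  for (int i = 0; i < n; i++)
    {
      sum += samples[i];
    }

  return (n > 0) ? (sum / n) : 0.0;
}

int main(void)
{
  /* Hypothetical daily samples: spiky, but the average is under budget */
  double samples[] = { 40.0, 5.0, 30.0, 10.0, 35.0, 8.0, 12.0 };
  int n = sizeof(samples) / sizeof(samples[0]);
  double avg = average(samples, n);

  printf("weekly average: %.1f runners (%.0f%% of budget)\n",
         avg, 100.0 * avg / BUDGET_RUNNERS);
  printf("%s\n", (avg <= BUDGET_RUNNERS)
                 ? "within budget: OK to run CI at 100%"
                 : "over budget: consider throttling CI jobs");
  return 0;
}
```

Here the daily peaks of 40 and 35 exceed the budget, but the weekly average is 20 runners (80% of budget), which matches the "Average Weekly Load" criterion described above.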
This PR increases the CI Jobs for Complex PRs from 50% to 100%, as explained here: - apache#15451 (comment)
This PR increases the CI Jobs for Complex PRs from 50% to 100%, as explained here: - apache/nuttx#15451 (comment) This PR also includes the fix for Simple x86 PR: - apache/nuttx#14896
Can we stop the auto CI trigger on the release branch and let @jerpelea trigger CI manually? |
@xiaoxiang781216 Hmmm, I don't have an easy way to manually start CI Jobs on the Release Branch; it requires a script similar to this. I think let's not stress out Alin too much during a NuttX Release :-) I expected the CI Load to jump much higher last month for the NuttX Release, but surprisingly the CI Load was OK (below 50% of our budget for GitHub Runners). That's why I think it's OK to increase our CI Jobs to 100%, even when a NuttX Release is happening. |
OK, let's monitor the CI load. |
This PR increases the CI Jobs for Complex PRs from 50% to 100%, as explained here: - #15451 (comment)
@lupyuen
I think that we should switch the CI to manual for the release branch and do a full build on all targets manually.
Best regards,
Alin
|
Sorry @jerpelea, by "building manually" do you mean running a script at the command line? I don't think GitHub Actions gives us a button to click and start a CI Job on the release branch. Here is the script that I run to start the build at the NuttX Mirror Repo: https://github.com/lupyuen/nuttx-release/blob/main/enable-macos-windows.sh |
FYI our usage of GitHub Runners is still OK and within the ASF Budget. Past 7 Days: we consumed 18 Full-Time GitHub Runners, which is 72% of the ASF Budget for GitHub Runners (25 runners). Next week I expect our usage to spike up (just before the Lunar New Year). The following week our usage should drop (during the Lunar New Year holidays). I'll keep monitoring, thanks :-) |
Description / Steps to reproduce the issue
[NuttX Mirror Build Linux (arm-01)]
https://github.com/NuttX/nuttx/actions/runs/12646742907/job/35238004207
On which OS does this issue occur?
[OS: Linux]
What is the version of your OS?
Ubuntu at GitHub Actions
NuttX Version
master
Issue Architecture
[Arch: arm]
Issue Area
[Area: Build System]