Skip any configs not matching a meta #2434
Conversation
Force-pushed from 40b32af to 423fe4b
This has no explanation, no context, and no tests that would illustrate the desired behaviour. From ongoing discussions I'm guessing it might be related to conda-forge/conda-forge-pinning-feedstock#8045 & #1617?
This fixes #1617 and also conda-forge/conda-forge-pinning-feedstock#6967, which you've been pinging me about for a long time. Please help by testing in various feedstocks and adding tests.
Tested in conda-forge/pytorch-cpu-feedstock#332. I like that this works by just skipping the top-level build. However, it requires pretty arcane knowledge to do something like

```yaml
# ensure github_actions_labels appears "used" from POV of conda-{build,smithy}
# [github_actions_labels]
```

so that the builds get split correctly, rather than ending up with multiple labels per variant (check the git history of that PR; multiple labels per variant make no sense, we could probably even lint on that):

```
github_actions_labels:
+- cirun-openstack-cpu-2xlarge
- cirun-openstack-gpu-2xlarge
```

So I think it'd be good to add `github_actions_labels` at conda_smithy/configure_feedstock.py, lines 639 to 640 (as of 5b2575d), so that it works out of the box. These labels should never be collapsed.

The skip condition is pretty horrible, but since I could concentrate it in one place, that's manageable. The recipe in that PR could be simplified into a test case pretty easily.
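To make the "these labels should never be collapsed" point concrete, here is a minimal sketch of the idea; it is not conda-smithy's actual implementation, and the function name `dedupe_variants`, the `ALWAYS_USED_KEYS` constant, and the example variant dicts are all invented for this illustration.

```python
# Hypothetical sketch (not conda-smithy code): when deduplicating rendered
# variant configs, treat certain keys as "always used" so that variants
# differing only in those keys are never merged into a single config
# carrying multiple values.

ALWAYS_USED_KEYS = {"github_actions_labels"}  # assumption for this sketch


def dedupe_variants(variants, used_keys):
    """Drop variants that are identical in every key we care about."""
    keep_keys = set(used_keys) | ALWAYS_USED_KEYS
    seen, result = set(), []
    for variant in variants:
        fingerprint = tuple(
            sorted((k, str(v)) for k, v in variant.items() if k in keep_keys)
        )
        if fingerprint not in seen:
            seen.add(fingerprint)
            result.append(variant)
    return result


# These two variants differ only in github_actions_labels. If that key did
# not count as "used", both would collapse into one config that ends up
# with multiple labels per variant.
variants = [
    {"cxx_compiler": "gxx", "github_actions_labels": "cirun-openstack-cpu-2xlarge"},
    {"cxx_compiler": "gxx", "github_actions_labels": "cirun-openstack-gpu-2xlarge"},
]
print(dedupe_variants(variants, used_keys={"cxx_compiler"}))  # keeps both
```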
If you look 17 lines down, you'll see that it's already added.
I hadn't noticed that, thanks. In that case, we'd IMO need something stronger than [...]. I know that it's possible to get things to work without that, so I guess we can punt on this for now, but it'll be a painful foot gun.

By the way, does this PR change your stance regarding the CF_SELF_HOSTED approach?
I don't know what you were doing there. conda-forge/pytorch-cpu-feedstock#332 works without the hack |
If you look at the history of the commits you force-pushed over, you'd see how the behaviour changed based on adding that work-around. During the course of those changes, I also had to rewrite the skip condition from

```yaml
{% set diagonalize = "" %}
{% if target_platform.startswith("linux") and (
    cuda_compiler_version == "None" and "gpu" in github_actions_labels
) or (
    cuda_compiler_version != "None" and "cpu" in github_actions_labels
) %}
{% set diagonalize = "skip: true" %}
{% endif %}

[...]

build:
  {{ diagonalize }}
```

to

```yaml
{% set diagonalize = False %}
{% if False %}
{% elif target_platform.startswith("linux") and cuda_compiler_version == "None" and "gpu" in github_actions_labels %}
{% set diagonalize = True %}
{% elif target_platform.startswith("linux") and cuda_compiler_version != "None" and "cpu" in github_actions_labels %}
{% set diagonalize = True %}
{% endif %}

[...]

build:
{% if diagonalize %}
  skip: true
{% endif %}
```

which apparently affected conda-build's used-variable detection. If I use the equally logically sound

```yaml
{% set diagonalize = False %}
{% if target_platform.startswith("linux") and (
    (cuda_compiler_version == "None" and "gpu" in github_actions_labels) or
    (cuda_compiler_version != "None" and "cpu" in github_actions_labels)
) %}
{% set diagonalize = True %}
{% endif %}
```

then the variable detection fails again (presumably because the regexes involved are single-line-only, but we cannot expect maintainers to be aware of this subtlety), making the work-around necessary once more.

Please respond to this.
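As a rough illustration of why a multi-line condition can defeat line-by-line detection, here is a toy approximation in Python; the regex and helper below are invented for this example and are not conda-build's actual detection code.

```python
import re


def naively_find_used(recipe_text: str, variable: str) -> bool:
    """Toy line-based check (NOT conda-build's real logic): the variable only
    counts as "used" if a single line contains both the variable name and a
    complete {% ... %} Jinja expression around it."""
    pattern = re.compile(r"{%.*\b" + re.escape(variable) + r"\b.*%}")
    return any(pattern.search(line) for line in recipe_text.splitlines())


one_line = '{% if "gpu" in github_actions_labels %}\n{% endif %}\n'
multi_line = (
    '{% if target_platform.startswith("linux") and (\n'
    '    "gpu" in github_actions_labels\n'
    ") %}\n"
    "{% endif %}\n"
)

print(naively_find_used(one_line, "github_actions_labels"))    # True
print(naively_find_used(multi_line, "github_actions_labels"))  # False: the line containing the
# variable has no surrounding {% ... %}, so a per-line regex never matches
```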
Not really. It's okay to go with CF_SELF_HOSTED, assuming you still commit to fixing all the feedstocks that are going to get broken.
We can lint against this if needed.

I'm trying to review this. AFAIK, the code here doesn't require any changes or workarounds in feedstocks, right? I'm not super sure what's going on in conda-forge/pytorch-cpu-feedstock#332, but it looks like the CI matrix is populated as intended? If that's the case and this solution works, I'd like to get a test added here (so we can guard against regressions in the future), plus the corresponding news file.
Sure |
Awesome, thanks. Pushed test and news. From my side, this is ready for review.
Co-authored-by: Isuru Fernando <isuruf@gmail.com>
pre-commit.ci autofix
for more information, see https://pre-commit.ci
`# return (configs, top_level_loop_vars)`
debug leftover?
I took it as the "return type hint", written as a comment.
Checklist
- Added a news entry
- Regenerated the schema if altered (`python conda_smithy/schema.py`)

cc @carterbox @jaimergp @h-vetinari