
Conversation

@isuruf (Member) commented Dec 10, 2025

Checklist

  • Added a news entry
  • Regenerated schema JSON if schema altered (python conda_smithy/schema.py)

cc @carterbox @jaimergp @h-vetinari

@isuruf changed the title from "Skip meta" to "Skip any configs not matching a meta" on Dec 10, 2025
@isuruf force-pushed the skip-meta branch 2 times, most recently from 40b32af to 423fe4b on December 10, 2025 19:40
@h-vetinari (Member)

This has no explanation, no context, and no tests that would illustrate the desired behaviour. From ongoing discussions I'm guessing it might be related to conda-forge/conda-forge-pinning-feedstock#8045 & #1617?

@isuruf (Member, Author) commented Dec 11, 2025

This fixes #1617 and also conda-forge/conda-forge-pinning-feedstock#6967, which you've been pinging me about for a long time. Please help test it in various feedstocks and add tests.

@h-vetinari (Member) commented Dec 11, 2025

> Please help test it in various feedstocks and add tests.

Tested in conda-forge/pytorch-cpu-feedstock#332. I like that this works by just skipping the top-level build. However, it requires pretty arcane knowledge to do something like

# ensure github_actions_labels appears "used" from POV of conda-{build,smithy}
# [github_actions_labels]

so that the builds get split correctly, rather than ending up with multiple labels per variant (check the git history of that PR; multiple labels per variant make no sense, we could probably even lint on that):

 github_actions_labels:
+- cirun-openstack-cpu-2xlarge
 - cirun-openstack-gpu-2xlarge
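The lint suggested above doesn't exist yet; as a minimal sketch of the idea (hypothetical function name and message, not conda-smithy's actual linter API), it would only need to check that each rendered `.ci_support` variant carries exactly one label:

```python
def lint_github_actions_labels(variant_config):
    """Hypothetical lint: each rendered .ci_support variant should resolve
    to exactly one github_actions_labels entry, since a job can only run
    on one runner; multiple labels per variant make no sense."""
    labels = variant_config.get("github_actions_labels", [])
    if isinstance(labels, str):
        labels = [labels]
    if len(labels) > 1:
        return [
            f"Variant has {len(labels)} github_actions_labels entries "
            f"({', '.join(labels)}); expected exactly one runner label per variant."
        ]
    return []


# One label per variant is fine; a variant that ends up with both the CPU
# and the GPU label (as in the diff above) would be flagged.
```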

So I think it'd be good to add github_actions_labels to

# Add in some variables that should always be preserved
always_keep_keys = {

so that it works out of the box. These labels should never be collapsed.
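For concreteness, a minimal sketch of what such a never-collapse guard could look like during variant reduction (all names here are hypothetical; conda-smithy's real reduction logic is more involved):

```python
# Hypothetical sketch: variant reduction normally drops keys whose values are
# identical across all configs; a never-collapse set would exempt runner
# labels from that pruning so each .ci_support file keeps its label.
NEVER_COLLAPSE_KEYS = {"github_actions_labels"}  # hypothetical name


def collapse_variants(configs, never_collapse=NEVER_COLLAPSE_KEYS):
    """Drop keys that are constant across all variant configs, except exempt ones."""
    if not configs:
        return []
    keys = set().union(*(c.keys() for c in configs))
    constant = {
        k
        for k in keys
        if k not in never_collapse
        and len({repr(c.get(k)) for c in configs}) == 1
    }
    return [{k: v for k, v in c.items() if k not in constant} for c in configs]


configs = [
    {"target_platform": "linux-64", "cuda_compiler_version": "None",
     "github_actions_labels": "cirun-openstack-cpu-2xlarge"},
    {"target_platform": "linux-64", "cuda_compiler_version": "12.0",
     "github_actions_labels": "cirun-openstack-cpu-2xlarge"},
]
reduced = collapse_variants(configs)
# target_platform (constant) is collapsed away; github_actions_labels
# survives even though it is also constant here.
```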

The skip condition is pretty horrible, but since I could concentrate it in one place, that's manageable. The recipe in that PR could be simplified into a test case pretty easily.

@isuruf (Member, Author) commented Dec 11, 2025

So I think it'd be good to add github_action_labels to

If you look 17 lines down, you'll see that it's already added.

@h-vetinari (Member)

> If you look 17 lines down, you'll see that it's already added.

I hadn't noticed that, thanks. In that case, we'd IMO need something stronger than always_keep_keys. Something like never_collapse_keys.

I know that it's possible to get things to work without that, so I guess we can punt on this for now, but it'll be a painful footgun.

By the way, does this PR change your stance regarding the CF_SELF_HOSTED approach you had suggested? IMO this PR is orthogonal to that, and having both would be beneficial.

@isuruf (Member, Author) commented Dec 29, 2025

> In that case, we'd IMO need something stronger than always_keep_keys. Something like never_collapse_keys.

I don't know what you were doing there. conda-forge/pytorch-cpu-feedstock#332 works without the hack:

# ensure github_actions_labels appears "used" from POV of conda-{build,smithy}
# [github_actions_labels]

@h-vetinari (Member) commented Dec 30, 2025

If you look at the history of the commits you force-pushed over, you'd see how the behaviour changed based on adding that work-around.

During the course of those changes, I also had to rewrite the skip condition from

{% set diagonalize = "" %}
{% if target_platform.startswith("linux") and (
    cuda_compiler_version == "None" and "gpu" in github_actions_labels
) or (
    cuda_compiler_version != "None" and "cpu" in github_actions_labels
)%}
{% set diagonalize = "skip: true" %}
{% endif %}

[...]

build:
  {{ diagonalize }}

to

{% set diagonalize = False %}
{% if False %}
{% elif target_platform.startswith("linux") and cuda_compiler_version == "None" and "gpu" in github_actions_labels %}
    {% set diagonalize = True %}
{% elif target_platform.startswith("linux") and cuda_compiler_version != "None" and "cpu" in github_actions_labels %}
    {% set diagonalize = True %}
{% endif %}

[...]

build:
  {% if diagonalize %}
  skip: true
  {% endif %}

which apparently affected conda-build's used-variable detection.

If I do the equally logically sound

{% set diagonalize = False %}
{% if target_platform.startswith("linux") and (
    (cuda_compiler_version == "None" and "gpu" in github_actions_labels) or
    (cuda_compiler_version != "None" and "cpu" in github_actions_labels)
) %}
    {% set diagonalize = True %}
{% endif %}

then the variable detection fails again (presumably because the regexes involved are single-line-only, but we cannot expect maintainers to be aware of this subtlety), making the work-around necessary once more.
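As a rough illustration of why line-oriented scanning misses the multi-line form (a simplified stand-in, not conda-build's actual used-variable regexes):

```python
import re


def used_vars(recipe_text, variables):
    """Simplified stand-in for conda-build's used-variable scan: a variant
    variable only counts as used if it appears inside a Jinja statement or
    expression on a single line."""
    found = set()
    for line in recipe_text.splitlines():  # line-oriented: the crux
        for var in variables:
            pat = r"\b" + re.escape(var) + r"\b"
            if re.search(r"{%.*" + pat + r".*%}", line) or re.search(
                r"{{.*" + pat + r".*}}", line
            ):
                found.add(var)
    return found


one_line = '{% if "gpu" in github_actions_labels %}'
multi_line = '{% if (\n    "gpu" in github_actions_labels\n) %}'

# The single-line form is detected; in the multi-line form no single line
# contains both the Jinja delimiters and the variable, so the key looks
# unused and the "appears used" comment hack becomes necessary.
```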

> By the way, does this PR change your stance regarding the CF_SELF_HOSTED approach you had suggested? IMO this PR is orthogonal to that, and having both would be beneficial.

Please respond to this.

@isuruf (Member, Author) commented Dec 30, 2025

> By the way, does this PR change your stance regarding the CF_SELF_HOSTED approach you had suggested? IMO this PR is orthogonal to that, and having both would be beneficial.

Not really. It's okay to go with CF_SELF_HOSTED, assuming you still commit to fixing all the feedstocks that are going to get broken.

@jaimergp (Member) commented Jan 7, 2026

> presumably because the regexes involved are single-line-only, but we cannot expect maintainers to be aware of this subtlety

We can lint against this if needed.


I'm trying to review this. AFAIK, the code here doesn't require any changes or workarounds in feedstocks, right? I'm not super sure of what's going on in conda-forge/pytorch-cpu-feedstock#332, but it looks like the CI matrix is populated as intended?

If that's the case and this solution works, I'd like to get a test added here (so we can control against regressions in the future), plus the corresponding news file.

@jaimergp (Member) commented Jan 7, 2026

Ok, I checked the original report in #1617 and have a test to prove this PR fixes that one reproducer. May I push to your branch, @isuruf?

@isuruf (Member, Author) commented Jan 7, 2026

> May I push to your branch, @isuruf?

Sure

@jaimergp (Member) commented Jan 7, 2026

Awesome, thanks. Pushed test and news. From my side, this is ready for review.

jaimergp and others added 3 commits January 7, 2026 17:57
Co-authored-by: Isuru Fernando <isuruf@gmail.com>
@jaimergp (Member) commented Jan 7, 2026

pre-commit.ci autofix

@isuruf isuruf marked this pull request as ready for review January 7, 2026 18:09
@isuruf isuruf requested a review from a team as a code owner January 7, 2026 18:09
Comment on lines +697 to +698

# return (configs, top_level_loop_vars)

debug leftover?


I took it as the "return type hint", as a comment.

@beckermr beckermr merged commit ea21a2e into conda-forge:main Jan 12, 2026
2 checks passed


Development

Successfully merging this pull request may close these issues:

- Reducing the size of the cuda_compiler zip
- Additive logic breaks skip comments

4 participants