
Conversation

mgorny (Contributor) commented Jan 10, 2026

Checklist

  • Used a personal fork of the feedstock to propose changes
  • Bumped the build number (if the version is unchanged)
  • Reset the build number to 0 (if the version changed)
  • Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
  • Ensured the license file is being packaged.
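As a hedged illustration of the build-number items in the checklist above (the recipe excerpt and version are assumptions, not taken from this PR's diff):

```yaml
# recipe/meta.yaml (hypothetical excerpt)
{% set version = "2.10.0" %}

package:
  name: pytorch
  version: {{ version }}

build:
  number: 0   # reset to 0 because the version changed;
              # bump instead if only the recipe changed
```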

Combined updates for 2.10.x. For now the top commit carries [ci skip]; we'll run CI when the next RC or the final release is available.

Signed-off-by: Michał Górny <[email protected]>
conda-forge-admin (Contributor) commented Jan 14, 2026

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe/meta.yaml) and found it was in an excellent condition.

I do have some suggestions for making it better though...

For recipe/meta.yaml:

  • ℹ️ The magma output has been superseded by libmagma-devel.
  • ℹ️ The recipe is not parsable by parser conda-souschef (grayskull). This parser is not currently used by conda-forge, but may be in the future. We are collecting information to see which recipes are compatible with grayskull.
  • ℹ️ The recipe is not parsable by parser conda-recipe-manager. The recipe can only be automatically migrated to the new v1 format if it is parseable by conda-recipe-manager.
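The magma hint above would typically be addressed in the recipe's requirements; a minimal sketch, assuming the output is listed in host and CUDA-gated (the exact section, selector, and pins are assumptions):

```yaml
# recipe/meta.yaml (hypothetical excerpt)
requirements:
  host:
    # - magma            # superseded, per the linter hint
    - libmagma-devel     # [cuda_compiler_version != "None"]
```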

This message was generated by GitHub Actions workflow run https://github.com/conda-forge/conda-forge-webservices/actions/runs/21220733650. Examine the logs at this URL for more detail.

mgorny (Author) commented Jan 17, 2026

@h-vetinari, do we want to include CUDA 13 migration for when the final is released?

h-vetinari (Member):

> @h-vetinari, do we want to include CUDA 13 migration for when the final is released?

As long as you use a development install of smithy (combined with the skip from #332, so that CPU builds run on non-GPU agents), that's OK for me. We also won't be able to test the GPU paths for CUDA 13, but I guess running the test suite on CUDA 12.x only is good enough.

mgorny (Author) commented Jan 19, 2026

> @h-vetinari, do we want to include CUDA 13 migration for when the final is released?
>
> As long as you use a development install of smithy (combined with the skip from #332, so that CPU builds run on non-GPU agents), that's OK for me. We also won't be able to test the GPU paths for CUDA 13, but I guess running the test suite on CUDA 12.x only is good enough.

I suppose a new conda-smithy will be released before the final version.

mgorny (Author) commented Jan 19, 2026

The aarch64 build hit the "exec format error" problem again, and the mkl/CUDA x86-64 build seems to have hit some builder issue: the logs are cut short, and GitHub seemed unsure whether it had actually failed or was still running.

mgorny (Author) commented Jan 20, 2026

Uh, aarch64 keeps failing with that "Exec format error". I wonder why it happens only in some runs.

And then the mkl run timed out. FWICS the non-mkl build already took over 19 hours, so we probably need to consider increasing the timeouts again.
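If the timeout does need raising, conda-forge feedstocks usually configure it in conda-forge.yml; a minimal sketch (the value shown is illustrative, not a proposal from this PR):

```yaml
# conda-forge.yml (hypothetical excerpt)
azure:
  timeout_minutes: 360   # per-job limit in minutes (value illustrative)
```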

Signed-off-by: Michał Górny <[email protected]>
Signed-off-by: Michał Górny <[email protected]>
…6.01.20.09.33.33

Other tools:
- conda-build 25.11.1
- rattler-build 0.55.0
- rattler-build-conda-compat 1.4.10
h-vetinari (Member):

One test failure on osx-arm64; I can add a skip for that while merging (assuming the rest doesn't blow up).

=========================== short test summary info ============================
FAILED [0.3161s] test/test_nn.py::TestNNDeviceTypeMPS::test_LayerNorm_numeric_mps - AssertionError: Tensor-likes are not close!

Mismatched elements: 9437063 / 18874368 (50.0%)
Greatest absolute difference: 0.7128685712814331 at index (0, 8, 24, 54) (up to 1e-05 allowed)
Greatest relative difference: 0.7136273980140686 at index (0, 69, 57, 29) (up to 0 allowed)

To execute this test, run the following from the base repo dir:
    python test/test_nn.py TestNNDeviceTypeMPS.test_LayerNorm_numeric_mps

This message can be suppressed by setting PYTORCH_PRINT_REPRO_ON_FAILURE=0
= 1 failed, 8672 passed, 4907 skipped, 57 xfailed, 65984 warnings in 462.48s (0:07:42) =
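For context, the failure above is a tolerance check of the "Tensor-likes are not close" kind: the log reports the greatest absolute and relative differences against atol/rtol limits. A minimal sketch of that comparison in plain Python, without torch (the helper name and toy data are made up; the mismatch rule mirrors the usual `|a - e| <= atol + rtol * |e|` criterion):

```python
# Compute the "greatest absolute/relative difference" figures that
# assert_close-style comparisons report, on toy data.

def closeness_report(actual, expected, atol=1e-5, rtol=0.0):
    """Return (max_abs_diff, max_rel_diff, n_mismatched) for two flat lists."""
    max_abs = max_rel = 0.0
    mismatched = 0
    for a, e in zip(actual, expected):
        abs_diff = abs(a - e)
        if e != 0:
            rel_diff = abs_diff / abs(e)
        else:
            rel_diff = float("inf") if abs_diff else 0.0
        max_abs = max(max_abs, abs_diff)
        max_rel = max(max_rel, rel_diff)
        # mismatch rule: |a - e| <= atol + rtol * |e|
        if abs_diff > atol + rtol * abs(e):
            mismatched += 1
    return max_abs, max_rel, mismatched

# Toy example: one element is clearly off.
actual = [1.0, 2.0, 3.5]
expected = [1.0, 2.0, 3.0]
max_abs, max_rel, bad = closeness_report(actual, expected)
print(max_abs, bad)  # 0.5 1
```

With the default rtol of 0 (as in the log), even a tiny absolute deviation on large values counts as a mismatch, which is why roughly half the elements can fail at once when a kernel diverges.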
