
Conversation

@carterbox (Member)
Adds activation/deactivation scripts that export CF_TORCH_CUDA_ARCH_LIST into the environment. This variable records the CUDA architectures that were targeted when PyTorch was built, so that downstream feedstocks can automatically follow the same archs used here by adding something like:

export TORCH_CUDA_ARCH_LIST="${CF_TORCH_CUDA_ARCH_LIST}"

to their build scripts.
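A minimal sketch of what a downstream feedstock's build script could look like when consuming the variable (the fallback default of "8.0" is purely illustrative, not part of this PR):

```shell
#!/usr/bin/env bash
# Sketch: consume the arch list exported by pytorch's activation script.
# If the activation script did not run, fall back to an illustrative
# default rather than leaving TORCH_CUDA_ARCH_LIST empty.
export TORCH_CUDA_ARCH_LIST="${CF_TORCH_CUDA_ARCH_LIST:-8.0}"
echo "Building for CUDA archs: ${TORCH_CUDA_ARCH_LIST}"
```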

Checklist

  • Used a personal fork of the feedstock to propose changes
  • Bumped the build number (if the version is unchanged)
  • Reset the build number to 0 (if the version changed)
  • Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
  • Ensured the license file is being packaged.

Closes #423

@conda-forge-admin (Contributor) commented Dec 9, 2025

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe/meta.yaml) and found it was in an excellent condition.

I do have some suggestions for making it better though...

For recipe/meta.yaml:

  • ℹ️ The magma output has been superseded by libmagma-devel.
  • ℹ️ The recipe is not parsable by parser conda-souschef (grayskull). This parser is not currently used by conda-forge, but may be in the future. We are collecting information to see which recipes are compatible with grayskull.
  • ℹ️ The recipe is not parsable by parser conda-recipe-manager. The recipe can only be automatically migrated to the new v1 format if it is parseable by conda-recipe-manager.

This message was generated by GitHub Actions workflow run https://github.com/conda-forge/conda-forge-webservices/actions/runs/21009919321. Examine the logs at this URL for more detail.

@@ -0,0 +1,6 @@
@echo on
Member:

no echo

Member Author:
if [[ ! -v CF_TORCH_CUDA_ARCH_LIST ]]
then
export CF_TORCH_CUDA_ARCH_LIST="@cf_torch_cuda_arch_list@"
export CF_TORCH_CUDA_ARCH_LIST_BACKUP="UNSET"
fi
Member:

UNSET is not a great name to begin with, and using it on unix is even worse: there, unset is actually a valid command, and we can detect the difference between an unset and an empty environment variable. No such luck on windows, so it makes sense there.

I can imagine keeping the current setup for uniformity if you want to declare this all internal to the package, but then both activate.sh and activate.bat need an explanatory comment.

Member:
I can imagine keeping the current setup for uniformity if you want to declare this all internal to the package, but then both activate.sh and activate.bat need an explanatory comment.

This still applies. Windows should have a comment that we cannot distinguish unset from empty, so we use a placeholder. And using a placeholder on unix, rather than (more properly) determining whether a variable is set/unset in bash, needs a comment that this is for uniformity with windows.
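For reference, the set-versus-unset distinction mentioned above can be made on unix without any sentinel value, using the POSIX ${var+x} expansion (a sketch of the alternative alluded to here, not the code in this PR):

```shell
#!/usr/bin/env bash
# "${VAR+x}" expands to "x" when VAR is set (even to the empty string)
# and to nothing when VAR is unset, so no "UNSET" placeholder is needed.
status_of_arch_list() {
  if [ -z "${CF_TORCH_CUDA_ARCH_LIST+x}" ]; then
    echo "unset"
  elif [ -z "${CF_TORCH_CUDA_ARCH_LIST}" ]; then
    echo "empty"
  else
    echo "set"
  fi
}
unset CF_TORCH_CUDA_ARCH_LIST
status_of_arch_list   # prints "unset"
CF_TORCH_CUDA_ARCH_LIST=""
status_of_arch_list   # prints "empty"
```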


@carterbox requested a review from h-vetinari, December 15, 2025 22:42
@h-vetinari (Member) left a comment:

This is fine from my side. We can incorporate this in the next PR.

@mgorny (Contributor) commented Jan 8, 2026

@h-vetinari, do we want to include it in the old branches as well, or just 2.10 when it is ready?

@h-vetinari (Member):
I'd prefer to do this at a version boundary, i.e. only for 2.10

mgorny pushed a commit to mgorny/pytorch-cpu-feedstock that referenced this pull request Jan 10, 2026
@mgorny (Contributor) commented Jan 14, 2026

Looks like the activation script doesn't work on osx:

/Users/runnerx/miniforge3/conda-bld/libtorch_1768358369258/_h_env_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold_placehold/etc/conda/activate.d/libtorch_activate.sh: line 3: conditional binary operator expected

https://github.com/conda-forge/pytorch-cpu-feedstock/actions/runs/20980286298/job/60303616215?pr=475
https://github.com/conda-forge/pytorch-cpu-feedstock/actions/runs/20980286298/job/60303616203?pr=475

That said, I wonder if we should be installing them at all for non-CUDA builds. Perhaps we should just make that conditional on cuda_compiler_version != None?

I can do the necessary changes, just need confirmation that I'm not missing something.

@mgorny (Contributor) commented Jan 14, 2026

(FWICS the problem is that macOS hosts are using bash 3.2, which doesn't support the -v operator)
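One possible fix (a sketch, not necessarily the change that was merged): `[[ -v var ]]` was only added in bash 4.2, but the POSIX ${var+x} expansion works on bash 3.2 as well:

```shell
#!/bin/bash
# bash-3.2-compatible replacement for `[[ ! -v CF_TORCH_CUDA_ARCH_LIST ]]`:
# "${VAR+x}" expands to nothing only when VAR is completely unset.
if [ -z "${CF_TORCH_CUDA_ARCH_LIST+x}" ]; then
    export CF_TORCH_CUDA_ARCH_LIST="@cf_torch_cuda_arch_list@"
    export CF_TORCH_CUDA_ARCH_LIST_BACKUP="UNSET"
fi
```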

@carterbox (Member Author):
@mgorny, can you please propose a fix? I don't have any macOS devices. Unless you think I can debug this by installing bash 3.2 on a Linux host.

@mgorny (Contributor) commented Jan 14, 2026

@mgorny, can you please propose a fix? I don't have any macOS devices. Unless you think I can debug this by installing bash 3.2 on a Linux host.

Yeah, I'm pretty sure you can reproduce it with bash 3.2 on Linux. However, as I said above, I think we shouldn't be installing these activation files when PyTorch is built without CUDA, right?

@carterbox (Member Author):
I think we shouldn't be installing these activation files when PyTorch is built without CUDA, right?

OK. I'll wrap the activation install commands in a conditional.
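A sketch of how the install commands could be gated in the recipe's build script (script names and paths are illustrative; in the real recipe the condition comes from the cuda_compiler_version variant):

```shell
#!/usr/bin/env bash
# Sketch: only ship the activation/deactivation scripts for CUDA builds.
# cuda_compiler_version, PREFIX, and RECIPE_DIR are assumed to be
# provided by the conda-build environment.
if [[ "${cuda_compiler_version:-None}" != "None" ]]; then
    mkdir -p "${PREFIX}/etc/conda/activate.d" "${PREFIX}/etc/conda/deactivate.d"
    cp "${RECIPE_DIR}/activate.sh" "${PREFIX}/etc/conda/activate.d/libtorch_activate.sh"
    cp "${RECIPE_DIR}/deactivate.sh" "${PREFIX}/etc/conda/deactivate.d/libtorch_deactivate.sh"
fi
```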

@mgorny (Contributor) commented Jan 15, 2026

Thanks. Testing in #475.

Linked issue: Export CF_TORCH_CUDA_ARCH_LIST in activation script (#423)