Fix pytorch-{cpu,gpu} packages are built only for single python version #281

Closed
jeongseok-meta opened this issue Nov 3, 2024 · 2 comments · Fixed by #282

Comments

@jeongseok-meta (Contributor)

[image attachment]

That said, this seems to be a problem with the 2.4.1 package as well, and likely for many of the megabuild packages:
https://conda-metadata-app.streamlit.app/?q=conda-forge%2Flinux-64%2Fpytorch-cpu-2.4.1-cpu_mkl_py39h09a6fac_103.conda

Originally posted by @hmaarrfk in #277 (comment)

@hmaarrfk (Contributor) commented Nov 3, 2024

I think the following patch will work:

diff --git a/recipe/meta.yaml b/recipe/meta.yaml
index 46a7507..e56535c 100644
--- a/recipe/meta.yaml
+++ b/recipe/meta.yaml
@@ -347,7 +347,9 @@ outputs:
         - pytorch-cpu                                      # [cuda_compiler_version == "None"]
     requirements:
       run:
-        - {{ pin_subpackage("pytorch", exact=True) }}
+        - pytorch {{ version }}=cuda_{{ blas_impl }}*{{ PKG_BUILDNUM }}   # [megabuild and cuda_compiler_version != "None"]
+        - pytorch {{ version }}=cpu_{{ blas_impl }}*{{ PKG_BUILDNUM }}    # [megabuild and cuda_compiler_version == "None"]
+        - {{ pin_subpackage("pytorch", exact=True) }}                     # [not megabuild]
     test:
       imports:
         - torch
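To see why the loose pin helps: conda matches a spec like `pytorch 2.4.1=cpu_mkl*103` against package build strings with glob-style wildcards, so one spec can cover every per-Python build of a megabuild, whereas `pin_subpackage(..., exact=True)` locks in a single build string. A minimal sketch using Python's `fnmatch` (whose `*` behaves like conda's build-string glob for this case; the py39 build string is taken from the linked 2.4.1 package, the others are hypothetical):

```python
from fnmatch import fnmatch

# Build strings for one megabuild, one per Python version.
# The py39 entry is from the linked pytorch-cpu-2.4.1 package;
# the py310/py311 hashes are made up for illustration.
builds = [
    "cpu_mkl_py39h09a6fac_103",
    "cpu_mkl_py310h3c2e3ac_103",
    "cpu_mkl_py311h1a2b3c4_103",
]

# Rendered from: pytorch {{ version }}=cpu_{{ blas_impl }}*{{ PKG_BUILDNUM }}
loose = "cpu_mkl*103"
# What an exact pin_subpackage pin would emit (one concrete build string).
exact = "cpu_mkl_py39h09a6fac_103"

print([fnmatch(b, loose) for b in builds])  # [True, True, True]
print([fnmatch(b, exact) for b in builds])  # [True, False, False]
```

The loose spec matches all Python variants of the same version and build number; the exact pin only ever matches the one Python version the metapackage happened to be built against.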

@hmaarrfk (Contributor) commented Nov 3, 2024

Hmm, we might need the build string to also be "resolved":

diff --git a/recipe/meta.yaml b/recipe/meta.yaml
index 46a7507..7fbee9f 100644
--- a/recipe/meta.yaml
+++ b/recipe/meta.yaml
@@ -338,8 +338,10 @@ outputs:
   {% set pytorch_cpu_gpu = "pytorch-gpu" %}   # [cuda_compiler_version != "None"]
   - name: {{ pytorch_cpu_gpu }}
     build:
-      string: cuda{{ cuda_compiler_version | replace('.', '') }}py{{ CONDA_PY }}h{{ PKG_HASH }}_{{ PKG_BUILDNUM }}  # [cuda_compiler_version != "None"]
-      string: cpu_{{ blas_impl }}_py{{ CONDA_PY }}h{{ PKG_HASH }}_{{ PKG_BUILDNUM }}                                      # [cuda_compiler_version == "None"]
+      string: cuda{{ cuda_compiler_version | replace('.', '') }}h{{ PKG_HASH }}_{{ PKG_BUILDNUM }}                  # [megabuild and cuda_compiler_version != "None"]
+      string: cpu_{{ blas_impl }}_h{{ PKG_HASH }}_{{ PKG_BUILDNUM }}                                                # [megabuild and cuda_compiler_version == "None"]
+      string: cuda{{ cuda_compiler_version | replace('.', '') }}py{{ CONDA_PY }}h{{ PKG_HASH }}_{{ PKG_BUILDNUM }}  # [not megabuild and cuda_compiler_version != "None"]
+      string: cpu_{{ blas_impl }}_py{{ CONDA_PY }}h{{ PKG_HASH }}_{{ PKG_BUILDNUM }}                                # [not megabuild and cuda_compiler_version == "None"]
       detect_binary_files_with_prefix: false
       skip: true  # [cuda_compiler_version != "None" and linux64 and blas_impl != "mkl"]
       # weigh down cpu implementation and give cuda preference
@@ -347,7 +349,9 @@ outputs:
         - pytorch-cpu                                      # [cuda_compiler_version == "None"]
     requirements:
       run:
-        - {{ pin_subpackage("pytorch", exact=True) }}
+        - pytorch {{ version }}=cuda_{{ blas_impl }}*{{ PKG_BUILDNUM }}   # [megabuild and cuda_compiler_version != "None"]
+        - pytorch {{ version }}=cpu_{{ blas_impl }}*{{ PKG_BUILDNUM }}    # [megabuild and cuda_compiler_version == "None"]
+        - {{ pin_subpackage("pytorch", exact=True) }}                     # [not megabuild]
     test:
       imports:
         - torch
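For illustration, the second patch renders the pytorch-{cpu,gpu} build string without the `py{{ CONDA_PY }}` tag in the megabuild case, since that one metapackage now covers all Python versions. Roughly (build number from the linked 2.4.1 package; hashes hypothetical):

    cpu_mkl_h09a6fac_103        # megabuild: one package for all Python versions
    cpu_mkl_py39h09a6fac_103    # not megabuild: one package per Python version (here py39)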
