Once we update python3Packages.torch to 2.X.X, its torchWithRocm variant will depend on cudaPackages.cuda_nvcc (through python3Packages.openai-triton), which is unfree. This means Hydra won't build or cache torchWithRocm.
Upstream enforces that a copy of bin/ptxas (from cuda_nvcc) be present. However, they do not actually use it except for CUDA support (obviously).
We should patch around this to re-enable caching, and we should work with upstream to make the dependency optional in the first place.
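As a rough illustration of the kind of workaround meant here (not a tested fix), an overlay could strip cuda_nvcc back out of openai-triton for the ROCm case. The attribute names openai-triton and cuda_nvcc come from the issue text; that the dependency lives in buildInputs, and that triton's ptxas check can be patched out separately, are assumptions that would need checking against the actual derivation:

```nix
# Hypothetical sketch only: drop the unfree cuda_nvcc dependency from
# openai-triton so torchWithRocm stays cacheable by Hydra. Assumes the
# dependency sits in buildInputs and that the upstream bin/ptxas check
# is patched out separately.
final: prev: {
  python3 = prev.python3.override {
    packageOverrides = pyFinal: pyPrev: {
      openai-triton = pyPrev.openai-triton.overridePythonAttrs (old: {
        # Filter out cuda_nvcc (the source of bin/ptxas); it is only
        # needed for CUDA targets, not for the ROCm build.
        buildInputs =
          prev.lib.filter (d: (d.pname or "") != "cuda_nvcc")
            (old.buildInputs or [ ]);
      });
    };
  };
}
```

The longer-term fix the issue asks for, making the dependency optional upstream, would make this kind of override unnecessary.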
CC @NixOS/rocm-maintainers