What is the difference between pytorch and pytorch-cpu (and pytorch-gpu)? #164
Comments
Pytorch-cpu & pytorch-gpu are the old names for compatibility with the packages that used to be published in the in short:
should work. In case of doubt, you can specify the buildstring directly to select the respective variant
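A minimal sketch of what that can look like (the build-string patterns are illustrative; as noted further below, the build string starts with `cuda` or `cpu`):

```
# Any of these should work; the metapackages just force a particular variant.
conda install pytorch        # picks a cpu or cuda build based on what conda detects
conda install pytorch-gpu    # forces the GPU variant
conda install pytorch-cpu    # forces the CPU variant

# Or select the variant directly via the build string (patterns are illustrative):
conda install "pytorch=*=cuda*"
conda install "pytorch=*=cpu*"
```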
All right, I understand the situation.
It's possible that conda cannot find your drivers, though that would be unusual (you can check what conda detects, see below). If you're on some exotic setup (e.g. HPC, or drivers installed in a weird location) and conda cannot find your CUDA drivers, you can override what conda detects as your CUDA version (or its absence) and do:
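A sketch of what that override can look like, assuming the standard `CONDA_OVERRIDE_CUDA` mechanism is the one being referred to (the CUDA version shown is just an example value):

```
# Check what conda detects (a __cuda entry should appear under "virtual packages"):
conda info

# Override the detected CUDA version (or force one when none is detected):
CONDA_OVERRIDE_CUDA="11.8" conda install pytorch
```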
See here for details.
The driver seems to be found by conda, is that correct?
It should work by default (you can check in the list of packages that are about to be downloaded whether the build string after the pytorch version begins with cuda or cpu), but you can also enforce that selection as I said above:
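For instance, something along these lines (a hedged sketch; `--dry-run` just previews the transaction, and the build-string pattern is illustrative):

```
# Preview which build would be installed; the build string should start
# with "cuda" or "cpu":
conda install --dry-run pytorch

# Enforce the GPU build explicitly, as above:
conda install "pytorch=*=cuda*"
```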
In any case, you should first add the configuration that is the only supported one for conda-forge, and which is mentioned on every feedstock:
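That is, presumably, the standard conda-forge setup shown in the feedstock READMEs:

```
conda config --add channels conda-forge
conda config --set channel_priority strict
```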
Note that this will effectively replace all installed packages with the versions from conda-forge upon the next update. Finally, you shouldn't be installing packages like that separately.
Ah, never mind, I just see now that you're on Windows, where we unfortunately don't have pytorch builds yet: #32. In this case, you can only try to use the builds from the pytorch or defaults channel.
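As an illustration only (the exact package selection depends on your CUDA setup and on what the upstream channel offers), that could look like:

```
# Install from the upstream "pytorch" channel instead of conda-forge:
conda install -c pytorch pytorch
```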
Hmmm, I see; I understand the situation.
Yes, that is likely the sustainable way to do it. There was some good work done in #134; I think it would be a good starting point (maybe), but it would be nice if you could keep the contributor's authorship by using git cherry-pick (see the sketch below). Generally: make a PR, and the CI should be able to handle the roughly six hours' worth of build process. This builds confidence. Then we can invoke the process outlined in the documentation.
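A rough sketch of how authorship can be preserved with cherry-picks, assuming an `upstream` remote pointing at the feedstock; the branch names and commit range are placeholders, not taken from #134:

```
# Fetch the contributor's commits from the PR and replay them onto a fresh
# branch; git cherry-pick keeps the original author information.
git fetch upstream pull/134/head:pr-134
git checkout -b windows-builds upstream/main
git cherry-pick <first-commit>^..<last-commit>
```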
Jumping in here, as I am using pytorch-gpu.
According to me (not necessarily everyone else here), it's subject to removal at any time (pending usage numbers etc.), so it's best not to rely on it. Side note: if conda is able to find your CUDA drivers (check as described above), the right variant should be selected by default anyway.
Understood.
Yeah, I know, but we are building various complex conda envs in container images in CI, and I found that forcing pytorch-gpu saves me various headaches :-D (this pipeline was set up quite a long time ago, and I know the pytorch situation is much better today, but just in case I prefer to keep it).
I'm pretty hesitant to remove it, for the reasons in conda-forge/conda-forge.github.io#1894.
The situation here is not as complicated as the ones you describe in that issue. If we dropped pytorch-cpu and pytorch-gpu, the variant could still be selected via the build string, as described above.
I have a proposal to make the conda-forge linter complain when it finds these metapackages being used.
Comment:
I wanted to install pytorch from the conda-forge channel and came across this repository while searching.
There are pytorch, pytorch-cpu and pytorch-gpu; what is the difference between them? Also, if I want to use pytorch with GPU support from conda-forge, is my understanding correct that currently it can only be installed on Linux (pytorch-gpu)?