
PyTorch 2.5.1 #277

Merged: 3 commits into conda-forge:main, Nov 3, 2024

Conversation

@jeongseok-meta (Contributor) commented Oct 17, 2024

Checklist

  • Used a personal fork of the feedstock to propose changes
  • Bumped the build number (if the version is unchanged)
  • Reset the build number to 0 (if the version changed)
  • Re-rendered with the latest conda-smithy (Use the phrase @conda-forge-admin, please rerender in a comment in this PR for automated rerendering)
  • Ensured the license file is being packaged.

@jeongseok-meta marked this pull request as ready for review October 17, 2024 22:04
@jeongseok-meta marked this pull request as draft October 17, 2024 22:04
@jeongseok-meta (Contributor Author)

@conda-forge-admin, please rerender

@conda-forge-admin (Contributor)

Hi! This is the friendly automated conda-forge-linting service.

I just wanted to let you know that I linted all conda-recipes in your PR (recipe/meta.yaml) and found it was in an excellent condition.

@conda-forge-admin (Contributor)

Hi! This is the friendly automated conda-forge-webservice.

I tried to rerender for you, but it looks like there was nothing to do.

This message was generated by GitHub Actions workflow run https://github.com/conda-forge/conda-forge-webservices/actions/runs/11393752191.

@hmaarrfk (Contributor)

You might need to update my patch to help find numpy; my suggestion didn't seem to go through upstream…

@jeongseok-meta (Contributor Author)

Submitted pytorch/pytorch#138287 for the nvtx patch

@jeongseok-meta (Contributor Author)

Woohoo, it passed the CMake configuration stage and is now in the build stage. By the way, I don't have permission to run several CI jobs. Could I get those permissions, or would someone like to take over this PR or create a new one?

@hmaarrfk (Contributor)

Just use Azure for now; that CI gets clogged anyway, and then you wait forever ;)

@hmaarrfk (Contributor)

If you hit the 6 hour timeout, we can switch back to the larger runner.

@jeongseok-meta (Contributor Author)

@conda-forge-admin, please rerender

@jeongseok-meta (Contributor Author)

@conda-forge-admin, please rerender

@jeongseok-meta (Contributor Author)

@conda-forge-admin, please rerender

@hmaarrfk (Contributor)

There are other things you have to patch out:

    torch 2.5.0.post100 has requirement sympy==1.13.1; python_version >= "3.9", but you have sympy 1.13.3.
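Upstream pins sympy exactly, which conflicts with the newer sympy available on conda-forge. A common way feedstocks deal with this (a hypothetical sketch, not necessarily the exact change made in this PR) is to replace the exact pin with a lower bound in the recipe's run requirements:

```yaml
# Hypothetical fragment of recipe/meta.yaml: relax upstream's exact
# sympy pin so newer sympy releases still resolve.
requirements:
  run:
    - sympy >=1.13.1
```

The upstream metadata itself can also be patched in the build script so `pip check` stops flagging the mismatch.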

@jeongseok-meta force-pushed the pytorch_250 branch 2 times, most recently from 213cb75 to 1b40828, October 18, 2024 16:53
@jeongseok-meta (Contributor Author)

> but that takes ownership away from you, and centralizes it into "me".

This is totally fine by me as long as we can move forward!

@hmaarrfk (Contributor) commented Nov 2, 2024

@conda-forge-admin please rerender

@hmaarrfk (Contributor) commented Nov 2, 2024

Honestly, feel free to clean up the commits and force push:

  1. Drop all MNT commits from rerendering
  2. Squash all the commits you made (and mine) into one of your own
  3. Leave the commits of other contributors intact (if any)
  4. Rerender
  5. Force push

Then, if you have a few powerful Linux machines, you can run the builds yourself.
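The squash step above can be sketched with `git reset --soft`, demonstrated here in a throwaway repo (all file, branch, and commit names are illustrative, not taken from the actual feedstock):

```shell
#!/bin/sh
# Squash everything after a base commit into a single commit.
set -e
tmp=$(mktemp -d)
cd "$tmp"
git init -q demo && cd demo
git config user.email "demo@example.com"
git config user.name "demo"
echo base > file.txt && git add file.txt && git commit -qm "base (stand-in for upstream/main)"
base=$(git rev-parse HEAD)
echo work >> file.txt && git commit -qam "add 2.5.1 changes"
echo mnt >> file.txt && git commit -qam "MNT: rerender"
# Collapse everything after the base commit into one commit:
git reset --soft "$base"
git commit -qm "PyTorch 2.5.1"
git rev-list --count HEAD    # two commits remain: base + the squash
```

On a real branch this would be followed by `git push --force-with-lease` rather than a plain force push, so you don't clobber anyone else's commits.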

Co-authored-by: Mark Harfouche <[email protected]>
@jeongseok-meta (Contributor Author)

Done! I've squashed all the commits into one, with co-author attribution.

Also, I've started a local build for Linux. Once it's complete, how can I upload the files (if needed)? They will be too large to attach to this PR comment thread.

@Tobias-Fischer (Contributor)

Should we create an issue in https://github.com/Quansight/open-gpu-server/issues to request an image with more disk space, to avoid having to do CFEP-03 in the future?

@jeongseok-meta (Contributor Author)

@Tobias-Fischer sounds good! Quansight/open-gpu-server#47

@hmaarrfk (Contributor) commented Nov 3, 2024

You should upload them to your own anaconda channel as outlined in CFEP-03.
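For reference, a CFEP-03 style upload typically goes through anaconda-client; a non-executable sketch (the channel name, label, and paths below are illustrative, not from this PR):

```shell
# Hypothetical CFEP-03 upload flow with anaconda-client:
#   anaconda login
#   anaconda upload --user YOUR_CHANNEL --label pr277 linux-64/pytorch-2.5.1-*.conda
# Reviewers can then install from that channel to verify:
#   conda install -c YOUR_CHANNEL/label/pr277 pytorch=2.5.1
```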

@hmaarrfk (Contributor) commented Nov 3, 2024

The logs should be uploaded here for me and others to review

@hmaarrfk (Contributor) commented Nov 3, 2024

the linker script strikes again!

@jslee02 commented Nov 3, 2024

Alright, I just finished building the Linux packages locally with the linker script patch (though I'm not 100% sure it's a proper fix):

@jslee02 mentioned this pull request Nov 3, 2024
@hmaarrfk (Contributor) commented Nov 3, 2024

I was about to upload, but then I realized that the pytorch-cpu package only has a single build…

[screenshot omitted]

That said, this seems to be a problem with the 2.4.1 package as well, and likely for many of the megabuild packages:
https://conda-metadata-app.streamlit.app/?q=conda-forge%2Flinux-64%2Fpytorch-cpu-2.4.1-cpu_mkl_py39h09a6fac_103.conda

Let's merge this and find a longer-term solution later.

@hmaarrfk merged commit 494d890 into conda-forge:main Nov 3, 2024
18 of 27 checks passed
@jeongseok-meta deleted the pytorch_250 branch November 3, 2024 17:45
@jeongseok-meta (Contributor Author)

OMG, this got merged! 🎉 🎉 Thank you so much for all your help!

For the pytorch-cpu package, what would be the desired outcome?

@hmaarrfk (Contributor) commented Nov 3, 2024

That users can install it with any Python version.

Right now it locks them to a single one.

@danpetry commented Nov 4, 2024

Yeah, if you use pin_subpackage with exact=True, it'll pin to the Python version. You could do:

    - name: pytorch-{{ "cpu" if cuda_major == 0 else "gpu" }}
      requirements:
        run:
          - pytorch ={{ version }}={{ "cpu" if cuda_major == 0 else "cuda" }}*

@jeongseok-meta (Contributor Author) commented Nov 4, 2024

@danpetry, thank you for the suggestion! Could you please take a look at #282 and see if it also makes sense to you?

@danpetry commented Nov 4, 2024

I will have a look as soon as I can :)


6 participants