The compilation invocation is missing the typical includes; maybe something broke with detecting the toolchain? #18
Comments
Please paste the whole stack trace. This looks like a problem with torch rather than Triton.
It seems to be an error that occurred while TorchInductor was compiling something for the CPU, and my code in this repo does not modify that header file. Did you use vcvars? (Personally, I don't like to use it.)
Yes, there are some details below, obtained without vcvars. I have previously seen your package working, and now it does not work.
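For reference, loading the MSVC environment before launching Python usually means calling vcvars first. A minimal sketch, assuming the 2022 Build Tools (the exact path depends on your Visual Studio edition and version):

```shell
:: Load the MSVC 2022 Build Tools environment into this shell.
:: Path is an assumption; adjust for Community/Professional editions.
call "C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Auxiliary\Build\vcvars64.bat"

:: cl.exe plus the INCLUDE/LIB variables are now set for this shell,
:: so a Python process launched afterwards inherits them.
python main.py
```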
When you say "a place they can be found", where did you copy it to? And am I right to assume that you did that, and then your compile worked?
This is a working procedure: visit http://localhost:8188, then drag and drop this file into the work area, and click Queue. This successfully compiled.
Logs showing a failure to configure the toolchain using your automatic discovery code:
`cvuvp4i7roujum4xemrfwnb3t4c5t3r3mihr4b7iegh6tcqvdg43.h`:
This is using the latest binaries from your site with torch 2.5.1.
Generally, the best way to install MSVC on Windows is using Chocolatey. This results in the installation path `C:\Program Files (x86)\Microsoft Visual Studio\2022\BuildTools\VC\Tools\MSVC\14.37.32822\include`, which correctly has `algorithm` in it. This is the only toolchain installed on this machine. (The CUDA SDK also does not correctly copy its CMake package config into VS 2022, which is another pain point.)
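For completeness, the Chocolatey install referred to above looks roughly like this; the package name is from the community Chocolatey repository, and the workload parameter is an assumption about which C++ components are needed:

```shell
# Install the MSVC 2022 Build Tools with the C++ workload via Chocolatey.
# Run from an elevated shell; component list is an assumption.
choco install visualstudio2022buildtools --package-parameters "--add Microsoft.VisualStudio.Workload.VCTools --includeRecommended"
```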
Executing with vcvars causes `python311.lib` to not be found. The generated compilation command appears to reference the wrong paths for libraries. After providing the Python lib path, everything works.
Reproducing
Requirements: an NVIDIA GPU with 24GB of VRAM or more (Ampere or better), and 30GB of disk space.
You can reproduce this issue as follows: visit http://localhost:8188, then drag and drop mochi_text_to_video_example.json into the work area, and click Queue.