openai-triton: update to v2.2.0, pass compiler and libcuda paths to runtime #292996
Changes from all commits
(Two files deleted by this PR are not shown.)
@@ -19,6 +19,7 @@
, filelock
, torchWithRocm
, python
+, writeScriptBin

, runCommand
@@ -27,33 +28,23 @@
}:

let
-  ptxas = "${cudaPackages.cuda_nvcc}/bin/ptxas"; # Make sure cudaPackages is the right version each update (See python/setup.py)
+  mkBinaryStub = name: "${writeScriptBin name ''
+    echo binary ${name} is not available: openai-triton was built without CUDA support
+  ''}/bin/${name}";
in
buildPythonPackage rec {
  pname = "triton";
-  version = "2.1.0";
+  version = "2.2.0";
  pyproject = true;

  src = fetchFromGitHub {
    owner = "openai";
    repo = pname;
-    rev = "v${version}";
-    hash = "sha256-8UTUwLH+SriiJnpejdrzz9qIquP2zBp1/uwLdHmv0XQ=";
+    # Release v2.2.0 is not tagged, but published on pypi: https://github.com/openai/triton/issues/3160
+    rev = "0e7b97bd47fc4beb21ae960a516cd9a7ae9bc060";
+    hash = "sha256-UdxoHkFnFFBfvGa/NvgvGebbtwGYbrAICQR9JZ4nvYo=";
  };

-  patches = [
-    # fix overflow error
-    (fetchpatch {
-      url = "https://github.com/openai/triton/commit/52c146f66b79b6079bcd28c55312fc6ea1852519.patch";
-      hash = "sha256-098/TCQrzvrBAbQiaVGCMaF3o5Yc3yWDxzwSkzIuAtY=";
-    })
-  ] ++ lib.optionals (!cudaSupport) [
-    ./0000-dont-download-ptxas.patch
-    # openai-triton wants to get ptxas version even if ptxas is not
-    # used, resulting in ptxas not found error.
-    ./0001-ptxas-disable-version-key-for-non-cuda-targets.patch
-  ];

  nativeBuildInputs = [
    setuptools
    pythonRelaxDepsHook
@@ -111,6 +102,11 @@ buildPythonPackage rec {
    # Use our linker flags
    substituteInPlace python/triton/common/build.py \
      --replace '${oldStr}' '${newStr}'
+    # triton/common/build.py will be called both on build, and sometimes in runtime.
+    substituteInPlace python/triton/common/build.py \
+      --replace 'os.getenv("TRITON_LIBCUDA_PATH")' '"${cudaPackages.cuda_cudart}/lib"'
+    substituteInPlace python/triton/common/build.py \
+      --replace 'os.environ.get("CC")' '"${cudaPackages.backendStdenv.cc}/bin/cc"'
[inline review comment on the lines above] I would suggest making this (and the above) something like
  '';

  # Avoid GLIBCXX mismatch with other cuda-enabled python packages
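For context on why these substitutions matter: triton's python/triton/common/build.py compiles generated launcher code again on the end user's machine, so values that normally come from the build environment have to be baked in. Below is a minimal sketch of the two lookups being replaced, reconstructed from the expressions named in the substitutions above rather than copied from upstream; the helper name is hypothetical.

```python
import os
import shutil

def _pick_compiler_and_libcuda():
    # Hypothetical helper sketching the pattern in triton/common/build.py.
    # The compiler lookup happens again at runtime; the substitution hardwires
    # the Nix-provided cc because $CC from the build sandbox no longer exists then.
    cc = os.environ.get("CC")  # patched to "${cudaPackages.backendStdenv.cc}/bin/cc"
    if cc is None:
        cc = shutil.which("gcc") or shutil.which("clang")

    # libcuda directory lookup; the substitution hardwires the cudart lib dir,
    # since TRITON_LIBCUDA_PATH will not be set on end users' machines.
    libcuda_dir = os.getenv("TRITON_LIBCUDA_PATH")  # patched to "${cudaPackages.cuda_cudart}/lib"
    return cc, libcuda_dir
```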
@@ -125,24 +121,28 @@ buildPythonPackage rec {
    # The rest (including buildPhase) is relative to ./python/
    cd python

+    mkdir -p $out/${python.sitePackages}/triton/third_party/cuda/bin
+    function install_binary {
+      export TRITON_''${1^^}_PATH=$2
+      ln -s $2 $out/${python.sitePackages}/triton/third_party/cuda/bin/
+    }
  '' + lib.optionalString cudaSupport ''
    export CC=${cudaPackages.backendStdenv.cc}/bin/cc;
    export CXX=${cudaPackages.backendStdenv.cc}/bin/c++;

-    # Work around download_and_copy_ptxas()
-    mkdir -p $PWD/triton/third_party/cuda/bin
-    ln -s ${ptxas} $PWD/triton/third_party/cuda/bin
+    install_binary ptxas ${cudaPackages.cuda_nvcc}/bin/ptxas
+    install_binary cuobjdump ${cudaPackages.cuda_cuobjdump}/bin/cuobjdump
+    install_binary nvdisasm ${cudaPackages.cuda_nvdisasm}/bin/nvdisasm
+  '' + lib.optionalString (!cudaSupport) ''
+    install_binary ptxas ${mkBinaryStub "ptxas"}
+    install_binary cuobjdump ${mkBinaryStub "cuobjdump"}
+    install_binary nvdisasm ${mkBinaryStub "nvdisasm"}
  '';

  # CMake is run by setup.py instead
  dontUseCmakeConfigure = true;

-  # Setuptools (?) strips runpath and +x flags. Let's just restore the symlink
-  postFixup = lib.optionalString cudaSupport ''
-    rm -f $out/${python.sitePackages}/triton/third_party/cuda/bin/ptxas
-    ln -s ${ptxas} $out/${python.sitePackages}/triton/third_party/cuda/bin/ptxas
-  '';

  checkInputs = [ cmake ]; # ctest
  dontUseSetuptoolsCheck = true;
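A note on the install_binary helper above: ''${1^^} is the Nix escape for the bash expansion ${1^^}, which upper-cases the function's first argument, so install_binary ptxas ... both exports TRITON_PTXAS_PATH and symlinks the real binary into the tree that triton searches. Triton consults the environment variable before falling back to its bundled third_party/cuda/bin copy; roughly like the following sketch of that lookup pattern (not triton's exact code, and path_to_binary is a hypothetical name):

```python
import os

def path_to_binary(name):
    # Hypothetical mirror of triton's ptxas/cuobjdump/nvdisasm resolution.
    base_dir = os.path.dirname(__file__)
    candidates = [
        os.environ.get(f"TRITON_{name.upper()}_PATH", ""),           # set by install_binary
        os.path.join(base_dir, "third_party", "cuda", "bin", name),  # symlink made by install_binary
    ]
    for path in candidates:
        if path and os.access(path, os.X_OK):
            return path
    raise RuntimeError(f"cannot find {name}")
```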
The environment variable says "libcuda" (the userspace driver), not "libcudart"? Is this a confusion on upstream's side?

Yes, and this is intentional: isn't triton literally a tool for compiling kernels on the fly from some subset of Python?
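For a sense of what "compiling kernels on the fly" means in practice, here is the standard vector-add example from the triton tutorials. The first call JIT-compiles the kernel at runtime, which is exactly when ptxas and libcuda get exercised:

```python
import torch
import triton
import triton.language as tl

@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK_SIZE: tl.constexpr):
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK_SIZE + tl.arange(0, BLOCK_SIZE)
    mask = offsets < n_elements
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)

x, y = torch.rand(1024, device="cuda"), torch.rand(1024, device="cuda")
out = torch.empty_like(x)
grid = (triton.cdiv(x.numel(), 256),)
add_kernel[grid](x, y, out, x.numel(), BLOCK_SIZE=256)  # compiled here, at runtime
```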
Also, wasn't there an attempt to make CUDA stuff optional in triton? In that case we don't want to refer to backendStdenv but to the conditional stdenv (otherwise the CPU-only version pulls two different GCCs into the closure).
It wants libcuda.so.1; I'm not sure where I need to look for it?

The cursed part is that the build and runtime steps are closely intermixed, but the build step doesn't have a way to provide some values for runtime. It provides ptxas as a third_party binary, but libcuda etc. are expected to be the same binaries it was built with, which is not necessarily true, and I'm sure some things there are not ABI-compatible. I see that in version 3.0.0 the build process makes much more sense.

The CC variable override is only enabled with cudaSupport; otherwise triton doesn't try to call CC at runtime, and the build-time CC is enough (at least, I haven't experienced otherwise with vLLM).
Libcuda depends on the (nvidia) kernel (module) that runs on the user's machine, so we don't link it through the nix store; we link it through /run/opengl-driver/lib, i.e. ${addDriverRunpath.driverLink}/lib. More specifically, we use the fake driver ${getLib cudaPackages.cuda_cudart}/lib/stubs at build/link time, and ${addDriverRunpath.driverLink}/lib at runtime. It's also important that at runtime we try dlopen("libcuda.so", ...) first, and only then dlopen("/run/opengl-driver/lib/libcuda.so", ...), because we want things to also work on FHS distributions and respect the optional LD_LIBRARY_PATH.
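A minimal sketch of that search order, as a Python illustration only (the real dlopen calls happen in triton's and the CUDA stack's native code paths):

```python
import ctypes

def load_libcuda():
    # Plain soname first: honors LD_LIBRARY_PATH and works on FHS distributions.
    # Only then fall back to the NixOS driver link farm.
    for candidate in ("libcuda.so.1", "/run/opengl-driver/lib/libcuda.so.1"):
        try:
            return ctypes.CDLL(candidate)
        except OSError:
            continue
    raise OSError("libcuda.so.1 not found")
```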
Oh right, we should probably try to explicitly track the references retained at runtime.

Do they not use triton.common.build at runtime for their jit/aot?
I don't think we can achieve that with this code? It runs both at compile time and at runtime... except by patching it in preInstall?..

Not in vLLM on ROCm; I'm not sure about other projects using triton directly.
Well, we should open an issue asking for more fine-grained support. Note that they do not use the variable on master any more, but use whereis, which is also platform-specific: https://github.com/feihugis/triton/blob/a9d1935e795cf28aa3c3be8ac5c14723e6805de5/python/triton/compiler.py#L1354-L1357
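Paraphrased, the linked lines locate libcuda by shelling out to whereis, roughly like this (a sketch based on the linked source; whereis only exists on Linux, hence "platform-specific"):

```python
import functools
import os
import subprocess

@functools.lru_cache()
def libcuda_dirs():
    # `whereis` prints "libcuda.so: /usr/lib/libcuda.so ..."; drop the label,
    # keep the directories of each hit.
    locs = subprocess.check_output(["whereis", "libcuda.so"]).decode().strip().split()[1:]
    return [os.path.dirname(loc) for loc in locs]
```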