I'm encountering the following compilation errors when building the project:
cmake --build . --config {build_type} --parallel {job_count} --target tensorrt_llm nvinfer_plugin_tensorrt_llm th_common bindings executorWorker
[ 0%] Building CXX object tensorrt_llm/runtime/CMakeFiles/runtime_src.dir/utils/numpyUtils.cpp.o
In file included from /tmp/package/MEP-tensorRT/cpp/include/tensorrt_llm/common/cudaUtils.h:20,
from /tmp/package/MEP-tensorRT/cpp/include/tensorrt_llm/runtime/cudaStream.h:20,
from /tmp/package/MEP-tensorRT/cpp/include/tensorrt_llm/runtime/bufferManager.h:20,
from /tmp/package/MEP-tensorRT/cpp/tensorrt_llm/runtime/utils/numpyUtils.h:19,
from /tmp/package/MEP-tensorRT/cpp/tensorrt_llm/runtime/utils/numpyUtils.cpp:17:
/tmp/package/MEP-tensorRT/cpp/tensorrt_llm/common/cudaDriverWrapper.h:74:37: error: ‘CUtensorMap’ has not been declared
74 | CUresult cuTensorMapEncodeTiled(CUtensorMap* tensorMap, CUtensorMapDataType tensorDataType, cuuint32_t tensorRank,
| ^~~~~~~~~~~
/tmp/package/MEP-tensorRT/cpp/tensorrt_llm/common/cudaDriverWrapper.h:74:61: error: ‘CUtensorMapDataType’ has not been declared
74 | CUresult cuTensorMapEncodeTiled(CUtensorMap* tensorMap, CUtensorMapDataType tensorDataType, cuuint32_t tensorRank,
| ^~~~~~~~~~~~~~~~~~~
/tmp/package/MEP-tensorRT/cpp/tensorrt_llm/common/cudaDriverWrapper.h:76:43: error: ‘CUtensorMapInterleave’ has not been declared
76 | cuuint32_t const* elementStrides, CUtensorMapInterleave interleave, CUtensorMapSwizzle swizzle,
| ^~~~~~~~~~~~~~~~~~~~~
/tmp/package/MEP-tensorRT/cpp/tensorrt_llm/common/cudaDriverWrapper.h:76:77: error: ‘CUtensorMapSwizzle’ has not been declared
76 | cuuint32_t const* elementStrides, CUtensorMapInterleave interleave, CUtensorMapSwizzle swizzle,
| ^~~~~~~~~~~~~~~~~~
/tmp/package/MEP-tensorRT/cpp/tensorrt_llm/common/cudaDriverWrapper.h:77:9: error: ‘CUtensorMapL2promotion’ has not been declared
77 | CUtensorMapL2promotion l2Promotion, CUtensorMapFloatOOBfill oobFill) const;
| ^~~~~~~~~~~~~~~~~~~~~~
/tmp/package/MEP-tensorRT/cpp/tensorrt_llm/common/cudaDriverWrapper.h:77:45: error: ‘CUtensorMapFloatOOBfill’ has not been declared
77 | CUtensorMapL2promotion l2Promotion, CUtensorMapFloatOOBfill oobFill) const;
| ^~~~~~~~~~~~~~~~~~~~~~~
/tmp/package/MEP-tensorRT/cpp/tensorrt_llm/common/cudaDriverWrapper.h:103:41: error: ‘CUtensorMap’ has not been declared
103 | CUresult (*_cuTensorMapEncodeTiled)(CUtensorMap* tensorMap, CUtensorMapDataType tensorDataType,
| ^~~~~~~~~~~
/tmp/package/MEP-tensorRT/cpp/tensorrt_llm/common/cudaDriverWrapper.h:103:65: error: ‘CUtensorMapDataType’ has not been declared
103 | CUresult (*_cuTensorMapEncodeTiled)(CUtensorMap* tensorMap, CUtensorMapDataType tensorDataType,
| ^~~~~~~~~~~~~~~~~~~
/tmp/package/MEP-tensorRT/cpp/tensorrt_llm/common/cudaDriverWrapper.h:105:69: error: ‘CUtensorMapInterleave’ has not been declared
105 | cuuint32_t const* boxDim, cuuint32_t const* elementStrides, CUtensorMapInterleave interleave,
| ^~~~~~~~~~~~~~~~~~~~~
/tmp/package/MEP-tensorRT/cpp/tensorrt_llm/common/cudaDriverWrapper.h:106:9: error: ‘CUtensorMapSwizzle’ has not been declared
106 | CUtensorMapSwizzle swizzle, CUtensorMapL2promotion l2Promotion, CUtensorMapFloatOOBfill oobFill);
| ^~~~~~~~~~~~~~~~~~
/tmp/package/MEP-tensorRT/cpp/tensorrt_llm/common/cudaDriverWrapper.h:106:37: error: ‘CUtensorMapL2promotion’ has not been declared
106 | CUtensorMapSwizzle swizzle, CUtensorMapL2promotion l2Promotion, CUtensorMapFloatOOBfill oobFill);
| ^~~~~~~~~~~~~~~~~~~~~~
/tmp/package/MEP-tensorRT/cpp/tensorrt_llm/common/cudaDriverWrapper.h:106:73: error: ‘CUtensorMapFloatOOBfill’ has not been declared
106 | CUtensorMapSwizzle swizzle, CUtensorMapL2promotion l2Promotion, CUtensorMapFloatOOBfill oobFill);
| ^~~~~~~~~~~~~~~~~~~~~~~
gmake[3]: *** [tensorrt_llm/runtime/CMakeFiles/runtime_src.dir/build.make:76: tensorrt_llm/runtime/CMakeFiles/runtime_src.dir/utils/numpyUtils.cpp.o] Error 1
gmake[2]: *** [CMakeFiles/Makefile2:1556: tensorrt_llm/runtime/CMakeFiles/runtime_src.dir/all] Error 2
gmake[1]: *** [CMakeFiles/Makefile2:1225: tensorrt_llm/CMakeFiles/tensorrt_llm.dir/rule] Error 2
The errors indicate that several CUtensorMap-related types (CUtensorMapDataType, CUtensorMapInterleave, etc.) are not recognized. These types appear to be missing from an included header, or are simply not defined anywhere in the project.
How can I solve this?
ENV:
-- The CXX compiler identification is GNU 9.2.0
-- Detecting CXX compiler ABI info
-- Detecting CXX compiler ABI info - done
-- Check for working CXX compiler: /usr/bin/g++ - skipped
-- Detecting CXX compile features
-- Detecting CXX compile features - done
-- NVTX is disabled
-- Importing batch manager
-- Importing executor
-- Importing nvrtc wrapper
-- Importing internal cutlass kernels
-- Building PyTorch
-- Building Google tests
-- Building benchmarks
-- Not building C++ micro benchmarks
-- TensorRT-LLM version: 0.16.0.dev2024112600
-- Looking for a CUDA compiler
-- Looking for a CUDA compiler - /usr/local/cuda-11.2/bin/nvcc
-- CUDA compiler: /usr/local/cuda-11.2/bin/nvcc
-- GPU architectures: 70-real;80-real;86-real
-- The C compiler identification is GNU 9.2.0
-- The CUDA compiler identification is NVIDIA 11.2.67
-- Detecting C compiler ABI info
-- Detecting C compiler ABI info - done
-- Check for working C compiler: /usr/local/bin/mpicc - skipped
-- Detecting C compile features
-- Detecting C compile features - done
-- Detecting CUDA compiler ABI info
-- Detecting CUDA compiler ABI info - done
-- Check for working CUDA compiler: /usr/local/cuda-11.2/bin/nvcc - skipped
-- Detecting CUDA compile features
-- Detecting CUDA compile features - done
-- Found CUDAToolkit: /usr/local/cuda-11.2/include (found version "11.2.67")
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD
-- Performing Test CMAKE_HAVE_LIBC_PTHREAD - Success
-- Found Threads: TRUE
-- CUDA library status:
-- version: 11.2.67
-- libraries: /usr/local/cuda-11.2/lib64
-- include path: /usr/local/cuda-11.2/targets/x86_64-linux/include
-- pybind11 v3.0.0 dev1
-- Found PythonInterp: /opt/miniconda/envs/python37/envs/py39/bin/python (found suitable version "3.9.20", minimum required is "3.8")
-- Found PythonLibs: /opt/miniconda/envs/python37/envs/py39/lib/libpython3.9.so
-- Performing Test HAS_FLTO
-- Performing Test HAS_FLTO - Success
-- ========================= Importing and creating target nvinfer ==========================
-- Looking for library nvinfer
-- Library that was found /usr/local/tensorrt/targets/x86_64-linux-gnu/lib/libnvinfer.so
-- ==========================================================================================
-- CUDAToolkit_VERSION 11.2 is greater or equal than 11.0, enable -DENABLE_BF16 flag
-- Found MPI_C: /usr/local/bin/mpicc (found version "3.1")
-- Found MPI_CXX: /usr/local/lib/libmpi.so (found version "3.1")
-- Found MPI: TRUE (found version "3.1")
-- COMMON_HEADER_DIRS: /tmp/package/MEP-tensorRT/cpp;/usr/local/cuda-11.2/include
-- Found Python3: /opt/miniconda/envs/python37/envs/py39/bin/python3.9 (found version "3.9.20") found components: Interpreter Development Development.Module Development.Embed
-- USE_CXX11_ABI is set by python Torch to 0
-- TORCH_CUDA_ARCH_LIST: 7.0;8.0;8.6
-- Found Python executable at /optminiconda/envs/python37/envs/py39/bin/python3.9
-- Found Python libraries at /opt/miniconda/envs/python37/envs/py39/lib
-- Found CUDA: /usr/local/cuda-11.2 (found version "11.2")
-- Found CUDAToolkit: /usr/local/cuda-11.2/include (found version "11.2.67")
-- Caffe2: CUDA detected: 11.2
-- Caffe2: CUDA nvcc is: /usr/local/cuda-11.2/bin/nvcc
-- Caffe2: CUDA toolkit directory: /usr/local/cuda-11.2
-- Caffe2: Header version is: 11.2
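The environment log above consistently reports CUDA 11.2, which supports the diagnosis that the toolkit predates the CUtensorMap types. A quick sketch for decoding the CUDA_VERSION macro value found in cuda.h (the value below is hard-coded to match this log; substitute the number grep'd from your own cuda.h):

```shell
# Decode a CUDA_VERSION macro value as it appears in cuda.h.
# This log's toolkit is 11.2, i.e. 11020; CUtensorMap needs >= 12000 (CUDA 12.0).
cuda_version=11020   # substitute: grep -m1 'define CUDA_VERSION' /usr/local/cuda/include/cuda.h
major=$((cuda_version / 1000))
minor=$(((cuda_version % 1000) / 10))
echo "CUDA ${major}.${minor}"
if [ "${cuda_version}" -lt 12000 ]; then
  echo "toolkit predates CUtensorMap support"
fi
```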