Questions? Here’s how to reach us:
- Email: [email protected]
- Team: @ISOAI
Please reference the table below for official GPU package dependencies for the ONNX Runtime inferencing package. Note that ONNX Runtime Training is aligned with PyTorch CUDA versions; refer to the Training tab on onnxruntime.ai for supported versions.
On Windows:
- Download the installer from the CUDA Toolkit 11.8 Downloads | NVIDIA Developer page.
- Double-click `cuda_11.8.0_522.06_windows.exe`.
- Follow the on-screen prompts to complete the installation.
To verify the installation:
- Open the command prompt.
- Type the following command and press Enter:
nvcc --version
This command should output information about the NVIDIA CUDA Compiler (nvcc), including the version of the CUDA toolkit that is installed.
If you get an error saying that 'nvcc' is not recognized as an internal or external command, either CUDA is not installed or it is not on the system PATH. If CUDA is installed but missing from the PATH, add the CUDA bin directory (which contains nvcc), typically located at `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8\bin`, to the PATH via the Environment Variables dialog.
On Linux:
- Download the installer from the CUDA Toolkit 11.8 Downloads | NVIDIA Developer page.
- Open a terminal and navigate to the directory where the installer was downloaded.
- Install the toolkit by running the following commands:
sudo dpkg -i cuda-repo-<distro>_<version>_amd64.deb
sudo apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/<distro>/x86_64/7fa2af80.pub
sudo apt-get update
sudo apt-get install cuda
To verify the installation:
- Open a terminal.
- Type the following command and press Enter:
nvcc --version
This command should output information about the NVIDIA CUDA Compiler (nvcc), including the version of the CUDA toolkit that is installed.
If you get a "command not found" error for nvcc, either CUDA is not installed or it is not on the system PATH. If CUDA is installed but not on the PATH, add it by appending the following lines to your `.bashrc` file:
export PATH=/usr/local/cuda-11.8/bin${PATH:+:${PATH}}
export LD_LIBRARY_PATH=/usr/local/cuda-11.8/lib64${LD_LIBRARY_PATH:+:${LD_LIBRARY_PATH}}
Then reload your `.bashrc` file and confirm the setup as sketched below:
source ~/.bashrc
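A minimal sanity check after reloading the shell, assuming the default install locations used above (the driver check with nvidia-smi is independent of the toolkit):

```bash
# Confirm the CUDA 11.8 toolkit is on PATH and its libraries are resolvable
which nvcc                # expected: /usr/local/cuda-11.8/bin/nvcc
echo "$LD_LIBRARY_PATH"   # should include /usr/local/cuda-11.8/lib64
nvcc --version            # should report release 11.8

# Confirm the NVIDIA driver can see the GPU
nvidia-smi
```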
On Windows:
- Download the cuDNN archive from the Index of /compute/redist/cudnn/v8.5.0/local_installers/11.7 page (nvidia.com); specifically, download `cudnn-windows-x86_64-8.5.0.96_cuda11-archive.zip`.
- Extract the contents to `C:\Program Files\NVIDIA GPU Computing Toolkit\CUDA\v11.8`.
- Download zlib from the provided link: `zlib123dllx64.zip`.
- Extract the contents to `C:\Program Files\zlib123dllx64`.
- Add the following path to your system environment variables: `C:\Program Files\zlib123dllx64\dll_x64`
On Linux:
- Download the cuDNN archive from the Index of /compute/redist/cudnn/v8.5.0/local_installers/11.7 page (nvidia.com).
- Extract the downloaded archive:
tar -xvf cudnn-linux-x86_64-8.5.0.96_cuda11-archive.tar.xz
- Copy the extracted files to your CUDA directory:
sudo cp cudnn-*-archive/include/cudnn*.h /usr/local/cuda/include
sudo cp -P cudnn-*-archive/lib/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
- Verify the installation (expected output shown below):
cat /usr/local/cuda/include/cudnn_version.h | grep CUDNN_MAJOR -A 2
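For cuDNN 8.5.0 the command should print the version defines from `cudnn_version.h`, roughly as follows (the patch level may differ for other 8.5.x builds):

```
#define CUDNN_MAJOR 8
#define CUDNN_MINOR 5
#define CUDNN_PATCHLEVEL 0
```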
- Install zlib using your package manager:
sudo apt-get update
sudo apt-get install zlib1g zlib1g-dev
Ensure zlib is correctly installed and available on your system paths; a quick check is sketched below.
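A minimal check, assuming a Debian/Ubuntu system, is to ask the dynamic linker whether the zlib shared library is registered:

```bash
# Expect an entry such as libz.so.1 in the output
ldconfig -p | grep libz
```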
- Configure the production repository:
curl -fsSL https://nvidia.github.io/libnvidia-container/gpgkey | sudo gpg --dearmor -o /usr/share/keyrings/nvidia-container-toolkit-keyring.gpg \
  && curl -s -L https://nvidia.github.io/libnvidia-container/stable/deb/nvidia-container-toolkit.list | \
    sed 's#deb https://#deb [signed-by=/usr/share/keyrings/nvidia-container-toolkit-keyring.gpg] https://#g' | \
    sudo tee /etc/apt/sources.list.d/nvidia-container-toolkit.list
- Optionally, configure the repository to use experimental packages:
sudo sed -i -e '/experimental/ s/^#//g' /etc/apt/sources.list.d/nvidia-container-toolkit.list
- Update the package list from the repository:
sudo apt-get update
- Install the NVIDIA Container Toolkit packages:
sudo apt-get install -y nvidia-container-toolkit
For further information, see the NVIDIA Container Toolkit Installation Guide on the NVIDIA website. A quick check that the toolkit CLI installed correctly is shown below.
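Before configuring Docker, you can confirm that the `nvidia-ctk` CLI used in the next step is available (the reported version string will vary):

```bash
# Print the installed NVIDIA Container Toolkit CLI version
nvidia-ctk --version
```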
- Configure the container runtime by using the `nvidia-ctk` command:
sudo nvidia-ctk runtime configure --runtime=docker
The `nvidia-ctk` command modifies the `/etc/docker/daemon.json` file on the host so that Docker can use the NVIDIA Container Runtime.
- Restart the Docker daemon, then verify GPU access from a container as shown below:
sudo systemctl restart docker
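As a final sanity check, assuming the NVIDIA driver is installed on the host, a disposable CUDA base container should be able to see the GPU (the image tag is just an example of a CUDA 11.8 base image):

```bash
# nvidia-smi inside the container should list the host GPU(s)
sudo docker run --rm --gpus all nvidia/cuda:11.8.0-base-ubuntu22.04 nvidia-smi
```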
- Clone the repository and start the services:
git clone https://github.com/gorkemkaramolla/iso-fr-ai.git
cd iso-fr-ai
docker compose up --build
- In a separate terminal, start the Electron app:
cd iso-fr-ai
cd iso-electron && npm run isoai
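Once `docker compose up --build` is running, the following commands (run from the repository root in another terminal) can confirm that the services came up; the service names depend on what the compose file defines:

```bash
# List the compose services and their current status
docker compose ps

# Follow the aggregated logs of all services
docker compose logs -f
```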