Is there an existing issue for this?
I have searched the existing issues and checked the recent builds/commits
What would your feature do ?
The software is great, but after a long period of rapid iteration much of the code and many of its components have become outdated, so the WebUI cannot run on the latest versions of its base software and reach the rendering speed it should.
Update the WebUI's component versions where appropriate to restore correct behavior and gain performance.
Support and use Python 3.10, PyTorch 2.2.0, CUDA 12, and xFormers for faster rendering, with better-matched component versions.
After discussion, I will split the work into small, focused PRs, so that the WebUI can eventually run containerized on a variety of devices and operating systems and be upgraded continuously as PyTorch and CUDA evolve, keeping the environment at maximum performance.
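As a rough illustration only (the function name and exact version targets below are my own and not code from the WebUI), a small startup check could confirm that the runtime matches the proposed Python 3.10 / PyTorch 2.x / CUDA 12 / xFormers baseline:

import sys

import torch


def check_environment() -> None:
    # Hypothetical sanity check; the version targets are illustrative, not fixed requirements.
    assert sys.version_info >= (3, 10), "Python 3.10+ expected"
    assert torch.__version__.startswith("2."), "PyTorch 2.x expected"
    assert torch.cuda.is_available(), "CUDA-enabled PyTorch build expected"
    assert torch.version.cuda is not None and torch.version.cuda.startswith("12"), "CUDA 12 expected"
    try:
        import xformers  # optional accelerated attention kernels
        print("xformers:", xformers.__version__)
    except ImportError:
        print("xformers not installed; default attention will be used")
    print("python:", sys.version.split()[0])
    print("torch:", torch.__version__, "| cuda:", torch.version.cuda)
    print("gpu:", torch.cuda.get_device_name(0))


if __name__ == "__main__":
    check_environment()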
Looking forward to hearing what other users in the community think.
Additional information
PyTorch version: 2.2.0a0+81ea7a4
Is debug build: False
CUDA used to build PyTorch: 12.3
ROCM used to build PyTorch: N/A
OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.9
Libc version: glibc-2.35
Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-14-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.3.107
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 525.147.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] numpy==1.24.4
[pip3] onnx==1.15.0rc2
[pip3] open-clip-torch==2.7.0
[pip3] optree==0.10.0
[pip3] pytorch-lightning==2.1.3
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.2.0a0+81ea7a4
[pip3] torch-tensorrt==2.2.0a0
[pip3] torchdata==0.7.0a0
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==1.3.0
[pip3] torchsde==0.2.6
[pip3] torchtext==0.17.0a0
[pip3] torchvision==0.17.0a0
[pip3] triton==2.1.0+6e4932c
Whether on Windows or any other operating system, the community would have a unified environment and could say goodbye to messy installation errors and failures.
It would also be easier for community users and developers to reproduce and test different component versions and to ship upgrades faster.
Progress
The first step is to use a standard container environment, the NVIDIA PyTorch image, to provide a standard installation of the WebUI and to decouple its components.
https://github.com/soulteary/docker-stable-diffusion-webui/
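As a minimal sketch of that decoupling (the file name, variable names, version pins, and wheel index below are illustrative choices of mine, not contents of the linked repository), the pinned stack could live in one small module that both a Dockerfile build step and a bare-metal install script consume:

# versions.py - hypothetical single source of truth for the pinned component stack
PINNED = {
    "torch": "2.2.0",
    "torchvision": "0.17.0",
    "xformers": "0.0.24",
}


def pip_install_command(index_url: str = "https://download.pytorch.org/whl/cu121") -> str:
    # Render one pip command that container builds and host installs can share.
    packages = " ".join(f"{name}=={version}" for name, version in PINNED.items())
    return f"pip install {packages} --extra-index-url {index_url}"


if __name__ == "__main__":
    print(pip_install_command())

With something like this, bumping PyTorch or CUDA would only touch one file, and the same pins would apply whether the WebUI runs in a container or directly on the host.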