
[Feature Request]: Stable & Efficient Running Environment For WebUI #14651

Open
3 of 10 tasks
soulteary opened this issue Jan 15, 2024 · 1 comment
Labels
enhancement New feature or request

Comments


soulteary commented Jan 15, 2024

Is there an existing issue for this?

  • I have searched the existing issues and checked the recent builds/commits

What would your feature do?

The software is great, but after a long period of rapid iteration, many features and parts of the code have become somewhat outdated, and it is currently not possible to run on the latest versions of the underlying software to obtain efficient rendering speeds.

Progress

The first step is to use a standard container environment (NVIDIA's PyTorch container image) to implement a standard installation of WebUI and to decouple its components.

https://github.com/soulteary/docker-stable-diffusion-webui/
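As a rough illustration of the approach, a containerized setup could start from NVIDIA's official PyTorch image and expose WebUI's default port. This is only a minimal sketch; the image tag, mount paths, and launch command are assumptions for illustration, not necessarily what the linked repository uses.

```shell
# Pull NVIDIA's PyTorch container image, which ships with a matched
# CUDA/cuDNN/PyTorch stack (the tag is an example; pick one that
# matches your driver).
docker pull nvcr.io/nvidia/pytorch:23.12-py3

# Start a container with GPU access, publish WebUI's default port
# (7860), and mount a host directory for models. The mount path
# inside the container is a hypothetical example.
docker run --rm -it --gpus all \
  -p 7860:7860 \
  -v "$(pwd)/models:/app/stable-diffusion-webui/models" \
  nvcr.io/nvidia/pytorch:23.12-py3 \
  bash
```

Pinning the whole GPU stack to one container tag is what makes upgrades reproducible: moving to a newer PyTorch/CUDA release becomes a one-line tag change rather than a host-level reinstall.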

Proposed workflow

  1. After discussion, I will split the work above into appropriately small PRs, so that WebUI can eventually run containerized on various devices and operating systems, and can be continuously upgraded alongside PyTorch and CUDA releases for maximum performance.
  2. Looking forward to hearing what other users in the community think.

Additional information


PyTorch version: 2.2.0a0+81ea7a4
Is debug build: False
CUDA used to build PyTorch: 12.3
ROCM used to build PyTorch: N/A

OS: Ubuntu 22.04.3 LTS (x86_64)
GCC version: (Ubuntu 11.4.0-1ubuntu1~22.04) 11.4.0
Clang version: Could not collect
CMake version: version 3.27.9
Libc version: glibc-2.35

Python version: 3.10.12 (main, Nov 20 2023, 15:14:05) [GCC 11.4.0] (64-bit runtime)
Python platform: Linux-6.5.0-14-generic-x86_64-with-glibc2.35
Is CUDA available: True
CUDA runtime version: 12.3.107
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration: GPU 0: NVIDIA GeForce RTX 4090
Nvidia driver version: 525.147.05
cuDNN version: Probably one of the following:
/usr/lib/x86_64-linux-gnu/libcudnn.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_adv_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_cnn_train.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_infer.so.8.9.7
/usr/lib/x86_64-linux-gnu/libcudnn_ops_train.so.8.9.7
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True


Versions of relevant libraries:
[pip3] clip-anytorch==2.6.0
[pip3] dctorch==0.1.2
[pip3] numpy==1.24.4
[pip3] onnx==1.15.0rc2
[pip3] open-clip-torch==2.7.0
[pip3] optree==0.10.0
[pip3] pytorch-lightning==2.1.3
[pip3] pytorch-quantization==2.1.2
[pip3] torch==2.2.0a0+81ea7a4
[pip3] torch-tensorrt==2.2.0a0
[pip3] torchdata==0.7.0a0
[pip3] torchdiffeq==0.2.3
[pip3] torchmetrics==1.3.0
[pip3] torchsde==0.2.6
[pip3] torchtext==0.17.0a0
[pip3] torchvision==0.17.0a0
[pip3] triton==2.1.0+6e4932c
@soulteary (Author) commented:

Regardless of whether it is Windows or another operating system, the community will have a unified environment and can say goodbye to messy installation errors and failures.

It will also be easier for community users and developers to reproduce and test different versions of components, and to upgrade faster.
