[Bug]: GPU Memory Accounting Issue with Multiple vLLM Instances #10643

Closed · 1 task done
brokenlander opened this issue Nov 25, 2024 · 0 comments
Labels: bug (Something isn't working)

Your current environment

The output of `python collect_env.py`
System Information Report
GPU Information

Model: NVIDIA GH200 480GB (Hopper Architecture)
Total GPU Memory: 97,871 MiB (~95.57 GiB)
NVIDIA Driver Version: 560.35.03

CUDA/PyTorch Environment

CUDA Version: 12.6 (System)
CUDA Version in Container: 12.6.2

vLLM Details

vLLM version: tested v0.6.4.post1 and v0.6.4.post2.dev71
Models being used:

- Llama-3.1-8B-Instruct
- Qwen2.5-7B-Instruct

System Details

Running in Docker containers with the following settings:

GPU Memory Utilization: 0.3 (30%)
Max Model Length: 16384
Max Batched Tokens: 2048
Max Sequence Length to Capture: 16384
Chunked Prefill: Enabled
Prefix Caching: Enabled
IPC Mode: host
GPU Access: all GPUs exposed to container

Model Input Dumps

docker run --rm --gpus all \
    -p 8000:8000 \
    --ipc=host \
    -v /llm-store/Llama-3.1-8B-Instruct:/model \
    --name vllm-llama \
    vllm-nvidia:latest \
    --host 0.0.0.0 \
    --port 8000 \
    --gpu-memory-utilization 0.3 \
    --enable_chunked_prefill True \
    --enable_prefix_caching \
    --max_model_len 16384 \
    --max-num-batched-tokens 2048 \
    --max_seq_len_to_capture 16384 \
    --model /model

docker run --rm --gpus all \
    -p 8001:8000 \
    --ipc=host \
    -v /llm-store/Qwen2.5-7B-Instruct:/model \
    --name vllm-qwen \
    vllm-nvidia:latest \
    --host 0.0.0.0 \
    --port 8000 \
    --gpu-memory-utilization 0.3 \
    --enable_chunked_prefill True \
    --enable_prefix_caching \
    --max_model_len 16384 \
    --max-num-batched-tokens 2048 \
    --max_seq_len_to_capture 16384 \
    --model /model

🐛 Describe the bug

When running multiple vLLM instances on the same GPU, the second instance fails to start due to incorrect GPU memory accounting. The second instance appears to include the first instance's memory usage in its calculations, leading to a negative KV cache size and initialization failure.

Note: this setup was working fine in version 0.6.3.post1
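
For reference, the `kv_cache_size` reported in the worker.py log lines below is consistent with the arithmetic sketched here. This is a paraphrase of what the log fields imply, not the actual vLLM source, and the function name is made up for illustration:

```python
# Paraphrase of the KV-cache sizing implied by the worker.py log fields;
# NOT the actual vLLM code. All values are in GiB.

def kv_cache_size_gib(total_gpu_memory: float,
                      gpu_memory_utilization: float,
                      peak_torch_memory: float,
                      non_torch_memory: float) -> float:
    # Each instance is allowed total_gpu_memory * gpu_memory_utilization.
    budget = total_gpu_memory * gpu_memory_utilization
    # Whatever torch profiling does not account for is treated as
    # "non-torch" overhead of *this* process, which is where memory
    # held by another vLLM instance on the same GPU gets folded in.
    return budget - peak_torch_memory - non_torch_memory
```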

First Instance (Works correctly)

Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:41<00:00, 10.38s/it]
INFO 11-25 14:11:42 model_runner.py:1077] Loading model weights took 14.9888 GB
INFO 11-25 14:11:42 worker.py:232] Memory profiling results: total_gpu_memory=95.00GiB initial_memory_usage=15.64GiB peak_torch_memory=16.20GiB memory_usage_post_profile=15.74GiB non_torch_memory=0.72GiB kv_cache_size=11.58GiB gpu_memory_utilization=0.30
INFO 11-25 14:11:42 gpu_executor.py:113] # GPU blocks: 5931, # CPU blocks: 2048
INFO 11-25 14:11:42 gpu_executor.py:117] Maximum concurrency for 16384 tokens per request: 5.79x

Second Instance (Memory issue)

Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:39<00:00,  9.95s/it]
INFO 11-25 14:14:05 model_runner.py:1077] Loading model weights took 14.2487 GB
INFO 11-25 14:14:06 worker.py:232] Memory profiling results: total_gpu_memory=95.00GiB initial_memory_usage=42.61GiB peak_torch_memory=15.67GiB memory_usage_post_profile=42.71GiB non_torch_memory=28.43GiB kv_cache_size=-15.61GiB gpu_memory_utilization=0.30
INFO 11-25 14:14:06 gpu_executor.py:113] # GPU blocks: 0, # CPU blocks: 4681
INFO 11-25 14:14:06 gpu_executor.py:117] Maximum concurrency for 16384 tokens per request: 0.00x
ERROR 11-25 14:14:06 engine.py:366] No available memory for the cache blocks. Try increasing `gpu_memory_utilization` when initializing the engine.
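
Plugging the second instance's numbers into the sketch above reproduces the failure. The 28.43 GiB reported as non_torch_memory appears to be dominated by the first instance's footprint (its ~15 GiB of weights plus ~11.6 GiB of KV cache), not by this process's own overhead:

```python
# Values taken from the second instance's worker.py log line above (GiB).
budget   = 95.00 * 0.30            # 28.50 GiB allotted to this instance
kv_cache = budget - 15.67 - 28.43  # peak_torch_memory, non_torch_memory
print(f"{kv_cache:.2f} GiB")       # ~ -15.60 GiB -> "# GPU blocks: 0"
```

The same arithmetic comes out positive for the first instance (28.50 - 16.20 - 0.72 = 11.58 GiB, matching its log), which is what points at the first instance's memory being attributed to the second process.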

Before submitting a new issue...

  • Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
brokenlander added the bug label on Nov 25, 2024
brokenlander closed this as not planned (won't fix, can't repro, duplicate, stale) on Nov 26, 2024