### System Information Report

**GPU Information**
- Model: NVIDIA GH200 480GB (Hopper architecture)
- Total GPU memory: 97,871 MiB (~95.57 GiB)
- NVIDIA driver version: 560.35.03

**CUDA/PyTorch Environment**
- CUDA version (host): 12.6
- CUDA version (container): 12.6.2

**vLLM Details**
- vLLM versions tested: v0.6.4.post1 and v0.6.4.post2.dev71
- Models being used:
  - Llama-3.1-8B-Instruct
  - Qwen2.5-7B-Instruct

**System Details**
Both models run in Docker containers with the following settings (the full `docker run` commands are listed under "Model Input Dumps" below):
- GPU memory utilization: 0.3 (30%)
- Max model length: 16384
- Max batched tokens: 2048
- Max sequence length to capture: 16384
- Chunked prefill: enabled
- Prefix caching: enabled
- IPC mode: host
- GPU access: all GPUs exposed to the container
### 🐛 Describe the bug

When running multiple vLLM instances on the same GPU, the second instance fails to start due to incorrect GPU memory accounting. The second instance appears to include the first instance's memory usage in its own calculations, leading to a negative KV cache size and an initialization failure.

Note: this setup was working fine with vLLM v0.6.3.post1.
**First Instance (works correctly)**
```
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:41<00:00, 10.38s/it]
INFO 11-25 14:11:42 model_runner.py:1077] Loading model weights took 14.9888 GB
INFO 11-25 14:11:42 worker.py:232] Memory profiling results: total_gpu_memory=95.00GiB initial_memory_usage=15.64GiB peak_torch_memory=16.20GiB memory_usage_post_profile=15.74GiB non_torch_memory=0.72GiB kv_cache_size=11.58GiB gpu_memory_utilization=0.30
INFO 11-25 14:11:42 gpu_executor.py:113] # GPU blocks: 5931, # CPU blocks: 2048
INFO 11-25 14:11:42 gpu_executor.py:117] Maximum concurrency for 16384 tokens per request: 5.79x
```
**Second Instance (memory issue)**
```
Loading safetensors checkpoint shards: 100% Completed | 4/4 [00:39<00:00, 9.95s/it]
INFO 11-25 14:14:05 model_runner.py:1077] Loading model weights took 14.2487 GB
INFO 11-25 14:14:06 worker.py:232] Memory profiling results: total_gpu_memory=95.00GiB initial_memory_usage=42.61GiB peak_torch_memory=15.67GiB memory_usage_post_profile=42.71GiB non_torch_memory=28.43GiB kv_cache_size=-15.61GiB gpu_memory_utilization=0.30
INFO 11-25 14:14:06 gpu_executor.py:113] # GPU blocks: 0, # CPU blocks: 4681
INFO 11-25 14:14:06 gpu_executor.py:117] Maximum concurrency for 16384 tokens per request: 0.00x
ERROR 11-25 14:14:06 engine.py:366] No available memory for the cache blocks. Try increasing `gpu_memory_utilization` when initializing the engine.
```
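The second profile's numbers line up with the KV cache budget being computed from device-wide memory usage, so the first instance's footprint shows up as `non_torch_memory`. A minimal sketch of the arithmetic implied by the logged fields (the formula is inferred from the log line, not quoted from vLLM's source):

```python
# Sketch of the KV-cache budget arithmetic implied by the worker.py log lines above.
# All figures are GiB copied from the logs; the formula itself is an inference from
# the logged fields: kv_cache = total * utilization - peak_torch - non_torch.

def kv_cache_budget(total_gpu_memory, gpu_memory_utilization,
                    peak_torch_memory, non_torch_memory):
    return (total_gpu_memory * gpu_memory_utilization
            - peak_torch_memory - non_torch_memory)

# First instance: only its own usage is on the GPU.
print(kv_cache_budget(95.00, 0.30, 16.20, 0.72))   # 11.58 -> matches kv_cache_size=11.58GiB

# Second instance: the first instance's footprint is counted as non_torch_memory.
print(kv_cache_budget(95.00, 0.30, 15.67, 28.43))  # -15.60 -> matches kv_cache_size=-15.61GiB (rounding)
```

The 28.43 GiB of `non_torch_memory` roughly matches the first instance's resident footprint (≈15 GiB of weights plus ≈11.6 GiB of KV cache plus overhead), which is what pushes the second instance's budget negative at `gpu_memory_utilization=0.30`.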
### Your current environment

The output of `python collect_env.py` is summarized in the System Information Report above.
### Model Input Dumps

The two instances are launched with the following `docker run` commands:
```bash
docker run --rm --gpus all \
  -p 8000:8000 \
  --ipc=host \
  -v /llm-store/Llama-3.1-8B-Instruct:/model \
  --name vllm-llama \
  vllm-nvidia:latest \
  --host 0.0.0.0 \
  --port 8000 \
  --gpu-memory-utilization 0.3 \
  --enable_chunked_prefill True \
  --enable_prefix_caching \
  --max_model_len 16384 \
  --max-num-batched-tokens 2048 \
  --max_seq_len_to_capture 16384 \
  --model /model
```
```bash
docker run --rm --gpus all \
  -p 8001:8000 \
  --ipc=host \
  -v /llm-store/Qwen2.5-7B-Instruct:/model \
  --name vllm-qwen \
  vllm-nvidia:latest \
  --host 0.0.0.0 \
  --port 8000 \
  --gpu-memory-utilization 0.3 \
  --enable_chunked_prefill True \
  --enable_prefix_caching \
  --max_model_len 16384 \
  --max-num-batched-tokens 2048 \
  --max_seq_len_to_capture 16384 \
  --model /model
```
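Since both containers are started with `--gpus all`, they share the same physical GH200, and any device-wide memory query made inside either container also sees the other container's allocations. The sketch below is only a diagnostic aid (not part of vLLM's CLI) showing how to check what the second instance "sees" before it starts loading weights, using `torch.cuda.mem_get_info()`:

```python
# Diagnostic sketch (assumes a Python shell inside the container, which ships PyTorch).
# torch.cuda.mem_get_info() reports *device-wide* free/total memory via cudaMemGetInfo,
# so with two containers on the same GPU the second one also sees the first one's usage.
import torch

free_bytes, total_bytes = torch.cuda.mem_get_info()
gib = 1024 ** 3
print(f"total GPU memory : {total_bytes / gib:.2f} GiB")
print(f"free GPU memory  : {free_bytes / gib:.2f} GiB")
print(f"used by all processes on this GPU: {(total_bytes - free_bytes) / gib:.2f} GiB")
```

If the "used" figure already includes the first instance's ~28 GiB, the negative KV cache budget shown above follows directly from the 0.30 utilization cap.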
### Before submitting a new issue...

- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.