
[Misc][Quark] Upstream Quark format to VLLM #10765

Merged
merged 20 commits into vllm-project:main from kewang/quark_upstream on Jan 15, 2025

Conversation

kewang-xlnx
Contributor

This PR is related to the issue discussed in #10294.


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, only the fastcheck CI runs, which covers a small, essential subset of CI tests to catch errors quickly. You can run additional CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run CI, PR reviewers can do one of these:

  • Add ready label to the PR
  • Enable auto-merge.

🚀


mergify bot commented Nov 29, 2024

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @kewang-xlnx.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Nov 29, 2024
@kewang-xlnx kewang-xlnx force-pushed the kewang/quark_upstream branch from deedcc9 to 8837c74 Compare December 2, 2024 10:28
@mergify mergify bot removed the needs-rebase label Dec 2, 2024
@kewang-xlnx kewang-xlnx force-pushed the kewang/quark_upstream branch 3 times, most recently from dce4cec to fab2244 Compare December 4, 2024 12:02
@kewang-xlnx kewang-xlnx changed the title [feature]:upstream quark format to vllm [Misc][Quark] Upstream Quark format to VLLM Dec 4, 2024
@robertgshaw2-redhat
Collaborator

It's a bit unfortunate that:

  • We will have all this duplicated code
  • We will have multiple model formats to maintain

However, I understand that AMD wants to push forward with Quark, so I think this approach is very reasonable. I would like to take on this effort myself and do two things:

  • Factor out some of the utility functions so that we do not have too much copy/paste code between the backends
  • Leverage the Kernel abstraction so that compressed-tensors models can run on the ROCm backend

To do this, it would be very helpful if the Quark team could provide:

  • A sample W8A8-int8 model
  • A sample W8A8-fp8 model

If this is done, I can take on the effort of finalizing the integration. Thanks!
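
To make the duplication concrete, here is a minimal sketch (illustrative names and signature, not vLLM's actual API) of the per-tensor W8A8-FP8 linear helper that both the compressed-tensors and Quark paths need in some form:

import torch

def w8a8_fp8_linear(x: torch.Tensor,
                    weight_fp8: torch.Tensor,
                    weight_scale: torch.Tensor,
                    input_scale: torch.Tensor) -> torch.Tensor:
    # Quantize activations with the checkpoint's static input scale
    # (float8_e4m3fn saturates at +/-448).
    x_fp8 = (x / input_scale).clamp(-448.0, 448.0).to(torch.float8_e4m3fn)
    # Upcast and matmul; a real backend would call a fused scaled-mm kernel here.
    out = x_fp8.to(torch.float32) @ weight_fp8.to(torch.float32).t()
    # Fold both scales back into the output.
    return (out * input_scale * weight_scale).to(x.dtype)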

@kewang-xlnx
Contributor Author

Hi @robertgshaw2-neuralmagic. Thank you for your comments.

Your proposed work is highly valuable and will benefit both vLLM contributors and users. Factoring out utility functions to reduce code duplication and leveraging the Kernel abstraction for compatibility with the ROCm backend are excellent steps forward.

Here is a config file for W8A8-fp8:
config.json
And here is a config file for W8A8-int8:
config.json

Key names and data types of Quark safetensors:

model.layers.1.self_attn.k_proj.input_scale, 	torch.float16
model.layers.1.self_attn.k_proj.weight, 	torch.float8_e4m3fn
model.layers.1.self_attn.k_proj.weight_scale, 	torch.float16
model.layers.1.self_attn.o_proj.input_scale, 	torch.float16
model.layers.1.self_attn.o_proj.weight, 	torch.float8_e4m3fn
model.layers.1.self_attn.o_proj.weight_scale, 	torch.float16
model.layers.1.self_attn.q_proj.input_scale, 	torch.float16
model.layers.1.self_attn.q_proj.weight, 	torch.float8_e4m3fn
model.layers.1.self_attn.q_proj.weight_scale, 	torch.float16
model.layers.1.self_attn.v_proj.input_scale, 	torch.float16
model.layers.1.self_attn.v_proj.weight, 	torch.float8_e4m3fn
model.layers.1.self_attn.v_proj.weight_scale, 	torch.float16

KV scale format if kv cache used

model.layers.1.self_attn.k_proj.output_scale, 	torch.float16
model.layers.1.self_attn.v_proj.output_scale, 	torch.float16
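
To sanity-check that a given checkpoint follows this layout, the scale tensors can be listed with the safetensors library (a minimal sketch; the file name is illustrative):

from safetensors import safe_open

with safe_open("model.safetensors", framework="pt") as f:
    for name in f.keys():
        if name.endswith(("input_scale", "weight_scale", "output_scale")):
            t = f.get_tensor(name)
            print(name, t.dtype, tuple(t.shape))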

The integration of the Quark format into vLLM is a priority for us. To ensure long-term maintainability, we would like to keep the responsibility for maintaining the Quark format within AMD's team. Our team will closely monitor the integration to minimize any maintenance overhead for the vLLM project.

Let us know if you need any further support during the process. Thanks again for your collaboration!

@mgoin
Member

mgoin commented Dec 16, 2024

KV scale format if kv cache used

model.layers.1.self_attn.k_proj.output_scale, 	torch.float16
model.layers.1.self_attn.v_proj.output_scale, 	torch.float16

Hi @kewang-xlnx, can you describe what happens in the case of a model with a fused QKV proj, such as Phi-3? In several cases, models do not have separate k_proj/v_proj linear layers on which to register output scales. This is why we register explicit self_attn.k_scale and self_attn.v_scale in compressed-tensors. We could likely reuse the same utility if you choose the same place to register these (on the Attention module).


mergify bot commented Dec 16, 2024

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @kewang-xlnx.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Dec 16, 2024
@kewang-xlnx
Contributor Author

Hi @mgoin. For a model with a fused QKV proj such as Phi-3, the KV scale in Quark is stored as, for example:
model.layers.22.self_attn.qkv_proj.output_scale
In that case we remap the single kv_scale to both attn.k_scale and attn.v_scale.
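
A minimal sketch of that remapping during weight loading (an illustrative helper; the real loader code and attribute names may differ):

def remap_fused_kv_scale(name: str, value):
    # e.g. "model.layers.22.self_attn.qkv_proj.output_scale"
    if name.endswith("self_attn.qkv_proj.output_scale"):
        prefix = name[: -len(".qkv_proj.output_scale")]
        # The single fused scale feeds both KV-cache scales on the Attention module.
        return {f"{prefix}.attn.k_scale": value,
                f"{prefix}.attn.v_scale": value}
    return {name: value}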

@kewang-xlnx kewang-xlnx force-pushed the kewang/quark_upstream branch from 4ca3bb8 to 3d59ea6 Compare December 17, 2024 10:39
@mergify mergify bot removed the needs-rebase label Dec 17, 2024
@kewang-xlnx kewang-xlnx force-pushed the kewang/quark_upstream branch from 3d59ea6 to 6c45013 Compare December 17, 2024 11:05

mergify bot commented Dec 19, 2024

This pull request has merge conflicts that must be resolved before it can be
merged. Please rebase the PR, @kewang-xlnx.

https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/working-with-forks/syncing-a-fork

@mergify mergify bot added the needs-rebase label Dec 19, 2024
@kewang-xlnx kewang-xlnx force-pushed the kewang/quark_upstream branch from 6c45013 to 69b6bc9 Compare December 19, 2024 04:12
@mergify mergify bot removed the needs-rebase label Dec 19, 2024
@kewang-xlnx kewang-xlnx force-pushed the kewang/quark_upstream branch 3 times, most recently from 7d26e69 to 58b9221 Compare December 19, 2024 06:50
@mgoin mgoin self-requested a review December 21, 2024 16:29
Member

@mgoin mgoin left a comment


Apologies for the delay and thanks for the changes! This seems to be in a decent place to land. Would you have a small Quark dense and/or MoE model we could add to CI, @kewang-xlnx?

kewang-xlnx and others added 14 commits January 15, 2025 05:50
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
…ton_scaled_mm.py

Great suggestion to use the reshape operator. I think the change should look like this:
scale_a = scale_a.reshape(-1, 1) if scale_a.dim() <= 1 else scale_a
scale_b = scale_b.reshape(-1, 1) if scale_b.dim() <= 1 else scale_b
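
(Illustrative only: the reshape turns both per-tensor and per-token scales into column vectors that broadcast cleanly against an (M, N) result.)

import torch

out = torch.randn(4, 8)                      # (M, N) result of the low-precision matmul
for scale_a in (torch.tensor(0.02),          # per-tensor scale, dim() == 0
                torch.rand(4)):              # per-row scale, dim() == 1
    scale_a = scale_a.reshape(-1, 1) if scale_a.dim() <= 1 else scale_a
    print(scale_a.shape, (out * scale_a).shape)   # (1, 1) or (4, 1) -> (4, 8)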

Co-authored-by: Michael Goin <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
@kewang-xlnx kewang-xlnx force-pushed the kewang/quark_upstream branch from 61ac2ec to f3d9e58 Compare January 15, 2025 10:01
@mgoin
Member

mgoin commented Jan 15, 2025

Thank you for your hard work @kewang-xlnx!

@mgoin mgoin merged commit de0526f into vllm-project:main Jan 15, 2025
57 checks passed
ice-tong pushed a commit to ice-tong/vllm that referenced this pull request Jan 18, 2025
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Signed-off-by: ice-tong <[email protected]>
joennlae pushed a commit to 44ai-labs/vllm that referenced this pull request Jan 19, 2025
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
joennlae pushed a commit to 44ai-labs/vllm that referenced this pull request Jan 19, 2025
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
jikunshang pushed a commit to jikunshang/vllm that referenced this pull request Jan 21, 2025
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
abmfy pushed a commit to abmfy/vllm-flashinfer that referenced this pull request Jan 24, 2025
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Signed-off-by: Bowen Wang <[email protected]>
abmfy pushed a commit to abmfy/vllm-flashinfer that referenced this pull request Jan 24, 2025
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
rasmith pushed a commit to rasmith/vllm that referenced this pull request Jan 30, 2025
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Isotr0py pushed a commit to Isotr0py/vllm that referenced this pull request Feb 2, 2025
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
hongxiayang added a commit to ROCm/vllm that referenced this pull request Feb 3, 2025
…ntion (#399)

* [V1] Avoid sending text prompt to core engine (vllm-project#11963)

Signed-off-by: Roger Wang <[email protected]>

* [CI/Build] Add markdown linter (vllm-project#11857)

Signed-off-by: Rafael Vasquez <[email protected]>

* [Model] Initialize support for Deepseek-VL2 models (vllm-project#11578)

Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>

* [Hardware][CPU] Multi-LoRA implementation for the CPU backend (vllm-project#11100)

Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>

* [Hardware][TPU] workaround fix for MoE on TPU (vllm-project#11764)

* [V1][Core][1/n] Logging and Metrics (vllm-project#11962)

Signed-off-by: [email protected] <[email protected]>

* [Model] Support GGUF models newly added in `transformers` 4.46.0 (vllm-project#9685)

Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>

* [V1] [2/n] Logging and Metrics - `OutputProcessor` Abstraction (vllm-project#11973)

Signed-off-by: [email protected] <[email protected]>

* [MISC] fix typo in kv transfer send recv test (vllm-project#11983)

* [Bug] Fix usage of `.transpose()` and `.view()` consecutively. (vllm-project#11979)

* [CI][Spec Decode] fix: broken test for EAGLE model (vllm-project#11972)

Signed-off-by: Sungjae Lee <[email protected]>

* [Misc] Fix Deepseek V2 fp8 kv-scale remapping (vllm-project#11947)

Signed-off-by: Yida Wu <[email protected]>

* [Misc]Minor Changes about Worker (vllm-project#11555)

Signed-off-by: Chenguang Li <[email protected]>

* [platform] add ray_device_key (vllm-project#11948)

Signed-off-by: youkaichao <[email protected]>

* Fix Max Token ID for Qwen-VL-Chat (vllm-project#11980)

Signed-off-by: Alex-Brooks <[email protected]>

* [Kernel] unified_attention for Attention.forward (vllm-project#11967)

Signed-off-by: Chen Zhang <[email protected]>

* [Doc][V1] Update model implementation guide for V1 support (vllm-project#11998)

Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>

* [Doc] Organise installation documentation into categories and tabs (vllm-project#11935)

Signed-off-by: Harry Mellor <[email protected]>

* [platform] add device_control env var (vllm-project#12009)

Signed-off-by: youkaichao <[email protected]>

* [Platform] Move get_punica_wrapper() function to Platform (vllm-project#11516)

Signed-off-by: Shanshan Shen <[email protected]>

* bugfix: Fix signature mismatch in benchmark's `get_tokenizer` function (vllm-project#11982)

Signed-off-by: elijah <[email protected]>

* [Doc] Fix build from source and installation link in README.md (vllm-project#12013)

Signed-off-by: Yikun <[email protected]>

* Using list

* [Bugfix] Fix deepseekv3 gate bias error (vllm-project#12002)

Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>

* Revert "[misc] improve memory profiling (vllm-project#11809)"

This reverts commit 889e662.

* Multi-lingual P3L (#356)

* Commiting the *multilingual* P3L test.

* Created a *multi-lingual* P3L test.

* Making ruff happy.

* .

* Added a reference to the language-scripture Confluence table.

* Typo fixing.

* Harmonizing naming.

* Fixing comments in the header.

---------

Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>

* Trying to make scales work with compileable attention

* [Docs] Add Sky Computing Lab to project intro (vllm-project#12019)

Signed-off-by: Woosuk Kwon <[email protected]>

* [HPU][Bugfix] set_forward_context and CI test execution (vllm-project#12014)

Signed-off-by: Konrad Zawora <[email protected]>

* [Doc] Update Quantization Hardware Support Documentation (vllm-project#12025)

Signed-off-by: tjtanaa <[email protected]>
Co-authored-by: tjtanaa <[email protected]>

* [HPU][misc] add comments for explanation (vllm-project#12034)

Signed-off-by: youkaichao <[email protected]>

* [Bugfix] Fix various bugs in multi-modal processor (vllm-project#12031)

Signed-off-by: DarkLight1337 <[email protected]>

* [Kernel] Revert the API change of Attention.forward (vllm-project#12038)

Signed-off-by: Chen Zhang <[email protected]>

* [Platform] Add output for Attention Backend (vllm-project#11981)

Signed-off-by: wangxiyuan <[email protected]>

* [Bugfix][Kernel] Give unique name to BlockSparseFlashAttention (vllm-project#12040)

Signed-off-by: Chen Zhang <[email protected]>

* Explain where the engine args go when using Docker (vllm-project#12041)

Signed-off-by: Harry Mellor <[email protected]>

* Docs lint

* [Doc]: Update the Json Example of the `Engine Arguments` document (vllm-project#12045)

* [Misc]  Merge bitsandbytes_stacked_params_mapping and packed_modules_mapping (vllm-project#11924)

Signed-off-by: Jee Jee Li <[email protected]>

* [Kernel] Support MulAndSilu (vllm-project#11624)

Signed-off-by: Jee Jee Li <[email protected]>

* [HPU][Bugfix] Don't use /dev/accel/accel0 for HPU autodetection in setup.py (vllm-project#12046)

Signed-off-by: Konrad Zawora <[email protected]>

* [Platform] move current_memory_usage() into platform (vllm-project#11369)

Signed-off-by: Shanshan Shen <[email protected]>

* [V1][BugFix] Fix edge case in VLM scheduling (vllm-project#12065)

Signed-off-by: Woosuk Kwon <[email protected]>

* [Misc] Add multipstep chunked-prefill support for FlashInfer (vllm-project#10467)

* [core] Turn off GPU communication overlap for Ray executor (vllm-project#12051)

Signed-off-by: Rui Qiao <[email protected]>

* [core] platform agnostic executor via collective_rpc (vllm-project#11256)

Signed-off-by: youkaichao <[email protected]>

* [Doc] Update examples to remove SparseAutoModelForCausalLM (vllm-project#12062)

Signed-off-by: Kyle Sayers <[email protected]>

* [V1][Prefix Cache] Move the logic of num_computed_tokens into KVCacheManager (vllm-project#12003)

* Fix: cases with empty sparsity config (vllm-project#12057)

Signed-off-by: Rahul Tuli <[email protected]>

* Type-fix: make execute_model output type optional (vllm-project#12020)

* [Platform] Do not raise error if _Backend is not found (vllm-project#12023)

Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>

* [Model]: Support internlm3 (vllm-project#12037)

* Misc: allow to use proxy in `HTTPConnection` (vllm-project#12042)

Signed-off-by: Yuan Zhou <[email protected]>

* [Misc][Quark] Upstream Quark format to VLLM (vllm-project#10765)

Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>

* [Doc]: Update `OpenAI-Compatible Server` documents (vllm-project#12082)

* [Bugfix] use right truncation for non-generative tasks (vllm-project#12050)

Signed-off-by: Joe Runde <[email protected]>

* [V1][Core] Autotune encoder cache budget (vllm-project#11895)

Signed-off-by: Roger Wang <[email protected]>

* [Bugfix] Fix _get_lora_device for HQQ marlin (vllm-project#12090)

Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>

* Allow hip sources to be directly included when compiling for rocm. (vllm-project#12087)

* [Core] Default to using per_token quantization for fp8 when cutlass is supported. (vllm-project#8651)

Signed-off-by: mgoin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: mgoin <[email protected]>

* [Doc] Add documentation for specifying model architecture (vllm-project#12105)

* Various cosmetic/comment fixes (vllm-project#12089)

Signed-off-by: mgoin <[email protected]>

* [Bugfix] Remove hardcoded `head_size=256` for Deepseek v2 and v3 (vllm-project#12067)

Signed-off-by: Isotr0py <[email protected]>

* Support torchrun and SPMD-style offline inference (vllm-project#12071)

Signed-off-by: youkaichao <[email protected]>

* [core] LLM.collective_rpc interface and RLHF example (vllm-project#12084)

Signed-off-by: youkaichao <[email protected]>

* [Bugfix] Fix max image feature size for Llava-one-vision (vllm-project#12104)

Signed-off-by: Roger Wang <[email protected]>

* Enable user marker for vllm profiling (#357)

* Enable user marker for vllm profiling

---------

Co-authored-by: Gregory Shtrasberg <[email protected]>

* [misc] Add LoRA kernel micro benchmarks (vllm-project#11579)

* [Model] Add support for deepseek-vl2-tiny model (vllm-project#12068)

Signed-off-by: Isotr0py <[email protected]>

* Deepseek V3 support (#364)

* Changing the hard coded datatype to see if it's enough for the model to work

* Picking the upstrteam moe kernel version

* make upstream fix for v3 also works for rocm v2

* Conditional fnuz dtype

* Requantizing from fn to fnuz

* Requantizing moe as well

* Actually requantizing moe weights

* Conditional requantization and assert on padding in block quant

* Format

---------

Co-authored-by: charlifu <[email protected]>

* [Bugfix] Set enforce_eager automatically for mllama (vllm-project#12127)

Signed-off-by: Chen Zhang <[email protected]>

* [Bugfix] Fix a path bug in disaggregated prefill example script. (vllm-project#12121)

Signed-off-by: Kuntai Du <[email protected]>

* [CI]add genai-perf benchmark in nightly benchmark (vllm-project#10704)

Signed-off-by: Kunshang Ji <[email protected]>

* [Doc] Add instructions on using Podman when SELinux is active (vllm-project#12136)

Signed-off-by: Yuan Tang <[email protected]>

* [Bugfix] Fix issues in CPU build Dockerfile (vllm-project#12135)

Signed-off-by: Yuan Tang <[email protected]>

* [BugFix] add more `is not None` check in VllmConfig.__post_init__ (vllm-project#12138)

Signed-off-by: Chen Zhang <[email protected]>

* [Misc] Add deepseek_vl2 chat template (vllm-project#12143)

Signed-off-by: Isotr0py <[email protected]>

* [ROCm][MoE] moe tuning support for rocm (vllm-project#12049)

Signed-off-by: Divakar Verma <[email protected]>

* [V1] Move more control of kv cache initialization from model_executor to EngineCore (vllm-project#11960)

Signed-off-by: Chen Zhang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>

* [Misc][LoRA] Improve the readability of LoRA error messages (vllm-project#12102)

Signed-off-by: Jee Jee Li <[email protected]>

* [CI/Build][CPU][Bugfix] Fix CPU CI (vllm-project#12150)

Signed-off-by: jiang1.li <[email protected]>

* [core] allow callable in collective_rpc (vllm-project#12151)

Signed-off-by: youkaichao <[email protected]>

* [Bugfix] Fix score api for missing max_model_len validation (vllm-project#12119)

Signed-off-by: Wallas Santos <[email protected]>

* [Bugfix] Mistral tokenizer encode accept list of str (vllm-project#12149)

Signed-off-by: Kunshang Ji <[email protected]>

* [AMD][FP8] Using MI300 FP8 format on ROCm for block_quant (vllm-project#12134)

Signed-off-by: Gregory Shtrasberg <[email protected]>

* [torch.compile] disable logging when cache is disabled (vllm-project#12043)

Signed-off-by: youkaichao <[email protected]>

* [misc] fix cross-node TP (vllm-project#12166)

Signed-off-by: youkaichao <[email protected]>

* [AMD][CI/Build][Bugfix] use pytorch stale wheel (vllm-project#12172)

Signed-off-by: hongxyan <[email protected]>

* [core] further polish memory profiling (vllm-project#12126)

Signed-off-by: youkaichao <[email protected]>

* [Docs] Fix broken link in SECURITY.md (vllm-project#12175)

Signed-off-by: Russell Bryant <[email protected]>

* [Model] Port deepseek-vl2 processor, remove dependency (vllm-project#12169)

Signed-off-by: Isotr0py <[email protected]>

* [core] clean up executor class hierarchy between v1 and v0 (vllm-project#12171)

Signed-off-by: youkaichao <[email protected]>

* [Misc] Support register quantization method out-of-tree (vllm-project#11969)

* [V1] Collect env var for usage stats (vllm-project#12115)

* [BUGFIX] Move scores to float32 in case of running xgrammar on cpu (vllm-project#12152)

Signed-off-by: Michal Adamczyk <[email protected]>

* [Bugfix] Fix multi-modal processors for transformers 4.48 (vllm-project#12187)

* [torch.compile] store inductor compiled Python file (vllm-project#12182)

Signed-off-by: youkaichao <[email protected]>

* benchmark_serving support --served-model-name param (vllm-project#12109)

Signed-off-by: zibai <[email protected]>
Co-authored-by: Roger Wang <[email protected]>

* [Misc] Add BNB support to GLM4-V model (vllm-project#12184)

Signed-off-by: Isotr0py <[email protected]>

* [V1] Add V1 support of Qwen2-VL (vllm-project#12128)

Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>

* [Model] Support for fairseq2 Llama (vllm-project#11442)

Signed-off-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>

* [Bugfix] Fix num_heads value for simple connector when tp enabled (vllm-project#12074)

Signed-off-by: Shangming Cai <[email protected]>

* [torch.compile] fix sym_tensor_indices (vllm-project#12191)

Signed-off-by: youkaichao <[email protected]>

* Move linting to `pre-commit` (vllm-project#11975)

Signed-off-by: Harry Mellor <[email protected]>

* [DOC] Fix typo in docstring and assert message (vllm-project#12194)

Signed-off-by: Yuan Tang <[email protected]>

* [DOC] Add missing docstring in LLMEngine.add_request() (vllm-project#12195)

Signed-off-by: Yuan Tang <[email protected]>

* [Bugfix] Fix incorrect types in LayerwiseProfileResults (vllm-project#12196)

Signed-off-by: Yuan Tang <[email protected]>

* [Model] Add Qwen2 PRM model support (vllm-project#12202)

Signed-off-by: Isotr0py <[email protected]>

* [Core] Interface for accessing model from `VllmRunner` (vllm-project#10353)

Signed-off-by: DarkLight1337 <[email protected]>

* [misc] add placeholder format.sh (vllm-project#12206)

Signed-off-by: youkaichao <[email protected]>

* [CI/Build] Remove dummy CI steps (vllm-project#12208)

Signed-off-by: DarkLight1337 <[email protected]>

* [CI/Build] Make pre-commit faster (vllm-project#12212)

Signed-off-by: DarkLight1337 <[email protected]>

* [Model] Upgrade Aria to transformers 4.48 (vllm-project#12203)

Signed-off-by: DarkLight1337 <[email protected]>

* [misc] print a message to suggest how to bypass commit hooks (vllm-project#12217)

Signed-off-by: youkaichao <[email protected]>

* [core][bugfix] configure env var during import vllm (vllm-project#12209)

Signed-off-by: youkaichao <[email protected]>

* [V1] Remove `_get_cache_block_size` (vllm-project#12214)

Signed-off-by: Chen Zhang <[email protected]>

* [Misc] Pass `attention` to impl backend (vllm-project#12218)

Signed-off-by: wangxiyuan <[email protected]>

* [Bugfix] Fix `HfExampleModels.find_hf_info` (vllm-project#12223)

Signed-off-by: DarkLight1337 <[email protected]>

* [CI] Pass local python version explicitly to pre-commit mypy.sh (vllm-project#12224)

Signed-off-by: Chen Zhang <[email protected]>

* Using ROCm6.3.1 base docker and building hipblas-common (#366)

* [Misc] Update CODEOWNERS (vllm-project#12229)

* fix: update platform detection for M-series arm based MacBook processors (vllm-project#12227)

Signed-off-by: isikhi <[email protected]>

* [misc] add cuda runtime version to usage data (vllm-project#12190)

Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Roger Wang <[email protected]>

* [bugfix] catch xgrammar unsupported array constraints (vllm-project#12210)

Signed-off-by: Jason Cheng <[email protected]>

* [Kernel] optimize moe_align_block_size for cuda graph and large num_experts (e.g. DeepSeek-V3) (vllm-project#12222)

Signed-off-by: Jinzhen Lin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>

* Add quantization and guided decoding CODEOWNERS (vllm-project#12228)

Signed-off-by: mgoin <[email protected]>

* [AMD][Build] Porting dockerfiles from the ROCm/vllm fork (vllm-project#11777)

Signed-off-by: Gregory Shtrasberg <[email protected]>

* [BugFix] Fix GGUF tp>1 when vocab_size is not divisible by 64 (vllm-project#12230)

Signed-off-by: NickLucche <[email protected]>

* [ci/build] disable failed and flaky tests (vllm-project#12240)

Signed-off-by: youkaichao <[email protected]>

* [Misc] Rename `MultiModalInputsV2 -> MultiModalInputs` (vllm-project#12244)

Signed-off-by: DarkLight1337 <[email protected]>

* [Misc]Add BNB quantization for PaliGemmaForConditionalGeneration  (vllm-project#12237)

Signed-off-by: Jee Jee Li <[email protected]>

* [Misc] Remove redundant TypeVar from base model (vllm-project#12248)

Signed-off-by: DarkLight1337 <[email protected]>

* [Bugfix] Fix mm_limits access for merged multi-modal processor (vllm-project#12252)

Signed-off-by: DarkLight1337 <[email protected]>

* [torch.compile] transparent compilation with more logging (vllm-project#12246)

Signed-off-by: youkaichao <[email protected]>

* [V1][Bugfix] Fix data item ordering in mixed-modality inference (vllm-project#12259)

Signed-off-by: Roger Wang <[email protected]>

* Remove pytorch comments for outlines + compressed-tensors (vllm-project#12260)

Signed-off-by: Thomas Parnell <[email protected]>

* [Platform] improve platforms getattr (vllm-project#12264)

Signed-off-by: Mengqing Cao <[email protected]>

* [ci/build] update nightly torch for gh200 test (vllm-project#12270)

Signed-off-by: youkaichao <[email protected]>

* [Bugfix] fix race condition that leads to wrong order of token returned (vllm-project#10802)

Signed-off-by: Jannis Schönleber <[email protected]>

* [Kernel] fix moe_align_block_size error condition (vllm-project#12239)

Signed-off-by: Jinzhen Lin <[email protected]>

* [v1][stats][1/n] Add RequestStatsUpdate and RequestStats types  (vllm-project#10907)

Signed-off-by: rickyx <[email protected]>

* [Bugfix] Multi-sequence broken (vllm-project#11898)

Signed-off-by: Andy Lo <[email protected]>

* [Misc] Remove experimental dep from tracing.py (vllm-project#12007)

Signed-off-by: Adrian Cole <[email protected]>

* [Misc] Set default backend to SDPA for get_vit_attn_backend (vllm-project#12235)

Signed-off-by: wangxiyuan <[email protected]>

* [Core] Free CPU pinned memory on environment cleanup (vllm-project#10477)

* Update pre-commit.yml (#374)

* Update pre-commit.yml

* Reapplying missing format

* New codespell exclude location

---------

Co-authored-by: Kevin H. Luu <[email protected]>

* [bugfix] moe tuning. rm is_navi() (vllm-project#12273)

Signed-off-by: Divakar Verma <[email protected]>

* [BUGFIX] When skip_tokenize_init and multistep are set, execution crashes (vllm-project#12277)

Signed-off-by: maleksan85 <[email protected]>
Co-authored-by: maleksan85 <[email protected]>

* [Documentation][AMD] Add information about prebuilt ROCm vLLM docker for perf validation purpose (vllm-project#12281)

Signed-off-by: Hongxia Yang <[email protected]>

* [VLM] Simplify post-processing of replacement info (vllm-project#12269)

Signed-off-by: DarkLight1337 <[email protected]>

* [ci/lint] Add back default arg for pre-commit (vllm-project#12279)

Signed-off-by: kevin <[email protected]>

* [CI] add docker volume prune to neuron CI (vllm-project#12291)

Signed-off-by: Liangfu Chen <[email protected]>

* [Ci/Build] Fix mypy errors on main (vllm-project#12296)

Signed-off-by: DarkLight1337 <[email protected]>

* [Benchmark] More accurate TPOT calc in `benchmark_serving.py` (vllm-project#12288)

Signed-off-by: Nick Hill <[email protected]>

* [core] separate builder init and builder prepare for each batch (vllm-project#12253)

Signed-off-by: youkaichao <[email protected]>

* [Build] update requirements of no-device (vllm-project#12299)

Signed-off-by: Mengqing Cao <[email protected]>

* [Core] Support fully transparent sleep mode (vllm-project#11743)

Signed-off-by: youkaichao <[email protected]>

* [VLM] Avoid unnecessary tokenization (vllm-project#12310)

Signed-off-by: DarkLight1337 <[email protected]>

* [Model][Bugfix]: correct Aria model output (vllm-project#12309)

Signed-off-by: xffxff <[email protected]>

* [Bugfix][VLM] Fix mixed-modality inference backward compatibility for V0 (vllm-project#12313)

Signed-off-by: Roger Wang <[email protected]>

* [Doc] Add docs for prompt replacement (vllm-project#12318)

Signed-off-by: DarkLight1337 <[email protected]>

* [Misc] Fix the error in the tip for the --lora-modules parameter (vllm-project#12319)

Signed-off-by: wangerxiao <[email protected]>

* [Misc]  Improve the readability of BNB error messages  (vllm-project#12320)

Signed-off-by: Jee Jee Li <[email protected]>

* Skip tokenize/detokenize when it is disabled by arg --skip-tokenizer-init (#367)

* switching detokenize flag to be False

* detokenize = False for benchmarks

* restoring default in main vllm code for detokenize

* removing extra spaces

* moving detokenize to flag

* adding support for token ids

---------

Co-authored-by: maleksan85 <[email protected]>

* [Bugfix] Fix HPU multiprocessing executor (vllm-project#12167)

Signed-off-by: Konrad Zawora <[email protected]>

* [Core] Support `reset_prefix_cache` (vllm-project#12284)

* [Frontend][V1] Online serving performance improvements (vllm-project#12287)

* [AMD][Quantization] Add TritonScaledMMLinearKernel since int8 is broken for AMD (vllm-project#12282)

Signed-off-by: Randall Smith <[email protected]>

* FP8 FA fixes (#381)

* FP8 FA fixes

Summary:
Add missing clamp and fix reciprocal scale computation.

* linter

* Returning the use of the proper stream in allreduce (#382)

* [Bugfix] Fixing  AMD LoRA CI test. (vllm-project#12329)

Signed-off-by: Alexei V. Ivanov <[email protected]>

* [Docs] Update FP8 KV Cache documentation (vllm-project#12238)

Signed-off-by: mgoin <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>

* [Docs] Document vulnerability disclosure process (vllm-project#12326)

Signed-off-by: Russell Bryant <[email protected]>

* [V1] Add `uncache_blocks` (vllm-project#12333)

* [doc] explain common errors around torch.compile (vllm-project#12340)

Signed-off-by: youkaichao <[email protected]>

* [Hardware][Gaudi][BugFix] Fix dataclass error due to triton package update (vllm-project#12338)

Signed-off-by: zhenwei <[email protected]>

* [Bugfix] Fix k_proj's bias for whisper self attention (vllm-project#12342)

Signed-off-by: Isotr0py <[email protected]>

* [Kernel] Flash Attention 3 Support (vllm-project#12093)

Signed-off-by: Lucas Wilkinson <[email protected]>

* [Doc] Troubleshooting errors during model inspection (vllm-project#12351)

Signed-off-by: DarkLight1337 <[email protected]>

* [V1] Simplify M-RoPE (vllm-project#12352)

Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: imkero <[email protected]>

* [Bugfix] Fix broken internvl2 inference with v1 (vllm-project#12360)

Signed-off-by: Isotr0py <[email protected]>

* [core] add wake_up doc and some sanity check (vllm-project#12361)

Signed-off-by: youkaichao <[email protected]>

* [torch.compile] decouple compile sizes and cudagraph sizes (vllm-project#12243)

Signed-off-by: youkaichao <[email protected]>

* [FP8][Kernel] Dynamic kv cache scaling factors computation (vllm-project#11906)

Signed-off-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>

* [TPU] Update TPU CI to use torchxla nightly on 20250122 (vllm-project#12334)

Signed-off-by: Siyuan Liu <[email protected]>

* [Docs] Document Phi-4 support (vllm-project#12362)

Signed-off-by: Isotr0py <[email protected]>

* [BugFix] Fix parameter names and `process_after_weight_loading` for W4A16 MoE Group Act Order  (vllm-project#11528)

Signed-off-by: ElizaWszola <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Michael Goin <[email protected]>

* [Misc] Fix OpenAI API Compatibility Issues in Benchmark Script (vllm-project#12357)

Signed-off-by: Junichi Sato <[email protected]>

* [Docs] Add meetup slides (vllm-project#12345)

Signed-off-by: Woosuk Kwon <[email protected]>

* Using pytorch commit past the point when rowwise PR (pytorch/pytorch#144432) was merged (#384)

* [Docs] Update spec decode + structured output in compat matrix (vllm-project#12373)

Signed-off-by: Russell Bryant <[email protected]>

* [V1][Frontend] Coalesce bunched `RequestOutput`s (vllm-project#12298)

Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>

* Set weights_only=True when using torch.load() (vllm-project#12366)

Signed-off-by: Russell Bryant <[email protected]>

* [Bugfix] Path join when building local path for S3 clone (vllm-project#12353)

Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>

* Update compressed-tensors version (vllm-project#12367)

* [V1] Increase default batch size for H100/H200 (vllm-project#12369)

Signed-off-by: Woosuk Kwon <[email protected]>

* [perf] fix perf regression from vllm-project#12253 (vllm-project#12380)

Signed-off-by: youkaichao <[email protected]>

* [Misc] Use VisionArena Dataset for VLM Benchmarking (vllm-project#12389)

Signed-off-by: Roger Wang <[email protected]>

* [ci/build] fix wheel size check (vllm-project#12396)

Signed-off-by: youkaichao <[email protected]>

* [Hardware][Gaudi][Doc] Add missing step in setup instructions (vllm-project#12382)

* [ci/build] sync default value for wheel size (vllm-project#12398)

Signed-off-by: youkaichao <[email protected]>

* [Misc] Enable proxy support in benchmark script (vllm-project#12356)

Signed-off-by: Junichi Sato <[email protected]>

* [Bugfix][Kernel] Fix CUDA 11.8 being broken by FA3 build (vllm-project#12375)

Signed-off-by: Lucas Wilkinson <[email protected]>

* Applying scales rename to fp8 config (#387)

* [Misc] Remove deprecated code (vllm-project#12383)

Signed-off-by: DarkLight1337 <[email protected]>

* [Bugfix][Kernel] FA3 Fix - RuntimeError: This flash attention build only supports pack_gqa (for build size reasons). (vllm-project#12405)

Signed-off-by: Lucas Wilkinson <[email protected]>

* Dev-docker Documentation Updates (#378)

* Dev-docker Documentation Updates

Minor updates to several sections, with links to other documents where appropriate.

* Fix formatting of GEMM filename

* README cleanup

- Reorder some sections of the README to make them easier to follow
- Improve formatting of bash commands
- Prefer use of huggingface model names instead of hard-coded directories
- Clean up wording

* Expanded sample commands for Latency and Throughput

* Fix markdown links

* Fix pre-commit errors

* Updates from review

Initial updates to incorporate feedback from a review session held with @t-parry

* Update script args to match current recommendations

* Remove recommended max-num-seqs values for now

---------

Co-authored-by: Gregory Shtrasberg <[email protected]>

* [Bugfix][Kernel] Fix moe align block issue for mixtral (vllm-project#12413)

* [Bugfix] Fix BLIP-2 processing (vllm-project#12412)

Signed-off-by: DarkLight1337 <[email protected]>

* [ROCm][MoE] MI300 tuned configs Mixtral-8x(7B,22B) | fp16, fp8 (vllm-project#12408)

Signed-off-by: Divakar Verma <[email protected]>

* [Misc] Add FA2 support to ViT MHA layer (vllm-project#12355)

Signed-off-by: Isotr0py <[email protected]>

* [TPU][CI] Update torchxla version in requirement-tpu.txt (vllm-project#12422)

Signed-off-by: Siyuan Liu <[email protected]>

* [Misc][Bugfix] FA3 support to ViT MHA layer (vllm-project#12435)

Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>

* [V1][Perf] Reduce scheduling overhead in model runner after cuda sync (vllm-project#12094)

Signed-off-by: Keyun Tong <[email protected]>

* [V1][Bugfix] Fix assertion when mm hashing is turned off (vllm-project#12439)

Signed-off-by: Roger Wang <[email protected]>

* [Misc] Revert FA on ViT vllm-project#12355 and vllm-project#12435 (vllm-project#12445)

* [Frontend] generation_config.json for  maximum tokens(vllm-project#12242)

Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>

* [Bugfix] Disable w16a16 2of4 sparse CompressedTensors24 (vllm-project#12417)

Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: mgoin <[email protected]>

* [Bugfix/CI] Fix broken kernels/test_mha.py (vllm-project#12450)

* [Bugfix][Kernel] Fix perf regression caused by PR vllm-project#12405 (vllm-project#12434)

Signed-off-by: Lucas Wilkinson <[email protected]>

* [Build/CI] Fix libcuda.so linkage (vllm-project#12424)

Signed-off-by: Tyler Michael Smith <[email protected]>

* [Frontend] Rerank API (Jina- and Cohere-compatible API)  (vllm-project#12376)

Signed-off-by: Kyle Mistele <[email protected]>

* [DOC] Add link to vLLM blog (vllm-project#12460)

Signed-off-by: Yuan Tang <[email protected]>

* [V1] Avoid list creation in input preparation (vllm-project#12457)

Signed-off-by: Woosuk Kwon <[email protected]>

* [Frontend] Support scores endpoint in run_batch (vllm-project#12430)

Signed-off-by: Pooya Davoodi <[email protected]>

* [Bugfix] Fix Granite 3.0 MoE model loading (vllm-project#12446)

Signed-off-by: DarkLight1337 <[email protected]>

* [Bugfix] Fix missing seq_start_loc in xformers prefill metadata (vllm-project#12464)

Signed-off-by: Isotr0py <[email protected]>

* [V1][Minor] Minor optimizations for update_from_output (vllm-project#12454)

Signed-off-by: Woosuk Kwon <[email protected]>

* [Bugfix] Fix gpt2 GGUF inference (vllm-project#12467)

Signed-off-by: Isotr0py <[email protected]>

* [Build] Only build 9.0a for scaled_mm and sparse kernels (vllm-project#12339)

Signed-off-by: Lucas Wilkinson <[email protected]>

* [V1][Metrics] Add initial Prometheus logger (vllm-project#12416)

Signed-off-by: Mark McLoughlin <[email protected]>

* [V1][CI/Test] Do basic test for top-p & top-k sampling (vllm-project#12469)

Signed-off-by: Woosuk Kwon <[email protected]>

* [FlashInfer] Upgrade to 0.2.0 (vllm-project#11194)

Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: youkaichao <[email protected]>

* Support FP8 FA from Quark format (#388)

* Support FP8 FA from Quark format

* Support FP8 FA from Quark format

* nit: update comment

* Direct call on ROCm

* 20250127 docs update (#392)

* updating code blocks

* typo

* updated manifest

* Including feedback

* whitespace

* Deepseek instructions

* hyperlink fix

* hyperlink fix

* updating what is new

* cpx update

* typo

* whitespace

* whitespace

* Faster Custom Paged Attention kernels (#372)

* integrate new cpa kernel, update tests and benchmark

* added comments to mfma4 kernel

* further comments for mfma16 kernel

* clang-format

* Lint

* add flag for logits rtz conversion and disable by default

* lint

* [Bugfix]: Fix paged attention unit tests of #372 (#389)

* [Bugfix]: fix paged attention tests based on the updated kernels in `csrc/attention/paged_attention_v1.cu`,`csrc/attention/paged_attention_v2.cu` and  `csrc/rocm/attention.cu`.

* improve code documentation.

* lint

---------

Co-authored-by: vllmellm <[email protected]>

---------

Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: TJian <[email protected]>
Co-authored-by: vllmellm <[email protected]>

* Using a more precise profiling on ROCm to properly account for weights padding (#394)

* Update Dockerfile.rocm

* [Bugfix]: inclucde the env variables required for running FastSyncLLM

Signed-off-by: vllmellm <[email protected]>

* fix pre-commit lint

Signed-off-by: vllmellm <[email protected]>

---------

Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Sungjae Lee <[email protected]>
Signed-off-by: Yida Wu <[email protected]>
Signed-off-by: Chenguang Li <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Shanshan Shen <[email protected]>
Signed-off-by: elijah <[email protected]>
Signed-off-by: Yikun <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Konrad Zawora <[email protected]>
Signed-off-by: tjtanaa <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: yisheng <[email protected]>
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Zhou <[email protected]>
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Wallas Santos <[email protected]>
Signed-off-by: jiang1.li <[email protected]>
Signed-off-by: yan ma <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Signed-off-by: Ye Qi <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: Kuntai Du <[email protected]>
Signed-off-by: Ren MinMin <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Fred Reiss <[email protected]>
Signed-off-by: shaochangxu.scx <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: hongxyan <[email protected]>
Signed-off-by: Michal Adamczyk <[email protected]>
Signed-off-by: zibai <[email protected]>
Signed-off-by: Martin Gleize <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: isikhi <[email protected]>
Signed-off-by: Jason Cheng <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: Jannis Schönleber <[email protected]>
Signed-off-by: rickyx <[email protected]>
Signed-off-by: Andy Lo <[email protected]>
Signed-off-by: Adrian Cole <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: Hongxia Yang <[email protected]>
Signed-off-by: kevin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: xffxff <[email protected]>
Signed-off-by: wangerxiao <[email protected]>
Signed-off-by: Alexei V. Ivanov <[email protected]>
Signed-off-by: zhenwei <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Siyuan Liu <[email protected]>
Signed-off-by: ElizaWszola <[email protected]>
Signed-off-by: Junichi Sato <[email protected]>
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
Signed-off-by: Keyun Tong <[email protected]>
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Kyle Mistele <[email protected]>
Signed-off-by: Pooya Davoodi <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: vllmellm <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Akshat Tripathi <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Yangcheng Li <[email protected]>
Co-authored-by: Siyuan Li <[email protected]>
Co-authored-by: Sungjae Lee <[email protected]>
Co-authored-by: Concurrensee <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: elijah <[email protected]>
Co-authored-by: Yikun Jiang <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Steve Luo <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: Alexei-V-Ivanov-AMD <[email protected]>
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Konrad Zawora <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: maang-h <[email protected]>
Co-authored-by: YiSheng5 <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: XiaobingZhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: jiangjiadi <[email protected]>
Co-authored-by: jiadi.jjd <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: WangErXiao <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Wallas Henrique <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Yan Ma <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
Co-authored-by: Guspan Tanadi <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: Charles Frye <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: cennn <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: minmin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Fred Reiss <[email protected]>
Co-authored-by: shaochangxu <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: sixgod <[email protected]>
Co-authored-by: Elfie Guo <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Kyle Sayers <[email protected]>
Co-authored-by: Rahul Tuli <[email protected]>
Co-authored-by: Keyun Tong <[email protected]>
Co-authored-by: RunningLeon <[email protected]>
Co-authored-by: kewang-xlnx <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: tvirolai-amd <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhaoyi Li <[email protected]>
Co-authored-by: charlifu <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Hongxia Yang <[email protected]>
Co-authored-by: yancong <[email protected]>
Co-authored-by: Michal Adamczyk <[email protected]>
Co-authored-by: gujing <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: Işık <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cheng Kuan Yong Jason <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: Jannis Schönleber <[email protected]>
Co-authored-by: Ricky Xu <[email protected]>
Co-authored-by: Andy Lo <[email protected]>
Co-authored-by: Adrian Cole <[email protected]>
Co-authored-by: Jani Monoses <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: zhou fan <[email protected]>
Co-authored-by: ilia-cher <[email protected]>
Co-authored-by: liuzhenwei <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
Co-authored-by: Siyuan Liu <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Junichi Sato <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: omer-dayan <[email protected]>
Co-authored-by: Mohit Deopujari <[email protected]>
Co-authored-by: Jeremy Arnold <[email protected]>
Co-authored-by: Matthew Hendrey <[email protected]>
Co-authored-by: Kyle Mistele <[email protected]>
Co-authored-by: Pooya Davoodi <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Bowen Wang <[email protected]>
Co-authored-by: Bowen Bao <[email protected]>
Co-authored-by: arakowsk-amd <[email protected]>
Co-authored-by: sanyalington <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: vllmellm <[email protected]>
hongxiayang added a commit to ROCm/vllm that referenced this pull request Feb 5, 2025
* [Model] Initialize support for Deepseek-VL2 models (vllm-project#11578)

Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>

* [Hardware][CPU] Multi-LoRA implementation for the CPU backend (vllm-project#11100)

Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>

* [Hardware][TPU] workaround fix for MoE on TPU (vllm-project#11764)

* [V1][Core][1/n] Logging and Metrics (vllm-project#11962)

Signed-off-by: [email protected] <[email protected]>

* [Model] Support GGUF models newly added in `transformers` 4.46.0 (vllm-project#9685)

Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>

* [V1] [2/n] Logging and Metrics - `OutputProcessor` Abstraction (vllm-project#11973)

Signed-off-by: [email protected] <[email protected]>

* [MISC] fix typo in kv transfer send recv test (vllm-project#11983)

* [Bug] Fix usage of `.transpose()` and `.view()` consecutively. (vllm-project#11979)

* [CI][Spec Decode] fix: broken test for EAGLE model (vllm-project#11972)

Signed-off-by: Sungjae Lee <[email protected]>

* [Misc] Fix Deepseek V2 fp8 kv-scale remapping (vllm-project#11947)

Signed-off-by: Yida Wu <[email protected]>

* [Misc] Minor Changes about Worker (vllm-project#11555)

Signed-off-by: Chenguang Li <[email protected]>

* [platform] add ray_device_key (vllm-project#11948)

Signed-off-by: youkaichao <[email protected]>

* Fix Max Token ID for Qwen-VL-Chat (vllm-project#11980)

Signed-off-by: Alex-Brooks <[email protected]>

* [Kernel] unified_attention for Attention.forward (vllm-project#11967)

Signed-off-by: Chen Zhang <[email protected]>

* [Doc][V1] Update model implementation guide for V1 support (vllm-project#11998)

Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>

* [Doc] Organise installation documentation into categories and tabs (vllm-project#11935)

Signed-off-by: Harry Mellor <[email protected]>

* [platform] add device_control env var (vllm-project#12009)

Signed-off-by: youkaichao <[email protected]>

* [Platform] Move get_punica_wrapper() function to Platform (vllm-project#11516)

Signed-off-by: Shanshan Shen <[email protected]>

* bugfix: Fix signature mismatch in benchmark's `get_tokenizer` function (vllm-project#11982)

Signed-off-by: elijah <[email protected]>

* [Doc] Fix build from source and installation link in README.md (vllm-project#12013)

Signed-off-by: Yikun <[email protected]>

* Using list

* [Bugfix] Fix deepseekv3 gate bias error (vllm-project#12002)

Signed-off-by: mgoin <[email protected]>
Co-authored-by: mgoin <[email protected]>

* Revert "[misc] improve memory profiling (vllm-project#11809)"

This reverts commit 889e662.

* Multi-lingual P3L (#356)

* Committing the *multilingual* P3L test.

* Created a *multi-lingual* P3L test.

* Making ruff happy.

* .

* Added a reference to the language-scripture Confluence table.

* Typo fixing.

* Harmonizing naming.

* Fixing comments in the header.

---------

Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>

* Trying to make scales work with compileable attention

* [Docs] Add Sky Computing Lab to project intro (vllm-project#12019)

Signed-off-by: Woosuk Kwon <[email protected]>

* [HPU][Bugfix] set_forward_context and CI test execution (vllm-project#12014)

Signed-off-by: Konrad Zawora <[email protected]>

* [Doc] Update Quantization Hardware Support Documentation (vllm-project#12025)

Signed-off-by: tjtanaa <[email protected]>
Co-authored-by: tjtanaa <[email protected]>

* [HPU][misc] add comments for explanation (vllm-project#12034)

Signed-off-by: youkaichao <[email protected]>

* [Bugfix] Fix various bugs in multi-modal processor (vllm-project#12031)

Signed-off-by: DarkLight1337 <[email protected]>

* [Kernel] Revert the API change of Attention.forward (vllm-project#12038)

Signed-off-by: Chen Zhang <[email protected]>

* [Platform] Add output for Attention Backend (vllm-project#11981)

Signed-off-by: wangxiyuan <[email protected]>

* [Bugfix][Kernel] Give unique name to BlockSparseFlashAttention (vllm-project#12040)

Signed-off-by: Chen Zhang <[email protected]>

* Explain where the engine args go when using Docker (vllm-project#12041)

Signed-off-by: Harry Mellor <[email protected]>

* Docs lint

* [Doc]: Update the Json Example of the `Engine Arguments` document (vllm-project#12045)

* [Misc]  Merge bitsandbytes_stacked_params_mapping and packed_modules_mapping (vllm-project#11924)

Signed-off-by: Jee Jee Li <[email protected]>

* [Kernel] Support MulAndSilu (vllm-project#11624)

Signed-off-by: Jee Jee Li <[email protected]>

* [HPU][Bugfix] Don't use /dev/accel/accel0 for HPU autodetection in setup.py (vllm-project#12046)

Signed-off-by: Konrad Zawora <[email protected]>

* [Platform] move current_memory_usage() into platform (vllm-project#11369)

Signed-off-by: Shanshan Shen <[email protected]>

* [V1][BugFix] Fix edge case in VLM scheduling (vllm-project#12065)

Signed-off-by: Woosuk Kwon <[email protected]>

* [Misc] Add multipstep chunked-prefill support for FlashInfer (vllm-project#10467)

* [core] Turn off GPU communication overlap for Ray executor (vllm-project#12051)

Signed-off-by: Rui Qiao <[email protected]>

* [core] platform agnostic executor via collective_rpc (vllm-project#11256)

Signed-off-by: youkaichao <[email protected]>

* [Doc] Update examples to remove SparseAutoModelForCausalLM (vllm-project#12062)

Signed-off-by: Kyle Sayers <[email protected]>

* [V1][Prefix Cache] Move the logic of num_computed_tokens into KVCacheManager (vllm-project#12003)

* Fix: cases with empty sparsity config (vllm-project#12057)

Signed-off-by: Rahul Tuli <[email protected]>

* Type-fix: make execute_model output type optional (vllm-project#12020)

* [Platform] Do not raise error if _Backend is not found (vllm-project#12023)

Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>

* [Model]: Support internlm3 (vllm-project#12037)

* Misc: allow to use proxy in `HTTPConnection` (vllm-project#12042)

Signed-off-by: Yuan Zhou <[email protected]>

* [Misc][Quark] Upstream Quark format to VLLM (vllm-project#10765)

Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Michael Goin <[email protected]>

* [Doc]: Update `OpenAI-Compatible Server` documents (vllm-project#12082)

* [Bugfix] use right truncation for non-generative tasks (vllm-project#12050)

Signed-off-by: Joe Runde <[email protected]>

* [V1][Core] Autotune encoder cache budget (vllm-project#11895)

Signed-off-by: Roger Wang <[email protected]>

* [Bugfix] Fix _get_lora_device for HQQ marlin (vllm-project#12090)

Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>

* Allow hip sources to be directly included when compiling for rocm. (vllm-project#12087)

* [Core] Default to using per_token quantization for fp8 when cutlass is supported. (vllm-project#8651)

Signed-off-by: mgoin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: mgoin <[email protected]>

* [Doc] Add documentation for specifying model architecture (vllm-project#12105)

* Various cosmetic/comment fixes (vllm-project#12089)

Signed-off-by: mgoin <[email protected]>

* [Bugfix] Remove hardcoded `head_size=256` for Deepseek v2 and v3 (vllm-project#12067)

Signed-off-by: Isotr0py <[email protected]>

* Support torchrun and SPMD-style offline inference (vllm-project#12071)

Signed-off-by: youkaichao <[email protected]>

* [core] LLM.collective_rpc interface and RLHF example (vllm-project#12084)

Signed-off-by: youkaichao <[email protected]>

* [Bugfix] Fix max image feature size for Llava-one-vision (vllm-project#12104)

Signed-off-by: Roger Wang <[email protected]>

* Enable user marker for vllm profiling (#357)

* Enable user marker for vllm profiling

---------

Co-authored-by: Gregory Shtrasberg <[email protected]>

* [misc] Add LoRA kernel micro benchmarks (vllm-project#11579)

* [Model] Add support for deepseek-vl2-tiny model (vllm-project#12068)

Signed-off-by: Isotr0py <[email protected]>

* Deepseek V3 support (#364)

* Changing the hard coded datatype to see if it's enough for the model to work

* Picking the upstream moe kernel version

* make upstream fix for v3 also work for rocm v2

* Conditional fnuz dtype

* Requantizing from fn to fnuz

* Requantizing moe as well

* Actually requantizing moe weights

* Conditional requantization and assert on padding in block quant

* Format

---------

Co-authored-by: charlifu <[email protected]>

* [Bugfix] Set enforce_eager automatically for mllama (vllm-project#12127)

Signed-off-by: Chen Zhang <[email protected]>

* [Bugfix] Fix a path bug in disaggregated prefill example script. (vllm-project#12121)

Signed-off-by: Kuntai Du <[email protected]>

* [CI]add genai-perf benchmark in nightly benchmark (vllm-project#10704)

Signed-off-by: Kunshang Ji <[email protected]>

* [Doc] Add instructions on using Podman when SELinux is active (vllm-project#12136)

Signed-off-by: Yuan Tang <[email protected]>

* [Bugfix] Fix issues in CPU build Dockerfile (vllm-project#12135)

Signed-off-by: Yuan Tang <[email protected]>

* [BugFix] add more `is not None` check in VllmConfig.__post_init__ (vllm-project#12138)

Signed-off-by: Chen Zhang <[email protected]>

* [Misc] Add deepseek_vl2 chat template (vllm-project#12143)

Signed-off-by: Isotr0py <[email protected]>

* [ROCm][MoE] moe tuning support for rocm (vllm-project#12049)

Signed-off-by: Divakar Verma <[email protected]>

* [V1] Move more control of kv cache initialization from model_executor to EngineCore (vllm-project#11960)

Signed-off-by: Chen Zhang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>

* [Misc][LoRA] Improve the readability of LoRA error messages (vllm-project#12102)

Signed-off-by: Jee Jee Li <[email protected]>

* [CI/Build][CPU][Bugfix] Fix CPU CI (vllm-project#12150)

Signed-off-by: jiang1.li <[email protected]>

* [core] allow callable in collective_rpc (vllm-project#12151)

Signed-off-by: youkaichao <[email protected]>

* [Bugfix] Fix score api for missing max_model_len validation (vllm-project#12119)

Signed-off-by: Wallas Santos <[email protected]>

* [Bugfix] Mistral tokenizer encode accept list of str (vllm-project#12149)

Signed-off-by: Kunshang Ji <[email protected]>

* [AMD][FP8] Using MI300 FP8 format on ROCm for block_quant (vllm-project#12134)

Signed-off-by: Gregory Shtrasberg <[email protected]>

* [torch.compile] disable logging when cache is disabled (vllm-project#12043)

Signed-off-by: youkaichao <[email protected]>

* [misc] fix cross-node TP (vllm-project#12166)

Signed-off-by: youkaichao <[email protected]>

* [AMD][CI/Build][Bugfix] use pytorch stale wheel (vllm-project#12172)

Signed-off-by: hongxyan <[email protected]>

* [core] further polish memory profiling (vllm-project#12126)

Signed-off-by: youkaichao <[email protected]>

* [Docs] Fix broken link in SECURITY.md (vllm-project#12175)

Signed-off-by: Russell Bryant <[email protected]>

* [Model] Port deepseek-vl2 processor, remove dependency (vllm-project#12169)

Signed-off-by: Isotr0py <[email protected]>

* [core] clean up executor class hierarchy between v1 and v0 (vllm-project#12171)

Signed-off-by: youkaichao <[email protected]>

* [Misc] Support register quantization method out-of-tree (vllm-project#11969)

* [V1] Collect env var for usage stats (vllm-project#12115)

* [BUGFIX] Move scores to float32 in case of running xgrammar on cpu (vllm-project#12152)

Signed-off-by: Michal Adamczyk <[email protected]>

* [Bugfix] Fix multi-modal processors for transformers 4.48 (vllm-project#12187)

* [torch.compile] store inductor compiled Python file (vllm-project#12182)

Signed-off-by: youkaichao <[email protected]>

* benchmark_serving support --served-model-name param (vllm-project#12109)

Signed-off-by: zibai <[email protected]>
Co-authored-by: Roger Wang <[email protected]>

* [Misc] Add BNB support to GLM4-V model (vllm-project#12184)

Signed-off-by: Isotr0py <[email protected]>

* [V1] Add V1 support of Qwen2-VL (vllm-project#12128)

Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: DarkLight1337 <[email protected]>

* [Model] Support for fairseq2 Llama (vllm-project#11442)

Signed-off-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>

* [Bugfix] Fix num_heads value for simple connector when tp enabled (vllm-project#12074)

Signed-off-by: Shangming Cai <[email protected]>

* [torch.compile] fix sym_tensor_indices (vllm-project#12191)

Signed-off-by: youkaichao <[email protected]>

* Move linting to `pre-commit` (vllm-project#11975)

Signed-off-by: Harry Mellor <[email protected]>

* [DOC] Fix typo in docstring and assert message (vllm-project#12194)

Signed-off-by: Yuan Tang <[email protected]>

* [DOC] Add missing docstring in LLMEngine.add_request() (vllm-project#12195)

Signed-off-by: Yuan Tang <[email protected]>

* [Bugfix] Fix incorrect types in LayerwiseProfileResults (vllm-project#12196)

Signed-off-by: Yuan Tang <[email protected]>

* [Model] Add Qwen2 PRM model support (vllm-project#12202)

Signed-off-by: Isotr0py <[email protected]>

* [Core] Interface for accessing model from `VllmRunner` (vllm-project#10353)

Signed-off-by: DarkLight1337 <[email protected]>

* [misc] add placeholder format.sh (vllm-project#12206)

Signed-off-by: youkaichao <[email protected]>

* [CI/Build] Remove dummy CI steps (vllm-project#12208)

Signed-off-by: DarkLight1337 <[email protected]>

* [CI/Build] Make pre-commit faster (vllm-project#12212)

Signed-off-by: DarkLight1337 <[email protected]>

* [Model] Upgrade Aria to transformers 4.48 (vllm-project#12203)

Signed-off-by: DarkLight1337 <[email protected]>

* [misc] print a message to suggest how to bypass commit hooks (vllm-project#12217)

Signed-off-by: youkaichao <[email protected]>

* [core][bugfix] configure env var during import vllm (vllm-project#12209)

Signed-off-by: youkaichao <[email protected]>

* [V1] Remove `_get_cache_block_size` (vllm-project#12214)

Signed-off-by: Chen Zhang <[email protected]>

* [Misc] Pass `attention` to impl backend (vllm-project#12218)

Signed-off-by: wangxiyuan <[email protected]>

* [Bugfix] Fix `HfExampleModels.find_hf_info` (vllm-project#12223)

Signed-off-by: DarkLight1337 <[email protected]>

* [CI] Pass local python version explicitly to pre-commit mypy.sh (vllm-project#12224)

Signed-off-by: Chen Zhang <[email protected]>

* Using ROCm6.3.1 base docker and building hipblas-common (#366)

* [Misc] Update CODEOWNERS (vllm-project#12229)

* fix: update platform detection for M-series arm based MacBook processors (vllm-project#12227)

Signed-off-by: isikhi <[email protected]>

* [misc] add cuda runtime version to usage data (vllm-project#12190)

Signed-off-by: youkaichao <[email protected]>
Co-authored-by: Roger Wang <[email protected]>

* [bugfix] catch xgrammar unsupported array constraints (vllm-project#12210)

Signed-off-by: Jason Cheng <[email protected]>

* [Kernel] optimize moe_align_block_size for cuda graph and large num_experts (e.g. DeepSeek-V3) (vllm-project#12222)

Signed-off-by: Jinzhen Lin <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>

* Add quantization and guided decoding CODEOWNERS (vllm-project#12228)

Signed-off-by: mgoin <[email protected]>

* [AMD][Build] Porting dockerfiles from the ROCm/vllm fork (vllm-project#11777)

Signed-off-by: Gregory Shtrasberg <[email protected]>

* [BugFix] Fix GGUF tp>1 when vocab_size is not divisible by 64 (vllm-project#12230)

Signed-off-by: NickLucche <[email protected]>

* [ci/build] disable failed and flaky tests (vllm-project#12240)

Signed-off-by: youkaichao <[email protected]>

* [Misc] Rename `MultiModalInputsV2 -> MultiModalInputs` (vllm-project#12244)

Signed-off-by: DarkLight1337 <[email protected]>

* [Misc] Add BNB quantization for PaliGemmaForConditionalGeneration (vllm-project#12237)

Signed-off-by: Jee Jee Li <[email protected]>

* [Misc] Remove redundant TypeVar from base model (vllm-project#12248)

Signed-off-by: DarkLight1337 <[email protected]>

* [Bugfix] Fix mm_limits access for merged multi-modal processor (vllm-project#12252)

Signed-off-by: DarkLight1337 <[email protected]>

* [torch.compile] transparent compilation with more logging (vllm-project#12246)

Signed-off-by: youkaichao <[email protected]>

* [V1][Bugfix] Fix data item ordering in mixed-modality inference (vllm-project#12259)

Signed-off-by: Roger Wang <[email protected]>

* Remove pytorch comments for outlines + compressed-tensors (vllm-project#12260)

Signed-off-by: Thomas Parnell <[email protected]>

* [Platform] improve platforms getattr (vllm-project#12264)

Signed-off-by: Mengqing Cao <[email protected]>

* [ci/build] update nightly torch for gh200 test (vllm-project#12270)

Signed-off-by: youkaichao <[email protected]>

* [Bugfix] fix race condition that leads to wrong order of token returned (vllm-project#10802)

Signed-off-by: Jannis Schönleber <[email protected]>

* [Kernel] fix moe_align_block_size error condition (vllm-project#12239)

Signed-off-by: Jinzhen Lin <[email protected]>

* [v1][stats][1/n] Add RequestStatsUpdate and RequestStats types  (vllm-project#10907)

Signed-off-by: rickyx <[email protected]>

* [Bugfix] Multi-sequence broken (vllm-project#11898)

Signed-off-by: Andy Lo <[email protected]>

* [Misc] Remove experimental dep from tracing.py (vllm-project#12007)

Signed-off-by: Adrian Cole <[email protected]>

* [Misc] Set default backend to SDPA for get_vit_attn_backend (vllm-project#12235)

Signed-off-by: wangxiyuan <[email protected]>

* [Core] Free CPU pinned memory on environment cleanup (vllm-project#10477)

* Update pre-commit.yml (#374)

* Update pre-commit.yml

* Reapplying missing format

* New codespell exclude location

---------

Co-authored-by: Kevin H. Luu <[email protected]>

* [bugfix] moe tuning. rm is_navi() (vllm-project#12273)

Signed-off-by: Divakar Verma <[email protected]>

* [BUGFIX] When skip_tokenize_init and multistep are set, execution crashes (vllm-project#12277)

Signed-off-by: maleksan85 <[email protected]>
Co-authored-by: maleksan85 <[email protected]>

* [Documentation][AMD] Add information about prebuilt ROCm vLLM docker for perf validation purpose (vllm-project#12281)

Signed-off-by: Hongxia Yang <[email protected]>

* [VLM] Simplify post-processing of replacement info (vllm-project#12269)

Signed-off-by: DarkLight1337 <[email protected]>

* [ci/lint] Add back default arg for pre-commit (vllm-project#12279)

Signed-off-by: kevin <[email protected]>

* [CI] add docker volume prune to neuron CI (vllm-project#12291)

Signed-off-by: Liangfu Chen <[email protected]>

* [Ci/Build] Fix mypy errors on main (vllm-project#12296)

Signed-off-by: DarkLight1337 <[email protected]>

* [Benchmark] More accurate TPOT calc in `benchmark_serving.py` (vllm-project#12288)

Signed-off-by: Nick Hill <[email protected]>

* [core] separate builder init and builder prepare for each batch (vllm-project#12253)

Signed-off-by: youkaichao <[email protected]>

* [Build] update requirements of no-device (vllm-project#12299)

Signed-off-by: Mengqing Cao <[email protected]>

* [Core] Support fully transparent sleep mode (vllm-project#11743)

Signed-off-by: youkaichao <[email protected]>

* [VLM] Avoid unnecessary tokenization (vllm-project#12310)

Signed-off-by: DarkLight1337 <[email protected]>

* [Model][Bugfix]: correct Aria model output (vllm-project#12309)

Signed-off-by: xffxff <[email protected]>

* [Bugfix][VLM] Fix mixed-modality inference backward compatibility for V0 (vllm-project#12313)

Signed-off-by: Roger Wang <[email protected]>

* [Doc] Add docs for prompt replacement (vllm-project#12318)

Signed-off-by: DarkLight1337 <[email protected]>

* [Misc] Fix the error in the tip for the --lora-modules parameter (vllm-project#12319)

Signed-off-by: wangerxiao <[email protected]>

* [Misc]  Improve the readability of BNB error messages  (vllm-project#12320)

Signed-off-by: Jee Jee Li <[email protected]>

* Skip tokenize/detokenize when it is disabled by arg --skip-tokenizer-init (#367)

* switching detokenize flag to be False

* detokenize = False for benchmarks

* restoring default in main vllm code for detokenize

* removing extra spaces

* moving detokenize to flag

* adding support for token ids

---------

Co-authored-by: maleksan85 <[email protected]>

* [Bugfix] Fix HPU multiprocessing executor (vllm-project#12167)

Signed-off-by: Konrad Zawora <[email protected]>

* [Core] Support `reset_prefix_cache` (vllm-project#12284)

* [Frontend][V1] Online serving performance improvements (vllm-project#12287)

* [AMD][Quantization] Add TritonScaledMMLinearKernel since int8 is broken for AMD (vllm-project#12282)

Signed-off-by: Randall Smith <[email protected]>

* FP8 FA fixes (#381)

* FP8 FA fixes

Summary:
Add missing clamp and fix reciprocal scale computation.

* linter

* Returning the use of the proper stream in allreduce (#382)

* [Bugfix] Fixing  AMD LoRA CI test. (vllm-project#12329)

Signed-off-by: Alexei V. Ivanov <[email protected]>

* [Docs] Update FP8 KV Cache documentation (vllm-project#12238)

Signed-off-by: mgoin <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>

* [Docs] Document vulnerability disclosure process (vllm-project#12326)

Signed-off-by: Russell Bryant <[email protected]>

* [V1] Add `uncache_blocks` (vllm-project#12333)

* [doc] explain common errors around torch.compile (vllm-project#12340)

Signed-off-by: youkaichao <[email protected]>

* [Hardware][Gaudi][BugFix] Fix dataclass error due to triton package update (vllm-project#12338)

Signed-off-by: zhenwei <[email protected]>

* [Bugfix] Fix k_proj's bias for whisper self attention (vllm-project#12342)

Signed-off-by: Isotr0py <[email protected]>

* [Kernel] Flash Attention 3 Support (vllm-project#12093)

Signed-off-by: Lucas Wilkinson <[email protected]>

* [Doc] Troubleshooting errors during model inspection (vllm-project#12351)

Signed-off-by: DarkLight1337 <[email protected]>

* [V1] Simplify M-RoPE (vllm-project#12352)

Signed-off-by: Roger Wang <[email protected]>
Co-authored-by: imkero <[email protected]>

* [Bugfix] Fix broken internvl2 inference with v1 (vllm-project#12360)

Signed-off-by: Isotr0py <[email protected]>

* [core] add wake_up doc and some sanity check (vllm-project#12361)

Signed-off-by: youkaichao <[email protected]>

* [torch.compile] decouple compile sizes and cudagraph sizes (vllm-project#12243)

Signed-off-by: youkaichao <[email protected]>

* [FP8][Kernel] Dynamic kv cache scaling factors computation (vllm-project#11906)

Signed-off-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>

* [TPU] Update TPU CI to use torchxla nightly on 20250122 (vllm-project#12334)

Signed-off-by: Siyuan Liu <[email protected]>

* [Docs] Document Phi-4 support (vllm-project#12362)

Signed-off-by: Isotr0py <[email protected]>

* [BugFix] Fix parameter names and `process_after_weight_loading` for W4A16 MoE Group Act Order  (vllm-project#11528)

Signed-off-by: ElizaWszola <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Michael Goin <[email protected]>

* [Misc] Fix OpenAI API Compatibility Issues in Benchmark Script (vllm-project#12357)

Signed-off-by: Junichi Sato <[email protected]>

* [Docs] Add meetup slides (vllm-project#12345)

Signed-off-by: Woosuk Kwon <[email protected]>

* Using pytorch commit past the point when rowwise PR (pytorch/pytorch#144432) was merged (#384)

* [Docs] Update spec decode + structured output in compat matrix (vllm-project#12373)

Signed-off-by: Russell Bryant <[email protected]>

* [V1][Frontend] Coalesce bunched `RequestOutput`s (vllm-project#12298)

Signed-off-by: Nick Hill <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>

* Set weights_only=True when using torch.load() (vllm-project#12366)

Signed-off-by: Russell Bryant <[email protected]>

* [Bugfix] Path join when building local path for S3 clone (vllm-project#12353)

Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>

* Update compressed-tensors version (vllm-project#12367)

* [V1] Increase default batch size for H100/H200 (vllm-project#12369)

Signed-off-by: Woosuk Kwon <[email protected]>

* [perf] fix perf regression from vllm-project#12253 (vllm-project#12380)

Signed-off-by: youkaichao <[email protected]>

* [Misc] Use VisionArena Dataset for VLM Benchmarking (vllm-project#12389)

Signed-off-by: Roger Wang <[email protected]>

* [ci/build] fix wheel size check (vllm-project#12396)

Signed-off-by: youkaichao <[email protected]>

* [Hardware][Gaudi][Doc] Add missing step in setup instructions (vllm-project#12382)

* [ci/build] sync default value for wheel size (vllm-project#12398)

Signed-off-by: youkaichao <[email protected]>

* [Misc] Enable proxy support in benchmark script (vllm-project#12356)

Signed-off-by: Junichi Sato <[email protected]>

* [Bugfix][Kernel] Fix CUDA 11.8 being broken by FA3 build (vllm-project#12375)

Signed-off-by: Lucas Wilkinson <[email protected]>

* Applying scales rename to fp8 config (#387)

* [Misc] Remove deprecated code (vllm-project#12383)

Signed-off-by: DarkLight1337 <[email protected]>

* [Bugfix][Kernel] FA3 Fix - RuntimeError: This flash attention build only supports pack_gqa (for build size reasons). (vllm-project#12405)

Signed-off-by: Lucas Wilkinson <[email protected]>

* Dev-docker Documentation Updates (#378)

* Dev-docker Documentation Updates

Minor updates to several sections, with links to other documents where appropriate.

* Fix formatting of GEMM filename

* README cleanup

- Reorder some sections of the README to make them easier to follow
- Improve formatting of bash commands
- Prefer use of huggingface model names instead of hard-coded directories
- Clean up wording

* Expanded sample commands for Latency and Throughput

* Fix markdown links

* Fix pre-commit errors

* Updates from review

Initial updates to incorporate feedback from a review session held with @t-parry

* Update script args to match current recommendations

* Remove recommended max-num-seqs values for now

---------

Co-authored-by: Gregory Shtrasberg <[email protected]>

* [Bugfix][Kernel] Fix moe align block issue for mixtral (vllm-project#12413)

* [Bugfix] Fix BLIP-2 processing (vllm-project#12412)

Signed-off-by: DarkLight1337 <[email protected]>

* [ROCm][MoE] MI300 tuned configs Mixtral-8x(7B,22B) | fp16, fp8 (vllm-project#12408)

Signed-off-by: Divakar Verma <[email protected]>

* [Misc] Add FA2 support to ViT MHA layer (vllm-project#12355)

Signed-off-by: Isotr0py <[email protected]>

* [TPU][CI] Update torchxla version in requirement-tpu.txt (vllm-project#12422)

Signed-off-by: Siyuan Liu <[email protected]>

* [Misc][Bugfix] FA3 support to ViT MHA layer (vllm-project#12435)

Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Co-authored-by: Isotr0py <[email protected]>

* [V1][Perf] Reduce scheduling overhead in model runner after cuda sync (vllm-project#12094)

Signed-off-by: Keyun Tong <[email protected]>

* [V1][Bugfix] Fix assertion when mm hashing is turned off (vllm-project#12439)

Signed-off-by: Roger Wang <[email protected]>

* [Misc] Revert FA on ViT vllm-project#12355 and vllm-project#12435 (vllm-project#12445)

* [Frontend] generation_config.json for maximum tokens (vllm-project#12242)

Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>

* [Bugfix] Disable w16a16 2of4 sparse CompressedTensors24 (vllm-project#12417)

Signed-off-by: Tyler Michael Smith <[email protected]>
Co-authored-by: mgoin <[email protected]>

* [Bugfix/CI] Fix broken kernels/test_mha.py (vllm-project#12450)

* [Bugfix][Kernel] Fix perf regression caused by PR vllm-project#12405 (vllm-project#12434)

Signed-off-by: Lucas Wilkinson <[email protected]>

* [Build/CI] Fix libcuda.so linkage (vllm-project#12424)

Signed-off-by: Tyler Michael Smith <[email protected]>

* [Frontend] Rerank API (Jina- and Cohere-compatible API)  (vllm-project#12376)

Signed-off-by: Kyle Mistele <[email protected]>

* [DOC] Add link to vLLM blog (vllm-project#12460)

Signed-off-by: Yuan Tang <[email protected]>

* [V1] Avoid list creation in input preparation (vllm-project#12457)

Signed-off-by: Woosuk Kwon <[email protected]>

* [Frontend] Support scores endpoint in run_batch (vllm-project#12430)

Signed-off-by: Pooya Davoodi <[email protected]>

* [Bugfix] Fix Granite 3.0 MoE model loading (vllm-project#12446)

Signed-off-by: DarkLight1337 <[email protected]>

* [Bugfix] Fix missing seq_start_loc in xformers prefill metadata (vllm-project#12464)

Signed-off-by: Isotr0py <[email protected]>

* [V1][Minor] Minor optimizations for update_from_output (vllm-project#12454)

Signed-off-by: Woosuk Kwon <[email protected]>

* [Bugfix] Fix gpt2 GGUF inference (vllm-project#12467)

Signed-off-by: Isotr0py <[email protected]>

* [Build] Only build 9.0a for scaled_mm and sparse kernels (vllm-project#12339)

Signed-off-by: Lucas Wilkinson <[email protected]>

* [V1][Metrics] Add initial Prometheus logger (vllm-project#12416)

Signed-off-by: Mark McLoughlin <[email protected]>

* [V1][CI/Test] Do basic test for top-p & top-k sampling (vllm-project#12469)

Signed-off-by: Woosuk Kwon <[email protected]>

* [FlashInfer] Upgrade to 0.2.0 (vllm-project#11194)

Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Co-authored-by: youkaichao <[email protected]>

* Support FP8 FA from Quark format (#388)

* Support FP8 FA from Quark format

* Support FP8 FA from Quark format

* nit: update comment

* Direct call on ROCm

* 20250127 docs update (#392)

* updating code blocks

* typo

* updated manifest

* Including feedback

* whitespace

* Deepseek instructions

* hyperlink fix

* hyperlink fix

* updating what is new

* cpx update

* typo

* whitespace

* whitespace

* Faster Custom Paged Attention kernels (#372)

* integrate new cpa kernel, update tests and benchmark

* added comments to mfma4 kernel

* further comments for mfma16 kernel

* clang-format

* Lint

* add flag for logits rtz conversion and disable by default

* lint

* [Bugfix]: Fix paged attention unit tests of #372 (#389)

* [Bugfix]: fix paged attention tests based on the updated kernels in `csrc/attention/paged_attention_v1.cu`, `csrc/attention/paged_attention_v2.cu` and `csrc/rocm/attention.cu`.

* improve code documentation.

* lint

---------

Co-authored-by: vllmellm <[email protected]>

---------

Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: TJian <[email protected]>
Co-authored-by: vllmellm <[email protected]>

* Using a more precise profiling on ROCm to properly account for weights padding (#394)

* Update Dockerfile.rocm

* [Bugfix]: include the env variables required for running FastSyncLLM

Signed-off-by: vllmellm <[email protected]>

* fix pre-commit lint

Signed-off-by: vllmellm <[email protected]>

* [Bugfix] included missing environment variable

Signed-off-by: vllmellm <[email protected]>

---------

Signed-off-by: Isotr0py <[email protected]>
Signed-off-by: Akshat Tripathi <[email protected]>
Signed-off-by: Oleg Mosalov <[email protected]>
Signed-off-by: Jee Jee Li <[email protected]>
Signed-off-by: [email protected] <[email protected]>
Signed-off-by: Sungjae Lee <[email protected]>
Signed-off-by: Yida Wu <[email protected]>
Signed-off-by: Chenguang Li <[email protected]>
Signed-off-by: youkaichao <[email protected]>
Signed-off-by: Alex-Brooks <[email protected]>
Signed-off-by: Chen Zhang <[email protected]>
Signed-off-by: Roger Wang <[email protected]>
Signed-off-by: Harry Mellor <[email protected]>
Signed-off-by: Shanshan Shen <[email protected]>
Signed-off-by: elijah <[email protected]>
Signed-off-by: Yikun <[email protected]>
Signed-off-by: mgoin <[email protected]>
Signed-off-by: Woosuk Kwon <[email protected]>
Signed-off-by: Konrad Zawora <[email protected]>
Signed-off-by: tjtanaa <[email protected]>
Signed-off-by: DarkLight1337 <[email protected]>
Signed-off-by: wangxiyuan <[email protected]>
Signed-off-by: yisheng <[email protected]>
Signed-off-by: Abatom <[email protected]>
Signed-off-by: Liangfu Chen <[email protected]>
Signed-off-by: Russell Bryant <[email protected]>
Signed-off-by: Yuan Zhou <[email protected]>
Signed-off-by: Sourashis Roy <[email protected]>
Signed-off-by: Nishidha Panpaliya <[email protected]>
Signed-off-by: Ilya Lavrenov <[email protected]>
Signed-off-by: simon-mo <[email protected]>
Signed-off-by: Wallas Santos <[email protected]>
Signed-off-by: jiang1.li <[email protected]>
Signed-off-by: yan ma <[email protected]>
Signed-off-by: Randall Smith <[email protected]>
Signed-off-by: Max de Bayser <[email protected]>
Signed-off-by: Maxime Fournioux <[email protected]>
Signed-off-by: Ye Qi <[email protected]>
Signed-off-by: Mengqing Cao <[email protected]>
Signed-off-by: Joe Runde <[email protected]>
Signed-off-by: Kunshang Ji <[email protected]>
Signed-off-by: Kuntai Du <[email protected]>
Signed-off-by: Ren MinMin <[email protected]>
Signed-off-by: Travis Johnson <[email protected]>
Signed-off-by: Fred Reiss <[email protected]>
Signed-off-by: shaochangxu.scx <[email protected]>
Signed-off-by: NickLucche <[email protected]>
Signed-off-by: Rafael Vasquez <[email protected]>
Signed-off-by: Rui Qiao <[email protected]>
Signed-off-by: Kyle Sayers <[email protected]>
Signed-off-by: Rahul Tuli <[email protected]>
Signed-off-by: kewang-xlnx <[email protected]>
Signed-off-by: kewang2 <[email protected]>
Signed-off-by: Varun Sundar Rabindranath <[email protected]>
Signed-off-by: Yuan Tang <[email protected]>
Signed-off-by: Divakar Verma <[email protected]>
Signed-off-by: Gregory Shtrasberg <[email protected]>
Signed-off-by: hongxyan <[email protected]>
Signed-off-by: Michal Adamczyk <[email protected]>
Signed-off-by: zibai <[email protected]>
Signed-off-by: Martin Gleize <[email protected]>
Signed-off-by: Shangming Cai <[email protected]>
Signed-off-by: isikhi <[email protected]>
Signed-off-by: Jason Cheng <[email protected]>
Signed-off-by: Jinzhen Lin <[email protected]>
Signed-off-by: Thomas Parnell <[email protected]>
Signed-off-by: Jannis Schönleber <[email protected]>
Signed-off-by: rickyx <[email protected]>
Signed-off-by: Andy Lo <[email protected]>
Signed-off-by: Adrian Cole <[email protected]>
Signed-off-by: maleksan85 <[email protected]>
Signed-off-by: Hongxia Yang <[email protected]>
Signed-off-by: kevin <[email protected]>
Signed-off-by: Nick Hill <[email protected]>
Signed-off-by: xffxff <[email protected]>
Signed-off-by: wangerxiao <[email protected]>
Signed-off-by: Alexei V. Ivanov <[email protected]>
Signed-off-by: zhenwei <[email protected]>
Signed-off-by: Lucas Wilkinson <[email protected]>
Signed-off-by: Siyuan Liu <[email protected]>
Signed-off-by: ElizaWszola <[email protected]>
Signed-off-by: Junichi Sato <[email protected]>
Signed-off-by: Omer Dayan (SW-GPU) <[email protected]>
Signed-off-by: Keyun Tong <[email protected]>
Signed-off-by: Matthew Hendrey <[email protected]>
Signed-off-by: Tyler Michael Smith <[email protected]>
Signed-off-by: Kyle Mistele <[email protected]>
Signed-off-by: Pooya Davoodi <[email protected]>
Signed-off-by: Mark McLoughlin <[email protected]>
Signed-off-by: Bowen Wang <[email protected]>
Signed-off-by: vllmellm <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Akshat Tripathi <[email protected]>
Co-authored-by: Oleg Mosalov <[email protected]>
Co-authored-by: Jee Jee Li <[email protected]>
Co-authored-by: Isotr0py <[email protected]>
Co-authored-by: Avshalom Manevich <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: Yangcheng Li <[email protected]>
Co-authored-by: Siyuan Li <[email protected]>
Co-authored-by: Sungjae Lee <[email protected]>
Co-authored-by: Concurrensee <[email protected]>
Co-authored-by: Chenguang Li <[email protected]>
Co-authored-by: youkaichao <[email protected]>
Co-authored-by: Alex Brooks <[email protected]>
Co-authored-by: Chen Zhang <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cyrus Leung <[email protected]>
Co-authored-by: Harry Mellor <[email protected]>
Co-authored-by: Shanshan Shen <[email protected]>
Co-authored-by: elijah <[email protected]>
Co-authored-by: Yikun Jiang <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Steve Luo <[email protected]>
Co-authored-by: mgoin <[email protected]>
Co-authored-by: Alexei-V-Ivanov-AMD <[email protected]>
Co-authored-by: Alexei V. Ivanov <[email protected]>
Co-authored-by: Gregory Shtrasberg <[email protected]>
Co-authored-by: Woosuk Kwon <[email protected]>
Co-authored-by: Konrad Zawora <[email protected]>
Co-authored-by: wangxiyuan <[email protected]>
Co-authored-by: maang-h <[email protected]>
Co-authored-by: YiSheng5 <[email protected]>
Co-authored-by: Zhonghua Deng <[email protected]>
Co-authored-by: Liangfu Chen <[email protected]>
Co-authored-by: XiaobingZhang <[email protected]>
Co-authored-by: Russell Bryant <[email protected]>
Co-authored-by: Yuan <[email protected]>
Co-authored-by: jiangjiadi <[email protected]>
Co-authored-by: jiadi.jjd <[email protected]>
Co-authored-by: sroy745 <[email protected]>
Co-authored-by: Jie Fu (傅杰) <[email protected]>
Co-authored-by: Divakar Verma <[email protected]>
Co-authored-by: WangErXiao <[email protected]>
Co-authored-by: Nishidha <[email protected]>
Co-authored-by: Ilya Lavrenov <[email protected]>
Co-authored-by: Simon Mo <[email protected]>
Co-authored-by: Wallas Henrique <[email protected]>
Co-authored-by: Li, Jiang <[email protected]>
Co-authored-by: Yan Ma <[email protected]>
Co-authored-by: rasmith <[email protected]>
Co-authored-by: Tyler Michael Smith <[email protected]>
Co-authored-by: Maximilien de Bayser <[email protected]>
Co-authored-by: Maxime Fournioux <[email protected]>
Co-authored-by: Guspan Tanadi <[email protected]>
Co-authored-by: Ye (Charlotte) Qi <[email protected]>
Co-authored-by: yeq <[email protected]>
Co-authored-by: Mengqing Cao <[email protected]>
Co-authored-by: Charles Frye <[email protected]>
Co-authored-by: Joe Runde <[email protected]>
Co-authored-by: Kunshang Ji <[email protected]>
Co-authored-by: cennn <[email protected]>
Co-authored-by: Kuntai Du <[email protected]>
Co-authored-by: minmin <[email protected]>
Co-authored-by: Ren MinMin <[email protected]>
Co-authored-by: Travis Johnson <[email protected]>
Co-authored-by: Fred Reiss <[email protected]>
Co-authored-by: shaochangxu <[email protected]>
Co-authored-by: shaochangxu.scx <[email protected]>
Co-authored-by: Nicolò Lucchesi <[email protected]>
Co-authored-by: sixgod <[email protected]>
Co-authored-by: Rafael Vasquez <[email protected]>
Co-authored-by: Elfie Guo <[email protected]>
Co-authored-by: Rui Qiao <[email protected]>
Co-authored-by: Kyle Sayers <[email protected]>
Co-authored-by: Rahul Tuli <[email protected]>
Co-authored-by: Keyun Tong <[email protected]>
Co-authored-by: RunningLeon <[email protected]>
Co-authored-by: kewang-xlnx <[email protected]>
Co-authored-by: kewang2 <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: Varun Sundar Rabindranath <[email protected]>
Co-authored-by: tvirolai-amd <[email protected]>
Co-authored-by: Michael Goin <[email protected]>
Co-authored-by: Zhaoyi Li <[email protected]>
Co-authored-by: charlifu <[email protected]>
Co-authored-by: Yuan Tang <[email protected]>
Co-authored-by: Cody Yu <[email protected]>
Co-authored-by: Hongxia Yang <[email protected]>
Co-authored-by: yancong <[email protected]>
Co-authored-by: Michal Adamczyk <[email protected]>
Co-authored-by: gujing <[email protected]>
Co-authored-by: imkero <[email protected]>
Co-authored-by: Martin Gleize <[email protected]>
Co-authored-by: mgleize user <[email protected]>
Co-authored-by: shangmingc <[email protected]>
Co-authored-by: Işık <[email protected]>
Co-authored-by: Roger Wang <[email protected]>
Co-authored-by: Cheng Kuan Yong Jason <[email protected]>
Co-authored-by: Jinzhen Lin <[email protected]>
Co-authored-by: Thomas Parnell <[email protected]>
Co-authored-by: Jannis Schönleber <[email protected]>
Co-authored-by: Ricky Xu <[email protected]>
Co-authored-by: Andy Lo <[email protected]>
Co-authored-by: Adrian Cole <[email protected]>
Co-authored-by: Jani Monoses <[email protected]>
Co-authored-by: Kevin H. Luu <[email protected]>
Co-authored-by: Aleksandr Malyshev <[email protected]>
Co-authored-by: maleksan85 <[email protected]>
Co-authored-by: Nick Hill <[email protected]>
Co-authored-by: zhou fan <[email protected]>
Co-authored-by: ilia-cher <[email protected]>
Co-authored-by: liuzhenwei <[email protected]>
Co-authored-by: Lucas Wilkinson <[email protected]>
Co-authored-by: Micah Williamson <[email protected]>
Co-authored-by: Siyuan Liu <[email protected]>
Co-authored-by: Dipika Sikka <[email protected]>
Co-authored-by: ElizaWszola <[email protected]>
Co-authored-by: Junichi Sato <[email protected]>
Co-authored-by: Robert Shaw <[email protected]>
Co-authored-by: omer-dayan <[email protected]>
Co-authored-by: Mohit Deopujari <[email protected]>
Co-authored-by: Jeremy Arnold <[email protected]>
Co-authored-by: Matthew Hendrey <[email protected]>
Co-authored-by: Kyle Mistele <[email protected]>
Co-authored-by: Pooya Davoodi <[email protected]>
Co-authored-by: Mark McLoughlin <[email protected]>
Co-authored-by: Bowen Wang <[email protected]>
Co-authored-by: Bowen Bao <[email protected]>
Co-authored-by: arakowsk-amd <[email protected]>
Co-authored-by: sanyalington <[email protected]>
Co-authored-by: Joe Shajrawi <[email protected]>
Co-authored-by: vllmellm <[email protected]>