
Add GPTQModel backend #7

Open
CL-ModelCloud wants to merge 9 commits into main

Conversation

CL-ModelCloud

No description provided.

@facebook-github-bot

Hi @CL-ModelCloud!

Thank you for your pull request and welcome to our community.

Action Required

In order to merge any pull request (code, docs, etc.), we require contributors to sign our Contributor License Agreement, and we don't seem to have one on file for you.

Process

In order for us to review and merge your suggested changes, please sign at https://code.facebook.com/cla. If you are contributing on behalf of someone else (e.g., your employer), the individual CLA may not be sufficient and your employer may need to sign the corporate CLA.

Once the CLA is signed, our tooling will perform checks and validations. Afterwards, the pull request will be tagged with CLA signed. The tagging process may take up to 1 hour after signing. Please give it that time before contacting us about it.

If you have received this in error or have any questions, please contact us at [email protected]. Thanks!

@CL-ModelCloud CL-ModelCloud marked this pull request as ready for review December 27, 2024 09:18
@facebook-github-bot facebook-github-bot added the CLA Signed label (managed by the Meta Open Source bot) Dec 27, 2024
@Qubitium

@minimario @sidaw @mdeff PR ready for review. This adds GPTQModel as an optional backend, alongside vllm, for benchmark inference (a wiring sketch follows this list):

  • Add a new `backend` argument that defaults to the existing vllm backend
  • Add a `gptqmodel` backend option
  • GPTQModel lets the benchmark run inference on GPTQ-quantized models on Nvidia GPUs (CUDA), Intel GPUs (XPU), Apple Silicon (MPS), and Intel/AMD CPUs with hardware acceleration via IPEX (requires AVX, AMX, or XMX), finally falling back to non-accelerated CPU inference using plain Torch code.
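For context, here is a minimal sketch of how such a backend switch might be wired with an argparse-based CLI. The `--backend` and `--model` flag names and the `load_backend` helper are hypothetical (this PR's actual code may differ); `GPTQModel.load` and vllm's `LLM` are the two libraries' documented entry points:

```python
import argparse

def parse_args():
    parser = argparse.ArgumentParser()
    # Hypothetical flag names; the benchmark's real CLI may differ.
    parser.add_argument("--model", required=True, help="model id or local path")
    parser.add_argument(
        "--backend",
        choices=["vllm", "gptqmodel"],
        default="vllm",  # keep vllm as the default so existing runs are unchanged
    )
    return parser.parse_args()

def load_backend(args):
    """Hypothetical dispatch helper: load a model with the chosen backend."""
    if args.backend == "gptqmodel":
        # GPTQModel auto-selects the best available device (CUDA, XPU, MPS,
        # IPEX-accelerated CPU) and falls back to plain Torch CPU kernels.
        from gptqmodel import GPTQModel
        return GPTQModel.load(args.model)
    from vllm import LLM
    return LLM(model=args.model)
```

Keeping vllm as the default means existing benchmark invocations behave exactly as before; only runs that explicitly pass the gptqmodel backend take the new code path.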
