Enable Intel GPU path and lora finetune and change examples to support different devices #631
Conversation
Hi @casper-hansen, would you please review this PR? Thanks!
Hi @casper-hansen. Have you received my email? Please let me know your opinion. Thanks!
Hi @casper-hansen. Would you mind reviewing this PR? Thanks!
Hi @casper-hansen. This PR enables the Intel GPU platform for both inference and finetuning. Would you mind taking a look? I will send you the performance numbers later.
Signed-off-by: jiqing-feng <[email protected]>
Hi @jiqing-feng, I am sorry for taking so long to answer once again. I have been away from open-source for a while since I maintain AutoAWQ in my free time. I have tested your changes and checked that generation is working as expected after your modifications. Thanks for your hard work on this, love to see it!
This PR enables CPU LoRA finetuning in the GEMM_IPEX version and updates the examples to support different devices such as CPU.
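The device-support change in the examples can be sketched as a small selection helper. This is a hypothetical illustration, not code from the PR: the function name `get_device` and the fallback order (CUDA, then Intel XPU via IPEX, then CPU) are assumptions about the pattern described above.

```python
import torch

def get_device() -> str:
    """Pick the best available device: CUDA GPU, Intel GPU (XPU), or CPU.

    Hypothetical sketch of the device-agnostic pattern the PR describes;
    the actual examples in the PR may select devices differently.
    """
    if torch.cuda.is_available():
        return "cuda"
    # torch.xpu is only present when Intel extension / recent torch is installed,
    # so guard with hasattr before probing availability.
    if hasattr(torch, "xpu") and torch.xpu.is_available():
        return "xpu"
    return "cpu"

device = get_device()
```

An example script would then move the model and inputs with `model.to(device)` instead of hard-coding `"cuda"`, which is what lets the same example run on CPU-only or Intel GPU machines.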