
Add Code LLaMA #217

Closed
klink opened this issue Nov 8, 2023 · 4 comments · Fixed by #194
Comments

klink (Contributor) commented Nov 8, 2023

No description provided.

@klink klink converted this from a draft issue Nov 8, 2023
klink (Contributor, Author) commented Nov 8, 2023

Please provide a description of the model's capabilities (chat / code completion / fine-tuning), GPU requirements, and official links @JegernOUTT

@klink klink linked a pull request Nov 8, 2023 that will close this issue
JegernOUTT (Member) commented

https://huggingface.co/TheBloke/CodeLlama-7B-fp16
Code completion / fine-tuning
20 GB+ GPU memory for fine-tuning, 15 GB+ for inference
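The figures above are roughly consistent with a back-of-the-envelope estimate: fp16 weights take 2 bytes per parameter, so a 7B-parameter model needs about 14 GB just to hold the weights, plus overhead for activations during inference, and more again for gradients and optimizer state when fine-tuning. A minimal sketch (the function name and the 2-bytes-per-parameter assumption are illustrative, not from this thread):

```python
def estimate_weights_gb(n_params_billion: float, bytes_per_param: int = 2) -> float:
    """Rough memory needed just to hold model weights, in GB (decimal).

    fp16/bf16 -> 2 bytes per parameter; fp32 -> 4; int8 -> 1.
    Real usage is higher: activations, KV cache, and (for training)
    gradients and optimizer state come on top of this.
    """
    return n_params_billion * bytes_per_param

weights_gb = estimate_weights_gb(7)  # CodeLlama-7B in fp16: ~14 GB of weights
```

With ~14 GB of fp16 weights, the "15 GB+ for inference" figure leaves only a small margin for activations; the "20 GB+ for fine-tuning" figure implies a parameter-efficient method rather than full fine-tuning, which would need several times the weight size.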

@i-love-doufunao

Is there any ETA for supporting Code Llama fine-tuning?

klink (Contributor, Author) commented Nov 10, 2023

It's already live in our nightly Docker image, so you can test it there; we plan to release it to everyone ~next week.

@olegklimov olegklimov moved this from Released in Docker Nightly to Released in Docker V1.2 in Self-hosted / Enterprise Nov 26, 2023
@github-project-automation github-project-automation bot moved this from Released in Docker V1.2 to Released in Docker Nightly in Self-hosted / Enterprise Dec 18, 2023
@mitya52 mitya52 moved this from Released in Docker Nightly to Released in Docker V1.2 in Self-hosted / Enterprise Dec 22, 2023
Projects
Status: Released in Docker V1.2

3 participants