Looking forward to a 70B Chinese model release, quantized to GGUF Q4_K_M #316
amd-zoybai started this conversation in General
Replies: 0 comments
The English models already go up to 70B; looking forward to corresponding Chinese models:
3.9G llama-2-7b-chat.Q4_K_M.gguf
7.4G llama-2-13b-chat.Q4_K_M.gguf
39G llama-2-70b-chat.Q4_K_M.gguf
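
For reference, a rough sketch of how such a Q4_K_M GGUF is typically produced with the llama.cpp tooling; the same steps would apply to a Chinese 70B model once released. The script and binary names vary between llama.cpp versions (`convert.py` / `quantize` in older releases, `convert_hf_to_gguf.py` / `llama-quantize` in newer ones), and the local model path used here is hypothetical:

```bash
# Convert the original Hugging Face weights to an f16 GGUF file.
# "models/llama-2-70b-chat" is a hypothetical local path to the downloaded weights.
python convert.py models/llama-2-70b-chat \
    --outtype f16 \
    --outfile llama-2-70b-chat.f16.gguf

# Quantize the f16 GGUF down to Q4_K_M (roughly 39 GB for a 70B model,
# matching the size listed above).
./quantize llama-2-70b-chat.f16.gguf llama-2-70b-chat.Q4_K_M.gguf Q4_K_M
```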