
llama-server <embedding> exited with status code -1073741819 #3150

Closed
spock opened this issue Sep 15, 2024 · 3 comments · Fixed by #3152

Comments


spock commented Sep 15, 2024

Describe the bug
On Windows, both the nightly build and 0.17.0 fail with the same message in every mode tried (ROCm, CPU).

Information about your version

tabby 0.17.0
tabby 0.18.0-dev.0

Information about your GPU
I have an AMD GPU; the command also fails when run on the CPU.

Additional context

The command used to start llama-server appears to have the wrong path separator in the model path: ggml/model.gguf, which on Windows should probably be ggml\model.gguf. This is the command listed in the logs:

"C:\\Users\\username\\Downloads\\tabby_x86_64-windows-msvc\\dist\\tabby_x86_64-windows-msvc\\llama-server.exe" "-m" "C:\\Users\\username\\.tabby\\models\\TabbyML\\Nomic-Embed-Text\\ggml/model.gguf" "--cont-batching" "--port" "30888" "-np" "1" "--log-disable" "--ctx-size" "4096" "-ngl" "9999" "--embedding" "--ubatch-size" "4096", kill_on_drop: true 

spock commented Sep 15, 2024

The same nightly, tabby 0.18.0-dev.0, started fine under WSL2.

wsxiaoys (Member) commented

fixing in #3152


ANYMS-A commented Sep 25, 2024

I hit a similar issue, even though the ggml model's path looks fine:

Starting...2024-09-25T07:34:39.926298Z  WARN llama_cpp_server::supervisor: crates\llama-cpp-server\src\supervisor.rs:98: llama-server <embedding> exited with status code -1073741819, args: `Command { std: "C:\\Users\\xxx\\Downloads\\dist\\tabby_x86_64-windows-msvc\\llama-server.exe" "-m" "C:\\Users\\xxx\\.tabby\\models\\TabbyML\\Nomic-Embed-Text\\ggml\\model.gguf" "--cont-batching" "--port" "30888" "-np" "1" "--log-disable" "--ctx-size" "4096" "-ngl" "9999" "--embedding" "--ubatch-size" "4096", kill_on_drop: true }`
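As a side note, -1073741819 is the signed form of the NTSTATUS code 0xC0000005 (STATUS_ACCESS_VIOLATION), i.e. llama-server is crashing rather than merely failing to find the file. A small sketch of the conversion (standalone illustration, not part of tabby):

```rust
fn main() {
    // Exit status reported in the supervisor log above.
    let status: i32 = -1_073_741_819;

    // Reinterpret the signed exit code as an unsigned NTSTATUS value.
    let ntstatus = status as u32;

    // Prints 0xc0000005, which is STATUS_ACCESS_VIOLATION on Windows.
    println!("{:#010x}", ntstatus);
}
```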
