Hi @dylanebert --

Thanks for putting this out!

I already have access to an Ollama server running on a headless GPU machine, so I made some edits to run this against that machine instead of using the internal llamacpp.py inference setup. Would you like a PR for this? I think it could be an option in preferences to run against a remote Ollama server (along with options for host/port).

Here are some initial outputs:

"A Sphere"
"A Pyramid"
"A Cube"
Also "A Cube"
"A Camera"

Thanks,
Aidan
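For context, the change is essentially swapping the internal llamacpp.py call for a request against Ollama's REST /api/generate endpoint. A rough sketch using only the standard library (the host, port, and model name are placeholders, not values from this project):

```python
import json
import urllib.request

OLLAMA_HOST = "192.168.1.50"  # placeholder: your headless GPU machine
OLLAMA_PORT = 11434           # Ollama's default port

def generate(prompt: str, model: str = "llama3") -> str:
    """Send one non-streaming generation request to a remote Ollama server."""
    url = f"http://{OLLAMA_HOST}:{OLLAMA_PORT}/api/generate"
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for a single JSON object, not a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```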
Sounds great! My goal is to make installation as easy as possible for people without ML experience, so as long as it's structured so it doesn't overwhelm new users, it sounds good to me (e.g. maybe behind Show Developer Options).
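If it helps, the host/port option could live in the add-on's preferences panel. Here's a sketch assuming this is a Blender add-on (the outputs above look like Blender scene objects); every class and property name below is hypothetical:

```python
import bpy

class OllamaAddonPreferences(bpy.types.AddonPreferences):
    # bl_idname must match the add-on's package name
    bl_idname = __package__

    use_remote_ollama: bpy.props.BoolProperty(
        name="Use Remote Ollama Server",
        description="Send prompts to a remote Ollama server instead of local inference",
        default=False,
    )
    ollama_host: bpy.props.StringProperty(name="Host", default="localhost")
    ollama_port: bpy.props.IntProperty(name="Port", default=11434, min=1, max=65535)

    def draw(self, context):
        layout = self.layout
        layout.prop(self, "use_remote_ollama")
        row = layout.row()
        row.enabled = self.use_remote_ollama  # grey out host/port unless enabled
        row.prop(self, "ollama_host")
        row.prop(self, "ollama_port")

def register():
    bpy.utils.register_class(OllamaAddonPreferences)

def unregister():
    bpy.utils.unregister_class(OllamaAddonPreferences)
```

Gating the whole panel behind a "Show Developer Options" toggle would keep the default install flow untouched for users who just want local inference.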