Run against Ollama Backend? #12

Open · AidanNelson opened this issue Dec 6, 2024 · 1 comment

@AidanNelson

Hi @dylanebert --

Thanks for putting this out!

I already have access to an Ollama server running on a headless GPU machine, so I made some edits to run this addon against that machine instead of using the internal llamacpp.py inference setup. Would you like a PR for this? I think it could be an option in preferences to run against a remote Ollama server (along with options for host/port).
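
For reference, here's a minimal sketch of what the remote call could look like. The host/port values and the `generate()` wrapper name are placeholders for illustration, not the actual addon code; it just posts to Ollama's `/api/generate` endpoint:

```python
# Minimal sketch: query a remote Ollama server instead of the bundled
# llamacpp.py setup. Host, port, and model names here are illustrative.
import json
import urllib.request

OLLAMA_HOST = "my-gpu-box.local"  # hypothetical headless GPU machine
OLLAMA_PORT = 11434               # Ollama's default port

def generate(prompt: str, model: str = "llama3") -> str:
    """POST a prompt to Ollama's /api/generate endpoint and return the text."""
    url = f"http://{OLLAMA_HOST}:{OLLAMA_PORT}/api/generate"
    payload = json.dumps({
        "model": model,
        "prompt": prompt,
        "stream": False,  # ask for one JSON object rather than a token stream
    }).encode("utf-8")
    req = urllib.request.Request(
        url, data=payload, headers={"Content-Type": "application/json"}
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```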

Here are some initial outputs:

"A Sphere"
Sphere in Blender software

"A Pyramid"
Pyramid shape in Blender software

"A Cube"
Screenshot 2024-12-06 at 1 14 47 PM

Also "A Cube"
Screenshot 2024-12-06 at 1 14 40 PM

"A Camera"
Screenshot 2024-12-06 at 1 23 58 PM

Thanks,
Aidan

@dylanebert (Collaborator)

Sounds great! My goal is to make installation as easy as possible for people without ML experience, so as long as it's structured not to overwhelm new users, it sounds good to me (e.g. tucked behind Show Developer Options).
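
To make that concrete, the remote-backend settings could sit behind a developer toggle in the addon preferences. This is a hypothetical sketch only; the class and property names are made up, not taken from the addon:

```python
# Hypothetical sketch: remote Ollama settings hidden behind a
# "Show Developer Options" toggle in the addon preferences.
# All class/property names are illustrative, not the addon's actual code.
import bpy

class OllamaBackendPreferences(bpy.types.AddonPreferences):
    bl_idname = __package__  # resolves to the addon's module name

    show_developer_options: bpy.props.BoolProperty(
        name="Show Developer Options", default=False)
    use_remote_ollama: bpy.props.BoolProperty(
        name="Use Remote Ollama Server", default=False)
    ollama_host: bpy.props.StringProperty(
        name="Host", default="localhost")
    ollama_port: bpy.props.IntProperty(
        name="Port", default=11434, min=1, max=65535)

    def draw(self, context):
        layout = self.layout
        layout.prop(self, "show_developer_options")
        if self.show_developer_options:
            # Remote-backend fields only render for users who opt in,
            # keeping the default preferences panel uncluttered.
            box = layout.box()
            box.prop(self, "use_remote_ollama")
            if self.use_remote_ollama:
                box.prop(self, "ollama_host")
                box.prop(self, "ollama_port")

def register():
    bpy.utils.register_class(OllamaBackendPreferences)
```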
