Remote GPU #74
Conversation
That's fantastic work! I will be testing it!
Very interesting!
Are you guys done testing this?
@E3V3A I still haven't gotten my hands on it; you can clone https://github.com/mynameisfiber/avatarify/tree/feat/remote-gpu and test it in the meantime.
Hi @mynameisfiber, thank you very much for the PR. I've tried it on my Lenovo T470s with a 2080 Ti server and got ~9 FPS, while running locally gives 30 FPS. Could you sync with our latest version so that we can merge neatly?
@mynameisfiber I manually merged this branch into https://github.com/alievk/avatarify/tree/feat/remote-gpu. Could you reopen your pull request to merge into this branch, or should I close this PR?
@alievk I'll just close this PR and you can manually merge the new branch! I'm glad it's working for you. Did you mean you got 30 FPS using this branch and 9 FPS locally with just your laptop?
@mynameisfiber looks like the bottleneck was in the data serialization. If you directly send over JPEG-encoded images [1], it achieves decent performance (20 FPS over a WiFi connection to a dedicated server on the internet). [1] https://github.com/alievk/avatarify/blob/master/afy/predictor_remote.py#L49
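The linked code encodes each frame as JPEG (via OpenCV) before sending it, which is why serialization stopped being the bottleneck. As a rough standard-library illustration of the same principle — encoding/compressing a frame before transmission shrinks the payload dramatically — here is a sketch using `zlib` as a stand-in codec (the repo actually uses JPEG, which is lossy and image-aware, so it does even better in practice; the frame dimensions below are assumptions):

```python
import zlib

# A synthetic 640x480 RGB frame as raw bytes. Image-like data is highly
# redundant, so even generic lossless compression shrinks it a lot.
width, height = 640, 480
raw_frame = bytes((x // 8) % 256 for x in range(width * 3)) * height

compressed = zlib.compress(raw_frame, level=1)  # level=1: fast, low-CPU setting

print(f"raw: {len(raw_frame)} bytes")           # 921600 bytes for this frame
print(f"compressed: {len(compressed)} bytes")
```

Sending ~1 MB of raw pixels per frame over WiFi is what killed the frame rate; after encoding, each message is a small fraction of that size.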
My good GPU is on a headless machine, so I hacked in some remote GPU capabilities and cleaned up some of the prediction code. Normally I'd also run `black` and `isort` on things, but that doesn't seem to be the package standard, so forgive the messy code!

This pull request adds the following arguments:

- `is-worker`: designates whether the current process is a dedicated worker
- `worker-port`: the port the worker listens on
- `worker-host`: the host the worker is on
- `compress`: whether to compress communications for faster throughput at the cost of some CPU

To bring the system up, on my server I run:
and on my laptop I run:
and profit!
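The request/response pattern described above — the laptop sends a frame to a dedicated worker on `worker-host:worker-port`, which runs the model on its GPU and sends the result back — can be sketched as follows. This is a minimal standard-library illustration, not the PR's actual transport: plain TCP with length-prefixed pickles, a dummy "prediction," and a dummy frame all stand in for the real implementation.

```python
import pickle
import socket
import struct
import threading

def recv_exact(sock, n):
    """Read exactly n bytes from a socket, looping over partial reads."""
    data = b""
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError("socket closed")
        data += chunk
    return data

def send_msg(sock, obj):
    """Length-prefix a pickled payload so the receiver knows how much to read."""
    payload = pickle.dumps(obj)
    sock.sendall(struct.pack("!I", len(payload)) + payload)

def recv_msg(sock):
    (length,) = struct.unpack("!I", recv_exact(sock, 4))
    return pickle.loads(recv_exact(sock, length))

def worker(server_sock):
    """Dedicated 'GPU' worker: receive a frame, return a prediction."""
    conn, _ = server_sock.accept()
    with conn:
        frame = recv_msg(conn)
        send_msg(conn, {"prediction": sum(frame)})  # stand-in for model inference

server = socket.socket()
server.bind(("127.0.0.1", 0))  # port 0: let the OS pick a free "worker-port"
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=worker, args=(server,), daemon=True).start()

client = socket.create_connection(("127.0.0.1", port))
send_msg(client, [1, 2, 3])   # a dummy "frame" from the webcam side
print(recv_msg(client))       # → {'prediction': 6}
client.close()
```

Whatever serialization the branch really uses, the shape is the same: the client blocks on each round trip, so per-frame payload size and network latency directly bound the achievable FPS.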
Currently I get about 10 FPS over WiFi connected to my server's RTX 2080 Ti. I presume most of the decrease in frame rate is due to network latency. If I had a spare network cable, I could test it over a wired connection.
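A back-of-envelope calculation supports the bandwidth/latency hunch. Assuming an uncompressed 640x480 RGB webcam frame (these numbers are illustrative assumptions, not measurements from the PR):

```python
# Assumed uncompressed webcam frame: 640x480 pixels, 3 bytes per pixel (RGB).
frame_bytes = 640 * 480 * 3             # 921,600 bytes per frame
fps = 30
raw_mbps = frame_bytes * 8 * fps / 1e6  # megabits per second, one direction

print(f"raw stream: {raw_mbps:.0f} Mbit/s per direction")  # → 221 Mbit/s
```

Roughly 221 Mbit/s each way for raw frames at 30 FPS easily saturates typical WiFi, which is consistent with raw serialization capping the remote setup near 10 FPS and with encoded frames (as in the follow-up comment) restoring most of the frame rate.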
NOTE: This PR adds extra Python dependencies, which requires running `./scripts/install.sh` again.