Why is FishSpeech API Faster Than My Self-Hosted Setup? #870
RomaricLocuta asked this question in Q&A (unanswered)
Replies: 1 comment
-
same issue
-
Hi everyone,
I'm trying to understand the performance gap between inference with the FishSpeech API and my self-hosted deployment. Here's my configuration:
Technical Details:
I'm running with --compile and testing both with and without --half precision, but neither flag impacts the observed performance.
Observations:
As the attached graph shows, the FishSpeech API processes the 89 requests (at most 5 sent concurrently) faster than my local setup.
Questions:
Thanks for any insights!
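For reference, the load pattern described above (89 requests, at most 5 in flight at once) can be reproduced with a small asyncio harness like the sketch below. The `fake_request` coroutine is a placeholder I've made up for illustration; in a real comparison you would replace it with the actual HTTP call to your FishSpeech server or to the hosted API, so both sides are measured under identical concurrency.

```python
import asyncio
import time

async def run_benchmark(send_request, n_requests=89, concurrency=5):
    """Fire n_requests calls of send_request(), at most `concurrency` in flight."""
    sem = asyncio.Semaphore(concurrency)

    async def timed_call(i):
        async with sem:  # the semaphore caps concurrent in-flight requests
            start = time.perf_counter()
            await send_request(i)
            return time.perf_counter() - start

    return await asyncio.gather(*(timed_call(i) for i in range(n_requests)))

# Stub standing in for a real TTS request (e.g. an HTTP POST to the server).
async def fake_request(i):
    await asyncio.sleep(0.01)

start = time.perf_counter()
latencies = asyncio.run(run_benchmark(fake_request))
wall = time.perf_counter() - start
print(f"{len(latencies)} requests in {wall:.2f}s wall time")
```

Comparing both the per-request latency distribution and the total wall time helps distinguish whether the gap comes from slower single-request inference or from how each deployment handles concurrent requests.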