Can someone help me with AllTalk TTS #408

Closed
F1re4 opened this issue Nov 16, 2024 · 3 comments

Comments

F1re4 commented Nov 16, 2024

Hi all. I have a question: I spent all day today trying to install this and kept hitting an error. I was able to launch the custom interface, but when I try to generate audio the following problem appears:

ERROR: Exception in ASGI application
Traceback (most recent call last):
File "F:\textai\alltalk_tts-main\alltalk_environment\env\Lib\site-packages\starlette\responses.py", line 264, in call
await wrap(partial(self.listen_for_disconnect, receive))
File "F:\textai\alltalk_tts-main\alltalk_environment\env\Lib\site-packages\starlette\responses.py", line 260, in wrap
await func()
File "F:\textai\alltalk_tts-main\alltalk_environment\env\Lib\site-packages\starlette\responses.py", line 237, in listen_for_disconnect
message = await receive()
^^^^^^^^^^^^^^^
File "F:\textai\alltalk_tts-main\alltalk_environment\env\Lib\site-packages\uvicorn\protocols\http\h11_impl.py", line 535, in receive
await self.message_event.wait()
File "F:\textai\alltalk_tts-main\alltalk_environment\env\Lib\asyncio\locks.py", line 213, in wait
await fut
asyncio.exceptions.CancelledError: Cancelled by cancel scope 248286cee50

During handling of the above exception, another exception occurred:

+-+---------------- 1 ----------------
| Traceback (most recent call last):
| File "F:\textai\alltalk_tts-main\alltalk_environment\env\Lib\site-packages\starlette\responses.py", line 260, in wrap
| await func()
| File "F:\textai\alltalk_tts-main\alltalk_environment\env\Lib\site-packages\starlette\responses.py", line 249, in stream_response
| async for chunk in self.body_iterator:
| File "F:\textai\alltalk_tts-main\tts_server.py", line 506, in generate_audio_internal
| gpt_cond_latent, speaker_embedding = model.get_conditioning_latents(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "F:\textai\alltalk_tts-main\alltalk_environment\env\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
| return func(*args, **kwargs)
| ^^^^^^^^^^^^^^^^^^^^^
| File "F:\textai\alltalk_tts-main\alltalk_environment\env\Lib\site-packages\TTS\tts\models\xtts.py", line 365, in get_conditioning_latents
| speaker_embedding = self.get_speaker_embedding(audio, load_sr)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "F:\textai\alltalk_tts-main\alltalk_environment\env\Lib\site-packages\torch\utils_contextlib.py", line 115, in decorate_context
| return func(*args, **kwargs)
| ^^^^^^^^^^^^^^^^^^^^^
| File "F:\textai\alltalk_tts-main\alltalk_environment\env\Lib\site-packages\TTS\tts\models\xtts.py", line 318, in get_speaker_embedding
| audio_16k = torchaudio.functional.resample(audio, sr, 16000)
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "F:\textai\alltalk_tts-main\alltalk_environment\env\Lib\site-packages\torchaudio\functional\functional.py", line 1519, in resample
| kernel, width = _get_sinc_resample_kernel(
| ^^^^^^^^^^^^^^^^^^^^^^^^^^
| File "F:\textai\alltalk_tts-main\alltalk_environment\env\Lib\site-packages\torchaudio\functional\functional.py", line 1417, in _get_sinc_resample_kernel
| idx = torch.arange(-width, width + orig_freq, dtype=idx_dtype, device=device)[None, None] / orig_freq
| ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
| RuntimeError: CUDA error: operation not supported
| Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
|
+------------------------------------
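(An aside for anyone hitting the same thing: the traceback bottoms out in torchaudio's GPU resample, not in AllTalk's own code. The sketch below is a standalone test, assuming the same Python environment, that reproduces just that step with a dummy tensor; if it also fails with "operation not supported", the problem is the GPU backend (ZLUDA/driver) rather than AllTalk or Coqui TTS.)

import torch
import torchaudio.functional as F

# Pick the GPU if PyTorch thinks one is available, otherwise fall back to CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"
print("using device:", device)

# One second of dummy audio at 22.05 kHz; the sample rates are placeholders.
audio = torch.randn(1, 22050, device=device)
audio_16k = F.resample(audio, orig_freq=22050, new_freq=16000)
print("resampled shape:", tuple(audio_16k.shape))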

If it's at all possible, how can I get AllTalk TTS working with an AMD RX 6700 XT GPU? I tried ZLUDA, but either it didn't work or I configured it wrong. If this really isn't possible, are there any alternatives that would work with Koboldcpp on my video card? Thanks in advance.
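(A quick, generic check, not an AllTalk feature: printing which backend the installed PyTorch wheel was built against shows whether it can drive an AMD card natively. A ROCm build reports a HIP version, while the Windows + ZLUDA route keeps a stock CUDA wheel, which is why the errors above still mention CUDA.)

import torch

print("torch version:", torch.__version__)
print("built against CUDA:", torch.version.cuda)                 # None on a ROCm build
print("built against ROCm/HIP:", getattr(torch.version, "hip", None))
print("GPU visible to torch:", torch.cuda.is_available())
if torch.cuda.is_available():
    print("device name:", torch.cuda.get_device_name(0))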

@brianjorden

Take a look at this: #377

F1re4 commented Nov 17, 2024

Take a look at this: #377

Thanks, I fixed part of it by installing the HIP SDK, but now I get this when I use the basic version. What should I do?

raise RuntimeError(f"File at path {self.path} does not exist.")
RuntimeError: File at path F:\textai\alltalk_tts-main\outputs\undefined does not exist.

And when I use the AllTalk Generator, I get this one:

[AllTalk TTSGen] Hello, this is a preview of voice arnold.
An error occurred: CUDA error: invalid argument
Compile with TORCH_USE_CUDA_DSA to enable device-side assertions.
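(One way to narrow this down, as a hedged sketch rather than a known fix: the first traceback died on a plain torch.arange call on the GPU, so running that single op in isolation shows whether the HIP SDK / ZLUDA layer can execute basic kernels at all, independently of AllTalk. The sizes and dtype below only mirror the failing line.)

import torch

# Mirrors the arange that failed inside torchaudio's resample kernel builder.
x = torch.arange(-64, 64, dtype=torch.float64, device="cuda")
y = x[None, None] / 64.0
print("basic GPU op ok:", tuple(y.shape), float(y.sum()))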

erew123 commented Nov 17, 2024

@F1re4 This is everything I know about ROCm support:

Beyond that, I couldn't tell you anything

erew123 closed this as completed on Nov 17, 2024.