Failed to generate audio when using voice conversion with the subtoaudio Coqui wrapper.
Logs:

```
/usr/local/lib/python3.10/dist-packages/subtoaudio/subtoaudio.py in convert_to_audio(self, sub_data, speaker, language, voice_conversion, speaker_wav, voice_dir, output_path, tempo_mode, tempo_speed, tempo_limit, shift_mode, shift_limit, save_temp, speed, emotion, **kwargs)
    120     for entry_data in data:
    121         audio_path = f"{temp_folder}/{entry_data['audio_name']}"
--> 122         self.apitts.tts_with_vc_to_file(f"{entry_data['text']}", file_path=audio_path, **convert_param)
    123
    124

/usr/local/lib/python3.10/dist-packages/TTS/api.py in tts_with_vc_to_file(self, text, language, speaker_wav, file_path)
    473             Output file path. Defaults to "output.wav".
    474         """
--> 475         wav = self.tts_with_vc(text=text, language=language, speaker_wav=speaker_wav)
    476         save_wav(wav=wav, path=file_path, sample_rate=self.voice_converter.vc_config.audio.output_sample_rate)

/usr/local/lib/python3.10/dist-packages/TTS/api.py in tts_with_vc(self, text, language, speaker_wav)
    451         if self.voice_converter is None:
    452             self.load_vc_model_by_name("voice_conversion_models/multilingual/vctk/freevc24")
--> 453         wav = self.voice_converter.voice_conversion(source_wav=fp.name, target_wav=speaker_wav)
    454         return wav
    455

/usr/local/lib/python3.10/dist-packages/TTS/utils/synthesizer.py in voice_conversion(self, source_wav, target_wav)
    251
    252     def voice_conversion(self, source_wav: str, target_wav: str) -> List[int]:
--> 253         output_wav = self.vc_model.voice_conversion(source_wav, target_wav)
    254         return output_wav
    255

/usr/local/lib/python3.10/dist-packages/torch/utils/_contextlib.py in decorate_context(*args, **kwargs)
    113     def decorate_context(*args, **kwargs):
    114         with ctx_factory():
--> 115             return func(*args, **kwargs)
    116
    117     return decorate_context

/usr/local/lib/python3.10/dist-packages/TTS/vc/models/freevc.py in voice_conversion(self, src, tgt)
    645         """
    646
--> 647         wav_tgt = self.load_audio(tgt).cpu().numpy()
    648         wav_tgt, _ = librosa.effects.trim(wav_tgt, top_db=20)
    649

/usr/local/lib/python3.10/dist-packages/TTS/vc/models/freevc.py in load_audio(self, wav)
    630         if isinstance(wav, list):
    631             wav = torch.from_numpy(np.array(wav)).to(self.device)
--> 632         return wav.float()
    633
    634     @torch.inference_mode()

AttributeError: 'NoneType' object has no attribute 'float'
```
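From the traceback, the failure happens inside FreeVC's `load_audio`: the target wav it receives (ultimately the `speaker_wav` argument) appears to be `None`, so none of the type checks convert it and `wav.float()` raises the `AttributeError`. Below is a minimal sketch that exercises the same Coqui `tts_with_vc_to_file` path the wrapper calls; the model name and wav paths are placeholders, not the exact values used here.

```python
# Minimal sketch (not the subtoaudio wrapper itself): the same Coqui TTS
# voice-conversion call that the wrapper makes at subtoaudio.py line 122.
# The model name and wav paths below are placeholders/assumptions.
from TTS.api import TTS

tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC")  # any Coqui TTS model; FreeVC is loaded automatically for VC

# Omitting speaker_wav (or passing None) makes FreeVC's load_audio() receive
# None as the target wav, which is what raises the AttributeError above.
tts.tts_with_vc_to_file(
    "Hello world.",
    speaker_wav="target_speaker.wav",  # must point to an existing reference recording
    file_path="out.wav",
)
```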
Use this instead, via the Colab form: https://github.com/bnsantoso/sub-to-audio#voice-conversion
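For context, a rough sketch of what a voice-conversion call through the wrapper might look like. The `convert_to_audio` keyword names (`sub_data`, `voice_conversion`, `speaker_wav`, `output_path`) come from the traceback above, but the constructor arguments and the subtitle-loading helper are assumptions here, so the linked README and Colab form remain the authoritative reference.

```python
# Rough sketch only. SubToAudio(...) and subtitle(...) are assumed/hypothetical
# names; the convert_to_audio keyword arguments are taken from the traceback.
from subtoaudio import SubToAudio

sub = SubToAudio(model_name="tts_models/multilingual/multi-dataset/your_tts")  # assumed constructor signature
data = sub.subtitle("episode.srt")  # assumed helper that parses the subtitle file

sub.convert_to_audio(
    sub_data=data,
    voice_conversion=True,             # routes through tts_with_vc_to_file as in the traceback
    speaker_wav="target_speaker.wav",  # reference voice; leaving this unset leads to the NoneType error
    output_path="episode_audio.wav",
)
```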