[Bug] tts.tts_with_vc_to_file cannot use cpu #3797
Comments
The XTTS model natively supports voice cloning, so just use the following (and pick just one of `speaker` or `speaker_wav`):

```python
from TTS.api import TTS

device = "cpu"
print(device)
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)
tts.tts_to_file(
    text="Hello world!",
    speaker="Andrew Chipper",
    speaker_wav="/path/to/voice_sample.wav",
    language="en",
    file_path="/path/to/outputs/xttsv2_en_output.wav",
)
```

This should run correctly on the CPU.
Hey Enno, thanks a lot for the pointer. I didn't realise that some models have voice cloning built in rather than requiring a separate voice-conversion step. I was then trying to run the model in …
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. You might also check our discussion channels.
Dear coqui devs/community,
This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs. Thank you for your contributions. You might also check our discussion channels.
Thanks for letting me know!
Describe the bug
Similar to #3787, but also when running the `xtts_v2` model with voice cloning (vocoder model): using `device='cpu'` results in the following error.

To Reproduce
```python
import torch
from TTS.api import TTS

device = "cpu"
print(device)
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)
tts.tts_with_vc_to_file(
    text="Hello world!",
    speaker="Andrew Chipper",
    speaker_wav="/path/to/voice_sample.wav",
    language="en",
    file_path="/path/to/outputs/xttsv2_en_output.wav",
)
```
Expected behavior
The inference should run without using CUDA or reporting any CUDA/CUDNN/GPU-related errors.
Logs
Environment
Additional context
Note: Even though I do have CUDA and an NVIDIA GPU on my laptop, I want to use CPU because the VRAM of my GPU is not enough for the model I wanted to use.
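Since the goal is to keep inference off the GPU entirely even though CUDA is installed, one general-purpose workaround (an assumption on my part, not something confirmed in this thread) is to mask all CUDA devices via the `CUDA_VISIBLE_DEVICES` environment variable before PyTorch is first imported. CUDA-aware libraries then see no GPUs and fall back to the CPU regardless of any internal `.cuda()` calls:

```python
# Sketch of a CPU-forcing workaround (assumption, not from the thread):
# setting CUDA_VISIBLE_DEVICES to an empty string BEFORE importing torch
# hides every GPU from the CUDA runtime, so torch.cuda.is_available()
# reports False and all models load on the CPU.
import os

os.environ["CUDA_VISIBLE_DEVICES"] = ""  # must happen before `import torch`

# After this point, `import torch; torch.cuda.is_available()` would be
# False, and TTS(...).to("cpu") can never accidentally touch the GPU.
print(os.environ["CUDA_VISIBLE_DEVICES"] == "")  # → True
```

Note that the variable has no effect if set after `torch` has already initialised CUDA, so it belongs at the very top of the script (or in the shell: `CUDA_VISIBLE_DEVICES="" python script.py`).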