
Result of hq_wav2lip model training with LRS2 dataset #583

Open
linhnv97 opened this issue Nov 9, 2023 · 1 comment

Comments
linhnv97 commented Nov 9, 2023

Thank you so much for your model. I trained the hq_wav2lip model after training the expert discriminator (loss converged to ~0.23), but the results at inference time were poor. This is the training loss:
(training loss plot)
And this is the video result after running inference:
https://github.com/Rudrabha/Wav2Lip/assets/140048495/3b6255c8-20c0-4c94-8eb2-3981f5f2780c
Please let me know what could be causing this issue.

@xingdi1990

Hi, did you try the officially published checkpoints for inference on the same face video? If the same issue occurs, I would suspect the problem comes from the video itself rather than the model. In my experience, your sample video contains a lot of pose variation and lighting changes, and those can degrade the synthesized result.
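To run the sanity check suggested above, you can invoke the repository's inference script with the officially released checkpoint on the exact same face video and audio. A minimal sketch (the input/output paths are placeholders; `wav2lip_gan.pth` is the released GAN checkpoint from the repo's README):

```shell
# Run inference with the official checkpoint on the same inputs.
# If the output shows the same artifacts, the video (pose/lighting
# variation) is the likely culprit rather than your training run.
python inference.py \
  --checkpoint_path checkpoints/wav2lip_gan.pth \
  --face path/to/your_face_video.mp4 \
  --audio path/to/your_audio.wav \
  --outfile results/official_ckpt_result.mp4
```

Comparing this output side by side with the output of your own checkpoint on identical inputs isolates whether the degradation comes from the data or from the training.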
