wav2lip_train training error: Expected more than 1 value per channel when training, got input size torch.Size([1, 512, 1, 1]) #563
Comments
Have you been able to resolve this issue? I am also using a custom dataset and getting the same error.
No, I haven't.
I just resolved this issue by adding a syncnet.eval() call. @cncbec @ulucsahin I am also using a custom dataset, and my guess is that either this error was never encountered with the original dataset, or the model was being switched to eval mode inside the evaluation functions written for the LRS2 data, which I'm not using.
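A minimal sketch of that fix. `TinySyncnet` below is a stand-in for the real SyncNet model, not the repo's actual class; the point is only that freezing the lip-sync expert and putting it in eval mode makes its BatchNorm layers use running statistics, so a batch of size 1 no longer crashes:

```python
# Hypothetical sketch: freeze the lip-sync expert and switch it to eval
# mode so BatchNorm uses running statistics instead of batch statistics.
import torch
import torch.nn as nn

class TinySyncnet(nn.Module):
    # Stand-in for the real SyncNet model (illustration only).
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 512, 3),
            nn.BatchNorm2d(512),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        return self.net(x)

syncnet = TinySyncnet()
for p in syncnet.parameters():
    p.requires_grad = False   # the expert is used only for the sync loss
syncnet.eval()                # <-- the fix: no per-batch BatchNorm statistics

# A batch of size 1 now passes through without the
# "Expected more than 1 value per channel" error.
out = syncnet(torch.randn(1, 3, 8, 8))
print(out.shape)  # torch.Size([1, 512, 1, 1])
```

Without the `syncnet.eval()` line, the same forward pass raises the ValueError quoted in the issue title.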
I experienced the same problem: whenever the sync loss drops to 0.75 or less, this error is raised.
!!! Because PyTorch uses BatchNorm during training, you must ensure batch_size > 1; under model.train(), the _verify_batch_size() check is triggered and raises this error.
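This can be reproduced in a few lines: in train mode a BatchNorm layer rejects a batch of one sample, because per-channel batch statistics are undefined for a single value, while eval mode works fine:

```python
# Minimal reproduction of the error from the issue title.
import torch
import torch.nn as nn

bn = nn.BatchNorm1d(512)

# Train mode: a single-sample batch is rejected by the batch-size check.
bn.train()
raised = False
try:
    bn(torch.randn(1, 512))
except ValueError as e:
    raised = True
    print(e)  # Expected more than 1 value per channel when training, ...

# Eval mode: the layer uses its running statistics, so batch_size == 1 is fine.
bn.eval()
out = bn(torch.randn(1, 512))
print(out.shape)  # torch.Size([1, 512])
```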
Hi, sorry to bother you, but since you are using a custom dataset, could you please show me the structure of the filelist for it? When I try to train on my own dataset the progress is always 0%, and I think it is an issue with the filelist (e.g. the train.txt file).
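For reference, my understanding (an assumption based on the LRS2-style layout this repo's preprocessing produces, so please verify against your own setup) is that filelists/train.txt just lists preprocessed video directories relative to the data root, one per line, with no file extension. The names below are placeholders:

```text
person01/clip_0001
person01/clip_0002
person02/clip_0001
```

Each listed directory should contain the extracted face frames and the audio track produced by the preprocessing step. If the dataloader shows 0% progress, a likely cause is that these relative paths do not resolve under your data root.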
Hello, may I ask if this issue has been resolved? I have also encountered this problem.
What does this mean? I have already set the batch size to 16 and still get this error.
I am using my personal dataset, and the batch size in hparams.py is 16.
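One possible explanation for still hitting the error with batch_size=16 (a guess, not confirmed for this repo's dataloader): if the dataset size is not a multiple of 16, the last batch of an epoch can contain a single sample, which re-triggers the BatchNorm check in train mode. The usual workaround is `drop_last=True`:

```python
# Even with batch_size=16, the final batch can hold a single sample.
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.randn(33, 4))  # 33 samples = 16 + 16 + 1

# Default drop_last=False: the last batch has one sample, which would
# crash a BatchNorm layer running in train mode.
sizes = [x.shape[0] for (x,) in DataLoader(dataset, batch_size=16)]
print(sizes)  # [16, 16, 1]

# drop_last=True discards the incomplete final batch.
sizes_dropped = [
    x.shape[0]
    for (x,) in DataLoader(dataset, batch_size=16, drop_last=True)
]
print(sizes_dropped)  # [16, 16]
```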