There may be an issue here when multiple GPUs are used.
Since the default is self.batch_size=8 at line 452, when multiple GPUs are used and the data size is larger than 8, I found that domain_batched at line 452 actually has fewer elements than it should, which leads to the zip error at line 460. (For example, with 3 GPUs and a train batch size of 4, with all domains, line 452 returns only one element, while each of the other three arrays at line 456 returns 2 elements: one with shape (8,) and the other with shape (4,).) Note that domain_batched is not used afterward, so a straightforward fix may be to simply delete it.
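For illustration, here is a minimal sketch of the length mismatch described above; the batching helper, array names, and sizes are assumptions standing in for the actual code around lines 452-460, not the repository's implementation:

```python
import numpy as np

def batched(arr, batch_size=8):
    # Hypothetical stand-in for the batching around lines 452/456:
    # split arr into consecutive chunks of at most batch_size elements.
    return [arr[i:i + batch_size] for i in range(0, len(arr), batch_size)]

# Assumed setup: 3 GPUs x train batch size 4 = 12 samples per step.
inputs = np.arange(12)   # batched() -> 2 chunks, shapes (8,) and (4,)
domains = np.arange(8)   # suppose the domain array only has 8 entries,
                         # so batched() -> 1 chunk, shape (8,)

input_batches = batched(inputs)    # length 2
domain_batches = batched(domains)  # length 1

# strict=True (Python 3.10+) is used here only to surface the length
# mismatch explicitly, analogous to the zip error seen at line 460.
for x, d in zip(input_batches, domain_batches, strict=True):
    pass
```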