Fix configs and functions for distributed rank checks #111
This PR includes the following four modifications.
configs/base/dfine_hgnetv2.yml: The pretrained weight file is currently downloaded to an unusual location. Changed so that it is downloaded directly into the project folder for convenience.
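The change amounts to pointing the backbone's weight directory at a path inside the repository. The exact key depends on the backbone implementation; a minimal sketch, assuming a `local_model_dir`-style option under the `HGNetv2` section (key name and path are illustrative, not the PR's exact diff):

```yaml
# configs/base/dfine_hgnetv2.yml (illustrative)
HGNetv2:
  pretrained: True
  # Download the pretrained weights into the project folder
  # instead of an unusual external cache location.
  local_model_dir: ./weight/hgnetv2/
```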
configs/deim_dfine/deim_hgnetv2_n_coco.yml: The output_dir in this file is inconsistent with the other model configs. Updated to maintain consistency.
configs/deim_dfine/dfine_hgnetv2_n_coco.yml: This is the only config that specifies total_batch_size directly in train_dataloader and val_dataloader. As a result, contrary to what the README describes, changing total_batch_size in configs/base/dataloader.yml has no effect on training when using deim_hgnetv2_n. Removed the per-model override so the base config behaves as intended.
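The issue is ordinary config layering: a key set in the per-model file shadows the same key in the included base file. A sketch of the situation (batch-size values are illustrative, not the repo's actual numbers):

```yaml
# configs/deim_dfine/dfine_hgnetv2_n_coco.yml -- before the fix.
# These per-model overrides shadow configs/base/dataloader.yml,
# so editing the base file never reaches this model:
train_dataloader:
  total_batch_size: 128   # illustrative; removed by this PR
val_dataloader:
  total_batch_size: 256   # illustrative; removed by this PR

# After the fix: with no override here, the total_batch_size values
# defined once in configs/base/dataloader.yml apply to this model too,
# matching the workflow the README describes.
```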
engine/backbone/hgnetv2.py: When training on a single GPU, the rank-checking function used while loading the HGNetV2 pretrained weights raises an error. This has been corrected.
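A likely cause for this class of bug is calling torch.distributed.get_rank() when no process group has been initialized, which is exactly the situation in a plain single-GPU run. A minimal sketch of a guarded rank check (the helper name safe_get_rank is hypothetical, not the actual code in engine/backbone/hgnetv2.py):

```python
import torch.distributed as dist

def safe_get_rank() -> int:
    """Return the distributed rank, or 0 when not running distributed.

    torch.distributed.get_rank() raises a RuntimeError if the default
    process group has not been initialized, which is what happens in a
    single-GPU (non-distributed) training run. Guarding with
    is_available() and is_initialized() avoids that.
    """
    if dist.is_available() and dist.is_initialized():
        return dist.get_rank()
    return 0

# Typical use when loading pretrained backbone weights: only rank 0
# downloads the file, so concurrent ranks do not clobber each other.
if safe_get_rank() == 0:
    pass  # e.g. download the HGNetV2 checkpoint here
```

With this guard, the same code path works unchanged for both single-GPU and multi-GPU (DDP) training.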