This repository was archived by the owner on Oct 31, 2023. It is now read-only.

Missing required positional arguments #237

Open
grossmanm opened this issue Nov 9, 2022 · 6 comments

Comments

@grossmanm

Hi, I'm following the instructions in the README and ran
```shell
python train_dense_encoder.py \
  train_datasets=[nq_train] \
  dev_datasets=[nq_dev] \
  train=biencoder_local \
  output_dir={path to checkpoints dir}
```

after installing the nq_train and nq_dev datasets. However, whenever I run this I get an error in PyTorch:
```
torch/nn/modules/module.py", line 1190, in _call_impl
    return forward_call(*input, **kwargs)
TypeError: forward() missing 6 required positional arguments: 'question_ids', 'question_segments', 'question_attn_mask', 'context_ids', 'ctx_segments', and 'ctx_attn_mask'
```

I'm not sure what could be causing this.
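For what it's worth, the mechanics of this TypeError are easy to reproduce in isolation: `forward()` declares required positional parameters, but the module ends up being invoked with none of them (e.g. when the inputs are scattered to a different device or process than the one making the call). A minimal, dependency-free sketch — the class below only mimics how `nn.Module.__call__` dispatches to `forward()`, it is not the actual DPR BiEncoder:

```python
# Illustration only: mimic nn.Module's dispatch from __call__ to forward().
class TinyBiEncoder:
    def forward(self, question_ids, question_segments, question_attn_mask,
                context_ids, ctx_segments, ctx_attn_mask):
        return question_ids

    def __call__(self, *args, **kwargs):
        # nn.Module._call_impl ultimately runs: forward_call(*input, **kwargs)
        return self.forward(*args, **kwargs)

model = TinyBiEncoder()
try:
    # If the six inputs never reach the module, the call arrives empty:
    model()
except TypeError as err:
    message = str(err)
    print(message)  # "... missing 6 required positional arguments ..."
```

This doesn't tell you *why* the inputs go missing in DPR (see the multi-GPU discussion below in the thread), but it confirms the error is raised at call time, not inside the model.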

@grossmanm
Author

This issue has already been resolved.

@Aaron617

Aaron617 commented May 6, 2023

Hello, I encountered the same error as you did and I was wondering if you would be so kind as to share the method you used to solve it. I would greatly appreciate it. Thank you very much for your time and help.


@Adamits

Adamits commented Aug 30, 2023

> This issue has already been resolved.

Can you share the solution? I am running into this when training on GPU (though the error does not occur on CPU).

@yeliusf

yeliusf commented Sep 13, 2023

Hi, I encountered the same error as you did. Is there anyone able to share the solution? Thanks!

@Cphyr

Cphyr commented Oct 30, 2023

Hey, after some trial and error, I think you are trying to run the single-node version (without torch.distributed) on a multi-GPU setup. If this is the case, the following workaround will let you run the code on one of those GPUs:

  1. open 'dpr/options.py' file.
  2. change 'n_gpu' to 1 by hand.

[screenshot: dpr/options.py with n_gpu set to 1]
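A hypothetical sketch of what the edit in step 2 might look like — the exact attribute names in the real `dpr/options.py` may differ, the point is just to pin the GPU count instead of deriving it from the environment:

```python
# Wherever options.py computes the GPU count (e.g. from
# torch.cuda.device_count()), override it by hand:
cfg.n_gpu = 1  # force the single-GPU, non-distributed code path
```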

In case you would like to run using all GPUs, the highlighted addition in 'train_dense_encoder.py' might solve the issue:
[screenshot: highlighted addition in train_dense_encoder.py]
and running it as follows:
```shell
python -m torch.distributed.launch --nproc_per_node=2 train_dense_encoder.py train=biencoder_nq train_datasets=[nq_train] dev_datasets=[nq_dev] train=biencoder_nq output_dir=outputs/
```
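For context: `torch.distributed.launch` starts one process per GPU and (in the versions current at the time of this thread) passes each worker a `--local_rank` argument, so the training script has to accept it and bind the process to its own GPU. A minimal sketch of that plumbing — the names here are illustrative, not the actual addition from the screenshot:

```python
import argparse

def parse_local_rank(argv=None):
    """Read the --local_rank flag that torch.distributed.launch injects."""
    parser = argparse.ArgumentParser(add_help=False)
    # default -1 signals "not launched in distributed mode"
    parser.add_argument("--local_rank", type=int, default=-1)
    args, _ = parser.parse_known_args(argv)
    return args.local_rank

# In each worker, one would then typically do:
#   torch.cuda.set_device(local_rank)
#   torch.distributed.init_process_group(backend="nccl")
```

Using `parse_known_args` rather than `parse_args` lets the flag coexist with Hydra-style `key=value` overrides on the same command line.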
