Opacus with torch_geometric.nn and GCN's #588
Comments
Same issue here. Did you address this error? :) @sagerkudrick
Hey @marlowe518, I did. The problem was with the `batch_first` argument to `privacy_engine.make_private_with_epsilon(...)`: with `batch_first=True` the input tensor is expected to be shaped `[batch_size, ...]`, and with `batch_first=False` it is expected to be `[K, batch_size, ...]`. Opacus reinterprets the input tensor accordingly, which threw off the positional argument to our model. I was able to solve this by passing `batch_first=False`.
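For reference, the two layouts that `batch_first` distinguishes can be sketched without any framework; the nested lists below stand in for tensors, and all names and sizes are illustrative, not taken from the original code:

```python
# Stand-in for a [batch_size, K, features] tensor: 4 samples, 10 steps, 16 features.
batch_first_input = [[[0.0] * 16 for _ in range(10)] for _ in range(4)]

def swap_first_two_dims(t):
    """Rearrange [batch_size, K, ...] into [K, batch_size, ...]."""
    return [list(group) for group in zip(*t)]

# The layout expected when batch_first=False: [K, batch_size, ...].
time_major_input = swap_first_two_dims(batch_first_input)
print(len(time_major_input), len(time_major_input[0]))  # 10 4
```

If the model indexes its inputs positionally, feeding it the wrong one of these two layouts shifts every dimension by one, which is consistent with the positional-argument confusion described above.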
I'm not entirely sure whether Opacus supports graphs, though: validating with PrivacyEngine says our GCN model is valid, but we're running into a new error here:
We're using the default DataLoader from
And our trainer:
I get similar behavior when I wrap one of my models with `GradSampleModule`. Were you able to solve this issue? It doesn't work with `batch_first=False` either, i.e. `GradSampleModule(model, batch_first=False)`.
@sagerkudrick I have the same problem. Have you solved this issue?
Does Opacus work with GCNConv?
I'm attempting to use Opacus with a GCN, with the model defined as such:
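(The model definition did not survive into this thread. As a stand-in, here is a minimal, framework-free sketch of the propagation a single GCN layer computes, H' = Â · H · W; the adjacency, features, and weights below are toy values, not the original code, and normalization of Â is omitted.)

```python
def matmul(a, b):
    """Multiply two matrices given as nested lists."""
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

# Toy graph: 3 nodes, adjacency with self-loops (Â), unnormalized.
a_hat = [[1, 1, 0],
         [1, 1, 1],
         [0, 1, 1]]
h = [[1.0, 0.0],
     [0.0, 1.0],
     [1.0, 1.0]]            # node features, shape [3, 2]
w = [[0.5, 0.0],
     [0.0, 0.5]]            # layer weights, shape [2, 2]

# One GCN propagation step: aggregate neighbors, then apply weights.
h_next = matmul(matmul(a_hat, h), w)
print(h_next)  # [[0.5, 0.5], [1.0, 1.0], [0.5, 1.0]]
```

The point relevant to Opacus is that the node dimension here is the graph dimension, not a batch dimension, which is one reason per-sample gradient computation over GCN inputs is delicate.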
When training, however, I'm running into
occurring within
the
loss.backward()
call. It's worth noting that training and evaluating work normally without Opacus; only after making the model private and training does it begin to throw the error with the new model.
Thank you!