Gradient Computation with Inverse Wishart Prior #1556
-
Hmm, interesting. This seems like an issue with the Wishart prior if the same code works without any issues with the LKJPrior. I just took a look and didn't see any obvious issues. Can you share a full code example to repro? It could also be that some of the data you're sticking into the prior or model was modified in place, causing this error.
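To illustrate the in-place hypothesis, here is a minimal, self-contained repro of that exact error class in plain PyTorch (not your model, just the mechanism):

```python
import torch

x = torch.randn(1, 6, dtype=torch.double)                       # data, no grad needed
w = torch.randn(6, 6, dtype=torch.double, requires_grad=True)   # a parameter

out = (x @ w).sum()  # matmul saves x, since it's needed for the gradient w.r.t. w
x.add_(1.0)          # in-place edit bumps x's version counter after the forward pass

# RuntimeError: one of the variables needed for gradient computation has been
# modified by an inplace operation: [torch.DoubleTensor [1, 6]] is at version 1;
# expected version 0 instead.
out.backward()
```

Feeding `x.clone()` into the matmul (or avoiding the in-place `add_`) makes `backward()` go through, which is why in-place edits to tensors feeding the prior are a prime suspect.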
-
To follow up on this, I used the suggested code for `FixedTaskNoiseMultitaskLikelihood` at the end of the following discussion, and ran into the error `TypeError: torch.Size() takes an iterable of 'int' (item 0 is 'torch.Size')` inside the likelihood's `self.noise_covar()` call. I have attached my code, as well as the error, here. Thanks a lot for your help!
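That `TypeError` usually means a `torch.Size` ended up as an element inside the iterable handed to `torch.Size()`, rather than being unpacked into plain ints. A minimal illustration (a guess at the mechanism, not the actual code path in the likelihood):

```python
import torch

batch_shape = torch.Size([2])

# Wrong: the first element is itself a torch.Size, reproducing
# TypeError: torch.Size() takes an iterable of 'int' (item 0 is 'torch.Size')
# shape = torch.Size([batch_shape, 3])

# Right: unpack the existing shape so every element is a plain int.
shape = torch.Size([*batch_shape, 3])
print(shape)  # torch.Size([2, 3])
```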
-
Hi all,
I am working with a multi-task Gaussian process whose task covariance matrix is drawn from an Inverse Wishart prior instead of the default LKJ covariance prior. To implement this, I am using the newly added Inverse Wishart prior in the latest (unstable) version of gpytorch.
After obtaining new data, I would like to fit this multi-task GP with `fit_gpytorch_mll`, but I am getting the following error:
```
RuntimeError: one of the variables needed for gradient computation has been modified by an inplace operation: [torch.DoubleTensor [1, 6]] is at version 1; expected version 0 instead. Hint: the backtrace further above shows the operation that failed to compute its gradient. The variable in question was changed in there or anywhere later. Good luck!
```
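For context, the setup follows the standard gpytorch multitask pattern, roughly as in the sketch below; the model class, kernel choice, and sizes are illustrative assumptions rather than my exact code, and the Inverse Wishart prior from the snippet further down would be passed to `MultitaskKernel` via its `task_covar_prior` argument:

```python
import torch
import gpytorch
from botorch.fit import fit_gpytorch_mll

class MultitaskGPModel(gpytorch.models.ExactGP):
    def __init__(self, train_x, train_y, likelihood):
        super().__init__(train_x, train_y, likelihood)
        self.mean_module = gpytorch.means.MultitaskMean(
            gpytorch.means.ConstantMean(), num_tasks=6
        )
        # task_covar_prior=... is where the Inverse Wishart prior would go
        self.covar_module = gpytorch.kernels.MultitaskKernel(
            gpytorch.kernels.MaternKernel(nu=2.5), num_tasks=6, rank=1
        )

    def forward(self, x):
        mean_x = self.mean_module(x)
        covar_x = self.covar_module(x)
        return gpytorch.distributions.MultitaskMultivariateNormal(mean_x, covar_x)

train_X = torch.rand(20, 3, dtype=torch.double)   # made-up sizes
train_Y = torch.rand(20, 6, dtype=torch.double)   # 6 tasks
likelihood = gpytorch.likelihoods.MultitaskGaussianLikelihood(num_tasks=6)
model = MultitaskGPModel(train_X, train_Y, likelihood).double()

mll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)
fit_gpytorch_mll(mll)  # the backward() inside the fit is where the error surfaces
```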
I don't run into this error if I use the default LKJ covariance prior. One possibility is that something is wrong in how I've drawn the task covariance matrix from the Inverse Wishart prior; I've replicated that code here:
```python
# Scale matrix for the prior: Y^T K(X, X)^{-1} Y (num_tasks x num_tasks)
train_Y_t = torch.transpose(train_Y, 1, 0)
scale_matrix = data_covar_module(train_X).inv_matmul(train_Y, left_tensor=train_Y_t)
# Degrees of freedom nu = 2 + n, with n the number of training points
task_covar_prior = InverseWishartPrior(2 + train_X.shape[0], scale_matrix)
```
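For reference, here is the snippet above made self-contained (the import path, kernel choice, and sizes are assumptions, and the trailing `clone()` is an untested guess at insulating the prior from later in-place modifications):

```python
import torch
from gpytorch.kernels import MaternKernel
# NOTE: import path is an assumption; the prior only exists on the unstable branch
from gpytorch.priors import InverseWishartPrior

n, d, t = 20, 3, 6  # training points, input dims, tasks (made-up sizes)
train_X = torch.rand(n, d, dtype=torch.double)
train_Y = torch.rand(n, t, dtype=torch.double)
data_covar_module = MaternKernel(nu=2.5, ard_num_dims=d).double()

train_Y_t = torch.transpose(train_Y, 1, 0)  # t x n
# Y^T K(X, X)^{-1} Y: a t x t positive definite scale matrix
scale_matrix = data_covar_module(train_X).inv_matmul(train_Y, left_tensor=train_Y_t)
# Untested guess at a workaround: clone() gives the prior its own copy of the
# scale matrix, so a later in-place edit elsewhere can't bump the version
# counter of a tensor that autograd saved for backward.
task_covar_prior = InverseWishartPrior(2 + n, scale_matrix.clone())
```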
Another possibility is that the Inverse Wishart prior in gpytorch could require modification to fix this.
Any thoughts regarding this would be much appreciated - thanks a lot!