Drop `torch_tensor_to_array` #303
Conversation
Thanks @jwallwork23. The code all looks sensible to me, but I have a couple of things I would appreciate you explaining.
To summarise my understanding: instead of generating a tensor (pointer) that needs to then be mapped to an array, we instead require a user to 'pre-bind' an array to the output tensor of an arithmetic expression that will then be populated with any data resulting from computation. Is that correct?
It occurs to me here that all of our tests are for "correct" outputs, we do not test for errors. As discussed last week, this is not doable in pFUnit, but out of interest do you have an idea of what the outcome is if the user does not "pre-declare" the array associated with the output tensor?
Also, just to check my understanding: we no longer call `torch_tensor_delete` in the autograd example at all because the tensor is cleaned up by the `final` method when it goes out of scope at the end of the program?
That's correct.
Well you can't use
Yes, I dropped it for simplicity since we can do that now.
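For readers following along, here is a minimal sketch of the "pre-bind" pattern discussed above. It assumes FTorch's `torch_tensor_from_array` subroutine interface (tensor, array, layout, device) and the `torch_kCPU` constant, plus overloaded arithmetic/assignment on `torch_tensor`; the exact interface may differ between FTorch versions, so treat this as illustrative rather than definitive.

```fortran
program pre_bound_output
  use, intrinsic :: iso_fortran_env, only : sp => real32
  use ftorch   ! assumed to provide torch_tensor, torch_tensor_from_array, torch_kCPU
  implicit none

  real(sp), dimension(2,3), target :: x, y, z
  type(torch_tensor) :: tx, ty, tz
  integer, parameter :: layout(2) = [1, 2]

  x = 1.0_sp
  y = 2.0_sp
  z = 0.0_sp   ! the output array is declared (and created empty) up front

  ! Bind each Fortran array to a tensor; tz wraps z, so any result
  ! written to tz lands directly in z.
  call torch_tensor_from_array(tx, x, layout, torch_kCPU)
  call torch_tensor_from_array(ty, y, layout, torch_kCPU)
  call torch_tensor_from_array(tz, z, layout, torch_kCPU)

  tz = tx + ty   ! overloaded operator populates the pre-bound array z

  print *, z     ! expect 3.0 everywhere

  ! No torch_tensor_delete calls: the tensors are cleaned up by their
  ! final method when they go out of scope at the end of the program.
end program pre_bound_output
```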
Thanks. Already approved, but happy to re-approve after changes and feel free to merge as you please.
* Output array needs to be created empty in autograd example
* Do not interrogate tensors that haven't been constructed
* Drop `torch_tensor_to_array` (#303)
* Drop torch_tensor_to_<array/blob>
* Remove torch_tensor_to_array from autograd example
* Use std::move for assignment
* Update constructors/destructors tests to avoid to_array
* Update summary of autograd example
* Update overloads tests to avoid to_array
* Fix previous API change note
Closes #293.
Note #272 will need to be merged first as this branch builds upon it.
As discussed in today's meeting, `torch_tensor_to_array` is no longer needed. It was added because the overloaded mathematical operators created new `torch::Tensor`s in C++, rather than working with pointers to Fortran arrays. Since that was addressed in #297, the expected (correct) result of `torch_tensor_to_array` can be achieved by creating the tensor you want to extract data from with `torch_tensor_from_array`, applied to the array you want to extract into. This is much neater and allows us to drop all cleanup (the remaining `nullify` statements) from the unit tests.
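As a rough illustration of the replacement pattern in a test-style setting (a sketch only: the removed `torch_tensor_to_array` usage shown in the comment is recalled from memory, and the `torch_tensor_from_array` interface is assumed as above):

```fortran
program check_multiply
  use, intrinsic :: iso_fortran_env, only : sp => real32
  use ftorch   ! assumed to provide torch_tensor, torch_tensor_from_array, torch_kCPU
  implicit none

  real(sp), dimension(2,3), target :: a_data, b_data, out_data
  real(sp), dimension(2,3) :: expected
  type(torch_tensor) :: ta, tb, tq
  integer, parameter :: layout(2) = [1, 2]

  a_data = 2.0_sp
  b_data = 3.0_sp
  expected = 6.0_sp

  ! Old pattern (removed): compute into tq, then map its data onto a
  ! Fortran pointer with torch_tensor_to_array and nullify it afterwards.
  ! New pattern: wrap the target array before the computation, so the
  ! result is written straight into out_data and no cleanup is needed.
  out_data = 0.0_sp
  call torch_tensor_from_array(ta, a_data, layout, torch_kCPU)
  call torch_tensor_from_array(tb, b_data, layout, torch_kCPU)
  call torch_tensor_from_array(tq, out_data, layout, torch_kCPU)

  tq = ta * tb   ! out_data now holds the elementwise product

  if (all(abs(out_data - expected) < 1.0e-6_sp)) then
    print *, "PASS"
  else
    print *, "FAIL"
  end if
end program check_multiply
```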