
Drop torch_tensor_to_array #303

Conversation

jwallwork23
Collaborator

@jwallwork23 jwallwork23 commented Mar 3, 2025

Closes #293.

Note #272 will need to be merged first as this branch builds upon it.

As discussed in today's meeting, torch_tensor_to_array is no longer needed. It was originally added because the overloaded mathematical operators created new torch::Tensors in C++ rather than operating on pointers to Fortran arrays. Since that was addressed in #297, the expected (correct) result of torch_tensor_to_array can be achieved by constructing the output tensor with torch_tensor_from_array, applied to the array you want the data extracted into.

This is much neater and allows us to drop all cleanup (the remaining nullify statements) from the unit tests.
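If I follow the description correctly, the new pattern looks roughly like the sketch below. This is a minimal illustration only: it assumes a torch_tensor_from_array signature of (tensor, array, layout, device) and a torch_kCPU device constant, and all array/variable names are made up for the example; the real FTorch API may differ in details.

```fortran
program tensor_sum_sketch
  use ftorch
  implicit none

  ! Fortran arrays that back the tensors; `target` so FTorch can point at them
  real, dimension(2,3), target :: in1, in2, out
  type(torch_tensor) :: a, b, q
  integer, parameter :: layout(2) = [1, 2]

  in1 = 1.0
  in2 = 2.0

  ! Bind the input arrays to tensors
  call torch_tensor_from_array(a, in1, layout, torch_kCPU)
  call torch_tensor_from_array(b, in2, layout, torch_kCPU)

  ! "Pre-bind" the output array: q shares memory with `out`
  call torch_tensor_from_array(q, out, layout, torch_kCPU)

  ! The overloaded operator writes through q's pointer, so `out`
  ! is populated directly -- no torch_tensor_to_array needed
  q = a + b

  print *, out
end program tensor_sum_sketch
```

The key point is that the output tensor is created from the very array the user wants the result in, so no separate extraction (or subsequent nullify) step remains.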

@jwallwork23 jwallwork23 added bug Something isn't working autograd Tasks towards the online training / automatic differentiation feature labels Mar 3, 2025
@jwallwork23 jwallwork23 self-assigned this Mar 3, 2025
@jwallwork23 jwallwork23 marked this pull request as ready for review March 3, 2025 12:12
Member

@jatkinson1000 jatkinson1000 left a comment


Thanks @jwallwork23. The code all looks sensible to me, but there are a couple of things I would appreciate you explaining.

To summarise my understanding: instead of generating a tensor (pointer) that then needs to be mapped to an array, we instead require the user to 'pre-bind' an array to the output tensor of an arithmetic expression, which is then populated with any data resulting from the computation. Is that correct?

It occurs to me here that all of our tests are for "correct" outputs; we do not test for errors. As discussed last week, this is not doable in pFUnit, but out of interest, do you have an idea of what the outcome is if the user does not "pre-declare" the array associated with the output tensor?

Also, just to check my understanding: we no longer call torch_tensor_delete in the autograd example because the tensor is cleaned up by its final method when going out of scope at the end of the program?

@jwallwork23
Collaborator Author

To summarise my understanding: instead of generating a tensor (pointer) that then needs to be mapped to an array, we instead require the user to 'pre-bind' an array to the output tensor of an arithmetic expression, which is then populated with any data resulting from the computation. Is that correct?

That's correct.

It occurs to me here that all of our tests are for "correct" outputs; we do not test for errors. As discussed last week, this is not doable in pFUnit, but out of interest, do you have an idea of what the outcome is if the user does not "pre-declare" the array associated with the output tensor?

Well, you can't use torch_tensor_from_array without having declared the array, no?

Also, just to check my understanding: we no longer call torch_tensor_delete in the autograd example because the tensor is cleaned up by its final method when going out of scope at the end of the program?

Yes, I dropped it for simplicity since we can do that now.
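As a sketch of the cleanup point being discussed (this assumes the torch_tensor type carries a `final` procedure, as the conversation implies; the subroutine and variable names are illustrative, not from the FTorch codebase):

```fortran
subroutine scoped_tensor_use()
  use ftorch
  implicit none
  real, dimension(4), target :: x
  type(torch_tensor) :: t

  x = 0.0
  call torch_tensor_from_array(t, x, [1], torch_kCPU)
  ! ... use t ...

  ! No explicit `call torch_tensor_delete(t)` here: t's final
  ! procedure fires when it goes out of scope at the end of the
  ! subroutine, releasing the underlying torch::Tensor automatically.
end subroutine scoped_tensor_use
```

The same automatic finalization applies at the end of the main program, which is why the autograd example can drop its explicit delete call.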

Member

@jatkinson1000 jatkinson1000 left a comment


Thanks. Already approved, but happy to re-approve after changes and feel free to merge as you please.

@jwallwork23 jwallwork23 merged commit 4f6967f into 269_tensor-op-different-device-error Mar 3, 2025
jwallwork23 added a commit that referenced this pull request Mar 3, 2025
* Output array needs to be created empty in autograd example
* Do not interrogate tensors that haven't been constructed

* Drop `torch_tensor_to_array` (#303)

* Drop torch_tensor_to_<array/blob>
* Remove torch_tensor_to_array from autograd example
* Use std::move for assignment
* Update constructors/destructors tests to avoid to_array
* Update summary of autograd example
* Update overloads tests to avoid to_array
* Fix previous API change note
@jwallwork23 jwallwork23 deleted the 293_drop-torch_tensor_to_array branch March 3, 2025 18:13
@jwallwork23 jwallwork23 mentioned this pull request Mar 3, 2025