torch_tensor_to_array does not account for index ordering #293

Closed
jwallwork23 opened this issue Feb 25, 2025 · 1 comment
@jwallwork23
Collaborator

Running the tensor manipulation example proposed in #291, I noticed that torch_tensor_to_array does not properly account for index ordering. The output of the example is currently:

```
test 1
    Start 1: tensor_manipulation

1: Test command: /home/joe/software/FTorch/build/examples/0_Tensor/tensor_manipulation
1: Test timeout computed to be: 1500
1:  Contents of first input tensor:
1:  1  1  1
1:  1  1  1
1: [ CPUFloatType{2,3} ]
1:  Contents of second input tensor:
1:    1.00000000       2.00000000       3.00000000       4.00000000       5.00000000       6.00000000
1:  Output:
1:    2.00000000       4.00000000       6.00000000       3.00000000       5.00000000       7.00000000
1:  Tensor manipulation example ran successfully
1/1 Test #1: tensor_manipulation ..............   Passed    0.23 sec

100% tests passed, 0 tests failed out of 1
```

The example takes a $2\times3$ tensor of ones and adds it to a $2\times3$ tensor containing the integers 1-6, which was created using the Fortran reshape intrinsic. The array generated by the reshape is printed as the "second input tensor". We do the addition using the overloaded + operator, but when we call torch_tensor_to_array the output is not what we'd expect: flattened in Fortran (column-major) order the sum should read 2, 3, 4, 5, 6, 7, whereas the test prints 2, 4, 6, 3, 5, 7, i.e. the same data read back in C (row-major) order.
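To make the expected result concrete, here is a small standard-Fortran check (no FTorch calls, just the same reshape construction as the example) of what the element-wise sum should look like when flattened in Fortran order:

```fortran
program expected_result
  implicit none
  real :: ones_arr(2,3), ints(2,3), total(2,3)

  ! Same construction as the example: a 2x3 array of ones plus
  ! the integers 1-6 reshaped into 2x3, which fills column by
  ! column as (1,2), (3,4), (5,6).
  ones_arr = 1.0
  ints = reshape([1., 2., 3., 4., 5., 6.], [2, 3])
  total = ones_arr + ints

  ! Flattened in Fortran (column-major) order the result reads
  ! 2 3 4 5 6 7 -- this is what we'd expect torch_tensor_to_array
  ! to hand back.
  print '(6F6.1)', reshape(total, [6])
end program expected_result
```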

I think the issue is that torch_tensor_from_blob accounts for strides, whereas torch_tensor_to_blob does not. As a result, we switch the index ordering from Fortran (column-major) to C (row-major) on the way in, but never switch it back on the way out.
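As a quick sanity check on that hypothesis, reading the correctly summed data back in C (row-major) order reproduces the reported output exactly. A standard-Fortran illustration (not FTorch code):

```fortran
program ordering_check
  implicit none
  real :: total(2,3)

  ! The correct element-wise sum from the example: columns
  ! (2,3), (4,5), (6,7).
  total = reshape([2., 3., 4., 5., 6., 7.], [2, 3])

  ! Fortran (column-major) flattening: 2 3 4 5 6 7
  print '(6F6.1)', reshape(total, [6])

  ! C (row-major) flattening, i.e. flattening the transpose:
  ! 2 4 6 3 5 7 -- exactly the output printed by the test, which
  ! is consistent with the copy back skipping the stride
  ! conversion that was applied on the way in.
  print '(6F6.1)', reshape(transpose(total), [6])
end program ordering_check
```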

jwallwork23 self-assigned this Feb 25, 2025
jwallwork23 added the bug and autograd labels Feb 25, 2025
@jwallwork23
Collaborator Author

Closed as completed by #303.
