@@ -29,7 +29,10 @@ include:
 * `torch_tensor_from_array`, which allows the user to create a `torch_tensor`
   with the same rank, shape, and data type as a given Fortran array. Note that
   the data is *not* copied - the tensor data points to the Fortran array,
-   meaning the array must have been declared with the `target` property.
+   meaning the array must have been declared with the `target` property. The
+   array will continue to be pointed to even when operations are applied to the
+   tensor, so this subroutine can be used 'in advance' to set up an array for
+   outputting data.
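
As a rough illustration of the 'in advance' pattern described above, a minimal sketch based on the FTorch examples might look as follows (the `layout` argument and exact argument order are assumptions; check them against the version of the API you are using):

```fortran
use, intrinsic :: iso_c_binding, only : c_int
use ftorch

implicit none

! The array must have the target attribute so the tensor can point to it
real, dimension(2, 3), target :: out_data
integer(c_int), dimension(2) :: layout = [1, 2]
type(torch_tensor) :: out_tensor

! Set up the output tensor 'in advance': no data is copied, so anything
! later written to the tensor becomes visible through out_data
call torch_tensor_from_array(out_tensor, out_data, layout, torch_kCPU)
```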

It is *compulsory* to call one of the constructors before interacting with a
tensor in any of the ways described in the following. Each of the constructors
sets the
@@ -53,22 +56,20 @@ include:
  the tensor resides on as an integer. For a CPU device, this index should be
  set to -1 (the default). For GPU devices, the index should be non-negative
  (defaulting to 0).
- * `torch_tensor_to_array`, which allows the user to extract the data held within
-   a `torch_tensor` object into a Fortran array. Note that the data is *not*
-   copied - the Fortran array points to the tensor data, meaning it must be
-   declared with the `pointer` property.
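
The device type and device index arguments described above might be used as in the following sketch, assuming a constructor such as `torch_tensor_zeros` that takes the rank, shape, data type, device type, and an optional device index (the exact signature is an assumption and should be checked against your FTorch version):

```fortran
use, intrinsic :: iso_c_binding, only : c_int64_t
use ftorch

implicit none

type(torch_tensor) :: cpu_tensor, gpu_tensor
integer(c_int64_t), dimension(2) :: tensor_shape = [3, 4]

! CPU tensor: the device index defaults to -1
call torch_tensor_zeros(cpu_tensor, 2, tensor_shape, torch_kFloat32, torch_kCPU)

! GPU tensor: select the first device explicitly
call torch_tensor_zeros(gpu_tensor, 2, tensor_shape, torch_kFloat32, &
                        torch_kCUDA, device_index=0)
```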

Procedures for interrogation are implemented as methods as well as stand-alone
- procedures (with the exception of `torch_tensor_to_array`). For example,
- `tensor%get_rank` can be used in place of `torch_tensor_get_rank`, omitting the
- first argument (which would be the tensor itself). The naming pattern is
- similar for the other methods (simply drop the preceding `torch_tensor_`).
+ procedures. For example, `tensor%get_rank` can be used in place of
+ `torch_tensor_get_rank`, omitting the first argument (which would be the tensor
+ itself). The naming pattern is similar for the other methods (simply drop the
+ preceding `torch_tensor_`).
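
The naming pattern above can be sketched with the rank query (assuming, as the FTorch interface suggests, that `torch_tensor_get_rank` is a function returning an integer):

```fortran
integer :: rank

! Stand-alone procedure form
rank = torch_tensor_get_rank(tensor)

! Equivalent method form: drop the torch_tensor_ prefix and the tensor argument
rank = tensor%get_rank()
```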

### Tensor deallocation

We provide a subroutine for deallocating the memory associated with a
`torch_tensor` object: `torch_tensor_delete`. An interface is provided such that
- this can also be applied to arrays of tensors.
+ this can also be applied to arrays of tensors. Calling this subroutine manually
+ is optional, as it is invoked automatically as a destructor when the
+ `torch_tensor` goes out of scope.
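
Explicit cleanup might therefore look like the following sketch (both calls are optional, per the destructor behaviour described above; the array form relies on the interface the text mentions):

```fortran
type(torch_tensor) :: tensor
type(torch_tensor), dimension(3) :: tensor_array

! ... construct and use the tensors ...

! Optional explicit cleanup; the same interface accepts an array of tensors
call torch_tensor_delete(tensor)
call torch_tensor_delete(tensor_array)
```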

### Operator overloading
