[slimtensor] Add utility functions to common_shims_slim #16992
Pull Request resolved: #16454

Add SlimTensor-based implementations of basic property getter AOTI shim functions:

1. `aoti_torch_get_data_ptr()` - Returns a pointer to the tensor data
2. `aoti_torch_get_sizes()` - Returns a pointer to the sizes array (SlimTensor stores `int64_t` directly)
3. `aoti_torch_get_strides()` - Returns a pointer to the strides array (SlimTensor stores `int64_t` directly)
4. `aoti_torch_get_dtype()` - Returns the scalar type as `int32_t`
5. `aoti_torch_get_dim()` - Returns the number of dimensions

Key design:
- Creates a new common_shims_slim.h for developing the new API without impacting the current pipeline. common_shims_slim.{h/cpp} will replace the current common_shims.{h/cpp} once everything has been set up.
- Uses `#ifdef CUDA_AVAILABLE` conditional compilation to separate the CUDA backend implementation from the MPS backend, since SlimTensor does not have MPS support yet. The branch will be removed once SlimTensor supports MPS.
- Refactored into a header-only library so that the caller's preprocessor flags determine which tensor type is used. This design supports both the CUDA backend (SlimTensor) and the MPS backend (ETensor) from a single library.

ghstack-source-id: 336530252
@exported-using-ghexport

Differential Revision: [D90126254](https://our.internmc.facebook.com/intern/diff/D90126254/)
Pull Request resolved: #16455

Add storage and device property getter AOTI shim functions to the header-only common_shims_slim library:

1. `aoti_torch_get_storage_offset()` - Returns the storage offset (SlimTensor: real offset; ETensor: always 0)
2. `aoti_torch_get_storage_size()` - Returns the storage size in bytes
3. `aoti_torch_get_device_type()` - Returns the device type (SlimTensor: real type; ETensor: CPU = 0)
4. `aoti_torch_get_device_index()` - Returns the device index (SlimTensor: real index; ETensor: 0)

ghstack-source-id: 336530255
@exported-using-ghexport

Differential Revision: [D90126251](https://our.internmc.facebook.com/intern/diff/D90126251/)
Pull Request resolved: #16457

Add utility functions to the header-only common_shims_slim library:

1. DType constants:
   - `aoti_torch_dtype_float32()` - Returns 6 (ScalarType::Float)
   - `aoti_torch_dtype_bfloat16()` - Returns 15 (ScalarType::BFloat16)
   - `aoti_torch_dtype_int64()` - Returns 4 (ScalarType::Long)
   - `aoti_torch_dtype_int32()` - Returns 3 (ScalarType::Int)
   - `aoti_torch_dtype_int16()` - Returns 2 (ScalarType::Short)
   - `aoti_torch_dtype_int8()` - Returns 1 (ScalarType::Char)
   - `aoti_torch_dtype_bool()` - Returns 11 (ScalarType::Bool)
2. Device type constants:
   - `aoti_torch_device_type_cpu()` - Returns 0 (DeviceType::CPU)
   - `aoti_torch_device_type_cuda()` - Returns 1 (DeviceType::CUDA)
3. Grad mode functions (not supported in ExecuTorch):
   - `aoti_torch_grad_mode_is_enabled()` - Always returns false
   - `aoti_torch_grad_mode_set_enabled()` - Returns Ok for false, NotSupported for true

ghstack-source-id: 336530259
@exported-using-ghexport

Differential Revision: [D90126250](https://our.internmc.facebook.com/intern/diff/D90126250/)
This PR was created by the merge bot to help merge the original PR into the main branch.
ghstack PR number: #16457 by @Gasoonjia
^ Please use this as the source of truth for the PR details, comments, and reviews
ghstack PR base: https://github.com/pytorch/executorch/tree/gh/gasoonjia/98/base
ghstack PR head: https://github.com/pytorch/executorch/tree/gh/gasoonjia/98/head
Merge bot PR base: https://github.com/pytorch/executorch/tree/gh/gasoonjia/97/orig
Merge bot PR head: https://github.com/pytorch/executorch/tree/gh/gasoonjia/98/orig
Differential Revision: D90126250
@diff-train-skip-merge