GH-47520: [C++][Tensor] Correct sparse tensor creation from dense tensor with negative zero #47586
Conversation
@rok, could you review this?
I've done a high-level pass and posted some questions. I want to do a deeper, more detailed pass later this week.
The templating approach seems more maintainable. We should make sure everything is tested.
      ConvertRowMajorTensor<IndexType, ValueType>(tensor_, indices, values);
    } else if (tensor_.is_column_major()) {
-     DISPATCH(CONVERT_COLUMN_MAJOR_TENSOR, index_elsize, value_elsize, indices, values,
-              nonzero_count);
+     ConvertColumnMajorTensor<IndexType, ValueType>(tensor_, indices, values,
+                                                    nonzero_count);
    } else {
-     DISPATCH(CONVERT_STRIDED_TENSOR, index_elsize, value_elsize, indices, values,
-              nonzero_count);
+     ConvertStridedTensor<IndexType, ValueType>(tensor_, indices, values);
Switching from DISPATCH to templates seems ok to me, but perhaps @pitrou can double-check?
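For reference, a minimal sketch of the pattern under discussion, with illustrative names and signatures (this is not Arrow's actual code): converters templated on the index and value types replace the per-width DISPATCH macro expansion, and the runtime choice of index width collapses to a single switch.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical templated converter: one instantiation per (IndexType, ValueType) pair,
// so the compiler type-checks every combination instead of a macro expanding them.
template <typename IndexType, typename ValueType>
void ConvertRowMajor(const ValueType* data, int64_t n,
                     std::vector<IndexType>* indices, std::vector<ValueType>* values) {
  for (int64_t i = 0; i < n; ++i) {
    if (data[i] != ValueType{}) {
      indices->push_back(static_cast<IndexType>(i));
      values->push_back(data[i]);
    }
  }
}

// The runtime selection of the index width becomes one switch; the value type is
// already known here. In real code the vectors would be the output buffers.
template <typename ValueType>
void DispatchOnIndexWidth(int index_elsize, const ValueType* data, int64_t n) {
  switch (index_elsize) {
    case 2: { std::vector<int16_t> idx; std::vector<ValueType> val;
              ConvertRowMajor(data, n, &idx, &val); break; }
    case 4: { std::vector<int32_t> idx; std::vector<ValueType> val;
              ConvertRowMajor(data, n, &idx, &val); break; }
    case 8: { std::vector<int64_t> idx; std::vector<ValueType> val;
              ConvertRowMajor(data, n, &idx, &val); break; }
  }
}
```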
Force-pushed from 09c2259 to 8e7620f.
Thanks for the update @andishgar!
I read through the logic and things look good to me. It's especially nice that element size calculations are no longer done by us. Some minor comments, but overall I think this is practically ready to merge.
I would now ask @pitrou to do a pass, especially on the template part.
    std::vector<int64_t> coords(2);
    int64_t k = 0;
    std::fill_n(indptr, index_elsize, 0);
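As a point of reference only (the names and layout below are assumptions, not Arrow's actual helpers), a CSR-style conversion typically starts by writing a zero first row pointer, which is what the byte-wise std::fill_n on the raw indptr buffer above accomplishes, and then appends column indices and values row by row:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical dense row-major -> CSR conversion sketch.
template <typename IndexType, typename ValueType>
void DenseToCsr(const ValueType* data, int64_t nrows, int64_t ncols,
                std::vector<IndexType>* indptr, std::vector<IndexType>* indices,
                std::vector<ValueType>* values) {
  indptr->assign(1, 0);  // indptr[0] == 0, the role played by the std::fill_n above
  for (int64_t i = 0; i < nrows; ++i) {
    for (int64_t j = 0; j < ncols; ++j) {
      const ValueType v = data[i * ncols + j];
      if (v != ValueType{}) {  // value comparison: -0.0 compares equal to 0.0 and is dropped
        indices->push_back(static_cast<IndexType>(j));
        values->push_back(v);
      }
    }
    indptr->push_back(static_cast<IndexType>(indices->size()));  // running nonzero count
  }
}
```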
Force-pushed from 8e7620f to c8c66e1.
Rationale for this change
As described in GH-47520, case 1.
What changes are included in this PR?
Handle negative zero in sparse tensor creation.
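A minimal standalone illustration of why negative zero needs care here (not Arrow code): a value comparison treats -0.0 as zero, while a bit-pattern comparison does not, so using one criterion when counting nonzeros and the other when filling the sparse buffers yields inconsistent results.

```cpp
#include <cstdio>
#include <cstring>

// -0.0 compares equal to 0.0, so a value-based test calls it "zero".
bool IsNonZeroByValue(double v) { return v != 0.0; }

// A byte-wise test sees the sign bit and calls -0.0 "nonzero".
bool IsNonZeroByBits(double v) {
  const double zero = 0.0;
  return std::memcmp(&v, &zero, sizeof(double)) != 0;
}

int main() {
  const double neg_zero = -0.0;
  std::printf("by value: %d, by bits: %d\n",
              IsNonZeroByValue(neg_zero), IsNonZeroByBits(neg_zero));  // by value: 0, by bits: 1
  return 0;
}
```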
Are these changes tested?
Yes, I ran the relevant unit tests.
Are there any user-facing changes?
No.
This PR contains a "Critical Fix".
The bug caused incorrect data to be produced: a sparse tensor created from a dense tensor containing negative zero could be malformed (see GH-47520, case 1).
Reference: GH-47520, case 1.