
Conversation

andishgar
Contributor

@andishgar andishgar commented Sep 17, 2025

Rationale for this change

As mentioned here, case 1.

What changes are included in this PR?

Handle negative zero in sparse tensor creation.

Are these changes tested?

Yes, I ran the relevant unit tests.

Are there any user-facing changes?

No.

This PR contains a "Critical Fix".

Reference: case 1


⚠️ GitHub issue #47520 has been automatically assigned in GitHub to PR creator.

@andishgar andishgar marked this pull request as draft September 17, 2025 13:15
@andishgar andishgar marked this pull request as ready for review September 17, 2025 16:48
@andishgar
Contributor Author

@rok, could you review this?

Member

@rok rok left a comment


I've done a high-level pass and posted some questions. I want to do a deeper, more detailed pass later this week.

The templating approach seems more maintainable. We should make sure everything is tested.

Comment on lines +201 to +200
  ConvertRowMajorTensor<IndexType, ValueType>(tensor_, indices, values);
} else if (tensor_.is_column_major()) {
-  DISPATCH(CONVERT_COLUMN_MAJOR_TENSOR, index_elsize, value_elsize, indices, values,
-           nonzero_count);
+  ConvertColumnMajorTensor<IndexType, ValueType>(tensor_, indices, values,
+                                                 nonzero_count);
} else {
-  DISPATCH(CONVERT_STRIDED_TENSOR, index_elsize, value_elsize, indices, values,
-           nonzero_count);
+  ConvertStridedTensor<IndexType, ValueType>(tensor_, indices, values);
Member


Switching from DISPATCH to templates seems ok to me, but perhaps @pitrou can double-check?

@github-actions github-actions bot added the awaiting changes label and removed the awaiting review label Sep 22, 2025
@andishgar andishgar force-pushed the resolve_negative_zero_in_sparse_tensor_creataion branch from 09c2259 to 8e7620f Compare September 23, 2025 14:38
@github-actions github-actions bot added the awaiting change review label and removed the awaiting changes label Sep 23, 2025
@andishgar andishgar requested a review from rok September 23, 2025 15:38
Member

@rok rok left a comment


Thanks for the update @andishgar !
I read through the logic and things look good to me. It's especially nice that the element size calculations are no longer done by hand on our side. I left some minor comments, but overall I think this is practically ready to merge.
I would now ask @pitrou to do a pass, especially on the template part.

@rok rok requested a review from pitrou September 25, 2025 11:42
@github-actions github-actions bot added the awaiting review, awaiting changes, and awaiting committer review labels and removed the awaiting change review, awaiting review, and awaiting changes labels Sep 25, 2025

std::vector<int64_t> coords(2);
int64_t k = 0;
std::fill_n(indptr, index_elsize, 0);
Contributor Author

@andishgar andishgar Sep 26, 2025


@rok regarding this:
The code above is equivalent to `indptr[0] = 0;` in my version. Since `indptr` always starts with 0, I set the first element to zero and then initialize the remaining entries in the loop.

@andishgar andishgar force-pushed the resolve_negative_zero_in_sparse_tensor_creataion branch from 8e7620f to c8c66e1 Compare September 26, 2025 08:41
@andishgar andishgar requested a review from rok September 26, 2025 09:27