[WIP] Add mkl::linalg_lu relative Ops #1563


Open · wants to merge 24 commits into main

Conversation

yucai-intel (Contributor) commented Apr 9, 2025

Follow-up to #1511. This PR adds the following op pairs (a rough usage sketch follows the list):

  • linalg_lu_factor_ex & linalg_lu_factor_ex.out
  • linalg_lu & linalg_lu.out
  • linalg_lu_solve & linalg_lu_solve.out
  • lu_unpack & lu_unpack.out
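
For context, a rough sketch of how these ops are exercised through the ATen C++ API. The signatures below follow the existing ATen linalg API as I understand it, and XPU device placement is omitted for brevity; treat this as an illustration, not code from this PR.

#include <ATen/ATen.h>
#include <tuple>

int main() {
  at::Tensor A = at::randn({4, 4});
  at::Tensor B = at::randn({4, 2});

  // LU factorization with partial pivoting; `info` reports per-matrix errors.
  auto [LU, pivots, info] =
      at::linalg_lu_factor_ex(A, /*pivot=*/true, /*check_errors=*/false);

  // Solve A X = B by reusing the packed factorization.
  at::Tensor X = at::linalg_lu_solve(LU, pivots, B);

  // One-shot factorization into explicit P, L, U factors.
  auto [P, L, U] = at::linalg_lu(A, /*pivot=*/true);

  // Alternatively, recover explicit factors from the packed LU data.
  auto [P2, L2, U2] = at::lu_unpack(LU, pivots);

  return 0;
}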

Code context:
Tensor& self_,
Tensor& pivots_,
std::vector<int32_t>& infos_) {
#ifdef USE_ONEMKL

Contributor:
Since BatchLinearAlgebra.cpp is already wrapped by USE_ONEMKL, we can remove the USE_ONEMKL guards from the individual functions.
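
A minimal sketch of what this would look like, assuming the whole BatchLinearAlgebra.cpp translation unit is only compiled when USE_ONEMKL is set. The function name below is a placeholder built around the parameters shown in the diff context, not the actual symbol in the tree.

#include <ATen/core/Tensor.h>
#include <vector>

using at::Tensor;

// Hypothetical shape of a cleaned-up function: because the file itself is
// only built with oneMKL enabled, the per-function
// #ifdef USE_ONEMKL / #else TORCH_CHECK(...) branch can be dropped.
static void lu_factor_mkl_impl(  // placeholder name for illustration
    Tensor& self_,
    Tensor& pivots_,
    std::vector<int32_t>& infos_) {
  // ... oneMKL getrf path goes here, with no USE_ONEMKL guard ...
  (void)self_;
  (void)pivots_;
  (void)infos_;
}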

Code context:
const Tensor& pivots_,
std::vector<int32_t>& infos_,
TransposeType t) {
#ifdef USE_ONEMKL

Contributor:
Ditto: the USE_ONEMKL guard can be dropped here as well.

Code context:
}

template <>
int64_t mkl_getri_scratchpad<c10::complex<float>>(

Contributor:
The explicit template specializations introduce redundancy. We'd better refine this in another PR.
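
A possible direction for that follow-up, sketched under the assumption that the specializations only differ in the element type handed to oneMKL's scratchpad query. The MKLValueType trait is hypothetical, and the getri_scratchpad_size argument list follows the oneMKL LAPACK DPC++ interface as I understand it.

#include <complex>
#include <cstdint>
#include <sycl/sycl.hpp>
#include <oneapi/mkl.hpp>
#include <c10/util/complex.h>

// Hypothetical trait mapping ATen element types onto the types oneMKL
// expects, so one generic wrapper can replace per-dtype specializations.
template <typename scalar_t>
struct MKLValueType { using type = scalar_t; };
template <typename T>
struct MKLValueType<c10::complex<T>> { using type = std::complex<T>; };

// Single generic scratchpad query instead of one explicit specialization
// per supported dtype (argument list assumed for illustration).
template <typename scalar_t>
int64_t mkl_getri_scratchpad(sycl::queue& queue, int64_t n, int64_t lda) {
  using mkl_t = typename MKLValueType<scalar_t>::type;
  return oneapi::mkl::lapack::getri_scratchpad_size<mkl_t>(queue, n, lda);
}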

Comment on lines +1222 to +1214
// if (LU_new.has_value())
LU.copy_(LU_use);
// return std::tuple<Tensor&, Tensor&, Tensor&>(LU, pivots, info);

Contributor:
Please remove debug code.

Comment on lines 68 to 69
REGISTER_XPU_DISPATCH(lu_solve_stub, &native::xpu::lu_solve_mkl);
REGISTER_XPU_DISPATCH(lu_factor_stub, &native::xpu::lu_factor_mkl);

Contributor:
Until building with oneMKL XPU is ON by default, we still need the fallback registration.
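
A minimal sketch of the guarded registration this suggests. The *_fallback names are placeholders, not symbols that exist in the tree.

// Register the oneMKL kernels only when the build enables oneMKL XPU;
// otherwise keep the existing fallback path registered.
// lu_solve_fallback / lu_factor_fallback are hypothetical names.
#ifdef USE_ONEMKL
REGISTER_XPU_DISPATCH(lu_solve_stub, &native::xpu::lu_solve_mkl);
REGISTER_XPU_DISPATCH(lu_factor_stub, &native::xpu::lu_factor_mkl);
#else
REGISTER_XPU_DISPATCH(lu_solve_stub, &native::xpu::lu_solve_fallback);
REGISTER_XPU_DISPATCH(lu_factor_stub, &native::xpu::lu_factor_fallback);
#endif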
