
cpu: x64: brgemm, matmul: add f32:f16 configuration support on AVX2 and AVX512_CORE (fixes MFDNN-11992) #2272

Merged: 7 commits merged into main from dzarukin/f32f16_matmul on Jan 6, 2025

Conversation

dzarukin (Contributor)

MFDNN-11992

The change adds f32:f16:f32 support on AVX512_CORE and AVX2 through an up-conversion path.
It extends the brgemm kernel (thanks @dmitry-gorokhov) and the brgemm matmul copy routines to support the conversion.
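
For context, a minimal sketch of how a user would request this configuration through the standard oneDNN C++ API; the shapes, layouts, and runtime setup below are illustrative and not taken from the PR:

```cpp
#include "oneapi/dnnl/dnnl.hpp"

using namespace dnnl;

int main() {
    engine eng(engine::kind::cpu, 0);

    // f32 source and destination with f16 weights: the f32:f16:f32
    // configuration this PR enables on AVX2 and AVX512_CORE (weights are
    // up-converted to f32 on the fly in the brgemm copy routines).
    const memory::dim M = 64, K = 128, N = 256; // illustrative sizes
    memory::desc src_md({M, K}, memory::data_type::f32, memory::format_tag::ab);
    memory::desc wei_md({K, N}, memory::data_type::f16, memory::format_tag::any);
    memory::desc dst_md({M, N}, memory::data_type::f32, memory::format_tag::ab);

    matmul::primitive_desc pd(eng, src_md, wei_md, dst_md);
    matmul prim(pd);
    // ... create memory objects and execute on a stream as usual.
    return 0;
}
```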

@dzarukin requested review from a team as code owners on December 14, 2024 02:08
@github-actions bot added the platform:cpu-x64 label (Intel64/AMD64 processors; codeowner: @oneapi-src/onednn-cpu-x64) on Dec 14, 2024
@dzarukin (Contributor, author)

make test
enable benchdnn_nightly
disable benchdnn_all
enable benchdnn_conv
enable benchdnn_matmul
enable benchdnn_ip

@vpirogov (Member)

Uh-oh, a `:` in the comment body breaks the commit message checker...

@dmitry-gorokhov (Contributor)

Hey @dzarukin, thanks for upstreaming this!
I am wondering if you could include bf16 weights support as well? It should be quite similar, from my understanding.

@dzarukin force-pushed the dzarukin/f32f16_matmul branch from b9440a2 to 45cf038 on December 16, 2024 18:35
@github-actions bot added the component:tests label (codeowner: @oneapi-src/onednn-arch) on Dec 16, 2024
@dzarukin (Contributor, author)

make test
enable benchdnn_nightly
disable benchdnn_all
enable benchdnn_conv
enable benchdnn_matmul
enable benchdnn_ip

@dzarukin force-pushed the dzarukin/f32f16_matmul branch 2 times, most recently from df2d9cd to 9210383 on December 19, 2024 22:38
@dzarukin (Contributor, author)

make test
enable benchdnn_nightly
disable benchdnn_all
enable benchdnn_conv
enable benchdnn_matmul
enable benchdnn_ip

@dzarukin (Contributor, author)

> Hey @dzarukin, thanks for upstreaming this! I am wondering if you could include bf16 weights support as well? It should be quite similar, from my understanding.

Added, thanks for the reminder.
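
For reference, the new configurations can be exercised with benchdnn along these lines; this is an illustrative sketch with arbitrary problem sizes, not a command taken from this PR:

```sh
# f32 source/destination with f16 weights, then with bf16 weights
./benchdnn --matmul --engine=cpu --dt=f32:f16:f32 64x128:128x256
./benchdnn --matmul --engine=cpu --dt=f32:bf16:f32 64x128:128x256
```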

@dzarukin force-pushed the dzarukin/f32f16_matmul branch from 9210383 to 33a89d9 on December 19, 2024 23:00
Two review threads on src/cpu/x64/brgemm/jit_brgemm_kernel.cpp (outdated, resolved)
@dzarukin force-pushed the dzarukin/f32f16_matmul branch from 33a89d9 to cf0c2bd on January 2, 2025 18:33
@dzarukin (Contributor, author) commented on Jan 2, 2025

make test
enable benchdnn_nightly
disable benchdnn_all
enable benchdnn_conv
enable benchdnn_matmul
enable benchdnn_ip

@dzarukin requested review from xuxinzen and mgouicem on January 3, 2025 19:38
@dzarukin force-pushed the dzarukin/f32f16_matmul branch from cf0c2bd to f05eebb on January 6, 2025 17:39
@dzarukin merged commit c7827a8 into main on Jan 6, 2025 (17 checks passed)
@dzarukin deleted the dzarukin/f32f16_matmul branch on January 6, 2025 20:08