Please support AVX512_FP16 #2822
Comments
oneDNN uses instructions from the AVX512_FP16 ISA extension on processors with Intel AVX10.1/512 instruction set support (4th and 5th generation Intel Xeon Scalable processors and Intel Xeon 6 processors). Default numerical behavior for oneDNN functions requires …
Hello @DaiShaoJie77, the configuration of oneDNN functions may prevent the test from dispatching to the AVX512_FP16 ISA in some use cases. To diagnose the issue, could you please provide the oneDNN verbose log by setting …
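A minimal sketch of how such a verbose log is usually collected, assuming oneDNN 3.x (the workload binary name below is hypothetical; `ONEDNN_VERBOSE` is the documented runtime control):

```shell
# Enable full oneDNN verbose output: primitive creation, execution,
# and dispatch information, including which CPU ISA each kernel used.
export ONEDNN_VERBOSE=all

# Then run the workload and capture the log, e.g. (hypothetical binary):
#   ./my_conv_app 2>&1 | tee onednn_verbose.log
#
# Each "onednn_verbose" line names the implementation that was dispatched;
# on FP16-capable CPUs you would look for entries such as avx512_core_fp16.
```

If a log line shows a lower ISA than expected (e.g. `avx512_core` instead of `avx512_core_fp16`), that usually points to the data types or attributes of the primitive rather than missing hardware support.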
Hi, I don't understand what you mean. Do you mean adding an option somewhere to enable fp16? Which file, and where is it? @vpirogov
I tried setting this parameter in the environment variables, but it only printed some data types and did not use the instruction set I wanted. @shu1chen
Please send us the oneDNN verbose log so we can identify the exact issue you're experiencing. Since we haven't received the verbose log, we're unable to determine how you're using oneDNN. For example, if your input data to oneDNN is FP32 and you wish to use the AVX512_FP16 ISA, you'll need to set the fpmath_mode to …
You can find the documentation for Primitive Attributes at the link.
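A sketch of relaxing f32 math down to f16, assuming oneDNN 3.x. The environment variable `ONEDNN_DEFAULT_FPMATH_MODE` is a documented oneDNN runtime control; the per-primitive C++ equivalent is shown as a comment:

```shell
# Process-wide: allow oneDNN to implicitly down-convert f32 computations
# to f16 where the hardware supports it, without any code changes.
export ONEDNN_DEFAULT_FPMATH_MODE=F16

# Per-primitive alternative in C++ (oneDNN primitive attributes):
#   dnnl::primitive_attr attr;
#   attr.set_fpmath_mode(dnnl::fpmath_mode::f16);
#   // pass `attr` when creating the primitive descriptor
```

With either setting, an f32 convolution or matmul may dispatch to AVX512_FP16 kernels on capable CPUs; with the default strict mode, f32 inputs stay in f32 kernels, which matches the behavior reported above.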
Chips supporting AVX512_FP16 have been available for more than a year. Why does Intel's open-source compute library still not support AVX512_FP16? AVX512_FP16 is the instruction set I expect to use.