Commit f743ffe

RFC-0030: Incorporate comments
1 parent 83270da commit f743ffe

File tree: 1 file changed (+1, -1 lines changed)


RFC-0030-native-fp8-dtype.md

Lines changed: 1 addition & 1 deletion
@@ -10,7 +10,7 @@ More and more companies working on Deep Learning accelerators are experimenting
 Since the fp8 data type seems to be a natural evolution of the currently used fp16/bf16 types for reducing the computation cost of big DL models, it’s worth standardizing this type. A few attempts at this were made recently:

 * Nvidia, Arm and Intel - https://arxiv.org/pdf/2209.05433.pdf
-* GraphCore and AMD - https://arxiv.org/pdf/2206.02915.pdf
+* GraphCore, AMD and Qualcomm - https://arxiv.org/pdf/2206.02915.pdf
 * Tesla - https://tesla-cdn.thron.com/static/MXMU3S_tesla-dojo-technology_1WDVZN.pdf

 This RFC proposes adding two 8-bit floating-point data type variants to PyTorch, based on the Nvidia/Arm/Intel paper. It’s important to consider both variants, because they’re already known to be used by the Nvidia H100 and Intel Gaudi2 accelerators.
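For background (not part of this commit): the Nvidia/Arm/Intel paper cited above defines the two variants as E4M3 (1 sign, 4 exponent, 3 mantissa bits, bias 7) and E5M2 (1 sign, 5 exponent, 2 mantissa bits, bias 15). The sketch below is a hypothetical illustration of those bit layouts, not the RFC’s proposed API; `decode_fp8` is an invented helper that decodes a raw byte under either layout, ignoring each format’s special-value encodings.

```python
def decode_fp8(byte: int, exp_bits: int, man_bits: int, bias: int) -> float:
    """Decode an 8-bit pattern (sign | exponent | mantissa) to a Python float.

    Hypothetical helper for illustration; special values (NaN/Inf), which
    the two fp8 formats encode differently, are not handled here.
    """
    sign = -1.0 if (byte >> 7) & 1 else 1.0
    exp = (byte >> man_bits) & ((1 << exp_bits) - 1)
    man = byte & ((1 << man_bits) - 1)
    if exp == 0:
        # Subnormal: no implicit leading 1, fixed exponent of (1 - bias)
        return sign * man * 2.0 ** (1 - bias - man_bits)
    # Normal: implicit leading 1
    return sign * (1 + man / (1 << man_bits)) * 2.0 ** (exp - bias)

# The same 8-bit budget spent differently: E4M3 favors precision,
# E5M2 favors dynamic range.
print(decode_fp8(0b0_0111_100, exp_bits=4, man_bits=3, bias=7))   # E4M3: 1.5
print(decode_fp8(0b0_01111_10, exp_bits=5, man_bits=2, bias=15))  # E5M2: 1.5
```

Per the paper, E5M2 follows IEEE-754 conventions for infinities and NaNs, while E4M3 repurposes most special encodings to extend its range; a real implementation would need to handle those cases explicitly.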

0 commit comments