|
225 | 225 | "\n",
|
226 | 226 | "Note: In order to avoid confusions between `Mesh` and `Layout`, the term *dimension* is always associated with `Mesh`, and the term *axis* with `Tensor` and `Layout` in this guide.\n",
|
227 | 227 | "\n",
|
228 |     | - "The rank of `Layout` should be the same as the rank of the `Tensor` where the `Layout` is applied. For each of the `Tensor`'s axes the `Layout` may specifiy a mesh dimension to shard the tensor across, or specify the axis as \"unsharded\".\n",
    | 228 | + "The rank of `Layout` should be the same as the rank of the `Tensor` where the `Layout` is applied. For each of the `Tensor`'s axes the `Layout` may specify a mesh dimension to shard the tensor across, or specify the axis as \"unsharded\".\n",
229 | 229 | "The tensor is replicated across any mesh dimensions that it is not sharded across.\n",
|
230 | 230 | "\n",
|
231 | 231 | "The rank of a `Layout` and the number of dimensions of a `Mesh` do not need to match. The `unsharded` axes of a `Layout` do not need to be associated to a mesh dimension, and `unsharded` mesh dimensions do not need to be associated with a `layout` axis.\n",
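As a minimal sketch of how such a layout is expressed in code (assuming TensorFlow 2.9+ with the `tf.experimental.dtensor` API and six virtual CPU devices already configured; the mesh and variable names here are illustrative):

```python
import tensorflow as tf
from tensorflow.experimental import dtensor

# A 3x2 mesh over 6 CPU devices (assumes CPU:0..CPU:5 exist, e.g. as virtual
# devices created with tf.config.set_logical_device_configuration).
mesh = dtensor.create_mesh([("x", 3), ("y", 2)],
                           devices=[f"CPU:{i}" for i in range(6)])

# A rank-2 Layout for a rank-2 Tensor: the first tensor axis is sharded
# across mesh dimension 'x'; the second axis is unsharded, so the tensor
# is replicated across mesh dimension 'y'.
layout = dtensor.Layout(["x", dtensor.UNSHARDED], mesh)
print(layout)
```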
|
|
528 | 528 | "\n",
|
529 | 529 | "So far you've worked with the `my_first_dtensor`, which is a rank-1 DTensor fully replicated across a dim-1 `Mesh`.\n",
|
530 | 530 | "\n",
|
531 |     | - "Next create and inspect DTensors that are sharded across a dim-2 `Mesh`. The next example does this with a 3x2 `Mesh` on 6 CPU devices, where size of mesh dimension `'x'` is 3 devices, and and size of mesh dimension`'y'` is 2 devices."
    | 531 | + "Next create and inspect DTensors that are sharded across a dim-2 `Mesh`. The next example does this with a 3x2 `Mesh` on 6 CPU devices, where the size of mesh dimension `'x'` is 3 devices, and the size of mesh dimension `'y'` is 2 devices."
532 | 532 | ]
|
533 | 533 | },
|
534 | 534 | {
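A hedged sketch of that setup (same assumptions as above: TensorFlow 2.9+, `tf.experimental.dtensor`, six virtual CPU devices; `mesh_2d` and `my_dtensor` are illustrative names):

```python
import tensorflow as tf
from tensorflow.experimental import dtensor

# The 3x2 mesh: dimension 'x' spans 3 devices, dimension 'y' spans 2 devices.
mesh_2d = dtensor.create_mesh([("x", 3), ("y", 2)],
                              devices=[f"CPU:{i}" for i in range(6)])

# Shard a 6x4 DTensor along both mesh dimensions: axis 0 over 'x' (3-way)
# and axis 1 over 'y' (2-way), so each of the 6 devices holds a 2x2 component.
fully_sharded_layout = dtensor.Layout(["x", "y"], mesh_2d)
my_dtensor = dtensor.call_with_layout(tf.ones, fully_sharded_layout, shape=[6, 4])

print(dtensor.fetch_layout(my_dtensor))
for component in dtensor.unpack(my_dtensor):  # one component tensor per device
  print(component.device, component.shape)
```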
|
|
696 | 696 | "To accomplish this, for each [TensorFlow Graph](https://www.tensorflow.org/guide/intro_to_graphs), DTensor produces and executes an equivalent [SPMD](https://en.wikipedia.org/wiki/SPMD) graph in a procedure called *SPMD expansion*. A few critical steps in DTensor SPMD expansion are:\n",
|
697 | 697 | "\n",
|
698 | 698 | " - Propagating the sharding `Layout` of DTensor in the TensorFlow graph\n",
|
699 |     | - " - Rewriting TensorFlow Ops on the global DTensor with equivalent TensorFlow Ops on the componenent tensors, inserting collective and communication Ops when necessary\n",
    | 699 | + " - Rewriting TensorFlow Ops on the global DTensor with equivalent TensorFlow Ops on the component tensors, inserting collective and communication Ops when necessary\n",
700 | 700 | " - Lowering backend neutral TensorFlow Ops to backend specific TensorFlow Ops.\n",
|
701 | 701 | "\n",
|
702 | 702 | "The final result is that **DTensor is a drop-in replacement for Tensor**.\n",
|
|