Activation Checkpointing for Llama3 branch #773
+169 −117
This keeps residual3 for all layers, and everything else for only up to N layers, with relatively little complexity...
This means that if you set "-ac 16", it will only recompute 50% of activations with a 32-layer model, and 75% of activations with a 64-layer model. The number of layers in the model must be a multiple of this value. It can be combined with "-r 1" and "-r 2"; e.g. "-ac 16 -r 2" is faster than "-ac 1 -r 0" and uses less memory than "-ac 16 -r 0".
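A minimal sketch of how an "-ac N" value could map onto a per-layer keep-or-recompute decision; the helper name and the even spacing of the kept layers are illustrative assumptions, not code from this PR:

```c
#include <assert.h>
#include <stdbool.h>

// Activations other than residual3 are kept for `ac` layers and recomputed for the rest.
bool layer_keeps_activations(int layer, int num_layers, int ac) {
    assert(ac > 0 && num_layers % ac == 0);  // layer count must be a multiple of -ac
    int group_size = num_layers / ac;        // e.g. 32 layers with -ac 16 -> groups of 2
    return layer % group_size == 0;          // keep 1 layer per group, recompute the rest
}
```

With 32 layers and "-ac 16" this recomputes 16 of 32 layers (50%); with 64 layers it recomputes 48 of 64 (75%), matching the figures above.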
This is not the absolute minimum-memory strategy: with e.g. "-ac 4", we still store every residual3, even though we could store only 1 in 4. Doing so would be more complicated, though, because it also requires temporarily storing an extra 3 or 4 residuals somewhere (the ones being recomputed). More importantly, keeping every residual3 allows us to avoid recomputing one of the 2 big matmuls, so it's a very attractive performance-vs-memory trade-off.
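To illustrate which matmul that likely is (presumably the down projection, whose output only feeds the residual3 add): since residual3 is kept for every layer and a matmul's backward pass needs its inputs rather than its output, the forward recompute of a dropped layer can stop right after the activation function. The sketch below is a placeholder outline, not actual llm.c kernels:

```c
// Recompute path for a layer whose activations were dropped; kernel calls are elided as
// comments because only the control flow (where the recompute stops) matters here.
void recompute_layer_forward(int l) {
    // Input: residual3[l-1], which is stored for every layer.
    // 1. rmsnorm1
    // 2. qkv matmul + attention + attention-projection matmul
    // 3. residual add -> residual2
    // 4. rmsnorm2
    // 5. up/gate matmul (big MLP matmul #1) + swiglu
    // 6. SKIPPED: down-projection matmul (big MLP matmul #2) and the final residual add,
    //    because residual3[l] never left memory.
    (void)l;
}
```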
However, for Llama3 405B with a context length of 128K, that's 4 GiB per layer, or 504 GiB across all 126 layers, which obviously doesn't fit on a single GPU, not even on a GH200... so we will probably need that extra complexity sooner rather than later.
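As a back-of-the-envelope check of those numbers, the snippet below assumes Llama 3 405B's published shape (126 layers, hidden size 16384), a 128K-token context, batch size 1, and 2-byte (bf16) activations; the dtype and batch size are assumptions rather than something read out of this PR, but they reproduce the 4 GiB / 504 GiB figures above.

```c
#include <stdio.h>

int main(void) {
    const long long num_layers   = 126;          // Llama 3 405B
    const long long hidden_size  = 16384;
    const long long context_len  = 128 * 1024;   // 128K tokens
    const long long bytes_per_el = 2;            // bf16 (assumed)

    long long per_layer = hidden_size * context_len * bytes_per_el;  // one residual3 tensor
    long long total     = per_layer * num_layers;

    printf("residual3 per layer:  %lld GiB\n", per_layer >> 30);  // 4 GiB
    printf("residual3 all layers: %lld GiB\n", total >> 30);      // 504 GiB
    return 0;
}
```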