From 0bf13ab0fd322f50cd4a272bf12f03b1b8a3dba8 Mon Sep 17 00:00:00 2001
From: Chanchana Sornsoontorn
Date: Wed, 15 Mar 2023 05:05:05 +0700
Subject: [PATCH 1/2] Fix broken pytorch lightning doc link

---
 docs/train.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/train.md b/docs/train.md
index 1c94ea17ba..e85908100a 100644
--- a/docs/train.md
+++ b/docs/train.md
@@ -187,7 +187,7 @@ trainer.fit(model, dataloader)
 
 Thanks to our organized dataset pytorch object and the power of pytorch_lightning, the entire code is just super short.
 
-Now, you may take a look at [Pytorch Lightning Official DOC](https://pytorch-lightning.readthedocs.io/en/latest/api/pytorch_lightning.trainer.trainer.Trainer.html?highlight=trainer) to find out how to enable many useful features like gradient accumulation, multiple GPU training, accelerated dataset loading, flexible checkpoint saving, etc. All these only need about one line of code. Great!
+Now, you may take a look at [Pytorch Lightning Official DOC](https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.trainer.trainer.Trainer.html#trainer) to find out how to enable many useful features like gradient accumulation, multiple GPU training, accelerated dataset loading, flexible checkpoint saving, etc. All these only need about one line of code. Great!
 
 Note that if you find OOM, perhaps you need to enable [Low VRAM mode](low_vram.md), and perhaps you also need to use smaller batch size and gradient accumulation. Or you may also want to use some “advanced” tricks like sliced attention or xformers. For example:
 

From 7d60271a28a62939181fc6175c3cd2234a9ebc74 Mon Sep 17 00:00:00 2001
From: Chanchana Sornsoontorn
Date: Wed, 15 Mar 2023 23:57:46 +0700
Subject: [PATCH 2/2] damn these people changing the link every day; someone is having too much free time

---
 docs/train.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/train.md b/docs/train.md
index e85908100a..fa773925e2 100644
--- a/docs/train.md
+++ b/docs/train.md
@@ -187,7 +187,7 @@ trainer.fit(model, dataloader)
 
 Thanks to our organized dataset pytorch object and the power of pytorch_lightning, the entire code is just super short.
 
-Now, you may take a look at [Pytorch Lightning Official DOC](https://pytorch-lightning.readthedocs.io/en/stable/api/pytorch_lightning.trainer.trainer.Trainer.html#trainer) to find out how to enable many useful features like gradient accumulation, multiple GPU training, accelerated dataset loading, flexible checkpoint saving, etc. All these only need about one line of code. Great!
+Now, you may take a look at [Pytorch Lightning Official DOC](https://lightning.ai/docs/pytorch/stable/api/lightning.pytorch.trainer.trainer.Trainer.html#trainer) to find out how to enable many useful features like gradient accumulation, multiple GPU training, accelerated dataset loading, flexible checkpoint saving, etc. All these only need about one line of code. Great!
 
 Note that if you find OOM, perhaps you need to enable [Low VRAM mode](low_vram.md), and perhaps you also need to use smaller batch size and gradient accumulation. Or you may also want to use some “advanced” tricks like sliced attention or xformers. For example:
 
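For context on the doc text these patches touch: the linked Trainer page is where the "about one line of code" claim is cashed out. A minimal sketch, not part of the patch itself, assuming a pytorch_lightning release from around the time of these commits (constructor argument names have shifted across versions), with `model` and `dataloader` standing in for the objects built earlier in docs/train.md:

```python
# Minimal sketch (not part of the patch): each feature named in the linked
# Trainer doc maps to roughly one constructor argument.
import pytorch_lightning as pl

trainer = pl.Trainer(
    accumulate_grad_batches=4,         # gradient accumulation
    accelerator="gpu", devices=2,      # multiple GPU training
    precision=16,                      # mixed precision, helps against OOM
    default_root_dir="./checkpoints",  # where checkpoints are saved
)
# trainer.fit(model, dataloader)  # model/dataloader come from earlier in train.md
```

Per the OOM note in the hunk, `accumulate_grad_batches` plus a smaller DataLoader batch size is the usual first step before reaching for sliced attention or xformers.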