Commit 8e028f4

HuanyuZhang authored and facebook-github-bot committed
Improve documentation of Github and Website (#723)
Summary: Pull Request resolved: #723

Improve the documentation of the GitHub repo and the Opacus website, specifically:

1. Added a "Latest updates" section to the GitHub `README`.
2. Updated outdated documentation and highlighted new features such as Ghost Clipping.
3. Fixed the website's API reference, which was missing some newly added files (e.g., `fast_gradient_clipping_utils`).

Reviewed By: iden-kalemaj

Differential Revision: D68637848

fbshipit-source-id: d8a46d88f13e68e858787dc0ff983adcb4cac39c
1 parent c7d6144 commit 8e028f4

11 files changed: +49, -25 lines

README.md

+9, -9
@@ -23,6 +23,13 @@ This code release is aimed at two target audiences:
 2. Differential Privacy researchers will find this easy to experiment and tinker
 with, allowing them to focus on what matters.
 
+
+## Latest updates
+
+2024-12-18: We updated this [tutorial](https://github.com/pytorch/opacus/blob/main/tutorials/building_text_classifier.ipynb) to show how [LoRA](https://arxiv.org/abs/2106.09685) and the [peft](https://huggingface.co/docs/peft/en/index) library can be used in conjunction with DP-SGD.
+
+2024-08-20: We introduced [Fast Gradient Clipping](https://arxiv.org/abs/2009.03106) and [Ghost Clipping](https://arxiv.org/abs/2110.05679) to Opacus, significantly reducing the memory requirements of DP-SGD. Please refer to our [blogpost](https://pytorch.org/blog/clipping-in-opacus/) for more information.
+
 ## Installation
 
 The latest release of Opacus can be installed via `pip`:
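
A minimal, hypothetical sketch of how the Ghost Clipping feature announced above is enabled, adapted from the linked blogpost; the `grad_sample_mode="ghost"` argument and the four-value return of `make_private` follow that post and may differ across Opacus versions:

```python
# Hypothetical sketch (not part of this commit): Ghost Clipping in Opacus,
# following the grad_sample_mode="ghost" flow from the linked blogpost.
# Exact argument names and return values may differ across Opacus versions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset
from opacus import PrivacyEngine

model = nn.Sequential(nn.Linear(16, 8), nn.ReLU(), nn.Linear(8, 2))
optimizer = torch.optim.SGD(model.parameters(), lr=0.05)
criterion = nn.CrossEntropyLoss()
data_loader = DataLoader(
    TensorDataset(torch.randn(64, 16), torch.randint(0, 2, (64,))),
    batch_size=8,
)

privacy_engine = PrivacyEngine()
# With grad_sample_mode="ghost", make_private also wraps and returns the criterion.
model, optimizer, criterion, data_loader = privacy_engine.make_private(
    module=model,
    optimizer=optimizer,
    data_loader=data_loader,
    criterion=criterion,
    noise_multiplier=1.0,
    max_grad_norm=1.0,
    grad_sample_mode="ghost",
)

for x, y in data_loader:  # the training loop itself is plain PyTorch
    optimizer.zero_grad()
    loss = criterion(model(x), y)
    loss.backward()
    optimizer.step()
```

Apart from the wrapped `criterion`, the training loop stays plain PyTorch; per-sample gradients are never materialized, which is where the memory savings come from.
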
@@ -76,23 +83,16 @@ shows an end-to-end run using Opacus. The
 [examples](https://github.com/pytorch/opacus/tree/main/examples/) folder
 contains more such examples.
 
-### Migrating to 1.0
-
-Opacus 1.0 introduced many improvements to the library, but also some breaking
-changes. If you've been using Opacus 0.x and want to update to the latest
-release, please use this
-[Migration Guide](https://github.com/pytorch/opacus/blob/main/Migration_Guide.md)
-
 ## Learn more
 
 ### Interactive tutorials
 
 We've built a series of IPython-based tutorials as a gentle introduction to
 training models with privacy and using various Opacus features.
 
+- [Building text classifier with Differential Privacy on BERT](https://github.com/pytorch/opacus/blob/main/tutorials/building_text_classifier.ipynb)
 - [Building an Image Classifier with Differential Privacy](https://github.com/pytorch/opacus/blob/main/tutorials/building_image_classifier.ipynb)
 - [Training a differentially private LSTM model for name classification](https://github.com/pytorch/opacus/blob/main/tutorials/building_lstm_name_classifier.ipynb)
-- [Building text classifier with Differential Privacy on BERT](https://github.com/pytorch/opacus/blob/main/tutorials/building_text_classifier.ipynb)
 - [Opacus Guide: Introduction to advanced features](https://github.com/pytorch/opacus/blob/main/tutorials/intro_to_advanced_features.ipynb)
 - [Opacus Guide: Grad samplers](https://github.com/pytorch/opacus/blob/main/tutorials/guide_to_grad_sampler.ipynb)
 - [Opacus Guide: Module Validator and Fixer](https://github.com/pytorch/opacus/blob/main/tutorials/guide_to_module_validator.ipynb)
@@ -119,12 +119,12 @@ Consider citing the report if you use Opacus in your papers, as follows:
 If you want to learn more about DP-SGD and related topics, check out our series
 of blogposts and talks:
 
+- [Enabling Fast Gradient Clipping and Ghost Clipping in Opacus](https://pytorch.org/blog/clipping-in-opacus/)
 - [Differential Privacy Series Part 1 | DP-SGD Algorithm Explained](https://medium.com/pytorch/differential-privacy-series-part-1-dp-sgd-algorithm-explained-12512c3959a3)
 - [Differential Privacy Series Part 2 | Efficient Per-Sample Gradient Computation in Opacus](https://medium.com/pytorch/differential-privacy-series-part-2-efficient-per-sample-gradient-computation-in-opacus-5bf4031d9e22)
 - [PriCon 2020 Tutorial: Differentially Private Model Training with Opacus](https://www.youtube.com/watch?v=MWPwofiQMdE&list=PLUNOsx6Az_ZGKQd_p4StdZRFQkCBwnaY6&index=52)
 - [Differential Privacy on PyTorch | PyTorch Developer Day 2020](https://www.youtube.com/watch?v=l6fbl2CBnq0)
 - [Opacus v1.0 Highlights | PyTorch Developer Day 2021](https://www.youtube.com/watch?v=U1mszp8lzUI)
-- [Enabling Fast Gradient Clipping and Ghost Clipping in Opacus](https://pytorch.org/blog/clipping-in-opacus/)
 
 ## FAQ
 
docs/faq.md

+3, -3
@@ -13,8 +13,8 @@ Yes! Opacus is open-source for public use, and it is licensed under the [Apache
 
 ## How can I report a bug or ask a question?
 
-You can report bugs by submitting GitHub issues. To submit a GitHub issue, please [click here](https://github.com/pytorch/opacus/issues).
-You can ask questions in our dedicated PyTorch [Discussion Forum](https://discuss.pytorch.org/c/opacus/29). We actively monitor questions in the PyTorch forums with the category `Opacus`.
+You can report bugs or ask questions by submitting GitHub issues. To submit a GitHub issue, please [click here](https://github.com/pytorch/opacus/issues).
+<!-- You can ask questions in our dedicated PyTorch [Discussion Forum](https://discuss.pytorch.org/c/opacus/29). We actively monitor questions in the PyTorch forums with the category `Opacus`. -->
 
 ## I'd like to contribute to Opacus. How can I do that?
 
@@ -76,7 +76,7 @@ If these interventions don’t help (or the model starts to converge but its pri
 
 ## How to deal with out-of-memory errors?
 
-Dealing with per-sample gradients will inevitably put more pressure on your memory: after all, if you want to train with batch size 64, you are looking to keep 64 copies of your parameter gradients. The first sanity check to do is to make sure that you don’t go out of memory with "standard" training (without DP). That should guarantee that you can train with batch size of 1 at least. Then, you can check your memory usage with e.g. `nvidia-smi` as usual, gradually increasing the batch size until you find your sweet spot. Note that this may mean that you still train with small batch size, which comes with its own training behavior (i.e. higher variance between batches). Training with larger batch sizes can be beneficial, and we built `virtual_step` to make this possible while still memory efficient (see *what is virtual batch size* in these FAQs).
+Dealing with per-sample gradients will inevitably put more pressure on your memory: after all, if you want to train with batch size 64, you are looking to keep 64 copies of your parameter gradients. The first sanity check to do is to make sure that you don’t go out of memory with "standard" training (without DP). That should guarantee that you can train with batch size of 1 at least. Then, you can check your memory usage with e.g. `nvidia-smi` as usual, gradually increasing the batch size until you find your sweet spot. Note that this may mean that you still train with small batch size, which comes with its own training behavior (i.e. higher variance between batches). Training with larger batch sizes can be beneficial. To this end, we built [Fast Gradient Clipping](https://pytorch.org/blog/clipping-in-opacus/) and `virtual_step` (see *what is virtual batch size* in these FAQs) to make DP-SGD memory efficient.
 
 ## What does epsilon=1.1 really mean? How about delta?
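
A hedged sketch of the virtual-batch idea referenced in the out-of-memory answer above. The FAQ text mentions `virtual_step`; in recent Opacus releases the same idea is exposed through `BatchMemoryManager`, assumed here together with a `model`, `optimizer`, `criterion`, and `data_loader` that were already wrapped by `PrivacyEngine.make_private`:

```python
# Hypothetical sketch (not part of this commit) of the "virtual batch" idea from
# the out-of-memory FAQ above. Recent Opacus releases expose it via
# BatchMemoryManager; model, optimizer, criterion, and data_loader are assumed
# to have been wrapped by PrivacyEngine.make_private() already.
from opacus.utils.batch_memory_manager import BatchMemoryManager

with BatchMemoryManager(
    data_loader=data_loader,      # built with the large, logical batch size
    max_physical_batch_size=16,   # what actually fits in GPU memory
    optimizer=optimizer,
) as memory_safe_loader:
    for x, y in memory_safe_loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()          # takes a real step only once a full logical batch is done
```

The privacy accounting still sees the large logical batch size the `data_loader` was built with; only the per-step memory footprint is capped by `max_physical_batch_size`.
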

tutorials/README.md

+1, -1
@@ -1,5 +1,5 @@
 # Tutorials
-This folder contains multiple tutorials to get you started on training differentially private models!
+This folder contains multiple tutorials to get you started on training differentially private models! We recommend "building_text_classifier.ipynb" to experiment with the latest Opacus features such as Fast Gradient Clipping, LoRA, and fine-tuning Hugging Face Transformers.
 
 Note that you may not have all the required packages. You can install opacus's dev version, which will
 bring in all the required packages in these tutorials:
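
The recommended `building_text_classifier.ipynb` combines LoRA adapters (via peft) with DP-SGD. A compressed, hypothetical sketch of that flow; the model name, `target_modules`, and the toy data loader below are stand-in assumptions, so see the notebook for the tested end-to-end version:

```python
# Hypothetical, compressed sketch (not part of this commit) of the flow in
# building_text_classifier.ipynb: LoRA adapters via peft + DP-SGD via Opacus.
# Model name, target_modules, and the toy data below are assumptions.
import torch
from torch.utils.data import DataLoader, TensorDataset
from transformers import AutoModelForSequenceClassification
from peft import LoraConfig, get_peft_model
from opacus import PrivacyEngine

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-cased", num_labels=2
)
lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.0,
    target_modules=["query", "value"],  # assumed attention projections for BERT
)
model = get_peft_model(model, lora_config)  # only the LoRA adapters stay trainable

# Toy stand-in for a tokenized dataset: (input_ids, labels) pairs.
train_loader = DataLoader(
    TensorDataset(torch.randint(0, 1000, (32, 64)), torch.randint(0, 2, (32,))),
    batch_size=8,
)

optimizer = torch.optim.AdamW(
    [p for p in model.parameters() if p.requires_grad], lr=3e-4
)
privacy_engine = PrivacyEngine()
model, optimizer, train_loader = privacy_engine.make_private_with_epsilon(
    module=model,
    optimizer=optimizer,
    data_loader=train_loader,
    target_epsilon=8.0,
    target_delta=1e-5,
    epochs=3,
    max_grad_norm=1.0,
)
```

Because only the LoRA adapter parameters remain trainable, Opacus only needs per-sample gradients for those parameters, which keeps the DP-SGD overhead small.
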

website/pages/tutorials/index.js

+12, -11
@@ -20,7 +20,9 @@ const React = require('react');
 
 const CWD = process.cwd();
 
-const CompLibrary = require(`${CWD}/node_modules/docusaurus/lib/core/CompLibrary.js`);
+const CompLibrary = require(
+  `${CWD}/node_modules/docusaurus/lib/core/CompLibrary.js`,
+);
 const Container = CompLibrary.Container;
 const MarkdownBlock = CompLibrary.MarkdownBlock;
 
@@ -69,7 +71,8 @@ class TutorialHome extends React.Component {
             <a
               href="https://bit.ly/per-sample-gradient-computing-opacus-layers"
               target="_blank">
-              Efficient Per-Sample Gradient Computation for More Layers in Opacus
+              Efficient Per-Sample Gradient Computation for More Layers in
+              Opacus
             </a>
           </li>
           <li>
@@ -81,13 +84,18 @@ class TutorialHome extends React.Component {
           </li>
         </ol>
         <h4>Videos*</h4>
-        <p>* Note that Opacus API has changed over time and some of the code samples and demos in the videos may not work. The concepts presented in the videos though are concrete and still valid.</p>
+        <p>
+          * Note that Opacus API has changed over time and some of the code
+          samples and demos in the videos may not work. The concepts
+          presented in the videos though are concrete and still valid.
+        </p>
         <ol>
           <li>
             <a
               href="https://www.youtube.com/watch?v=U1mszp8lzUI"
               target="_blank">
-              PyTorch Developer Day 2021: Fast and Flexible Differential Privacy Framework for PyTorch
+              PyTorch Developer Day 2021: Fast and Flexible Differential
+              Privacy Framework for PyTorch
             </a>
           </li>
           <li>
@@ -114,13 +122,6 @@ class TutorialHome extends React.Component {
              Differentially Private Deep Learning In 20 Lines Of Code
            </a>
          </li>
-          <li>
-            <a
-              href="https://blog.openmined.org/pysyft-opacus-federated-learning-with-differential-privacy/"
-              target="_blank">
-              PySyft + Opacus: Federated Learning With Differential Privacy
-            </a>
-          </li>
        </ol>
      </div>
    </Container>

website/sphinx/source/grad_sample_module_fast_gradient_clipping.rst

@@ -0,0 +1,5 @@
+GradSampleModuleFastGradientClipping
+====================================
+
+.. automodule:: opacus.grad_sample.grad_sample_module_fast_gradient_clipping
+    :members:

website/sphinx/source/index.rst

+1
@@ -13,6 +13,7 @@ Opacus API Reference
 
    privacy_engine
    grad_sample_module
+   grad_sample_module_fast_gradient_clipping
    optim/optimizers
    data_loader
    accounting/accounting

website/sphinx/source/optim/dp_ddp_optimizer_fast_gradient_clipping.rst

@@ -0,0 +1,5 @@
+DistributedDPOptimizerFastGradientClipping
+==========================================
+
+.. automodule:: opacus.optimizers.ddpoptimizer_fast_gradient_clipping
+    :members:

website/sphinx/source/optim/dp_optimizer_fast_gradient_clipping.rst

@@ -0,0 +1,5 @@
+DPOptimizerFastGradientClipping
+===============================
+
+.. automodule:: opacus.optimizers.optimizer_fast_gradient_clipping
+    :members:

website/sphinx/source/optim/optimizers.rst

+2, -1

@@ -3,7 +3,8 @@ Optimizers
 .. toctree::
 
    dp_optimizer
+   dp_optimizer_fast_gradient_clipping
    dp_per_layer_optimizer
    dp_ddp_optimizer
+   dp_ddp_optimizer_fast_gradient_clipping
    dp_ddp_per_layer_optimizer
-

website/sphinx/source/utils/fast_gradient_clipping_utils.rst

@@ -0,0 +1,5 @@
+Fast Gradient Clipping Utils
+============================
+
+.. automodule:: opacus.utils.fast_gradient_clipping_utils
+    :members:

website/sphinx/source/utils/utils.rst

+1
@@ -6,3 +6,4 @@ Utils
    tensor_utils
    packed_sequences
    uniform_sampler
+   fast_gradient_clipping_utils
