
Commit b0074a4

Authored by rohitgr7, SkafteNicki, Borda, carmocca, and tchaton
Update auto-opt docs (Lightning-AI#6037)
* fix docs
* update on comments
* Apply suggestions from code review (Co-authored-by: Nicki Skafte <[email protected]>)
* Apply suggestions from code review (Co-authored-by: Nicki Skafte <[email protected]>)
* Apply suggestions from code review (Co-authored-by: Carlos Mocholí <[email protected]>)
* rm comment
* Update docs/source/common/lightning_module.rst (Co-authored-by: chaton <[email protected]>)

Co-authored-by: Nicki Skafte <[email protected]>
Co-authored-by: Jirka Borovec <[email protected]>
Co-authored-by: Carlos Mocholí <[email protected]>
Co-authored-by: chaton <[email protected]>
1 parent c46c23a commit b0074a4

5 files changed (+108, -68 lines)

README.md

Lines changed: 24 additions & 20 deletions
@@ -72,7 +72,7 @@ Lightning is rigurously tested across multiple GPUs, TPUs CPUs and against major

<details>
<summary>Current build statuses</summary>
-
+
<center>

| System / PyTorch ver. | 1.4 (min. req.)* | 1.5 | 1.6 | 1.7 (latest) | 1.8 (nightly) |
@@ -93,9 +93,9 @@ Lightning is rigurously tested across multiple GPUs, TPUs CPUs and against major

<details>
<summary>Bleeding edge build status (1.2)</summary>
-
+
<center>
-
+
![CI base testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20base%20testing/badge.svg?branch=release%2F1.2-dev&event=push)
![CI complete testing](https://github.com/PyTorchLightning/pytorch-lightning/workflows/CI%20complete%20testing/badge.svg?branch=release%2F1.2-dev&event=push)
![PyTorch & Conda](https://github.com/PyTorchLightning/pytorch-lightning/workflows/PyTorch%20&%20Conda/badge.svg?branch=release%2F1.2-dev&event=push)
@@ -121,13 +121,13 @@ pip install pytorch-lightning
<!-- following section will be skipped from PyPI description -->

#### Install with optional dependencies
-
+
```bash
pip install pytorch-lightning['extra']
```
-
+
#### Conda
-
+
```bash
conda install pytorch-lightning -c conda-forge
```
@@ -229,7 +229,7 @@ Here are some examples:

<details>
<summary>Highlighted feature code snippets</summary>
-
+
```python
# 8 GPUs
# no code changes needed
@@ -240,66 +240,66 @@ Here are some examples:
```

<summary>Train on TPUs without code changes</summary>
-
+
```python
# no code changes needed
trainer = Trainer(tpu_cores=8)
```

<summary>16-bit precision</summary>
-
+
```python
# no code changes needed
trainer = Trainer(precision=16)
```

<summary>Experiment managers</summary>
-
+
```python
from pytorch_lightning import loggers
-
+
# tensorboard
trainer = Trainer(logger=TensorBoardLogger('logs/'))
-
+
# weights and biases
trainer = Trainer(logger=loggers.WandbLogger())
-
+
# comet
trainer = Trainer(logger=loggers.CometLogger())
-
+
# mlflow
trainer = Trainer(logger=loggers.MLFlowLogger())
-
+
# neptune
trainer = Trainer(logger=loggers.NeptuneLogger())
-
+
# ... and dozens more
```

<summary>EarlyStopping</summary>
-
+
```python
es = EarlyStopping(monitor='val_loss')
trainer = Trainer(callbacks=[es])
```

<summary>Checkpointing</summary>
-
+
```python
checkpointing = ModelCheckpoint(monitor='val_loss')
trainer = Trainer(callbacks=[checkpointing])
```

<summary>Export to torchscript (JIT) (production use)</summary>
-
+
```python
# torchscript
autoencoder = LitAutoEncoder()
torch.jit.save(autoencoder.to_torchscript(), "model.pt")
```

<summary>Export to ONNX (production use)</summary>
-
+
```python
# onnx
with tempfile.NamedTemporaryFile(suffix='.onnx', delete=False) as tmpfile:
@@ -315,6 +315,10 @@ For complex/professional level work, you have optional full control of the train

```python
class LitAutoEncoder(pl.LightningModule):
+    def __init__(self):
+        super().__init__()
+        self.automatic_optimization = False
+
    def training_step(self, batch, batch_idx, optimizer_idx):
        # access your optimizers with use_pl_optimizer=False. Default is True
        (opt_a, opt_b) = self.optimizers(use_pl_optimizer=True)
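For reference, a fuller version of the manual-optimization snippet above might look like the sketch below. The two linear sub-networks, the placeholder losses, and the learning rates are illustrative assumptions, not part of this commit; the Lightning pieces (`self.automatic_optimization`, `self.optimizers()`, `self.manual_backward()`, `configure_optimizers`) are the ones documented in this diff.

```python
import torch
import torch.nn as nn
import pytorch_lightning as pl


class LitManualOptAutoEncoder(pl.LightningModule):
    def __init__(self):
        super().__init__()
        # opt out of automatic optimization, as in the README change above
        self.automatic_optimization = False
        self.encoder = nn.Linear(64, 3)   # placeholder sub-network for opt_a
        self.decoder = nn.Linear(3, 64)   # placeholder sub-network for opt_b

    def training_step(self, batch, batch_idx, optimizer_idx):
        # access your optimizers with use_pl_optimizer=False. Default is True
        (opt_a, opt_b) = self.optimizers(use_pl_optimizer=True)

        # placeholder losses, just to make the sketch concrete
        z = self.encoder(batch)
        loss_a = z.pow(2).mean()
        opt_a.zero_grad()
        self.manual_backward(loss_a, opt_a)
        opt_a.step()

        loss_b = nn.functional.mse_loss(self.decoder(z.detach()), batch)
        opt_b.zero_grad()
        self.manual_backward(loss_b, opt_b)
        opt_b.step()

    def configure_optimizers(self):
        # one optimizer per sub-network, matching the (opt_a, opt_b) unpacking above
        opt_a = torch.optim.Adam(self.encoder.parameters(), lr=1e-3)
        opt_b = torch.optim.Adam(self.decoder.parameters(), lr=1e-3)
        return opt_a, opt_b
```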

docs/source/common/lightning_module.rst

Lines changed: 78 additions & 2 deletions
@@ -841,7 +841,7 @@ The current step (does not reset each epoch)

hparams
~~~~~~~
-After calling `save_hyperparameters` anything passed to init() is available via hparams.
+After calling ``save_hyperparameters`` anything passed to ``__init__()`` is available via the ``hparams`` attribute.

.. code-block:: python

@@ -932,9 +932,85 @@ True if using TPUs

--------------

+automatic_optimization
+~~~~~~~~~~~~~~~~~~~~~~
+When set to ``False``, Lightning does not automate the optimization process. This means you are responsible for handling your optimizers. However, we do take care of precision and any accelerators used.
+
+.. code-block:: python
+
+    def __init__(self):
+        self.automatic_optimization = False
+
+    def training_step(self, batch, batch_idx):
+        opt = self.optimizers(use_pl_optimizer=True)
+
+        loss = ...
+        self.manual_backward(loss, opt)
+        opt.step()
+        opt.zero_grad()
+
+This is recommended only if using 2+ optimizers AND if you know how to perform the optimization procedure properly. Note that automatic optimization can still be used with multiple optimizers by relying on the ``optimizer_idx`` parameter. Manual optimization is most useful for research topics like reinforcement learning, sparse coding, and GAN research.
+
+In the multi-optimizer case, ignore the ``optimizer_idx`` argument and use the optimizers directly
+
+.. code-block:: python
+
+    def __init__(self):
+        self.automatic_optimization = False
+
+    def training_step(self, batch, batch_idx, optimizer_idx):
+        # access your optimizers with use_pl_optimizer=False. Default is True
+        (opt_a, opt_b) = self.optimizers(use_pl_optimizer=True)
+
+        gen_loss = ...
+        opt_a.zero_grad()
+        self.manual_backward(gen_loss, opt_a)
+        opt_a.step()
+
+        disc_loss = ...
+        opt_b.zero_grad()
+        self.manual_backward(disc_loss, opt_b)
+        opt_b.step()
+
+--------------
+
+example_input_array
+~~~~~~~~~~~~~~~~~~~
+Set and access example_input_array which is basically a single batch.
+
+.. code-block:: python
+
+    def __init__(self):
+        self.example_input_array = ...
+        self.generator = ...
+
+    def on_train_epoch_end(...):
+        # generate some images using the example_input_array
+        gen_images = self.generator(self.example_input_array)
+
+--------------
+
+datamodule
+~~~~~~~~~~
+Set or access your datamodule.
+
+.. code-block:: python
+
+    def configure_optimizers(self):
+        num_training_samples = len(self.datamodule.train_dataloader())
+        ...
+
+--------------
+
+model_size
+~~~~~~~~~~
+Get the model file size (in megabytes) using ``self.model_size`` inside LightningModule.
+
+--------------
+
Hooks
^^^^^
-This is the pseudocode to describe how all the hooks are called during a call to `.fit()`
+This is the pseudocode to describe how all the hooks are called during a call to ``.fit()``.

.. code-block:: python
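Putting the newly documented attributes together, a module might look like the sketch below. The layer sizes, the loss, and the print call are assumptions made purely for illustration; `example_input_array`, `self.datamodule`, and `self.model_size` are the attributes documented in the section added above, and `self.datamodule` is only populated when a datamodule is passed to `trainer.fit(...)`.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
import pytorch_lightning as pl


class LitExample(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.net = nn.Linear(64, 4)
        # a single example batch; also enables export without an explicit input sample
        self.example_input_array = torch.randn(1, 64)

    def forward(self, x):
        return self.net(x)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return F.mse_loss(self(x), y)

    def configure_optimizers(self):
        # self.datamodule is reachable here when a datamodule is passed to trainer.fit(...)
        num_training_batches = len(self.datamodule.train_dataloader())
        # self.model_size reports the model file size in megabytes (per the docs above)
        print(f"batches per epoch: {num_training_batches}, model size: {self.model_size} MB")
        return torch.optim.Adam(self.parameters(), lr=1e-3)
```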

docs/source/common/production_inference.rst

Lines changed: 2 additions & 2 deletions
@@ -9,7 +9,7 @@ Exporting to ONNX
-----------------
PyTorch Lightning provides a handy function to quickly export your model to ONNX format, which allows the model to be independent of PyTorch and run on an ONNX Runtime.

-To export your model to ONNX format call the `to_onnx` function on your Lightning Module with the filepath and input_sample.
+To export your model to ONNX format call the ``to_onnx`` function on your Lightning Module with the filepath and input_sample.

.. code-block:: python

@@ -18,7 +18,7 @@ To export your model to ONNX format call the `to_onnx` function on your Lightnin
    input_sample = torch.randn((1, 64))
    model.to_onnx(filepath, input_sample, export_params=True)

-You can also skip passing the input sample if the `example_input_array` property is specified in your LightningModule.
+You can also skip passing the input sample if the ``example_input_array`` property is specified in your LightningModule.

Once you have the exported model, you can run it on your ONNX runtime in the following way:
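To illustrate the `example_input_array` path mentioned in this change, here is a rough end-to-end sketch: export without an explicit `input_sample`, then load the file with ONNX Runtime. The tiny autoencoder, the file name, and the input shape are assumptions for illustration; `to_onnx`, `export_params`, and `example_input_array` are the documented pieces.

```python
import numpy as np
import onnxruntime
import torch
import torch.nn as nn
import pytorch_lightning as pl


class LitAutoEncoder(pl.LightningModule):
    # minimal stand-in model, just enough to export
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(64, 3)
        self.decoder = nn.Linear(3, 64)
        # with example_input_array set, to_onnx() needs no input_sample argument
        self.example_input_array = torch.randn(1, 64)

    def forward(self, x):
        return self.decoder(self.encoder(x))


model = LitAutoEncoder()
model.to_onnx("model.onnx", export_params=True)

# run the exported model on ONNX Runtime
ort_session = onnxruntime.InferenceSession("model.onnx")
input_name = ort_session.get_inputs()[0].name
outputs = ort_session.run(None, {input_name: np.random.randn(1, 64).astype(np.float32)})
```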

docs/source/common/trainer.rst

Lines changed: 0 additions & 37 deletions
@@ -330,43 +330,6 @@ Example::
    # default used by the Trainer
    trainer = Trainer(amp_level='O2')

-automatic_optimization
-^^^^^^^^^^^^^^^^^^^^^^
-When set to False, Lightning does not automate the optimization process. This means you are responsible for your own
-optimizer behavior
-
-Example::
-
-    def training_step(self, batch, batch_idx):
-        # access your optimizers with use_pl_optimizer=False. Default is True
-        opt = self.optimizers(use_pl_optimizer=True)
-
-        loss = ...
-        self.manual_backward(loss, opt)
-        opt.step()
-        opt.zero_grad()
-
-This is not recommended when using a single optimizer, instead it's recommended when using 2+ optimizers
-AND you are an expert user. Most useful for research like RL, sparse coding and GAN research.
-
-In the multi-optimizer case, ignore the optimizer_idx flag and use the optimizers directly
-
-Example::
-
-    def training_step(self, batch, batch_idx, optimizer_idx):
-        # access your optimizers with use_pl_optimizer=False. Default is True
-        (opt_a, opt_b) = self.optimizers(use_pl_optimizer=True)
-
-        gen_loss = ...
-        self.manual_backward(gen_loss, opt_a)
-        opt_a.step()
-        opt_a.zero_grad()
-
-        disc_loss = ...
-        self.manual_backward(disc_loss, opt_b)
-        opt_b.step()
-        opt_b.zero_grad()
-
auto_scale_batch_size
^^^^^^^^^^^^^^^^^^^^^


docs/source/starter/new-project.rst

Lines changed: 4 additions & 7 deletions
@@ -258,16 +258,13 @@ Manual optimization
However, for certain research like GANs, reinforcement learning, or something with multiple optimizers
or an inner loop, you can turn off automatic optimization and fully control the training loop yourself.

-First, turn off automatic optimization:
-
-.. testcode::
-
-    trainer = Trainer(automatic_optimization=False)
-
-Now you own the train loop!
+Turn off automatic optimization and you control the train loop!

.. code-block:: python

+    def __init__(self):
+        self.automatic_optimization = False
+
    def training_step(self, batch, batch_idx, optimizer_idx):
        # access your optimizers with use_pl_optimizer=False. Default is True
        (opt_a, opt_b, opt_c) = self.optimizers(use_pl_optimizer=True)
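For context, the `(opt_a, opt_b, opt_c)` unpacking in the snippet above implies that `configure_optimizers` returns three optimizers. A sketch of that pairing follows; the three placeholder sub-networks and learning rates are assumptions, and only the Lightning calls come from the docs in this commit.

```python
import torch
import torch.nn as nn
import pytorch_lightning as pl


class LitThreeOptimizers(pl.LightningModule):
    def __init__(self):
        super().__init__()
        self.automatic_optimization = False
        # placeholder sub-networks, one per optimizer
        self.net_a = nn.Linear(16, 16)
        self.net_b = nn.Linear(16, 16)
        self.net_c = nn.Linear(16, 1)

    def training_step(self, batch, batch_idx, optimizer_idx):
        # access your optimizers with use_pl_optimizer=False. Default is True
        (opt_a, opt_b, opt_c) = self.optimizers(use_pl_optimizer=True)
        # ... step each optimizer manually with self.manual_backward(...), opt.step(), opt.zero_grad()

    def configure_optimizers(self):
        # one optimizer per parameter group; returning several makes
        # self.optimizers() hand back the matching tuple
        opt_a = torch.optim.Adam(self.net_a.parameters(), lr=1e-3)
        opt_b = torch.optim.Adam(self.net_b.parameters(), lr=1e-3)
        opt_c = torch.optim.SGD(self.net_c.parameters(), lr=1e-2)
        return opt_a, opt_b, opt_c
```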
