
Commit dc7e230

Commit message: args
1 parent 295a70e commit dc7e230

4 files changed: +0, −7 lines

docs/source/common/optimizers.rst

Lines changed: 0 additions & 2 deletions
@@ -300,8 +300,6 @@ override the :meth:`optimizer_step` function.
 
 For example, here step optimizer A every 2 batches and optimizer B every 4 batches
 
-.. note:: When using Trainer(enable_pl_optimizer=True), there is no need to call `.zero_grad()`.
-
 .. testcode::
 
     def optimizer_zero_grad(self, current_epoch, batch_idx, optimizer, opt_idx):
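
For context, the documentation block trimmed here describes stepping optimizer A every 2 batches and optimizer B every 4. A minimal sketch of what such LightningModule overrides could look like, assuming the hook signatures used by Lightning around this release (the sketch is illustrative and not part of the commit):

    # Illustrative overrides, not part of this commit.
    # Step optimizer A (index 0) every 2 batches and optimizer B (index 1) every 4.
    def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                       optimizer_closure, on_tpu, using_native_amp, using_lbfgs):
        if optimizer_idx == 0 and batch_idx % 2 == 0:
            optimizer.step(closure=optimizer_closure)
        if optimizer_idx == 1 and batch_idx % 4 == 0:
            optimizer.step(closure=optimizer_closure)

    # Matching zero_grad hook, using the signature shown in the hunk above.
    def optimizer_zero_grad(self, current_epoch, batch_idx, optimizer, opt_idx):
        optimizer.zero_grad()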

pytorch_lightning/core/lightning.py

Lines changed: 0 additions & 3 deletions
@@ -1324,9 +1324,6 @@ def optimizer_step(
 By default, Lightning calls ``step()`` and ``zero_grad()`` as shown in the example
 once per optimizer.
 
-.. tip:: With ``Trainer(enable_pl_optimizer=True)``, you can use ``optimizer.step()`` directly
-    and it will handle zero_grad, accumulated gradients, AMP, TPU and more automatically for you.
-
 Warning:
     If you are overriding this method, make sure that you pass the ``optimizer_closure`` parameter
     to ``optimizer.step()`` function as shown in the examples. This ensures that
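
The Warning retained in this docstring is about forwarding ``optimizer_closure`` when overriding the hook. A minimal pass-through override might look like the following sketch (signature assumed for this version of Lightning; not part of the commit):

    # Illustrative override that keeps the default behaviour: forwarding the
    # closure ensures the training step and backward pass still run.
    def optimizer_step(self, epoch, batch_idx, optimizer, optimizer_idx,
                       optimizer_closure, on_tpu, using_native_amp, using_lbfgs):
        optimizer.step(closure=optimizer_closure)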

tests/plugins/test_rpc_sequential_plugin.py

Lines changed: 0 additions & 1 deletion
@@ -42,7 +42,6 @@ def test_rpc_sequential_plugin_manual(tmpdir, args=None):
         gpus=2,
         distributed_backend="ddp",
         plugins=[RPCSequentialPlugin(balance=[2, 1], rpc_timeout_sec=5 * 60)],
-        enable_pl_optimizer=True,
     )
 
     trainer.fit(model)

tests/utilities/test_all_gather_grad.py

Lines changed: 0 additions & 1 deletion
@@ -89,7 +89,6 @@ def training_epoch_end(self, outputs) -> None:
         max_epochs=1,
         log_every_n_steps=1,
         accumulate_grad_batches=2,
-        enable_pl_optimizer=True,
         gpus=2,
         accelerator="ddp",
     )
