Commit f9d0cd8

Merge branch 'master' into fix-slurm-rank
2 parents e49bd32 + b7a22ba


43 files changed (+1348 / -226 lines)

CHANGELOG.md

Lines changed: 26 additions & 2 deletions
@@ -13,9 +13,15 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Added more explicit exception message when trying to execute `trainer.test()` or `trainer.validate()` with `fast_dev_run=True` ([#6667](https://github.com/PyTorchLightning/pytorch-lightning/pull/6667))
 
 
+- Added `LightningCLI` class to provide simple reproducibility with minimum boilerplate training CLI ([#4492](https://github.com/PyTorchLightning/pytorch-lightning/pull/4492))
+
+
 - Trigger warning when non-metric logged value with multi processes hasn't been reduced ([#6417](https://github.com/PyTorchLightning/pytorch-lightning/pull/6417))
 
 
+- Added `gradient_clip_algorithm` argument to Trainer for gradient clipping by value ([#6123](https://github.com/PyTorchLightning/pytorch-lightning/pull/6123))
+
+
 - Added a way to print to terminal without breaking up the progress bar ([#5470](https://github.com/PyTorchLightning/pytorch-lightning/pull/5470))
 
 
@@ -173,6 +179,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Set better defaults for `rank_zero_only.rank` when training is launched with SLURM and torchelastic ([#6802](https://github.com/PyTorchLightning/pytorch-lightning/pull/6802/))
 
 
+- Sanitize `None` params during pruning ([#6836](https://github.com/PyTorchLightning/pytorch-lightning/pull/6836))
+
+
 - Made the `Plugin.reduce` method more consistent across all Plugins to reflect a mean-reduction by default ([#6011](https://github.com/PyTorchLightning/pytorch-lightning/pull/6011))
 
 
@@ -199,6 +208,22 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 - Fixed torch distributed not available in setup hook for DDP ([#6506](https://github.com/PyTorchLightning/pytorch-lightning/pull/6506))
 
+
+- Fixed TPU Colab hang issue, post training ([#6816](https://github.com/PyTorchLightning/pytorch-lightning/pull/6816))
+
+
+- Enforce an epoch scheduler interval when using SWA ([#6588](https://github.com/PyTorchLightning/pytorch-lightning/pull/6588))
+
+
+- Fixed an issue with `IterableDataset` when `__len__` is not defined ([#6828](https://github.com/PyTorchLightning/pytorch-lightning/pull/6828))
+
+
+- Fixed `EarlyStopping` logic when `min_epochs` or `min_steps` requirement is not met ([#6705](https://github.com/PyTorchLightning/pytorch-lightning/pull/6705))
+
+
+- Fixed a bug where `TensorBoardLogger` would give a warning and not log correctly to a symbolic link `save_dir` ([#6730](https://github.com/PyTorchLightning/pytorch-lightning/pull/6730))
+
+
 ## [1.2.6] - 2021-03-30
 
 ### Changed
@@ -231,10 +256,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Fixed comparing required versions ([#6434](https://github.com/PyTorchLightning/pytorch-lightning/pull/6434))
 - Fixed duplicate logs appearing in console when using the python logging module ([#6275](https://github.com/PyTorchLightning/pytorch-lightning/pull/6275))
 - Added Autocast in validation, test and predict modes for Native AMP ([#6565](https://github.com/PyTorchLightning/pytorch-lightning/pull/6565))
-
-
 - Fixed a bug with omegaconf and xm.save ([#6741](https://github.com/PyTorchLightning/pytorch-lightning/pull/6741))
 
+
 ## [1.2.4] - 2021-03-16
 
 ### Changed
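
The `LightningCLI` entry above ([#4492]) wraps a `LightningModule` in a command-line interface with minimal boilerplate. A minimal sketch of the intended usage, assuming the class is importable from `pytorch_lightning.utilities.cli` as registered under the Utilities API change below; `MyModel` is a hypothetical module standing in for your own:

    # cli.py -- hypothetical entry point; MyModel is a placeholder LightningModule
    from pytorch_lightning.utilities.cli import LightningCLI
    from my_project.models import MyModel  # assumption: your own code

    # Instantiating LightningCLI builds an argument parser from the Trainer
    # and model constructor signatures, parses the command line, and then
    # runs trainer.fit(model).
    cli = LightningCLI(MyModel)

Trainer and model arguments are then exposed as dotted flags, e.g. `python cli.py --trainer.max_epochs=10`.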

docs/source/advanced/tpu.rst

Lines changed: 1 addition & 2 deletions
@@ -64,8 +64,7 @@ To get a TPU on colab, follow these steps:
 
 .. code-block::
 
-    !curl https://raw.githubusercontent.com/pytorch/xla/master/contrib/scripts/env-setup.py -o pytorch-xla-env-setup.py
-    !python pytorch-xla-env-setup.py --version 1.7 --apt-packages libomp5 libopenblas-dev
+    !pip install cloud-tpu-client==0.10 https://storage.googleapis.com/tpu-pytorch/wheels/torch_xla-1.8-cp37-cp37m-linux_x86_64.whl
 
 5. Once the above is done, install PyTorch Lightning (v 0.7.0+).
 
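
After the `!pip install` line above has run on a Colab TPU runtime, a quick sanity check before installing Lightning is to request the XLA device; this is a sketch assuming the torch_xla 1.8 wheel installed cleanly:

    import torch
    import torch_xla.core.xla_model as xm

    # Ask for the default XLA (TPU) device; this raises if no TPU runtime
    # is attached to the notebook.
    device = xm.xla_device()
    print(device)  # e.g. "xla:1"

    # Move a small tensor onto the TPU to confirm the device is usable.
    t = torch.ones(2, 2, device=device)
    print(t.sum())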

docs/source/advanced/training_tricks.rst

Lines changed: 8 additions & 2 deletions
@@ -26,8 +26,10 @@ The effect is a large effective batch size of size KxN.
 
 Gradient Clipping
 -----------------
-Gradient clipping may be enabled to avoid exploding gradients. Specifically, this will `clip the gradient
-norm <https://pytorch.org/docs/stable/nn.html#torch.nn.utils.clip_grad_norm_>`_ computed over all model parameters together.
+Gradient clipping may be enabled to avoid exploding gradients. By default, this will `clip the gradient norm
+<https://pytorch.org/docs/stable/nn.html#torch.nn.utils.clip_grad_norm_>`_ computed over all model parameters together.
+If the ``gradient_clip_algorithm`` option (``norm`` by default) is set to ``value``, this will
+`clip the gradient value <https://pytorch.org/docs/stable/nn.html#torch.nn.utils.clip_grad_value_>`_ for each parameter instead.
 
 .. seealso:: :class:`~pytorch_lightning.trainer.trainer.Trainer`
 
@@ -39,6 +41,10 @@ norm <https://pytorch.org/docs/stable/nn.html#torch.nn.utils.clip_grad_norm_>`_
     # clip gradients with norm above 0.5
     trainer = Trainer(gradient_clip_val=0.5)
 
+    # clip gradients with value above 0.5
+    # gradient_clip_algorithm types => :class:`~pytorch_lightning.utilities.enums.GradClipAlgorithmType`
+    trainer = Trainer(gradient_clip_val=0.5, gradient_clip_algorithm='value')
+
 ----------
 
 Stochastic Weight Averaging
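
For intuition about the two algorithms in the docs change above: ``norm`` rescales the whole gradient vector when its global L2 norm exceeds the threshold, while ``value`` clamps each gradient element independently. A standalone sketch using the ``torch.nn.utils`` functions that the docs link to (the toy ``nn.Linear`` model is illustrative only, and in practice you would apply one of the two, not both):

    import torch
    import torch.nn as nn

    model = nn.Linear(4, 2)  # toy model for illustration
    loss = model(torch.randn(8, 4)).sum()
    loss.backward()

    # gradient_clip_algorithm='norm' (default): scale all gradients together
    # so their combined L2 norm is at most 0.5; the overall direction is kept.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=0.5)

    # gradient_clip_algorithm='value': clamp every gradient element into
    # [-0.5, 0.5] independently; large components are capped one by one.
    torch.nn.utils.clip_grad_value_(model.parameters(), clip_value=0.5)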

docs/source/api_references.rst

Lines changed: 1 addition & 0 deletions
@@ -93,5 +93,6 @@ Utilities API
     :toctree: api
     :nosignatures:
 
+    cli
     argparse_utils
     seed
