
Commit 2a837ac

kaushikb11 authored and Borda committed
update lightning version to v1.2.2
* remove unnecessary import
* Update CHANGELOG
* resolve a bug
* remove print
* resolve bug
* fix pep8 issues
1 parent e9517df commit 2a837ac

File tree

5 files changed: +4, -24 lines


CHANGELOG.md

Lines changed: 0 additions & 21 deletions
@@ -9,41 +9,20 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 
 ### Added
 
-
 - Added `checkpoint` parameter to callback's `on_save_checkpoint` hook ([#6072](https://github.com/PyTorchLightning/pytorch-lightning/pull/6072))
 
-
 ### Changed
 
 - Changed the order of `backward`, `step`, `zero_grad` to `zero_grad`, `backward`, `step` ([#6147](https://github.com/PyTorchLightning/pytorch-lightning/pull/6147))
-
-
 - Changed default for DeepSpeed CPU Offload to False, due to prohibitively slow speeds at smaller scale ([#6262](https://github.com/PyTorchLightning/pytorch-lightning/pull/6262))
 
-
-### Deprecated
-
-
-### Removed
-
-
 ### Fixed
 
 - Fixed epoch level schedulers not being called when `val_check_interval < 1.0` ([#6075](https://github.com/PyTorchLightning/pytorch-lightning/pull/6075))
-
-
 - Fixed multiple early stopping callbacks ([#6197](https://github.com/PyTorchLightning/pytorch-lightning/pull/6197))
-
-
 - Fixed incorrect usage of `detach()`, `cpu()`, `to()` ([#6216](https://github.com/PyTorchLightning/pytorch-lightning/pull/6216))
-
-
 - Fixed LBFGS optimizer support which didn't converge in automatic optimization ([#6147](https://github.com/PyTorchLightning/pytorch-lightning/pull/6147))
-
-
 - Prevent `WandbLogger` from dropping values ([#5931](https://github.com/PyTorchLightning/pytorch-lightning/pull/5931))
-
-
 - Fixed error thrown when using valid distributed mode in multi node ([#6297](https://github.com/PyTorchLightning/pytorch-lightning/pull/6297)
 
 
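Of the entries above, the reordering of the optimizer calls (#6147) is the most behavior-visible one: automatic optimization now clears gradients before the backward pass instead of after the step. A minimal sketch of the new ordering in plain PyTorch terms (the model, optimizer, and data below are illustrative only, not taken from this commit):

import torch
from torch import nn

# Toy model and optimizer, used only to show the call order.
model = nn.Linear(32, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)

for batch in [torch.rand(4, 32) for _ in range(3)]:
    loss = model(batch).sum()
    optimizer.zero_grad()  # 1. clear stale gradients first
    loss.backward()        # 2. then backpropagate
    optimizer.step()       # 3. then apply the parameter update

Lightning's automatic optimization issues these calls itself; the sketch only illustrates the relative ordering the changelog entry describes.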

pytorch_lightning/__init__.py

Lines changed: 1 addition & 1 deletion
@@ -5,7 +5,7 @@
 import time
 
 _this_year = time.strftime("%Y")
-__version__ = '1.2.1'
+__version__ = '1.2.2'
 __author__ = 'William Falcon et al.'
 __author_email__ = '[email protected]'
 __license__ = 'Apache-2.0'

pytorch_lightning/trainer/training_loop.py

Lines changed: 2 additions & 1 deletion
@@ -514,6 +514,7 @@ def run_training_epoch(self):
 # VALIDATE IF NEEDED + CHECKPOINT CALLBACK
 # -----------------------------------------
 should_check_val = self.should_check_val_fx(batch_idx, is_last_batch)
+
 if should_check_val:
     self.trainer.run_evaluation()
     val_loop_called = True
@@ -577,7 +578,7 @@ def run_training_epoch(self):
 self.trainer.run_evaluation(on_epoch=True)
 
 # reset stage to train
-self.trainer._running_stage = RunningStage.TRAINING
+self.trainer._set_running_stage(RunningStage.TRAINING, self.trainer.lightning_module)
 
 # increment the global step once
 # progress global step according to grads progress
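The second hunk replaces a bare attribute assignment with `_set_running_stage(...)`, passing the trainer's `lightning_module` along, and the test change further down also sets `running_stage` on the model itself. A plausible reading is that the setter keeps the trainer's stage and the module's stage in sync. A hypothetical, simplified sketch of that idea, not the library's actual implementation (the class bodies and enum values below are illustrative stand-ins):

from enum import Enum


class RunningStage(Enum):
    # Stand-in for pytorch_lightning's RunningStage enum.
    TRAINING = "train"
    EVALUATING = "eval"


class Trainer:
    # Toy trainer, used only to illustrate the idea of a stage setter.

    def __init__(self, lightning_module=None):
        self._running_stage = None
        self.lightning_module = lightning_module

    def _set_running_stage(self, stage, model_ref):
        # Assumed behavior: record the stage on the trainer and mirror it onto
        # the module so both sides agree on the current phase.
        self._running_stage = stage
        if model_ref is not None:
            model_ref.running_stage = stage

Under that reading, the test change further down that assigns `model.running_stage` directly is reproducing by hand what such a setter would do on a real trainer.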

tests/accelerators/test_accelerator_connector.py

Lines changed: 0 additions & 1 deletion
@@ -35,7 +35,6 @@
 )
 from pytorch_lightning.plugins.environments import ClusterEnvironment, SLURMEnvironment, TorchElasticEnvironment
 from pytorch_lightning.utilities import _DEEPSPEED_AVAILABLE
-from pytorch_lightning.utilities.exceptions import MisconfigurationException
 from tests.helpers.boring_model import BoringModel
 
 

tests/overrides/test_data_parallel.py

Lines changed: 1 addition & 0 deletions
@@ -147,6 +147,7 @@ def training_step(self, batch, batch_idx):
 model = TestModel().to(device)
 model.trainer = MagicMock()
 model.trainer._running_stage = RunningStage.TRAINING
+model.running_stage = RunningStage.TRAINING
 batch = torch.rand(2, 32).to(device)
 batch_idx = 0
 
