Commit c954fbc

SeanNaren authored and lexierule committed

v1.2.4 & chnagelog

Remove import
Fix runif import
Increment version
Update CHANGELOG.md

1 parent 6bb24c2 · commit c954fbc

4 files changed (+16, −28 lines)

CHANGELOG.md

Lines changed: 11 additions & 24 deletions

@@ -51,9 +51,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Changed `setup()` and `teardown()` stage argument to take any of `{fit,validate,test,predict}` ([#6386](https://github.com/PyTorchLightning/pytorch-lightning/pull/6386))
 
 
-- Changed the default of `find_unused_parameters` back to `True` in DDP and DDP Spawn ([#6438](https://github.com/PyTorchLightning/pytorch-lightning/pull/6438))
-
-
 ### Deprecated
 
 - `period` has been deprecated in favor of `every_n_val_epochs` in the `ModelCheckpoint` callback ([#6146](https://github.com/PyTorchLightning/pytorch-lightning/pull/6146))
@@ -110,43 +107,36 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Fixed `ModelCheckpoint(save_top_k=0, save_last=True)` not saving the `last` checkpoint ([#6136](https://github.com/PyTorchLightning/pytorch-lightning/pull/6136))
 
 
-- Expose DeepSpeed loss parameters to allow users to fix loss instability ([#6115](https://github.com/PyTorchLightning/pytorch-lightning/pull/6115))
-
-
 - Fixed duplicate logs appearing in console when using the python logging module ([#5509](https://github.com/PyTorchLightning/pytorch-lightning/pull/5509), [#6275](https://github.com/PyTorchLightning/pytorch-lightning/pull/6275))
 
 
-- Fixed DP reduction with collection ([#6324](https://github.com/PyTorchLightning/pytorch-lightning/pull/6324))
-
-
 - Fixed `.teardown(stage='fit')` getting called during `trainer.test` ([#6386](https://github.com/PyTorchLightning/pytorch-lightning/pull/6386))
 
 
 - Fixed `.on_fit_{start,end}()` getting called during `trainer.test` ([#6386](https://github.com/PyTorchLightning/pytorch-lightning/pull/6386))
 
 
-- Fixed an issue where the tuner would not tune the learning rate if also tuning the batch size ([#4688](https://github.com/PyTorchLightning/pytorch-lightning/pull/4688))
+- Fixed LightningModule `all_gather` on cpu tensors ([#6416](https://github.com/PyTorchLightning/pytorch-lightning/pull/6416))
 
 
-- Fixed broacast to use PyTorch `broadcast_object_list` and add `reduce_decision` ([#6410](https://github.com/PyTorchLightning/pytorch-lightning/pull/6410))
+## [1.2.4] - 2021-03-16
 
+### Changed
 
-- Fixed logger creating directory structure too early in DDP ([#6380](https://github.com/PyTorchLightning/pytorch-lightning/pull/6380))
+- Changed the default of `find_unused_parameters` back to `True` in DDP and DDP Spawn ([#6438](https://github.com/PyTorchLightning/pytorch-lightning/pull/6438))
 
+### Fixed
 
+- Expose DeepSpeed loss parameters to allow users to fix loss instability ([#6115](https://github.com/PyTorchLightning/pytorch-lightning/pull/6115))
+- Fixed DP reduction with collection ([#6324](https://github.com/PyTorchLightning/pytorch-lightning/pull/6324))
+- Fixed an issue where the tuner would not tune the learning rate if also tuning the batch size ([#4688](https://github.com/PyTorchLightning/pytorch-lightning/pull/4688))
+- Fixed broadcast to use PyTorch `broadcast_object_list` and add `reduce_decision` ([#6410](https://github.com/PyTorchLightning/pytorch-lightning/pull/6410))
+- Fixed logger creating directory structure too early in DDP ([#6380](https://github.com/PyTorchLightning/pytorch-lightning/pull/6380))
 - Fixed DeepSpeed additional memory use on rank 0 when default device not set early enough ([#6460](https://github.com/PyTorchLightning/pytorch-lightning/pull/6460))
-
-
-- Fixed LightningModule `all_gather` on cpu tensors ([#6416](https://github.com/PyTorchLightning/pytorch-lightning/pull/6416))
-
-
 - Fixed `DummyLogger.log_hyperparams` raising a `TypeError` when running with `fast_dev_run=True` ([#6398](https://github.com/PyTorchLightning/pytorch-lightning/pull/6398))
-
-
 - Fixed an issue with `Tuner.scale_batch_size` not finding the batch size attribute in the datamodule ([#5968](https://github.com/PyTorchLightning/pytorch-lightning/pull/5968))
-
-
 - Fixed an exception in the layer summary when the model contains torch.jit scripted submodules ([#6511](https://github.com/PyTorchLightning/pytorch-lightning/pull/6511))
+- Fixed when Train loop config was run during `Trainer.predict` ([#6541](https://github.com/PyTorchLightning/pytorch-lightning/pull/6541))
 
 
 ## [1.2.3] - 2021-03-09
@@ -166,9 +156,6 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Fixed `Trainer` not resetting `lightning_optimizers` when calling `Trainer.fit()` multiple times ([#6372](https://github.com/PyTorchLightning/pytorch-lightning/pull/6372))
 
 
-- Fixed `DummyLogger.log_hyperparams` raising a `TypeError` when running with `fast_dev_run=True` ([#6398](https://github.com/PyTorchLightning/pytorch-lightning/pull/6398))
-
-
 ## [1.2.2] - 2021-03-02
 
 ### Added
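One of the context lines above records the deprecation of `period` in favor of `every_n_val_epochs` in the `ModelCheckpoint` callback (#6146). A minimal sketch of what that rename means for user-side keyword arguments; the helper `translate_checkpoint_kwargs` is hypothetical and not part of the library:

```python
def translate_checkpoint_kwargs(kwargs: dict) -> dict:
    """Hypothetical helper: map the deprecated ``period`` argument to its
    replacement ``every_n_val_epochs``, mirroring the changelog entry.
    An explicitly supplied ``every_n_val_epochs`` wins over ``period``."""
    out = dict(kwargs)
    if "period" in out and "every_n_val_epochs" not in out:
        out["every_n_val_epochs"] = out.pop("period")
    return out
```

For example, `translate_checkpoint_kwargs({"period": 3, "save_top_k": 2})` maps `period` to `every_n_val_epochs` while leaving other arguments untouched.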

pytorch_lightning/__init__.py

Lines changed: 1 addition & 1 deletion

@@ -5,7 +5,7 @@
 import time
 
 _this_year = time.strftime("%Y")
-__version__ = '1.2.3'
+__version__ = '1.2.4'
 __author__ = 'William Falcon et al.'
 __author_email__ = '[email protected]'
 __license__ = 'Apache-2.0'
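The diff above bumps `__version__` from '1.2.3' to '1.2.4'. A quick sketch of why this compares as a patch-level increment; the `parse_version` helper is illustrative and not taken from the codebase:

```python
def parse_version(version: str) -> tuple:
    """Split a dotted version string like '1.2.4' into an integer tuple
    so releases compare numerically (illustrative helper)."""
    return tuple(int(part) for part in version.split("."))

# The bump in the diff keeps major.minor at (1, 2) and raises only the patch.
old, new = parse_version("1.2.3"), parse_version("1.2.4")
```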

pytorch_lightning/trainer/connectors/logger_connector/epoch_result_store.py

Lines changed: 0 additions & 1 deletion

@@ -13,7 +13,6 @@
 # limitations under the License.
 from collections import defaultdict
 from typing import Any, Dict, List, Optional, Tuple
-from weakref import proxy
 
 import torch

tests/checkpointing/test_checkpoint_callback_frequency.py

Lines changed: 4 additions & 2 deletions

@@ -19,7 +19,6 @@
 
 from pytorch_lightning import callbacks, seed_everything, Trainer
 from tests.helpers import BoringModel
-from tests.helpers.runif import RunIf
 
 
 @mock.patch.dict(os.environ, {"PL_DEV_DEBUG": "1"})
@@ -102,7 +101,10 @@ def training_step(self, batch, batch_idx):
 
 
 @mock.patch('torch.save')
-@RunIf(special=True, min_gpus=2)
+@pytest.mark.skipif(torch.cuda.device_count() < 2, reason="test requires multi-GPU machine")
+@pytest.mark.skipif(
+    not os.getenv("PL_RUNNING_SPECIAL_TESTS", '0') == '1', reason="test should be run outside of pytest"
+)
 @pytest.mark.parametrize(['k', 'epochs', 'val_check_interval', 'expected'], [(1, 1, 1.0, 1), (2, 2, 0.3, 5)])
 def test_top_k_ddp(save_mock, tmpdir, k, epochs, val_check_interval, expected):
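The hunk above replaces the removed `RunIf(special=True, min_gpus=2)` marker with two explicit `pytest.mark.skipif` conditions. The combined gate can be sketched as a plain predicate; the function name `should_run_special_test` is hypothetical:

```python
def should_run_special_test(gpu_count: int, env: dict) -> bool:
    """Mirror the two skipif conditions from the diff: the test executes
    only when at least two GPUs are visible AND the special-tests flag
    PL_RUNNING_SPECIAL_TESTS=1 is exported (illustrative predicate)."""
    enough_gpus = gpu_count >= 2
    special_run = env.get("PL_RUNNING_SPECIAL_TESTS", "0") == "1"
    return enough_gpus and special_run
```

Either condition failing skips the test, which is why the diff stacks two independent `skipif` markers rather than one combined expression.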
