Releases: Lightning-AI/pytorch-lightning

Memory fixes inbound!

[0.6.1] - 2022-09-19

Added

  • Added support for uploading files to the Drive through an asynchronous upload_file endpoint (#14703)
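
A rough sketch of the user-facing side of this change: a LightningWork that writes a file and uploads it through a Drive. The drive name and the file contents are hypothetical, and the asynchronous upload_file endpoint is used under the hood rather than called directly.

```python
from lightning_app import LightningWork
from lightning_app.storage import Drive

class Uploader(LightningWork):
    def run(self):
        drive = Drive("lit://results")        # "results" is a made-up drive name
        with open("metrics.csv", "w") as f:   # made-up artifact
            f.write("step,loss\n0,1.0\n")
        drive.put("metrics.csv")              # upload goes through the Drive API
```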

Changed

  • Application storage prefix moved from app_id to project_id/app_id (#14583)
  • LightningCloud client calls to use keyword arguments instead of positional arguments (#14685)

Fixed

  • Made the thread pool non-default in the LightningCloud client (#14757)
  • Resolved a bug where state change detection using DeepDiff would not work with Path and Drive objects (#14465)
  • Resolved a bug where the wrong client was passed to collect cloud logs (#14684)
  • Resolved the memory leak issue with the Lightning Cloud package and bumped the requirements to use the latest version (#14697)
  • Fixed the 5000-line limit on Lightning AI BYOC cluster logs (#14458)
  • Fixed a bug where the uploaded command file wasn't properly parsed (#14532)
  • Resolved an issue with LightningApp(..., debug=True) (#14464); see the sketch below
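
A minimal sketch of the flag in question, assuming a trivial flow body:

```python
from lightning_app import LightningApp, LightningFlow

class RootFlow(LightningFlow):
    def run(self):
        print("flow iteration")  # placeholder work

# debug=True turns on verbose logging for the app
app = LightningApp(RootFlow(), debug=True)
```

Running this with lightning run app app.py should now emit the extra debug logs.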

Contributors

@dmitsf @hhsecond @tchaton @nohalon @krshrimali @pritamsoni-hsr @nmiculinic @ethanwharris @yurijmikhalevich @Felonious-Spellfire @otaj @Borda

If we forgot someone because a commit email didn't match a GitHub account, let us know :)

PyTorch Lightning 1.7.6: Standard patch release

[1.7.6] - 2022-09-13

Changed

  • Improved the error message when calling Trainer.method(model, x_dataloader=None) and no module-method implementation is available (#14614)

Fixed

  • Reset the dataloaders on an OOM failure in the batch size finder so that the last successful batch size is used (#14372)
  • Fixed the batch size finder to keep downscaling the batch size when not even a single size has succeeded with mode="power" (#14372); see the sketch after this list
  • Fixed an issue where logging a tensor through self.log would trigger a PyTorch user warning about cloning tensors (#14599)
  • Fixed compatibility when torch.distributed is not available (#14454)
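
A minimal sketch of the batch size finder these fixes touch, assuming hypothetical BoringModel and BoringDataModule classes; the finder requires one of them to expose a batch_size attribute it can overwrite.

```python
from pytorch_lightning import Trainer
from my_project import BoringModel, BoringDataModule  # hypothetical imports

model = BoringModel()
dm = BoringDataModule()

# mode="power" keeps doubling the batch size until an OOM is hit, then falls
# back to the last size that succeeded (the behavior restored by the fixes above).
trainer = Trainer(auto_scale_batch_size="power", max_epochs=1)
trainer.tune(model, datamodule=dm)
```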

Contributors

@akihironitta @awaelchli @Borda @carmocca @dependabot @krshrimali @mauvilsa @pierocor @rohitgr7 @wangraying

If we forgot someone because a commit email didn't match a GitHub account, let us know :)

BYOC cluster management

[0.6.0] - 2022-09-08

Added

  • Introduced the lightning connect command (#14452)
  • Added PanelFrontend to easily create complex UIs in Python (#13531)
  • Added support for Lightning App commands through the configure_commands hook on LightningFlow and ClientCommand (#13602); see the sketch after this list
  • Added support for Lightning AI BYOC cluster management (#13835)
  • Added support for viewing Lightning AI BYOC cluster logs (#14334)
  • Added support for running Lightning apps on Lightning AI BYOC clusters (#13894)
  • Added support for listing Lightning AI apps (#13987)
  • Added LightningTrainingComponent, which orchestrates multi-node training in the cloud (#13830)
  • Added support for printing application logs from the CLI with lightning show logs <app_name> [components] (#13634)
  • Added support for the Lightning API through the configure_api hook on LightningFlow and the Post, Get, Delete, and Put HttpMethods (#13945)
  • Added a warning when configure_layout returns URLs configured with HTTP instead of HTTPS (#14233)
  • Added --app_args support to the CLI (#13625)
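
A minimal sketch of the configure_commands hook named above; the command name, its handler, and the state field are all hypothetical.

```python
from lightning_app import LightningFlow

class RootFlow(LightningFlow):
    def __init__(self):
        super().__init__()
        self.last_name = ""

    def greet(self, name: str):
        # runs flow-side when the CLI command is invoked
        self.last_name = name

    def run(self):
        pass

    def configure_commands(self):
        # maps the CLI command name to the method that handles it
        return [{"greet": self.greet}]
```

Once connected with lightning connect, the command should be reachable from the CLI as lightning greet --name=<value>.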

Changed

  • Changed default values and parameter names for Lightning AI BYOC cluster management (#14132)
  • Run the flow only if the state has changed from the previous execution (#14076)
  • Increased DeepDiff's verbose level to properly handle dict changes (#13960)
  • Setup: added requirement freeze for the next major version (#14480)

Fixed

  • Unified the app template: moved app.py to the root directory for the lightning init app <app_name> template (#13853)
  • Fixed an issue with the lightning --version command (#14433)
  • Fixed imports of collections.abc for Python 3.10 (#14345)

Contributors

@adam-lightning, @awaelchli, @Borda, @dmitsf, @manskx, @MarcSkovMadsen, @nicolai86, @tchaton

If we forgot someone because a commit email didn't match a GitHub account, let us know :]

PyTorch Lightning 1.7.5: Standard patch release

[1.7.5] - 2022-09-06

Fixed

  • Squeezed tensor values when logging with LightningModule.log (#14489); see the sketch after this list
  • Fixed WandbLogger's save_dir not being set after creation (#14326)
  • Fixed Trainer.estimated_stepping_batches when the maximum number of epochs is not set (#14317)
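
A minimal sketch of the logging call affected by the squeezing fix; the module body is hypothetical and omits everything except training_step.

```python
import torch
from pytorch_lightning import LightningModule

class LitModel(LightningModule):
    def training_step(self, batch, batch_idx):
        loss = torch.rand(1, requires_grad=True)  # shape [1], not a scalar
        self.log("train_loss", loss)              # squeezed to a scalar when logged
        return loss
```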

Contributors

@carmocca @dependabot @robertomest @rohitgr7 @tshu-w

If we forgot someone because a commit email didn't match a GitHub account, let us know :)

PyTorch Lightning 1.7.4: Standard patch release

[1.7.4] - 2022-08-31

Added

  • Added an environment variable PL_DISABLE_FORK that can be used to disable all forking in the Trainer (#14319)
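
A minimal sketch of the new variable, assuming "1" is the toggle value; it must be set before the Trainer creates any processes.

```python
import os
os.environ["PL_DISABLE_FORK"] = "1"  # assumed toggle value; disables forking

from pytorch_lightning import Trainer
trainer = Trainer(accelerator="cpu", max_epochs=1)
```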

Fixed

  • Fixed LightningDataModule hparams parsing (#12806)
  • Reset the epoch progress with the batch size scaler (#13846)
  • Fixed restoring the trainer after using lr_find() so that the correct LR schedule is used for the actual training (#14113); see the sketch after this list
  • Fixed incorrect values after transferring data to an MPS device (#14368)
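
A minimal sketch of the lr_find() flow this fix touches, assuming a hypothetical BoringModel whose configure_optimizers reads a learning_rate attribute.

```python
from pytorch_lightning import Trainer
from my_project import BoringModel  # hypothetical import

model = BoringModel()
trainer = Trainer(max_epochs=3)

lr_finder = trainer.tuner.lr_find(model)      # runs the LR range test
model.learning_rate = lr_finder.suggestion()  # adopt the suggested LR
trainer.fit(model)                            # trainer state is restored first (the fix above)
```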

Contributors

@rohitgr7 @tanmoyio @justusschock @cschell @carmocca @Callidior @awaelchli @j0rd1smit @dependabot @Borda @otaj

PyTorch Lightning 1.7.3: Standard patch release

[1.7.3] - 2022-08-25

Fixed

  • Fixed an assertion error when using a ReduceLROnPlateau scheduler with the Horovod strategy (#14215)
  • Fixed an AttributeError when accessing LightningModule.logger and the Trainer has multiple loggers (#14234)
  • Fixed wrong number padding for RichProgressBar (#14296)
  • Added back support for logging in the configure_gradient_clipping hook after its unintended removal in v1.7.2 (#14298); see the sketch after this list
  • Fixed an issue so the sanity check no longer affects reload_dataloaders_every_n_epochs for validation (#13964)
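
A minimal sketch of logging from the re-enabled hook; the clip value and the logged marker are arbitrary.

```python
from pytorch_lightning import LightningModule

class LitModel(LightningModule):
    def configure_gradient_clipping(
        self, optimizer, optimizer_idx, gradient_clip_val=None, gradient_clip_algorithm=None
    ):
        # clip as usual, then log a marker; logging here works again in 1.7.3
        self.clip_gradients(optimizer, gradient_clip_val=1.0)
        self.log("grad_clipped", 1.0)
```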

Contributors

@awaelchli @Borda @carmocca @dependabot @kaushikb11 @otaj @rohitgr7

Dependency hotfix

[0.5.7] - 2022-08-22

Changed

  • Released the LAI docs as stable (#14250)
  • Ensured compatibility with Python 3.10

Fixed

  • Pinned starsessions to 1.x (#14333)
  • Fixed parsing of local package versions (#13933)

Contributors

@Borda, @hhsecond, @manskx

If we forgot someone because a commit email didn't match a GitHub account, let us know :]

Minor patch release

[0.5.6] - 2022-08-16

Fixed

  • Resolved a bug where the install command was not installing the latest version of an app/component by default (#14181)

Contributors

@manskx

If we forgot someone because a commit email didn't match a GitHub account, let us know :]

PyTorch Lightning 1.7.2: Standard patch release

[1.7.2] - 2022-08-17

Added

  • Added FullyShardedNativeNativeMixedPrecisionPlugin to handle precision for DDPFullyShardedNativeStrategy (#14092)
  • Added profiling to these hooks: on_before_batch_transfer, transfer_batch_to_device, on_after_batch_transfer, configure_gradient_clipping, clip_gradients (#14069)

Changed

  • Updated compatibility for LightningLite to run with the latest DeepSpeed 0.7.0 (#13967)
  • Raised a MisconfigurationException if batch transfer hooks are overridden with IPUAccelerator (#13961)
  • The default project name in WandbLogger is now "lightning_logs" (#14145)
  • The WandbLogger.name property no longer returns the name of the experiment, and instead returns the project's name (#14145)
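
A minimal sketch of the two WandbLogger behavior changes above (requires the wandb package):

```python
from pytorch_lightning.loggers import WandbLogger

logger = WandbLogger()  # project now defaults to "lightning_logs"
print(logger.name)      # returns the project's name, no longer the experiment name
```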

Fixed

  • Fixed a bug that caused spurious AttributeError when multiple DataLoader classes are imported (#14117)
  • Fixed epoch-end logging results not being reset after the end of the epoch (#14061)
  • Fixed saving hyperparameters in a composition where the parent class is not a LightningModule or LightningDataModule (#14151)
  • Fixed the device placement when LightningModule.cuda() was called without specifying a device index and the current CUDA device was not 0 (#14128)
  • Avoided false positive warning about using sync_dist when using torchmetrics (#14143)
  • Avoid metadata.entry_points deprecation warning on Python 3.10 (#14052)
  • Avoid raising the sampler warning if num_replicas=1 (#14097)
  • Fixed resuming from a checkpoint when using Stochastic Weight Averaging (SWA) (#9938)
  • Avoided requiring the FairScale package to use precision with the native FSDP strategy (#14092)
  • Fixed an issue in which the default name for a run in WandbLogger would be set to the project name instead of a randomly generated string (#14145)
  • Fixed not preserving set attributes on DataLoader and BatchSampler when instantiated inside *_dataloader hooks (#14212)
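
A minimal sketch of the pattern this last fix restores: a custom attribute set on a DataLoader subclass returned from a *_dataloader hook. The subclass is hypothetical and the hook is shown standalone for brevity.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

class TaggedDataLoader(DataLoader):
    """A DataLoader subclass carrying a custom attribute."""

    def __init__(self, *args, tag: str = "train", **kwargs):
        super().__init__(*args, **kwargs)
        self.tag = tag  # previously lost when Lightning re-instantiated the loader

def train_dataloader():
    # in practice this would be a LightningModule/LightningDataModule hook
    dataset = TensorDataset(torch.arange(8.0))
    return TaggedDataLoader(dataset, batch_size=4, tag="train")
```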

Contributors

@adamreeve @akihironitta @awaelchli @Borda @carmocca @dependabot @otaj @rohitgr7

PyTorch Lightning 1.7.1: Standard patch release

[1.7.1] - 2022-08-09

Fixed

  • Cast only floating-point tensors to fp16 with IPUs (#13983)
  • Cast tensors to fp16 before moving them to the device with DeepSpeedStrategy (#14000)
  • Fixed the NeptuneLogger dependency being unrecognized (#13988)
  • Fixed an issue where users would be warned about unset max_epochs even when fast_dev_run was set (#13262); see the sketch after this list
  • Fixed MPS device being unrecognized (#13992)
  • Fixed incorrect precision="mixed" being used with DeepSpeedStrategy and IPUStrategy (#14041)
  • Fixed dtype inference during gradient norm computation (#14051)
  • Fixed a bug that caused ddp_find_unused_parameters to be set to False, whereas the intended default is True (#14095)
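
A minimal sketch of the fast_dev_run smoke test referenced above, assuming a hypothetical BoringModel:

```python
from pytorch_lightning import Trainer
from my_project import BoringModel  # hypothetical import

# fast_dev_run runs a single batch of train/val as a smoke test;
# per the fix above, it no longer warns about an unset max_epochs.
trainer = Trainer(fast_dev_run=True)
trainer.fit(BoringModel())
```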

Contributors

@adamjstewart @akihironitta @awaelchli @Birch-san @carmocca @clementpoiret @dependabot @rohitgr7