Commit efda48f

SeanNaren and Borda authored
Disable CPU Offload as default for DeepSpeed (#6262)
* Change default for CPU offload to false for best throughput/memory efficiency
* Add changelog
* default

Co-authored-by: Jirka Borovec <[email protected]>
1 parent 3371d32

File tree: 2 files changed, +5 −2 lines


CHANGELOG.md

Lines changed: 3 additions & 0 deletions
@@ -20,6 +20,9 @@ The format is based on [Keep a Changelog](http://keepachangelog.com/en/1.0.0/).
 - Changed the order of `backward`, `step`, `zero_grad` to `zero_grad`, `backward`, `step` ([#6147](https://github.com/PyTorchLightning/pytorch-lightning/pull/6147))
 
 
+- Changed default for DeepSpeed CPU Offload to False, due to prohibitively slow speeds at smaller scale ([#6262](https://github.com/PyTorchLightning/pytorch-lightning/pull/6262))
+
+
 - Renamed `pytorch_lightning.callbacks.swa` to `pytorch_lightning.callbacks.stochastic_weight_avg` ([#6259](https://github.com/PyTorchLightning/pytorch-lightning/pull/6259))
 

pytorch_lightning/plugins/training_type/deepspeed.py

Lines changed: 2 additions & 2 deletions
@@ -66,7 +66,7 @@ def __init__(
         self,
         zero_optimization: bool = True,
         stage: int = 2,
-        cpu_offload: bool = True,
+        cpu_offload: bool = False,
         contiguous_gradients: bool = True,
         overlap_comm: bool = True,
         allgather_partitions: bool = True,
@@ -104,7 +104,7 @@ def __init__(
             stage: Different stages of the ZeRO Optimizer. 0 is disabled,
                 1 is optimizer state partitioning, 2 is optimizer+gradient state partitioning (default: 2)
 
-            cpu_offload: Enable offloading optimizer memory and computation to CPU (default: True)
+            cpu_offload: Enable offloading optimizer memory and computation to CPU
 
             contiguous_gradients: Copies gradients to a continuous buffer as they are produced.
                 Avoids memory fragmentation during backwards. Useful when training large models. (default: True)
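
After this change, CPU offload is opt-in rather than on by default. Below is a minimal sketch of requesting it explicitly, assuming the `DeepSpeedPlugin` import path shown in the diff and the `Trainer(plugins=...)` API of this release; the toy module and random dataset are illustrative placeholders, not part of this commit.

```python
# Minimal sketch: DeepSpeed CPU offload is now opt-in, so pass
# cpu_offload=True explicitly. ToyModel and the random dataset are
# hypothetical placeholders; only `cpu_offload` and `stage` come from
# the diff above.
import torch
import pytorch_lightning as pl
from torch.utils.data import DataLoader, TensorDataset
from pytorch_lightning.plugins import DeepSpeedPlugin


class ToyModel(pl.LightningModule):
    """Tiny stand-in LightningModule to keep the example self-contained."""

    def __init__(self):
        super().__init__()
        self.layer = torch.nn.Linear(32, 2)

    def training_step(self, batch, batch_idx):
        x, y = batch
        return torch.nn.functional.cross_entropy(self.layer(x), y)

    def configure_optimizers(self):
        return torch.optim.Adam(self.parameters(), lr=1e-3)


train_loader = DataLoader(
    TensorDataset(torch.randn(64, 32), torch.randint(0, 2, (64,))),
    batch_size=8,
)

trainer = pl.Trainer(
    gpus=1,
    precision=16,
    plugins=DeepSpeedPlugin(
        stage=2,           # ZeRO stage 2 is still the default
        cpu_offload=True,  # no longer the default; opt in for memory savings
    ),
)
trainer.fit(ToyModel(), train_loader)
```

Leaving `cpu_offload` at its new `False` default keeps the optimizer step on the GPU, which matches the changelog rationale: offloading trades GPU memory for host transfers and CPU-side optimizer work, which is prohibitively slow at smaller scale.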
