Commit f381ffe

Add SMP 1.3 docs

1 parent dacacb6

7 files changed (+1247 -3 lines)

doc/api/training/smd_model_parallel_general.rst

Lines changed: 39 additions & 1 deletion
@@ -218,7 +218,6 @@ table are optional.
| | | | contexts. |
+---------------------------+-------------------------+-------------------+-----------------------+

.. rubric:: TensorFlow-specific parameters

.. table::
@@ -304,6 +303,45 @@ table are optional.
| | | | provided by |
| | | | SageMaker. |
+-------------------+-------------------------+-----------------+-----------------------------------+
| ``active_microbatches``   | int  | ``partitions`` + 2  | The maximum number of microbatches     |
|                           |      |                     | that are simultaneously active during  |
|                           |      |                     | pipelined execution. Setting this to a |
|                           |      |                     | lower value helps limit memory usage   |
|                           |      |                     | when the number of microbatches is     |
|                           |      |                     | high.                                  |
+---------------------------+------+---------------------+----------------------------------------+
| ``deterministic_server``  | bool | ``False``           | When set to ``True``, ensures that the |
|                           |      |                     | execution server for pipeline          |
|                           |      |                     | parallelism processes requests in a    |
|                           |      |                     | deterministic order across data        |
|                           |      |                     | parallel ranks.                        |
+---------------------------+------+---------------------+----------------------------------------+

``mpi`` Parameters
^^^^^^^^^^^^^^^^^^^^^^^^^^^^
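
As a usage note for the two parameters documented above, here is a minimal sketch of passing them through the SageMaker Python SDK ``distribution`` argument; the entry point, role, instance settings, and the other ``modelparallel`` values are placeholders, not recommendations:

```python
from sagemaker.pytorch import PyTorch

smp_options = {
    "enabled": True,
    "parameters": {
        "partitions": 2,
        "microbatches": 8,
        "pipeline": "interleaved",
        "optimize": "speed",
        "ddp": True,
        "active_microbatches": 4,      # cap on concurrently executing microbatches
        "deterministic_server": True,  # deterministic request order across data parallel ranks
    },
}

estimator = PyTorch(
    entry_point="train.py",             # placeholder training script
    role="<sagemaker-execution-role>",  # placeholder IAM role
    instance_type="ml.p3.16xlarge",
    instance_count=1,
    framework_version="1.8.0",
    py_version="py36",
    distribution={
        "smdistributed": {"modelparallel": smp_options},
        "mpi": {"enabled": True, "processes_per_host": 8},
    },
)
# estimator.fit("s3://<your-bucket>/<training-data>")  # placeholder S3 input
```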

doc/api/training/smd_model_parallel_release_notes/smd_model_parallel_change_log.md

Lines changed: 39 additions & 0 deletions
@@ -1,3 +1,42 @@
# Sagemaker Distributed Model Parallel 1.3.0 Release Notes

- New Features
- Bug Fixes
- Known Issues

## New Features

### PyTorch

#### Add support for PyTorch 1.8

- Adds a new method, ``register_comm_hook``, to DistributedModel (for PyTorch 1.8 and newer only). This method behaves the same as the method of the same name in the `torch.DistributedDataParallel` API; a brief sketch follows below. Please refer to the [SageMaker distributed model parallel API documentation](https://sagemaker.readthedocs.io/en/stable/api/training/smd_model_parallel_pytorch.html#smp.DistributedModel) for more information.
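
  A minimal sketch, assuming PyTorch 1.8's built-in ``fp16_compress_hook`` from ``torch.distributed.algorithms.ddp_comm_hooks.default_hooks``, a toy ``nn.Sequential`` model, and a script already launched as an SMP training job:

```python
import torch.nn as nn
import smdistributed.modelparallel.torch as smp
from torch.distributed.algorithms.ddp_comm_hooks import default_hooks

smp.init()  # picks up the modelparallel config passed at job launch

# Toy model, only to illustrate the call; any nn.Module works.
model = smp.DistributedModel(nn.Sequential(nn.Linear(8, 8), nn.ReLU(), nn.Linear(8, 1)))

# Same contract as torch.nn.parallel.DistributedDataParallel.register_comm_hook:
# the hook takes (state, bucket) and returns a torch.futures.Future. With a
# state of None, fp16_compress_hook falls back to the default process group.
model.register_comm_hook(None, default_hooks.fp16_compress_hook)
```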

#### Others

- Adds a configuration ``active_microbatches`` to the SageMaker SDK API for launching jobs, to control the number of active microbatches during training. This helps limit memory usage in cases where the number of microbatches is high. Please refer to the [SageMaker Python SDK parameters API documentation](https://sagemaker.readthedocs.io/en/stable/api/training/smd_model_parallel_general.html) for more information.

- Adds a configuration ``deterministic_server`` to the SageMaker SDK API for launching jobs, which ensures that the execution server for pipeline parallelism processes requests in a deterministic order across data parallel ranks. Please refer to the [SageMaker Python SDK parameters API documentation](https://sagemaker.readthedocs.io/en/stable/api/training/smd_model_parallel_general.html) for more information.

- Parameter passing is now supported in ``module.forward`` methods for DistributedModel and its submodules. This removes the restriction of having to pass an `nn.Parameter` to the `__init__` call and make it a member of the module in order to use it; see the sketch after this list.
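
A minimal sketch of the parameter-passing change, using a hypothetical one-layer module; it assumes the script runs as an SMP-enabled job and keeps the parameter outside the ``smp.step`` arguments so it is not split into microbatches:

```python
import torch
import torch.nn as nn
import smdistributed.modelparallel.torch as smp

class Scale(nn.Module):
    def forward(self, x, scale):
        # `scale` is an nn.Parameter handed in at call time rather than
        # registered on the module in __init__.
        return x * scale

smp.init()  # picks up the modelparallel config passed at job launch
model = smp.DistributedModel(Scale())
scale = nn.Parameter(torch.ones(1))

@smp.step
def train_step(inputs):
    out = model(inputs, scale)
    loss = out.sum()
    model.backward(loss)  # SMP's replacement for loss.backward() inside smp.step
    return loss

# loss = train_step(torch.randn(4, 8))  # called from the training loop
```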

## Bug Fixes

### PyTorch

- Fixed a case where training hangs because a module has computation that requires grads but is not used by the module's final output. Such a situation now raises an error with suggestions on making the computation compatible.

- Fixed an issue with buffers that caused them not to be on the correct device after a model is partitioned, and not to be synchronized across steps (when ``broadcast_buffers`` is True). This could have caused correctness issues in models with buffers.

## Known Issues

### PyTorch

- ``mp_barrier`` and ``get_mp_process_group`` are wrongly marked as deprecated methods. Please ignore the deprecation warning.

- A crash was observed when ``optimizer.step()`` was called for certain optimizers, such as AdaDelta, when the partition on which this method was called has no local parameters assigned to it after partitioning. This is due to a bug in PyTorch which [has since been fixed](https://github.com/pytorch/pytorch/pull/52944). Until that fix makes its way into the next release of PyTorch, please only call ``optimizer.step()`` on processes which have at least one local parameter. This can be checked with ``len(list(model.local_parameters())) > 0``; see the sketch after this list.

- A performance regression still exists when training on SMP with PyTorch 1.7.1 compared to 1.6. The root cause was found to be the slowdown in performance of `.grad` method calls in PyTorch 1.7.1 compared to 1.6. Please see the related discussion: https://github.com/pytorch/pytorch/issues/50636. This issue does not exist with PyTorch 1.8.
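
A minimal sketch of the ``optimizer.step()`` workaround above; ``model`` is assumed to be an ``smp.DistributedModel`` and ``optimizer`` a hypothetical optimizer built over its local parameters:

```python
# Skip the update on ranks whose partition received no local parameters,
# avoiding the crash described above for optimizers such as AdaDelta.
if len(list(model.local_parameters())) > 0:
    optimizer.step()
```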

# Sagemaker Distributed Model Parallel 1.2.0 Release Notes

- New Features
