Commit d5cad97

archive 1.6.0 doc
1 parent 4cb56d6 commit d5cad97

File tree

8 files changed: +2261 −2 lines changed


doc/api/training/smd_model_parallel_release_notes/smd_model_parallel_change_log.rst

Lines changed: 4 additions & 1 deletion
@@ -19,8 +19,11 @@ Sagemaker Distributed Model Parallel 1.7.0 Release Notes
 
 **New Features**
 
+Additional tensor parallelism features for PyTorch:
+
 * Support for query key layer scaling to avoid overflow for large model
-* Support for FP32 residual addition to avoid overflow (NaN loss values) for large models when using FP16.
+* Support for FP32 residual addition to avoid overflow (NaN loss values)
+  for large models when using FP16
 
 **Improvements**
 
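As a generic illustration of why the release note calls out FP32 residual addition (this is not the library's implementation): FP16 cannot represent values beyond roughly 65504, so a residual sum of two large FP16 activations overflows to inf, which then propagates into NaN loss values. Performing the addition in FP32 keeps the true value. A minimal sketch using Python's half-precision `struct` format:

```python
import math
import struct

def to_fp16(x):
    """Round-trip a float through IEEE-754 half precision (binary16).
    Values beyond the FP16 range (~65504) cannot be packed; struct
    raises OverflowError, which hardware FP16 would report as inf."""
    try:
        return struct.unpack('e', struct.pack('e', x))[0]
    except OverflowError:
        return math.inf if x > 0 else -math.inf

# A residual addition x + F(x) with large activations (both exactly
# representable in FP16):
x, fx = 40000.0, 40000.0

fp32_sum = to_fp16(x) + to_fp16(fx)  # added in full precision: 80000.0
fp16_sum = to_fp16(fp32_sum)         # stored back in FP16: overflows to inf

print(fp32_sum)  # 80000.0 -- FP32 residual addition keeps the true value
print(fp16_sum)  # inf     -- a pure-FP16 addition overflows
```

Once an activation becomes inf, downstream subtractions produce NaN, which is why the overflow surfaces as NaN loss values rather than merely large ones.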

doc/api/training/smp_versions/archives.rst

Lines changed: 1 addition & 0 deletions
@@ -3,6 +3,7 @@
 .. toctree::
    :maxdepth: 1
 
+   v1_6_0.rst
    v1_5_0.rst
    v1_4_0.rst
    v1_3_0.rst

doc/api/training/smp_versions/latest/smd_model_parallel_pytorch.rst

Lines changed: 1 addition & 1 deletion
@@ -16,7 +16,7 @@ you need to add the following import statement at the top of your training scrip
 <https://docs.aws.amazon.com/sagemaker/latest/dg/model-parallel-customize-training-script-pt.html>`_
 to learn how to use the following API in your PyTorch training script.
 
-.. py:class:: smp.DistributedModel()
+.. class:: smp.DistributedModel
 
 A sub-class of ``torch.nn.Module`` which specifies the model to be
 partitioned. Accepts a ``torch.nn.Module`` object ``module`` which is
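The docstring above describes a wrapper that accepts the user's ``torch.nn.Module`` and stands in for it during training. A hypothetical, dependency-free sketch of that wrapping pattern (the class name and delegation logic here are illustrative assumptions; the real ``smp.DistributedModel`` subclasses ``torch.nn.Module`` and partitions the wrapped model across devices, which is omitted):

```python
class DistributedModelSketch:
    """Illustrative stand-in for smp.DistributedModel: wraps a model
    object and delegates calls to it. Not the real API."""

    def __init__(self, module):
        self._module = module  # the user's model (a torch.nn.Module in the real API)

    def __call__(self, *args, **kwargs):
        # The real forward() routes tensors between model partitions;
        # this sketch simply delegates to the wrapped callable.
        return self._module(*args, **kwargs)

# Usage: wrap a "model" and call it exactly as before.
model = lambda x: 2 * x
wrapped = DistributedModelSketch(model)
result = wrapped(21)  # 42
```

The design point the docstring makes is that wrapping preserves the call interface of the original module, so existing training-loop code can use the wrapped model unchanged.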
