Commit 7ad725b

Author: Sergey Togulev (committed)
Merge remote-tracking branch 'sergtogul/1.8.1_PT_containers' into new_tf_containers
2 parents f3861da + 077c6ed

6 files changed: +22 −22 lines

doc/api/training/sdp_versions/latest/smd_data_parallel_pytorch.rst (2 additions, 2 deletions)

@@ -153,9 +153,9 @@ you will have for distributed training with the distributed data parallel library
 PyTorch API
 ===========
 
-**Supported versions:**
+.. rubric:: Supported versions
 
-- PyTorch 1.6.0, 1.8.0
+**PyTorch 1.7.1, 1.8.0**
 
 
 .. function:: smdistributed.dataparallel.torch.distributed.is_available()
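For context on the API whose docs this commit touches: a minimal sketch of the usual PyTorch setup for `smdistributed.dataparallel`, as described in the SageMaker SDK documentation. This is not part of the commit; `Net` is a hypothetical `torch.nn.Module`, and the code only runs inside a SageMaker training job where the library and GPUs are available:

```python
import torch
import smdistributed.dataparallel.torch.distributed as dist
from smdistributed.dataparallel.torch.parallel.distributed import (
    DistributedDataParallel as DDP,
)

dist.init_process_group()  # initialize the library's process group

# Pin each worker process to its own GPU using the local rank.
torch.cuda.set_device(dist.get_local_rank())

model = Net().cuda()  # Net is a hypothetical model class, not defined here
model = DDP(model)    # wrap the model for data-parallel gradient averaging
```

`is_available()` (the function documented above) can be checked first to confirm the library is usable in the current environment.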

doc/api/training/sdp_versions/latest/smd_data_parallel_tensorflow.rst (7 additions, 4 deletions)

@@ -16,8 +16,9 @@ The following steps show you how to convert a TensorFlow 2.x training
 script to utilize the distributed data parallel library.
 
 The distributed data parallel library APIs are designed to be close to Horovod APIs.
-See `SageMaker distributed data parallel TensorFlow examples <https://sagemaker-examples.readthedocs.io/en/latest/training/distributed_training/index.html#tensorflow-distributed>`__ for additional details on how to implement the data parallel library
-API offered for TensorFlow.
+See `SageMaker distributed data parallel TensorFlow examples
+<https://sagemaker-examples.readthedocs.io/en/latest/training/distributed_training/index.html#tensorflow-distributed>`__
+for additional details on how to implement the data parallel library.
 
 - First import the distributed data parallel library’s TensorFlow client and initialize it:
 

@@ -156,8 +157,10 @@ TensorFlow API
 
 .. rubric:: Supported versions
 
-- TensorFlow 2.x - 2.3.1
-
+TensorFlow is supported in version 1.0.0 of ``sagemakerdistributed.dataparallel``.
+Reference version 1.0.0 `TensorFlow API documentation
+<https://sagemaker.readthedocs.io/en/stable/api/training/sdp_versions/latest/smd_data_parallel_tensorflow.html#tensorflow-sdp-api>`_
+for supported TensorFlow versions.
 
 .. function:: smdistributed.dataparallel.tensorflow.init()
 
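The "import and initialize" step referenced in the first hunk above follows the Horovod-style pattern. A hedged sketch of that initialization, per the SageMaker SDK documentation (not part of this commit, and runnable only inside a SageMaker training job with the library and GPUs present):

```python
import tensorflow as tf
import smdistributed.dataparallel.tensorflow as sdp

sdp.init()  # initialize the SageMaker data parallel library

# Pin each worker process to a single GPU based on its local rank,
# mirroring the equivalent Horovod setup.
gpus = tf.config.experimental.list_physical_devices("GPU")
for gpu in gpus:
    tf.config.experimental.set_memory_growth(gpu, True)
if gpus:
    tf.config.experimental.set_visible_devices(gpus[sdp.local_rank()], "GPU")
```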

doc/api/training/sdp_versions/v1.0.0/smd_data_parallel_pytorch.rst (6 additions, 8 deletions)

@@ -4,11 +4,10 @@ PyTorch Guide to SageMaker's distributed data parallel library
 
 .. admonition:: Contents
 
-- :ref:`pytorch-sdp-modify`
-- :ref:`pytorch-sdp-api`
+- :ref:`pytorch-sdp-modify-1.0.0`
+- :ref:`pytorch-sdp-api-1.0.0`
 
-.. _pytorch-sdp-modify:
-:noindex:
+.. _pytorch-sdp-modify-1.0.0:
 
 Modify a PyTorch training script to use SageMaker data parallel
 ======================================================================

@@ -149,15 +148,14 @@ you will have for distributed training with the distributed data parallel library
     main()
 
 
-.. _pytorch-sdp-api:
-:noindex:
+.. _pytorch-sdp-api-1.0.0:
 
 PyTorch API
 ===========
 
-**Supported versions:**
+.. rubric:: Supported versions
 
-- PyTorch 1.6.0
+**PyTorch 1.6.0, 1.7.1**
 
 
 .. function:: smdistributed.dataparallel.torch.distributed.is_available()

doc/api/training/sdp_versions/v1.0.0/smd_data_parallel_tensorflow.rst (5 additions, 7 deletions)

@@ -4,11 +4,10 @@ TensorFlow Guide to SageMaker's distributed data parallel library
 
 .. admonition:: Contents
 
-- :ref:`tensorflow-sdp-modify`
-- :ref:`tensorflow-sdp-api`
+- :ref:`tensorflow-sdp-modify-1.0.0`
+- :ref:`tensorflow-sdp-api-1.0.0`
 
-.. _tensorflow-sdp-modify:
-:noindex:
+.. _tensorflow-sdp-modify-1.0.0:
 
 Modify a TensorFlow 2.x training script to use SageMaker data parallel
 ======================================================================

@@ -150,15 +149,14 @@ script you will have for distributed training with the library.
     checkpoint.save(checkpoint_dir)
 
 
-.. _tensorflow-sdp-api:
-:noindex:
+.. _tensorflow-sdp-api-1.0.0:
 
 TensorFlow API
 ==============
 
 .. rubric:: Supported versions
 
-- TensorFlow 2.x - 2.3.1
+**TensorFlow 2.3.x - 2.4.1**
 
 
 .. function:: smdistributed.dataparallel.tensorflow.init()

doc/api/training/smp_versions/latest/smd_model_parallel_pytorch.rst (1 addition, 1 deletion)

@@ -6,7 +6,7 @@
 PyTorch API
 ===========
 
-**Supported versions: 1.7.1, 1.8.0**
+**Supported versions: 1.6.0, 1.7.1, 1.8.0**
 
 This API document assumes you use the following import statements in your training scripts.
 
doc/frameworks/huggingface/index.rst (1 addition, 0 deletions)

@@ -9,3 +9,4 @@ For general information about using the SageMaker Python SDK, see :ref:`overview
    :maxdepth: 2
 
    sagemaker.huggingface
+   Use Hugging Face with the SageMaker Python SDK <https://huggingface.co/transformers/sagemaker.html>

0 commit comments
