
Commit 949e40c

Upgrade the TF2 version from 2.3.1 to 2.4.1

1 parent f6065f2 · commit 949e40c

File tree

4 files changed: +4 −4 lines

training/distributed_training/tensorflow/data_parallel/bert/Dockerfile

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 ARG region
 
-FROM 763104351884.dkr.ecr.us-west-2.amazonaws.com/tensorflow-training:2.3.1-gpu-py37-cu110-ubuntu18.04
+FROM 763104351884.dkr.ecr.us-east-1.amazonaws.com/tensorflow-training:2.4.1-gpu-py37-cu110-ubuntu18.04
 
 RUN pip --no-cache-dir --no-cache install \
     scikit-learn==0.23.1 \
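
The FROM line pins a prebuilt AWS Deep Learning Containers image for TensorFlow training. A minimal sketch (not part of the commit, assuming the SageMaker Python SDK v2 is installed) of resolving the same registry and tag programmatically instead of hardcoding them; the instance type below is only an assumption used to select the GPU image variant:

from sagemaker import image_uris

# Resolve the TF 2.4.1 GPU training DLC for a given region; the SDK fills in
# the 763104351884 account id and the region-specific ECR hostname.
image_uri = image_uris.retrieve(
    framework="tensorflow",
    region="us-east-1",
    version="2.4.1",
    py_version="py37",
    instance_type="ml.p3dn.24xlarge",  # a GPU type selects the "-gpu-" variant
    image_scope="training",
)
print(image_uri)  # should match the registry/tag hardcoded in the FROM line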

training/distributed_training/tensorflow/data_parallel/bert/tensorflow2_smdataparallel_bert_demo.ipynb

Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@
 "\n",
 "SMDataParallel is a new capability in Amazon SageMaker to train deep learning models faster and cheaper. SMDataParallel is a distributed data parallel training framework for TensorFlow, PyTorch, and MXNet.\n",
 "\n",
-"This notebook example shows how to use SMDataParallel with TensorFlow(version 2.3.1) on [Amazon SageMaker](https://aws.amazon.com/sagemaker/) to train a BERT model using [Amazon FSx for Lustre file-system](https://aws.amazon.com/fsx/lustre/) as data source.\n",
+"This notebook example shows how to use SMDataParallel with TensorFlow(version 2.4.1) on [Amazon SageMaker](https://aws.amazon.com/sagemaker/) to train a BERT model using [Amazon FSx for Lustre file-system](https://aws.amazon.com/fsx/lustre/) as data source.\n",
 "\n",
 "The outline of steps is as follows:\n",
 "\n",

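The changed cell mentions Amazon FSx for Lustre as the data source. A minimal sketch of wiring an existing FSx for Lustre file system into a training job with the SageMaker SDK's FileSystemInput; the file system id, directory path, and "train" channel name are placeholders:

from sagemaker.inputs import FileSystemInput

train_input = FileSystemInput(
    file_system_id="fs-0123456789abcdef0",  # placeholder FSx for Lustre id
    file_system_type="FSxLustre",
    directory_path="/fsx/bert/train",       # placeholder: /<mount-name>/<path>
    file_system_access_mode="ro",           # training only reads the dataset
)
# estimator.fit({"train": train_input}) mounts the Lustre volume on the
# training hosts instead of copying the dataset from S3 at job start.
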
training/distributed_training/tensorflow/data_parallel/maskrcnn/Dockerfile

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 ARG region
 
-FROM 763104351884.dkr.ecr.us-west-2.amazonaws.com/tensorflow-training:2.3.1-gpu-py37-cu110-ubuntu18.04
+FROM 763104351884.dkr.ecr.us-east-1.amazonaws.com/tensorflow-training:2.4.1-gpu-py37-cu110-ubuntu18.04
 
 RUN pip --no-cache-dir --no-cache install \
     Cython \
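
An image built from this Dockerfile would be launched with SMDataParallel enabled through the estimator's distribution setting. A minimal sketch, assuming the SageMaker Python SDK v2; the entry point, role, and image URI are placeholders, not values from this commit:

from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(
    entry_point="train_maskrcnn.py",  # placeholder training script
    role="arn:aws:iam::111122223333:role/SageMakerRole",  # placeholder role
    # Placeholder URI for an image built from the Dockerfile above:
    image_uri="111122223333.dkr.ecr.us-east-1.amazonaws.com/smdataparallel-maskrcnn:tf2.4.1",
    instance_count=2,
    instance_type="ml.p3dn.24xlarge",  # an SMDataParallel-supported type
    # This flag is what enables the SMDataParallel backend for the job.
    distribution={"smdistributed": {"dataparallel": {"enabled": True}}},
)
# estimator.fit(...) then runs the script across both nodes with gradients
# all-reduced by SMDataParallel.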

training/distributed_training/tensorflow/data_parallel/maskrcnn/tensorflow2_smdataparallel_maskrcnn_demo.ipynb

Lines changed: 1 addition & 1 deletion

@@ -8,7 +8,7 @@
 "\n",
 "SMDataParallel is a new capability in Amazon SageMaker to train deep learning models faster and cheaper. SMDataParallel is a distributed data parallel training framework for TensorFlow, PyTorch, and MXNet.\n",
 "\n",
-"This notebook example shows how to use SMDataParallel with TensorFlow(version 2.3.1) on [Amazon SageMaker](https://aws.amazon.com/sagemaker/) to train a MaskRCNN model on [COCO 2017 dataset](https://cocodataset.org/#home) using [Amazon FSx for Lustre file-system](https://aws.amazon.com/fsx/lustre/) as data source.\n",
+"This notebook example shows how to use SMDataParallel with TensorFlow(version 2.4.1) on [Amazon SageMaker](https://aws.amazon.com/sagemaker/) to train a MaskRCNN model on [COCO 2017 dataset](https://cocodataset.org/#home) using [Amazon FSx for Lustre file-system](https://aws.amazon.com/fsx/lustre/) as data source.\n",
 "\n",
 "The outline of steps is as follows:\n",
 "\n",

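Inside the training script itself, SMDataParallel's TensorFlow module follows a Horovod-like pattern. A minimal sketch of the script-side hooks under the TF 2.4.1 image, assuming the job was launched with the distribution setting shown earlier; model, optimizer, and loss_fn are placeholders:

import tensorflow as tf
import smdistributed.dataparallel.tensorflow as sdp

sdp.init()  # set up the cross-GPU / cross-node communicator

# Pin each worker process to its own GPU via its local rank.
gpus = tf.config.experimental.list_physical_devices("GPU")
if gpus:
    tf.config.experimental.set_visible_devices(gpus[sdp.local_rank()], "GPU")

@tf.function
def train_step(model, optimizer, loss_fn, images, labels, first_batch):
    with tf.GradientTape() as tape:
        loss = loss_fn(labels, model(images, training=True))
    # Wrap the tape so gradients are averaged across all workers.
    tape = sdp.DistributedGradientTape(tape)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    if first_batch:
        # Start every worker from rank 0's initial weights.
        sdp.broadcast_variables(model.variables, root_rank=0)
        sdp.broadcast_variables(optimizer.variables(), root_rank=0)
    return loss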