|
13 | 13 | "In this notebook, you will use a BERT example training script with SMP.\n",
|
14 | 14 | "The example script is based on [Nvidia Deep Learning Examples](https://github.com/NVIDIA/DeepLearningExamples/tree/master/PyTorch/LanguageModeling/BERT) and requires you to download the datasets and upload them to Amazon Simple Storage Service (Amazon S3) as explained in the instructions below. This is a large dataset, and so depending on your connection speed, this process can take hours to complete. \n",
|
15 | 15 | "\n",
|
16 |
| - "This notebook depends on the following files:\n", |
| 16 | + "This notebook depends on the following files. You can find all files in the [bert directory](https://github.com/aws/amazon-sagemaker-examples/tree/master/training/distributed_training/pytorch/model_parallel/bert) in the model parllel section of the Amazon SageMaker Examples notebooks repo.\n", |
17 | 17 | "\n",
|
18 | 18 | "* `bert_example/sagemaker_smp_pretrain.py`: This is an entrypoint script that is passed to the Pytorch estimator in the notebook instructions. This script is responsible for end to end training of the BERT model with SMP. The script has additional comments at places where the SMP API is used.\n",
|
19 | 19 | "\n",
|
|
25 | 25 | "\n",
|
26 | 26 | "* `bert_example/utils.py`: This contains different helper utility functions used in end to end training of the BERT model (`bert_example/sagemaker_smp_pretrain.py`).\n",
|
27 | 27 | "\n",
|
28 |
| - "* `bert_example/file_utils.py`: Contains different file utility functions used in model definition (*bert_example/modeling.py*).\n", |
29 |
| - "\n", |
30 |
| - "*Getting Started*: The bert directory needs to be zipped and uploaded to a Sagemaker notebook instance. Unzip on the notebook instance and follow the instructions in the notebook.\n" |
| 28 | + "* `bert_example/file_utils.py`: Contains different file utility functions used in model definition (`bert_example/modeling.py`).\n" |
31 | 29 | ]
|
32 | 30 | },
|
33 | 31 | {
|
|
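For context on the entry point script listed above, here is a minimal sketch of the training pattern that SMP's PyTorch API (`smdistributed.modelparallel.torch`, v1) follows. It is illustrative only: `MyBertModel` and `train_loader` are hypothetical stand-ins, and the real logic lives in `bert_example/sagemaker_smp_pretrain.py`.

```python
# Minimal sketch of the SMP v1 PyTorch training pattern -- illustrative only.
# MyBertModel and train_loader are hypothetical stand-ins; see
# bert_example/sagemaker_smp_pretrain.py for the actual training script.
import torch
import smdistributed.modelparallel.torch as smp

smp.init()  # reads the "parameters" dict passed via the estimator's distribution config

model = smp.DistributedModel(MyBertModel())  # partitions the model across GPUs
optimizer = smp.DistributedOptimizer(
    torch.optim.Adam(model.parameters(), lr=1e-4)
)

@smp.step  # splits each batch into microbatches and pipelines them across partitions
def train_step(model, inputs, labels):
    loss = model(inputs, labels=labels)
    model.backward(loss)  # SMP uses model.backward() in place of loss.backward()
    return loss

for inputs, labels in train_loader:
    optimizer.zero_grad()
    loss_mb = train_step(model, inputs, labels)  # returns per-microbatch outputs
    loss = loss_mb.reduce_mean()                 # average the loss across microbatches
    optimizer.step()
```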
92 | 90 | "cell_type": "markdown",
|
93 | 91 | "metadata": {},
|
94 | 92 | "source": [
|
95 |
| - "## Prepare/Identify your Training Data in Amazon S3" |
96 |
| - ] |
97 |
| - }, |
98 |
| - { |
99 |
| - "cell_type": "markdown", |
100 |
| - "metadata": {}, |
101 |
| - "source": [ |
| 93 | + "## Prepare/Identify your Training Data in Amazon S3\n", |
| 94 | + "\n", |
102 | 95 | "If you don't already have the BERT dataset in an S3 bucket, please see the instructions in [Nvidia BERT Example](https://github.com/NVIDIA/DeepLearningExamples/blob/master/PyTorch/LanguageModeling/BERT/README.md) to download the dataset and upload it to a s3 bucket. See the prerequisites at the beginning of this notebook for more information.\n",
|
103 | 96 | "\n",
|
104 | 97 | "Uncomment and use the following cell to specify the Amazon S3 bucket and prefix that contains your training data. For example, if your training data is in s3://your-bucket/training, enter `'your-bucket'` for s3_bucket and `'training'` for prefix. Note that your output data will be stored in the same bucket, under the `output/` prefix."
|
|
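The cell referenced above boils down to two assignments. A sketch with placeholder values:

```python
# Uncomment and set these to the S3 location that holds your BERT dataset.
# The values below are placeholders, not real bucket or prefix names.
# s3_bucket = "your-bucket"  # training data lives under s3://your-bucket/training
# prefix = "training"        # key prefix within the bucket
```

Output artifacts are then written back to the same bucket under the `output/` prefix.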
193 | 186 | "metadata": {},
|
194 | 187 | "outputs": [],
|
195 | 188 | "source": [
|
196 |
| - "mpioptions = \"-verbose --mca orte_base_help_aggregate 0 \"\n", |
197 |
| - "mpioptions += \"--mca btl_vader_single_copy_mechanism none\"\n", |
198 |
| - "parameters = {\"optimize\": \"speed\", \"microbatches\": 12, \"partitions\": 2, \"ddp\": True, \"pipeline\": \"interleaved\", \"overlapping_allreduce\": True, \"placement_strategy\": \"cluster\", \"memory_weight\": 0.3}\n", |
| 189 | + "mpi_options = \"-verbose --mca orte_base_help_aggregate 0 \"\n", |
| 190 | + "mpi_options += \"--mca btl_vader_single_copy_mechanism none\"\n", |
| 191 | + "smp_parameters = {\"optimize\": \"speed\", \"microbatches\": 12, \"partitions\": 2, \"ddp\": True, \"pipeline\": \"interleaved\", \"overlapping_allreduce\": True, \"placement_strategy\": \"cluster\", \"memory_weight\": 0.3}\n", |
199 | 192 | "timeout = 60 * 60\n",
|
200 | 193 | "metric_definitions = [{\"Name\": \"base_metric\", \"Regex\": \"<><><><><><>\"}]\n",
|
201 | 194 | "\n",
|
|
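As a gloss on the cell above, here is an annotated copy of the same configuration; the comments reflect a reading of the SMP configuration options and are not part of the diff:

```python
mpi_options = "-verbose --mca orte_base_help_aggregate 0 "   # verbose MPI output, no aggregated help messages
mpi_options += "--mca btl_vader_single_copy_mechanism none"  # avoid vader shared-memory copy issues in containers

smp_parameters = {
    "optimize": "speed",              # bias the auto-partitioner toward speed rather than memory
    "microbatches": 12,               # number of microbatches each batch is split into for pipelining
    "partitions": 2,                  # number of model partitions (GPUs holding one model shard each)
    "ddp": True,                      # layer data parallelism on top of model parallelism
    "pipeline": "interleaved",        # interleave forward and backward passes of different microbatches
    "overlapping_allreduce": True,    # overlap gradient allreduce with backward computation
    "placement_strategy": "cluster",  # place each replica's partitions on colocated GPUs
    "memory_weight": 0.3,             # weight given to memory balance in the partitioning objective
}
timeout = 60 * 60  # one hour, in seconds
```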
235 | 228 | "metadata": {},
|
236 | 229 | "outputs": [],
|
237 | 230 | "source": [
|
238 | 231 | "pytorch_estimator = PyTorch(\"sagemaker_smp_pretrain.py\",\n",
239 | 232 | " role=role,\n",
|
240 | 233 | " instance_type=\"ml.p3.16xlarge\",\n",
|
241 | 234 | " volume_size=200,\n",
|
|
247 | 240 | " \"smdistributed\": {\n",
|
248 | 241 | " \"modelparallel\": {\n",
|
249 | 242 | " \"enabled\": True,\n",
|
250 |
| - " \"parameters\": parameters\n", |
| 243 | + " \"parameters\": smp_parameters\n", |
251 | 244 | " }\n",
|
252 | 245 | " },\n",
|
253 | 246 | " \"mpi\": {\n",
|
254 | 247 | " \"enabled\": True,\n",
|
255 | 248 | " \"processes_per_host\": 8,\n",
|
256 |
| - " \"custom_mpi_options\": mpioptions,\n", |
| 249 | + " \"custom_mpi_options\": mpi_options,\n", |
257 | 250 | " }\n",
|
258 | 251 | " },\n",
|
259 | 252 | " source_dir='bert_example',\n",
|
|
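Once the estimator is fully defined, the training job is launched with `fit()`. A hedged sketch of the launch call, assuming the `s3_bucket` and `prefix` variables set earlier and a channel named `train` (the channel name is an assumption, not taken from this diff):

```python
# Hypothetical launch call; the "train" channel name and S3 layout are assumptions.
data_channels = {"train": f"s3://{s3_bucket}/{prefix}"}
pytorch_estimator.fit(inputs=data_channels)  # blocks and streams training logs by default
```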