Commit 874471e

Updated the TOC and removed references to credentials file, "GA", "launch"
1 parent 43f953f

File tree

1 file changed (+15, -15)


im-xgboost/xgboost-multiclass-classification.ipynb

Lines changed: 15 additions & 15 deletions
@@ -9,9 +9,11 @@
 "1. [Introduction](#Introduction)\n",
 "2. [Prerequisites and Preprocessing](#Prequisites-and-Preprocessing)\n",
 " 1. [Permissions and environment variables](#Permissions-and-environment-variables)\n",
-" 2. [Data ingestion](#Data ingestion)\n",
-" 3. [Data conversion](#Data conversion)\n",
+" 2. [Data ingestion](#Data-ingestion)\n",
+" 3. [Data conversion](#Data-conversion)\n",
 "3. [Training the XGBoost model](#Training-the-XGBoost-model)\n",
+" 1. [Training on a single instance](#Training-on-a-single-instance)\n",
+" 2. [Training on multiple instances](#Training-on-multiple-instances)\n",
 "4. [Set up hosting for the model](#Set-up-hosting-for-the-model)\n",
 " 1. [Import model into hosting](#Import-model-into-hosting)\n",
 " 2. [Create endpoint configuration](#Create-endpoint-configuration)\n",
@@ -39,11 +41,10 @@
 "\n",
 "### Permissions and environment variables\n",
 "\n",
-"Here we set up the linkage and authentication to AWS services. There are three parts to this:\n",
+"Here we set up the linkage and authentication to AWS services.\n",
 "\n",
-"1. The credentials and region for the account that's running training. Upload the credentials in the normal AWS credentials file format using the jupyter upload feature. \n",
-"2. The roles used to give learning and hosting access to your data. See the documentation for how to specify these.\n",
-"3. The S3 bucket that you want to use for training and model data."
+"1. The roles used to give learning and hosting access to your data. See the documentation for how to specify these.\n",
+"2. The S3 bucket that you want to use for training and model data."
 ]
 },
 {
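For context, the setup the revised cell describes boils down to two pieces: an execution role and an S3 bucket. A minimal sketch, assuming the SageMaker Python SDK and boto3 are available in the notebook environment; the bucket and prefix names are placeholders, not values from the notebook:

```python
import boto3
from sagemaker import get_execution_role

# The IAM role that gives training and hosting access to your data.
role = get_execution_role()

# Region is picked up from the notebook environment; no credentials file needed,
# which is why the commit drops that step.
region = boto3.Session().region_name

# Placeholders: the S3 bucket and prefix used for training and model data.
bucket = "my-sagemaker-bucket"
prefix = "sagemaker/xgboost-multiclass"
```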
@@ -186,7 +187,9 @@
 "source": [
 "## Training the XGBoost model\n",
 "\n",
-"Once we have the data available in the correct format for training, the next step is to actually train the model using the data. After setting training parameters, we kick off training, and poll for status until training is completed. In the following the single machine and distributed versions of the algorithm are presented. "
+"Once we have the data available in the correct format for training, the next step is to actually train the model using the data. After setting training parameters, we kick off training, and poll for status until training is completed. In the following the single machine and distributed versions of the algorithm are presented. \n",
+"\n",
+"### Training on a single instance"
 ]
 },
 {
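A hedged sketch of the single-instance flow the new subsection covers: set training parameters, kick off the job, and poll for status until training completes. It assumes the low-level boto3 client and reuses `role`, `bucket`, and `prefix` from the setup sketch above; the training image URI and hyperparameter values are placeholders, not values from the notebook.

```python
import time
import boto3

sm = boto3.client("sagemaker")

job_name = "xgboost-mnist-" + time.strftime("%Y-%m-%d-%H-%M-%S", time.gmtime())
sm.create_training_job(
    TrainingJobName=job_name,
    RoleArn=role,  # execution role from the setup sketch
    AlgorithmSpecification={
        "TrainingImage": "<xgboost-training-image-uri>",  # placeholder ECR image URI
        "TrainingInputMode": "File",
    },
    ResourceConfig={"InstanceCount": 1, "InstanceType": "ml.m4.xlarge", "VolumeSizeInGB": 5},
    InputDataConfig=[{
        "ChannelName": "train",
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": f"s3://{bucket}/{prefix}/train",
            "S3DataDistributionType": "FullyReplicated",
        }},
    }],
    OutputDataConfig={"S3OutputPath": f"s3://{bucket}/{prefix}/output"},
    HyperParameters={"objective": "multi:softmax", "num_class": "10", "num_round": "10"},
    StoppingCondition={"MaxRuntimeInSeconds": 3600},
)

# Poll for status until training is completed, as the text describes.
while True:
    status = sm.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
    print(status)
    if status in ("Completed", "Failed", "Stopped"):
        break
    time.sleep(60)
```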
@@ -277,6 +280,8 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
+"### Training on multiple instances\n",
+"\n",
 "You can also run the training job distributed over multiple instances. For larger datasets with multiple partitions, this can significantly boost the training speed. Here we'll still use the small/toy MNIST dataset to demo this feature. "
 ]
 },
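Structurally, the distributed variant differs from the single-instance sketch above only in the resource configuration and, for partitioned datasets, the input distribution. A sketch with illustrative values:

```python
# Same create_training_job call as the single-instance sketch, but with more
# than one instance. These values are illustrative, not from the notebook.
resource_config = {"InstanceCount": 2, "InstanceType": "ml.m4.xlarge", "VolumeSizeInGB": 5}

# For a dataset partitioned across S3 keys, shard the input channel across
# instances instead of fully replicating it on each one:
s3_data_distribution_type = "ShardedByS3Key"  # vs. "FullyReplicated" above
```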
@@ -365,12 +370,10 @@
 "metadata": {},
 "source": [
 "# Set up hosting for the model\n",
-"In order to set up hosting, we have to import the model from training to hosting. A common question would be, why wouldn't we automatically go from training to hosting? As we worked through examples of what customers were looking to do with hosting, we realized that the Amazon ML model of hosting was unlikely to be sufficient for all customers.\n",
-"\n",
-"As a result, we have introduced some flexibility with respect to model deployment, with the goal of additional model deployment targets after launch. In the short term, that introduces some complexity, but we are actively working on making that easier for customers, even before GA.\n",
+"In order to set up hosting, we have to import the model from training to hosting. \n",
 "\n",
 "### Import model into hosting\n",
-"Next, you register the model with hosting. This allows you the flexibility of importing models trained elsewhere, as well as the choice of not importing models if the target of model creation is AWS Lambda, AWS Greengrass, Amazon Redshift, Amazon Athena, or other deployment target."
+"Next, you register the model with hosting. This allows you the flexibility of importing models trained elsewhere."
 ]
 },
 {
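Registering the trained artifacts with hosting is a single API call. A sketch, assuming the boto3 client and the training job from the sketches above; the inference image URI is a placeholder:

```python
# Import the model from training into hosting by pointing at the artifacts
# the training job wrote to S3.
info = sm.describe_training_job(TrainingJobName=job_name)
model_data = info["ModelArtifacts"]["S3ModelArtifacts"]

sm.create_model(
    ModelName=job_name,  # reusing the job name as the model name for simplicity
    ExecutionRoleArn=role,
    PrimaryContainer={
        "Image": "<xgboost-inference-image-uri>",  # placeholder ECR image URI
        "ModelDataUrl": model_data,
    },
)
```

Keeping this a separate step is what allows importing models trained elsewhere, as the revised text notes: `ModelDataUrl` can point at any compatible artifacts in S3, not only this job's output.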
@@ -408,9 +411,7 @@
 "metadata": {},
 "source": [
 "### Create endpoint configuration\n",
-"At launch, we will support configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration, that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way.\n",
-"\n",
-"In addition, the endpoint configuration describes the instance type required for model deployment, and at launch will describe the autoscaling configuration."
+"SageMaker supports configuring REST endpoints in hosting with multiple models, e.g. for A/B testing purposes. In order to support this, customers create an endpoint configuration, that describes the distribution of traffic across the models, whether split, shadowed, or sampled in some way. In addition, the endpoint configuration describes the instance type required for model deployment and the autoscaling configuration."
 ]
 },
 {
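A sketch of an endpoint configuration with a single production variant, assuming the model registered above; with multiple models you would list several variants, and `InitialVariantWeight` controls the traffic split the revised text mentions. Names are placeholders.

```python
endpoint_config_name = job_name + "-config"
sm.create_endpoint_config(
    EndpointConfigName=endpoint_config_name,
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": job_name,
        "InstanceType": "ml.m4.xlarge",  # instance type required for deployment
        "InitialInstanceCount": 1,
        "InitialVariantWeight": 1.0,     # relative share of traffic for A/B splits
    }],
)
```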
@@ -487,7 +488,6 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"import boto3\n",
 "runtime_client = boto3.client('sagemaker-runtime')"
 ]
 },
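The runtime client kept in this cell (the redundant `import boto3` is dropped because boto3 is already imported earlier in the notebook) is what invokes the endpoint. A usage sketch, assuming an endpoint created from the configuration above and a libsvm-formatted payload; the endpoint name and record are placeholders:

```python
# Invoke the hosted endpoint with a single libsvm-formatted record.
payload = "0 1:0.0 2:0.5 3:0.1"  # placeholder record
response = runtime_client.invoke_endpoint(
    EndpointName="<endpoint-name>",  # placeholder
    ContentType="text/x-libsvm",
    Body=payload,
)
print(response["Body"].read().decode("utf-8"))  # predicted class label
```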
