Commit bd3e6ca

Updated: Minor messaging in several notebooks
1 parent d8bf72a commit bd3e6ca

5 files changed: +28 -37 lines changed

advanced_functionality/kmeans_bring_your_own_model/kmeans_bring_your_own_model.ipynb

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@
 "---\n",
 "## Background\n",
 "\n",
-"Amazon SageMaker includes functionality to support a hosted notebook environment, distributed, managed training, and real-time, autoscaling hosting. We think it works best when all three of these services are used together, but they can also be used independently. Some use cases may only require hosting. Maybe the model was trained prior to Amazon SageMaker existing, in a different service.\n",
+"Amazon SageMaker includes functionality to support a hosted notebook environment, distributed, managed training, and real-time hosting. We think it works best when all three of these services are used together, but they can also be used independently. Some use cases may only require hosting. Maybe the model was trained prior to Amazon SageMaker existing, in a different service.\n",
 "\n",
 "This notebook shows how to use a pre-existing model with an Amazon SageMaker Algorithm container to quickly create a hosted endpoint for that model.\n",
 "\n",

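The hosting-only path this notebook's Background describes comes down to three SageMaker API calls: register the existing artifact as a model, describe an endpoint configuration, and create the endpoint. A minimal boto3 sketch, with the image URI, S3 path, role ARN, and resource names as placeholder assumptions:

```python
import boto3

sm = boto3.client("sagemaker")

# Register the pre-existing model artifact with the algorithm container.
sm.create_model(
    ModelName="kmeans-byo-model",  # hypothetical name
    PrimaryContainer={
        "Image": "<kmeans-container-image-uri>",             # assumed: region-specific image
        "ModelDataUrl": "s3://<bucket>/model/model.tar.gz",  # assumed: pre-trained artifact
    },
    ExecutionRoleArn="<sagemaker-execution-role-arn>",
)

# Describe how the model should be hosted.
sm.create_endpoint_config(
    EndpointConfigName="kmeans-byo-config",
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "ModelName": "kmeans-byo-model",
        "InstanceType": "ml.m4.xlarge",
        "InitialInstanceCount": 1,
    }],
)

# Stand up the real-time endpoint.
sm.create_endpoint(
    EndpointName="kmeans-byo-endpoint",
    EndpointConfigName="kmeans-byo-config",
)
```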
advanced_functionality/xgboost_bring_your_own_model/xgboost_bring_your_own_model.ipynb

Lines changed: 1 addition & 1 deletion
@@ -26,7 +26,7 @@
 "---\n",
 "## Background\n",
 "\n",
-"Amazon SageMaker includes functionality to support a hosted notebook environment, distributed, serverless training, and real-time hosting. We think it works best when all three of these services are used together, but they can also be used independently. Some use cases may only require hosting. Maybe the model was trained prior to Amazon SageMaker existing, in a different service.\n",
+"Amazon SageMaker includes functionality to support a hosted notebook environment, distributed, managed training, and real-time hosting. We think it works best when all three of these services are used together, but they can also be used independently. Some use cases may only require hosting. Maybe the model was trained prior to Amazon SageMaker existing, in a different service.\n",
 "\n",
 "This notebook shows how to use a pre-existing scikit-learn model with the Amazon SageMaker XGBoost Algorithm container to quickly create a hosted endpoint for that model.\n",
 "\n",

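The diff above concerns bringing a pre-existing model to the XGBoost container. A hedged sketch of how such a model might be packaged for hosting: train locally, serialize the booster, tar it, and upload to S3. File names, the bucket, and the pickled-booster format are illustrative assumptions, not the notebook's exact code.

```python
import pickle
import tarfile

import boto3
import xgboost as xgb
from sklearn.datasets import make_classification

# Stand-in for the pre-existing model: a small locally trained booster.
X, y = make_classification(n_samples=500, n_features=10, random_state=42)
booster = xgb.train(
    {"objective": "binary:logistic"},
    xgb.DMatrix(X, label=y),
    num_boost_round=10,
)

# Serialize to a single file and package it the way hosting expects.
with open("xgboost-model", "wb") as f:  # assumed file name inside the archive
    pickle.dump(booster, f)
with tarfile.open("model.tar.gz", "w:gz") as tar:
    tar.add("xgboost-model")

boto3.client("s3").upload_file("model.tar.gz", "<bucket>", "model/model.tar.gz")
```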
introduction_to_applying_machine_learning/breast_cancer_prediction/Breast Cancer Prediction.ipynb

Lines changed: 2 additions & 2 deletions
@@ -370,7 +370,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Now let's kick off our training job in SageMaker's distributed, serverless training, using the parameters we just created. Because training is serverless, we don't have to wait for our job to finish to continue, but for this case, let's set up a while loop so we can monitor the status of our training."
+"Now let's kick off our training job in SageMaker's distributed, managed training, using the parameters we just created. Because training is managed, we don't have to wait for our job to finish to continue, but for this case, let's set up a while loop so we can monitor the status of our training."
 ]
 },
 {
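The while loop this cell describes can be as simple as polling `describe_training_job` until the status leaves `InProgress`. A minimal sketch, with the job name as a hypothetical placeholder:

```python
import time

import boto3

sm = boto3.client("sagemaker")
job_name = "breast-cancer-xgboost-job"  # hypothetical job name

# Poll once a minute until SageMaker reports a terminal status.
status = sm.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
while status == "InProgress":
    time.sleep(60)
    status = sm.describe_training_job(TrainingJobName=job_name)["TrainingJobStatus"]
    print(status)

print(f"Training job ended with status: {status}")
```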
@@ -441,7 +441,7 @@
 "source": [
 "Once we've set up a model, we can configure what our hosting endpoints should be. Here we specify:\n",
 "1. EC2 instance type to use for hosting\n",
-"1. Lower and upper bounds for number of instances\n",
+"1. Initial number of instances\n",
 "1. Our hosting model name"
 ]
 },
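The three items in the list above map one-to-one onto a single `create_endpoint_config` call; a sketch with hypothetical resource names:

```python
import boto3

boto3.client("sagemaker").create_endpoint_config(
    EndpointConfigName="breast-cancer-endpoint-config",  # hypothetical name
    ProductionVariants=[{
        "VariantName": "AllTraffic",
        "InstanceType": "ml.m4.xlarge",      # 1. EC2 instance type to use for hosting
        "InitialInstanceCount": 1,           # 2. initial number of instances
        "ModelName": "breast-cancer-model",  # 3. our hosting model name
    }],
)
```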

under_development/ensemble_modeling/EnsembleLearnerCensusIncome.ipynb

Lines changed: 20 additions & 29 deletions
@@ -17,10 +17,10 @@
 "1. [Preparation](#Preparation)\n",
 "1. [Data](#Data)\n",
 " 1. [Exploration and Transformation](#Exploration) \n",
-"1. [Training Xgboost model using Sagemaker](#Training)\n",
+"1. [Training Xgboost model using SageMaker](#Training)\n",
 "1. [Hosting the model](#Hosting)\n",
 "1. [Evaluating the model on test samples](#Evaluation)\n",
-"1. [Training a second Logistic Regression model using Sagemaker](#Linear-Model)\n",
+"1. [Training a second Logistic Regression model using SageMaker](#Linear-Model)\n",
 "1. [Hosting the Second model](#Hosting:Linear-Learner)\n",
 "1. [Evaluating the model on test samples](#Prediction:Linear-Learner)\n",
 "1. [Combining the model results](#Ensemble)\n",
@@ -35,12 +35,12 @@
 "\n",
 "This notebook presents an illustrative example to predict if a person makes over 50K a year based on information about their education, work experience, gender, etc.\n",
 "\n",
-"* Preparing your _Sagemaker_ notebook\n",
-"* Loading a dataset from S3 using Sagemaker\n",
-"* Investigating and transforming the data so that it can be fed to _Sagemaker_ algorithms\n",
-"* Estimating a model using Sagemaker's XGBoost (eXtreme Gradient Boosting) algorithm\n",
-"* Hosting the model on Sagemaker to make on-going predictions\n",
-"* Estimating a second model using Sagemaker's -linear learner method\n",
+"* Preparing your _SageMaker_ notebook\n",
+"* Loading a dataset from S3 using SageMaker\n",
+"* Investigating and transforming the data so that it can be fed to _SageMaker_ algorithms\n",
+"* Estimating a model using SageMaker's XGBoost (eXtreme Gradient Boosting) algorithm\n",
+"* Hosting the model on SageMaker to make on-going predictions\n",
+"* Estimating a second model using SageMaker's Linear Learner method\n",
 "* Combining the predictions from both the models and evaluating the combined prediction\n",
 "* Generating final predictions on the test data set\n",
 "\n",
@@ -85,15 +85,6 @@
 "Now let's bring in the Python libraries that we'll use throughout the analysis"
 ]
 },
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"!conda install -y -c conda-forge scikit-learn"
-]
-},
 {
 "cell_type": "code",
 "execution_count": null,
@@ -245,7 +236,7 @@
 "\n",
 "## Training\n",
 "\n",
-"As our first training algorithm we pick the `xgboost` algorithm. `xgboost` is an extremely popular, open-source package for gradient boosted trees. It is computationally powerful, fully featured, and has been successfully used in many machine learning competitions. Let's start with a simple `xgboost` model, trained using `Sagemaker's` serverless, distributed training framework.\n",
+"As our first training algorithm we pick the `xgboost` algorithm. `xgboost` is an extremely popular, open-source package for gradient boosted trees. It is computationally powerful, fully featured, and has been successfully used in many machine learning competitions. Let's start with a simple `xgboost` model, trained using `SageMaker's` managed, distributed training framework.\n",
 "\n",
 "First we'll need to specify training parameters. This includes:\n",
 "1. The role to use\n",
@@ -266,7 +257,7 @@
 "For csv input, we assume the input is separated by a delimiter (the separator is detected automatically by Python's built-in sniffer tool), has no header line, and has the label in the first column.\n",
 "Scoring Output Format: csv.\n",
 "\n",
-"* Since our data is in CSV format, we will convert our dataset to the format Sagemaker's XGboost supports.\n",
+"* Since our data is in CSV format, we will convert our dataset to the format SageMaker's XGboost supports.\n",
 "* We will keep the target field in the first column and the remaining features in the next few columns\n",
 "* We will remove the header line\n",
 "* We will also split the data into separate training and validation sets\n",
@@ -411,7 +402,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Now let's kick off our training job in SageMaker's distributed, serverless training, using the parameters we just created. Because training is serverless, we don't have to wait for our job to finish to continue, but for this case, let's set up a while loop so we can monitor the status of our training."
+"Now let's kick off our training job in SageMaker's distributed, managed training, using the parameters we just created. Because training is managed, we don't have to wait for our job to finish to continue, but for this case, let's set up a while loop so we can monitor the status of our training."
 ]
 },
 {
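The CSV-formatting checklist in the first hunk above (label first, no header, train/validation split) might look like the following in pandas; the input file and target column name are assumptions:

```python
import pandas as pd

df = pd.read_csv("census_data.csv")  # assumed input file with a header row

# Move the target into the first column; SageMaker XGBoost expects the label there.
target = "income_over_50k"  # assumed target column name
df = pd.concat([df[target], df.drop(columns=[target])], axis=1)

# 80/20 train/validation split, written without headers or index.
train = df.sample(frac=0.8, random_state=42)
validation = df.drop(train.index)
train.to_csv("formatted_train.csv", header=False, index=False)
validation.to_csv("formatted_validation.csv", header=False, index=False)
```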
@@ -496,7 +487,7 @@
 "source": [
 "Once we've set up a model, we can configure what our hosting endpoints should be. Here we specify:\n",
 "1. EC2 instance type to use for hosting\n",
-"1. Lower and upper bounds for number of instances\n",
+"1. Initial number of instances\n",
 "1. Our hosting model name"
 ]
 },
@@ -690,7 +681,7 @@
 "source": [
 "---\n",
 "## Linear-Model\n",
-"### Train a second model using Sagemaker's Linear Learner"
+"### Train a second model using SageMaker's Linear Learner"
 ]
 },
 {
@@ -699,7 +690,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"prefix = 'sagemaker/linear' ##subfolder inside the data bucket to be used for linear learner\n",
+"prefix = 'sagemaker/linear' ##subfolder inside the data bucket to be used for Linear Learner\n",
 "\n",
 "data_train = pd.read_csv(\"formatted_train.csv\", sep=',', header=None) \n",
 "data_test = pd.read_csv(\"formatted_test.csv\", sep=',', header=None) \n",
@@ -871,7 +862,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Now let's kick off our training job in SageMaker's distributed, serverless training, using the parameters we just created. Because training is serverless, we don't have to wait for our job to finish to continue, but for this case, let's set up a while loop so we can monitor the status of our training."
+"Now let's kick off our training job in SageMaker's distributed, managed training, using the parameters we just created. Because training is managed, we don't have to wait for our job to finish to continue, but for this case, let's set up a while loop so we can monitor the status of our training."
 ]
 },
 {
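Between loading the formatted CSVs and kicking off the Linear Learner job, the data is converted to the recordIO-protobuf format the algorithm consumes. A hedged sketch using the SageMaker Python SDK helper, with the bucket and key as placeholders:

```python
import io

import boto3
import pandas as pd
import sagemaker.amazon.common as smac

data_train = pd.read_csv("formatted_train.csv", sep=",", header=None)
labels = data_train.iloc[:, 0].to_numpy().astype("float32")    # target is the first column
features = data_train.iloc[:, 1:].to_numpy().astype("float32")

# Encode as dense recordIO-protobuf and upload for training.
buf = io.BytesIO()
smac.write_numpy_to_dense_tensor(buf, features, labels)
buf.seek(0)
boto3.Session().resource("s3").Bucket("<bucket>").Object(
    "sagemaker/linear/train/recordio-pb-data"
).upload_fileobj(buf)
```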
@@ -936,7 +927,7 @@
 "source": [
 "Once we've set up a model, we can configure what our hosting endpoints should be. Here we specify:\n",
 "1. EC2 instance type to use for hosting\n",
-"1. Lower and upper bounds for number of instances\n",
+"1. Initial number of instances\n",
 "1. Our hosting model name"
 ]
 },
@@ -1001,7 +992,7 @@
 "metadata": {},
 "source": [
 "### Prediction:Linear-Learner\n",
-"#### Predict using Sagemaker's linear learner and evaluate the performance\n",
+"#### Predict using SageMaker's Linear Learner and evaluate the performance\n",
 "\n",
 "Now that we have our hosted endpoint, we can generate statistical predictions from it. Let's predict on our test dataset to understand how accurate our model is on unseen samples using the AUC metric."
 ]
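Scoring the hosted endpoint and computing AUC might look like the sketch below; the endpoint name is hypothetical, and it assumes the endpoint returns one score per line for `text/csv` input:

```python
import boto3
import pandas as pd
from sklearn.metrics import roc_auc_score

runtime = boto3.client("sagemaker-runtime")

data_test = pd.read_csv("formatted_test.csv", sep=",", header=None)
labels = data_test.iloc[:, 0]
payload = data_test.iloc[:, 1:].to_csv(header=False, index=False)

response = runtime.invoke_endpoint(
    EndpointName="linear-learner-endpoint",  # hypothetical endpoint name
    ContentType="text/csv",
    Body=payload,
)
# Assumed response shape: one score per line.
scores = [float(s) for s in response["Body"].read().decode().splitlines()]
print("AUC:", roc_auc_score(labels, scores))
```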
@@ -1202,9 +1193,9 @@
 "## Extensions\n",
 "\n",
 "This example analyzed a relatively small dataset, but utilized SageMaker features such as:\n",
-"* serverless single-machine training of XGboost model \n",
-"* serverless training of Linear Learner\n",
-"* highly available, autoscaling model hosting, \n",
+"* managed single-machine training of XGboost model \n",
+"* managed training of Linear Learner\n",
+"* highly available, real-time model hosting, \n",
 "* doing a batch prediction using the hosted model\n",
 "* doing an ensemble of Xgboost and Linear Learner\n",
 "\n",

under_development/ensemble_modeling/README.md

Lines changed: 4 additions & 4 deletions
@@ -3,10 +3,10 @@
 This example notebook shows how to use multiple models from SageMaker for prediction and combine them into an ensemble prediction.
 
 It demonstrates the following:
-* Basic setup for using Sagemaker.
+* Basic setup for using SageMaker.
 * Converting datasets to protobuf format used by the Amazon SageMaker algorithms and uploading to a user-provided S3 bucket.
-* Training Sagemaker's Xgboost algorithm on the data set.
-* Training Sagemaker's linear-learner on the data set.
+* Training SageMaker's XGBoost algorithm on the data set.
+* Training SageMaker's Linear Learner on the data set.
 * Hosting the trained models.
 * Scoring using the trained models.
-* Combining predictions from the trained models in an ensemble.
+* Combining predictions from the trained models in an ensemble.
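The combining step itself is a simple average of the two models' scores followed by a threshold; a self-contained sketch with stand-in score arrays:

```python
import numpy as np

# Stand-ins for the per-model endpoint outputs on the same test rows.
xgb_scores = np.array([0.91, 0.12, 0.55, 0.08])
linear_scores = np.array([0.87, 0.20, 0.61, 0.15])

ensemble_scores = (xgb_scores + linear_scores) / 2.0  # simple averaging ensemble
ensemble_labels = (ensemble_scores > 0.5).astype(int)
print(ensemble_scores, ensemble_labels)
```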
