
Commit a1df4dc

Author: Venkatesan

Commit message: Fixed all issues and increased verbosity
1 parent 9005589 commit a1df4dc

2 files changed: +83 −24 lines changed

sagemaker-python-sdk/mxnet_mnist_byom/mxnet_mnist.ipynb

Lines changed: 6 additions & 5 deletions
@@ -12,10 +12,13 @@
 " 2. [Data Setup](#Data-setup)\n",
 "3. [Training the network locally](#Training)\n",
 "4. [Set up hosting for the model](#Set-up-hosting-for-the-model)\n",
-" 1. [Export from mxnet](#Export-the-model-from-mxnet)\n",
+" 1. [Export from MXNet](#Export-the-model-from-mxnet)\n",
 " 2. [Import model into SageMaker](#Import-model-into-SageMaker)\n",
 " 3. [Create endpoint](#Create-endpoint) \n",
-"5. [Validate the ebdpoint for use](#Validate-the-endpoint-for-use)"
+"5. [Validate the endpoint for use](#Validate-the-endpoint-for-use)\n",
+"\n",
+"\n",
+"__Note__: Compare this with the [TensorFlow bring your own model example](../tensorflow_iris_byom/tensorflow_BYOM_iris.ipynb)"
 ]
 },
 {
@@ -141,7 +144,7 @@
 "\n",
 "### Export the model from mxnet\n",
 "\n",
-"In order to set up hosting, we have to import the model from training to hosting. We will begin by exporting the model from mxnet and saving it down. Analogous to the [tensorflow example](../tensorflow_iris_byom/tensorflow_BYOM_iris.ipynb), some structure needs to be followed. The exported model has to be converted into a form that is readable by ``sagemaker.mxnet.model.MXNetModel``. The following code describes exporting the model in a form that does the same:"
+"In order to set up hosting, we have to import the model from training to hosting. We will begin by exporting the model from MXNet and saving it down. Analogous to the [TensorFlow example](../tensorflow_iris_byom/tensorflow_BYOM_iris.ipynb), some structure needs to be followed. The exported model has to be converted into a form that is readable by ``sagemaker.mxnet.model.MXNetModel``. The following code describes exporting the model in a form that does the same:"
 ]
 },
 {
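The packaging step the hunk above alludes to is plain file handling; here is a minimal sketch using only the standard library, assuming hypothetical checkpoint file names ``model-symbol.json`` and ``model-0000.params`` (the real notebook gets these from the MXNet training step):

```python
import os
import tarfile
import tempfile

# Work in a scratch directory so notebook reruns stay clean.
workdir = tempfile.mkdtemp()

# Stand-ins for the files an MXNet checkpoint save would produce;
# the real notebook gets these from training.
checkpoint_files = ("model-symbol.json", "model-0000.params")
for name in checkpoint_files:
    with open(os.path.join(workdir, name), "w") as f:
        f.write("placeholder")

# SageMaker expects the artifacts bundled into a single model.tar.gz.
archive_path = os.path.join(workdir, "model.tar.gz")
with tarfile.open(archive_path, "w:gz") as tar:
    for name in checkpoint_files:
        tar.add(os.path.join(workdir, name), arcname=name)

with tarfile.open(archive_path, "r:gz") as tar:
    print(sorted(tar.getnames()))  # ['model-0000.params', 'model-symbol.json']
```

The archive is then what gets uploaded to S3 and handed to ``sagemaker.mxnet.model.MXNetModel``.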
@@ -276,8 +279,6 @@
 },
 "outputs": [],
 "source": [
-"import sagemaker\n",
-"\n",
 "sagemaker.Session().delete_endpoint(predictor.endpoint)"
 ]
 },

sagemaker-python-sdk/tensorflow_iris_byom/tensorflow_BYOM_iris.ipynb

Lines changed: 77 additions & 19 deletions
@@ -4,11 +4,38 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"# Bring your own Model - IRIS classifier example.\n",
+"# TensorFlow BYOM: Train locally and deploy on SageMaker.\n",
 "\n",
-"This notebook can be compared to [tensorflow_iris_dnn_classifier_using_estimators.ipynb](https://github.com/awslabs/sagemaker-examples/blob/master/sagemaker-python-sdk/tensorflow_iris_dnn_classifier_using_estimators/tensorflow_iris_dnn_classifier_using_estimators.ipynb). It seeks to solve the same problem, but instead of training by using Amazon SageMaker's distributed, managed training functionality, it relies on the user to train locally or bring a pre-trained model, and then setup a real-time hosted endpoint in SageMaker. To do that, we'll rely on the same set of functions for training. \n",
 "\n",
-"Consider the following mdoel definition for IRIS classification. This mdoe uses the ``tensorflow.estimator.DNNClassifier`` which is a pre-defined enstimator module for its model definition. "
+"1. [Introduction](#Introduction)\n",
+"2. [Prerequisites and Preprocessing](#Prerequisites-and-Preprocessing)\n",
+" 1. [Permissions and environment variables](#Permissions-and-environment-variables)\n",
+" 2. [Model definitions](#Model-definitions)\n",
+" 3. [Data Setup](#Data-setup)\n",
+"3. [Training the network locally](#Training)\n",
+"4. [Set up hosting for the model](#Set-up-hosting-for-the-model)\n",
+" 1. [Export from TensorFlow](#Export-the-model-from-tensorflow)\n",
+" 2. [Import model into SageMaker](#Import-model-into-SageMaker)\n",
+" 3. [Create endpoint](#Create-endpoint) \n",
+"5. [Validate the endpoint for use](#Validate-the-endpoint-for-use)\n",
+"\n",
+"__Note__: Compare this with the [MXNet bring your own model example](../mxnet_mnist_byom/mxnet_mnist.ipynb)"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"## Introduction \n",
+"\n",
+"This notebook can be compared to the [Iris classification example notebook](../tensorflow_iris_dnn_classifier_using_estimators/tensorflow_iris_dnn_classifier_using_estimators.ipynb) in terms of its functionality. We will do the same classification task, but we will train the network locally in the box from which this notebook is being run. We then set up a real-time hosted endpoint in SageMaker.\n",
+"\n",
+"Consider the following model definition for IRIS classification. This model uses the ``tensorflow.estimator.DNNClassifier``, which is a pre-defined estimator module for its model definition. It is the same model definition as the one used in the [Iris classification example notebook](../tensorflow_iris_dnn_classifier_using_estimators/tensorflow_iris_dnn_classifier_using_estimators.ipynb).\n",
+"\n",
+"## Prerequisites and Preprocessing\n",
+"### Permissions and environment variables\n",
+"\n",
+"Here we set up the linkage and authentication to AWS services. In this notebook we only need the roles used to give learning and hosting access to your data. The SageMaker SDK will use default S3 buckets when needed. Supply the role in the variable below."
 ]
 },
 {
{
@@ -19,13 +46,24 @@
 },
 "outputs": [],
 "source": [
-"role = <<Your_Sagemaker_Role>>"
+"role = '<your SageMaker execution role here>'"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Model definitions\n",
+"\n",
+"We use the [``tensorflow.estimator.DNNClassifier``](https://www.tensorflow.org/api_docs/python/tf/estimator/DNNClassifier) estimator to set up our network. We also need to write some methods for serving inputs during hosting and training. These methods are all found below."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
-"metadata": {},
+"metadata": {
+"collapsed": true
+},
 "outputs": [],
 "source": [
 "import os\n",
@@ -81,7 +119,9 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"metadata": {},
+"metadata": {
+"collapsed": true
+},
 "outputs": [],
 "source": [
 "classifier = estimator_fn(run_config = None, params = None)"
@@ -91,7 +131,9 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Download and make the iris dataset from TensorFlow's repository."
+"### Data setup\n",
+"\n",
+"Next, we need to pull the data from the TensorFlow repository and make it ready for training. The following code block should do that."
 ]
 },
 {
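The data-setup step downloads the Iris CSVs from TensorFlow's repository; the parsing it implies can be sketched offline. A minimal sketch, assuming the ``iris_training.csv`` convention of a header row (row count, feature count, class names) followed by four float features and an integer label per row; the values below are illustrative stand-ins, not the real dataset:

```python
import csv
import io

# A few rows in the layout of TensorFlow's iris_training.csv:
# header = (row count, feature count, class names), then rows of
# four float features and an integer class label.
raw = """3,4,setosa,versicolor,virginica
6.4,2.8,5.6,2.2,2
5.0,2.3,3.3,1.0,1
4.9,2.5,4.5,1.7,2
"""

reader = csv.reader(io.StringIO(raw))
header = next(reader)
n_rows, n_features = int(header[0]), int(header[1])

features, labels = [], []
for row in reader:
    features.append([float(x) for x in row[:n_features]])
    labels.append(int(row[n_features]))

print(n_rows, n_features, labels)  # 3 4 [2, 1, 2]
```

The real notebook hands arrays like ``features`` and ``labels`` to the input function that feeds the estimator.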
@@ -127,7 +169,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Create the input streamer object."
+"Create the data input streamer object."
 ]
 },
 {
{
@@ -145,7 +187,9 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Train using TensorFlow's ``tensorflow.Estimator.train`` method. The model is trained locally in the box."
+"### Training\n",
+"\n",
+"It is time to train the network. Since we are training the network locally, we can make use of TensorFlow's ``tensorflow.Estimator.train`` method. The model is trained locally in the box."
 ]
 },
 {
@@ -163,6 +207,12 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
+"## Set up hosting for the model\n",
+"\n",
+"### Export the model from tensorflow\n",
+"\n",
+"In order to set up hosting, we have to import the model from training to hosting. We will begin by exporting the model from TensorFlow and saving it down. Analogous to the [MXNet example](../mxnet_mnist_byom/mxnet_mnist.ipynb), some structure needs to be followed. The exported model has to be converted into a form that is readable by ``sagemaker.tensorflow.model.TensorFlowModel``. The following code describes exporting the model in a form that does the same:\n",
+"\n",
 "There is a small difference between a SageMaker model and a TensorFlow model. The conversion is easy and fairly trivial. Simply move the tensorflow exported model into a directory ``export\\Servo\\`` and tar the entire directory. SageMaker will recognize this as a loadable TensorFlow model."
 ]
 },
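The "move into ``export/Servo/`` and tar" step described above is ordinary file handling; here is a minimal sketch with stand-in files (the real exported directory comes from the notebook's TensorFlow export step, and the exact file names here are placeholders):

```python
import os
import shutil
import tarfile
import tempfile

root = tempfile.mkdtemp()

# Stand-in for the directory the TensorFlow export step writes;
# the real notebook produces this when exporting the trained model.
exported = os.path.join(root, "exported_model")
os.makedirs(exported)
with open(os.path.join(exported, "saved_model.pb"), "wb") as f:
    f.write(b"placeholder")

# SageMaker expects the model under export/Servo/ inside the archive.
servo = os.path.join(root, "export", "Servo")
os.makedirs(servo)
shutil.move(exported, os.path.join(servo, "exported_model"))

# Tar the entire export directory into model.tar.gz.
archive = os.path.join(root, "model.tar.gz")
with tarfile.open(archive, "w:gz") as tar:
    tar.add(os.path.join(root, "export"), arcname="export")

with tarfile.open(archive, "r:gz") as tar:
    names = sorted(tar.getnames())
print(names)
```

The resulting ``model.tar.gz`` is what gets uploaded to S3 in the next step.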
@@ -186,7 +236,9 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Open a new sagemaker session and upload the model into the default S3 bucket under the directory ``model``."
+"### Import model into SageMaker\n",
+"\n",
+"Open a new SageMaker session and upload the model onto the default S3 bucket. We can use the ``sagemaker.Session.upload_data`` method to do this. We need the location where we exported the model from TensorFlow and where in our default bucket we want to store the model (``/model``). The default S3 bucket can be found using the ``sagemaker.Session.default_bucket`` method."
 ]
 },
 {
@@ -207,7 +259,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Use the ``sagemaker.tensorflow.model.TensorflowModel`` class directly to setup a trained model in a sagemaker session."
+"Use the ``sagemaker.tensorflow.model.TensorFlowModel`` to import the model into SageMaker so that it can be deployed. We need the location of the S3 bucket where we have the model, the role for authentication, and the entry_point where the model definition is stored (``iris_dnn_classifier.py``). The import call is the following:"
 ]
 },
 {
{
@@ -228,13 +280,17 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Deploy the newly created SageMaker model to an endpoint."
+"### Create endpoint\n",
+"\n",
+"Now the model is ready to be deployed at a SageMaker endpoint. We can use the ``sagemaker.tensorflow.model.TensorFlowModel.deploy`` method to do this. Unless you have created or prefer other instances, we recommend using one ``'ml.c4.xlarge'`` instance. These are supplied as arguments. "
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
-"metadata": {},
+"metadata": {
+"collapsed": true
+},
 "outputs": [],
 "source": [
 "%%time\n",
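Taken together, the upload/import/deploy steps described in this diff can be sketched as one helper. This is a sketch only, wrapped in a function so nothing executes on import: it assumes AWS credentials, the ``sagemaker`` package, the ``iris_dnn_classifier.py`` entry point named in the diff, and a ``model.tar.gz`` produced by the export step.

```python
def deploy_iris_model(role, model_archive="model.tar.gz"):
    """Upload a tarred TensorFlow model and deploy it as a SageMaker
    endpoint. Sketch only: requires AWS credentials, the sagemaker
    package, and a model archive produced by the notebook."""
    # Imports live inside the function so this sketch can be defined
    # and inspected without the sagemaker package installed.
    import sagemaker
    from sagemaker.tensorflow.model import TensorFlowModel

    session = sagemaker.Session()
    # Store the archive under /model in the session's default bucket.
    model_data = session.upload_data(model_archive, key_prefix="model")

    model = TensorFlowModel(
        model_data=model_data,
        role=role,
        entry_point="iris_dnn_classifier.py",  # model definition from the diff
    )
    # One ml.c4.xlarge instance, as the notebook recommends.
    return model.deploy(initial_instance_count=1,
                        instance_type="ml.c4.xlarge")
```

The returned predictor is what the validation cell below calls, and its endpoint is what the cleanup cell deletes.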
@@ -246,15 +302,15 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Run a sample prediction on a sample to ensure that it works. Expect result ``1`` for this particular sample."
+"### Validate the endpoint for use\n",
+"\n",
+"We can now use this endpoint to classify. Run a prediction on a sample to ensure that it works. Expect result ``1`` for this particular sample."
 ]
 },
 {
 "cell_type": "code",
 "execution_count": null,
-"metadata": {
-"collapsed": true
-},
+"metadata": {},
 "outputs": [],
 "source": [
 "predict_samples = {}\n",
@@ -266,7 +322,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Delete all temporary directories so that we are not affecting the next run."
+"Delete all temporary directories so that we are not affecting the next run. Also, optionally delete the endpoints."
 ]
 },
 {
@@ -279,7 +335,9 @@
 "source": [
 "os.remove('model.tar.gz')\n",
 "import shutil\n",
-"shutil.rmtree('export')"
+"shutil.rmtree('export')\n",
+"\n",
+"sagemaker.Session().delete_endpoint(predictor.endpoint)"
 ]
 }
 ],
