| 1 | +{ |
| 2 | + "cells": [ |
| 3 | + { |
| 4 | + "cell_type": "markdown", |
| 5 | + "metadata": {}, |
| 6 | + "source": [ |
| 7 | +     "# TensorFlow MNIST local training\n", |
| 8 | + "\n", |
| 9 | + "## Pre-requisites\n", |
| 10 | + "\n", |
| 11 | + "This notebook shows how to use the SageMaker Python SDK to run your code in a local container before deploying to SageMaker's managed training or hosting environments. This can speed up iterative testing and debugging while using the same familiar Python SDK interface. Just change your estimator's `train_instance_type` to `local` (or `local_gpu` if you're using an ml.p2 or ml.p3 notebook instance).\n", |
| 12 | + "\n", |
| 13 | +     "In order to use this feature, you'll need to install docker-compose (and nvidia-docker if training with a GPU).\n", |
| 14 | +     "\n", |
| 15 | +     "**Note: you can only run a single local notebook at a time.**" |
| 16 | + ] |
| 17 | + }, |
| 18 | + { |
| 19 | + "cell_type": "code", |
| 20 | + "execution_count": null, |
| 21 | + "metadata": {}, |
| 22 | + "outputs": [], |
| 23 | + "source": [ |
| 24 | + "!/bin/bash ./setup.sh" |
| 25 | + ] |
| 26 | + }, |
| 27 | + { |
| 28 | + "cell_type": "markdown", |
| 29 | + "metadata": {}, |
| 30 | + "source": [ |
| 31 | + "## Overview\n", |
| 32 | + "\n", |
| 33 | +     "The **SageMaker Python SDK** helps you deploy your models for training and hosting in optimized, production-ready containers in SageMaker. The SageMaker Python SDK is easy to use, modular, extensible, and compatible with TensorFlow and MXNet. This tutorial focuses on how to create a convolutional neural network model and train it on the [MNIST dataset](http://yann.lecun.com/exdb/mnist/) using **TensorFlow in local mode**.\n", |
| 34 | + "\n", |
| 35 | +     "### Set up the environment" |
| 36 | + ] |
| 37 | + }, |
| 38 | + { |
| 39 | + "cell_type": "code", |
| 40 | + "execution_count": null, |
| 41 | + "metadata": {}, |
| 42 | + "outputs": [], |
| 43 | + "source": [ |
| 44 | + "import os\n", |
| 45 | + "import subprocess\n", |
| 46 | + "import sagemaker\n", |
| 47 | + "from sagemaker import get_execution_role\n", |
| 48 | + "\n", |
| 49 | + "sagemaker_session = sagemaker.Session()\n", |
| 50 | + "\n", |
| 51 | + "instance_type = 'local'\n", |
| 52 | + "\n", |
| 53 | +    "try:\n", |
| 54 | +    "    if subprocess.call('nvidia-smi') == 0:\n", |
| 55 | +    "        # Use the GPU container if a GPU is present\n", |
| 56 | +    "        instance_type = 'local_gpu'\n", |
| 57 | +    "except OSError:\n", |
| 58 | +    "    # nvidia-smi is not installed, so there is no GPU; stay on CPU local mode\n", |
| 59 | +    "    pass\n", |
| 56 | + " \n", |
| 57 | + "print(\"Instance type = \" + instance_type)\n", |
| 58 | + "\n", |
| 59 | + "role = get_execution_role()" |
| 60 | + ] |
| 61 | + }, |
| 62 | + { |
| 63 | + "cell_type": "markdown", |
| 64 | + "metadata": {}, |
| 65 | + "source": [ |
| 66 | + "### Download the MNIST dataset" |
| 67 | + ] |
| 68 | + }, |
| 69 | + { |
| 70 | + "cell_type": "code", |
| 71 | + "execution_count": null, |
| 72 | + "metadata": { |
| 73 | + "scrolled": false |
| 74 | + }, |
| 75 | + "outputs": [], |
| 76 | + "source": [ |
| 77 | + "import utils\n", |
| 78 | + "from tensorflow.contrib.learn.python.learn.datasets import mnist\n", |
| 79 | + "import tensorflow as tf\n", |
| 80 | + "\n", |
| 81 | + "data_sets = mnist.read_data_sets('data', dtype=tf.uint8, reshape=False, validation_size=5000)\n", |
| 82 | + "\n", |
| 83 | + "utils.convert_to(data_sets.train, 'train', 'data')\n", |
| 84 | + "utils.convert_to(data_sets.validation, 'validation', 'data')\n", |
| 85 | + "utils.convert_to(data_sets.test, 'test', 'data')" |
| 86 | + ] |
| 87 | + }, |
| 88 | + { |
| 89 | + "cell_type": "markdown", |
| 90 | + "metadata": {}, |
| 91 | + "source": [ |
| 92 | + "### Upload the data\n", |
| 93 | +     "We use the ```sagemaker.Session.upload_data``` function to upload our datasets to an S3 location. The return value ```inputs``` identifies the location -- we will use it later when we start the training job." |
| 94 | + ] |
| 95 | + }, |
| 96 | + { |
| 97 | + "cell_type": "code", |
| 98 | + "execution_count": null, |
| 99 | + "metadata": {}, |
| 100 | + "outputs": [], |
| 101 | + "source": [ |
| 102 | + "inputs = sagemaker_session.upload_data(path='data', key_prefix='data/mnist')" |
| 103 | + ] |
| 104 | + }, |
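| 105 | +   { |
| 106 | +    "cell_type": "markdown", |
| 107 | +    "metadata": {}, |
| 108 | +    "source": [ |
| 109 | +     "As a quick sanity check, we can print the returned location. ```upload_data``` returns an S3 URI in the session's default bucket, under the ```data/mnist``` key prefix:" |
| 110 | +    ] |
| 111 | +   }, |
| 112 | +   { |
| 113 | +    "cell_type": "code", |
| 114 | +    "execution_count": null, |
| 115 | +    "metadata": {}, |
| 116 | +    "outputs": [], |
| 117 | +    "source": [ |
| 118 | +    "# The S3 location that will be passed to fit() below\n", |
| 119 | +    "print(inputs)" |
| 120 | +    ] |
| 121 | +   }, |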
| 105 | + { |
| 106 | + "cell_type": "markdown", |
| 107 | + "metadata": {}, |
| 108 | + "source": [ |
| 109 | + "# Construct a script for training \n", |
| 110 | + "Here is the full code for the network model:" |
| 111 | + ] |
| 112 | + }, |
| 113 | + { |
| 114 | + "cell_type": "code", |
| 115 | + "execution_count": null, |
| 116 | + "metadata": { |
| 117 | + "scrolled": false |
| 118 | + }, |
| 119 | + "outputs": [], |
| 120 | + "source": [ |
| 121 | + "!cat 'mnist.py'" |
| 122 | + ] |
| 123 | + }, |
| 124 | + { |
| 125 | + "cell_type": "markdown", |
| 126 | + "metadata": {}, |
| 127 | + "source": [ |
| 128 | +     "The script here is an adaptation of the [TensorFlow MNIST example](https://github.com/tensorflow/models/tree/master/official/mnist). It provides a ```model_fn(features, labels, mode)```, which is used for training, evaluation, and inference.\n", |
| 129 | + "\n", |
| 130 | + "## A regular ```model_fn```\n", |
| 131 | + "\n", |
| 132 | + "A regular **```model_fn```** follows the pattern:\n", |
| 133 | +     "1. [defines a neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L96)\n", |
| 134 | +     "2. [applies the ```features``` in the neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L178)\n", |
| 135 | +     "3. [if the ```mode``` is ```PREDICT```, returns the output from the neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L186)\n", |
| 136 | +     "4. [calculates the loss function comparing the output with the ```labels```](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L188)\n", |
| 137 | +     "5. [creates an optimizer and minimizes the loss function to improve the neural network](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L193)\n", |
| 138 | +     "6. [returns the output, optimizer and loss function](https://github.com/tensorflow/models/blob/master/official/mnist/mnist.py#L205)\n", |
| 139 | + "\n", |
| 140 | + "## Writing a ```model_fn``` for distributed training\n", |
| 141 | +     "When distributed training happens, the same neural network is sent to multiple training instances. Each instance runs a batch of the dataset through the network, calculates the loss, and lets the optimizer update the weights. One complete loop of this process is called a **training step**.\n", |
| 142 | +     "\n", |
| 143 | +     "### Synchronizing training steps\n", |
| 144 | +     "A [global step](https://www.tensorflow.org/api_docs/python/tf/train/global_step) is a global variable shared between the instances. It is necessary for distributed training, so that the optimizer can keep track of the number of **training steps** across runs:\n", |
| 145 | + "\n", |
| 146 | + "```python\n", |
| 147 | + "train_op = optimizer.minimize(loss, tf.train.get_or_create_global_step())\n", |
| 148 | + "```\n", |
| 149 | + "\n", |
| 150 | +     "That is the only required change for distributed training! The sketch in the cell below puts the whole pattern together." |
| 151 | + ] |
| 152 | + }, |
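| 153 | +   { |
| 154 | +    "cell_type": "markdown", |
| 155 | +    "metadata": {}, |
| 156 | +    "source": [ |
| 157 | +     "To make the pattern concrete, here is a minimal sketch of a ```model_fn``` that walks through the six steps above, including the shared global step. The tiny fully-connected network is purely illustrative -- it is not the actual code in ```mnist.py```:" |
| 158 | +    ] |
| 159 | +   }, |
| 160 | +   { |
| 161 | +    "cell_type": "code", |
| 162 | +    "execution_count": null, |
| 163 | +    "metadata": {}, |
| 164 | +    "outputs": [], |
| 165 | +    "source": [ |
| 166 | +    "import tensorflow as tf\n", |
| 167 | +    "\n", |
| 168 | +    "def model_fn(features, labels, mode):\n", |
| 169 | +    "    # 1. Define a neural network (a deliberately tiny one here).\n", |
| 170 | +    "    net = tf.layers.flatten(features)\n", |
| 171 | +    "    net = tf.layers.dense(net, 128, activation=tf.nn.relu)\n", |
| 172 | +    "    # 2. Apply the features in the neural network.\n", |
| 173 | +    "    logits = tf.layers.dense(net, 10)\n", |
| 174 | +    "\n", |
| 175 | +    "    # 3. If the mode is PREDICT, return the output from the network.\n", |
| 176 | +    "    predictions = {'classes': tf.argmax(logits, axis=1),\n", |
| 177 | +    "                   'probabilities': tf.nn.softmax(logits)}\n", |
| 178 | +    "    if mode == tf.estimator.ModeKeys.PREDICT:\n", |
| 179 | +    "        return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)\n", |
| 180 | +    "\n", |
| 181 | +    "    # 4. Calculate the loss by comparing the output with the labels.\n", |
| 182 | +    "    loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)\n", |
| 183 | +    "\n", |
| 184 | +    "    # 5. Create an optimizer and minimize the loss. Passing the global\n", |
| 185 | +    "    #    step is the one change needed to keep distributed workers in sync.\n", |
| 186 | +    "    optimizer = tf.train.AdamOptimizer(learning_rate=1e-4)\n", |
| 187 | +    "    train_op = optimizer.minimize(loss, tf.train.get_or_create_global_step())\n", |
| 188 | +    "\n", |
| 189 | +    "    # 6. Return the output, loss and training op.\n", |
| 190 | +    "    return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions,\n", |
| 191 | +    "                                      loss=loss, train_op=train_op)" |
| 192 | +    ] |
| 193 | +   }, |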
| 153 | + { |
| 154 | + "cell_type": "markdown", |
| 155 | + "metadata": {}, |
| 156 | + "source": [ |
| 157 | + "## Create a training job using the sagemaker.TensorFlow estimator\n", |
| 158 | + "\n", |
| 159 | +     "The `TensorFlow` class allows us to run our training function on SageMaker. We need to configure it with our training script, an IAM role, the number of training instances, and the training instance type. The only difference from [tensorflow_distributed_mnist.ipynb](./tensorflow_distributed_mnist.ipynb) is that instead of ``train_instance_type='ml.c4.xlarge'``, we set ``train_instance_type='local'``. For local training with a GPU we would use ``local_gpu``; in this case, `instance_type` was already set above based on whether a GPU is present on this notebook instance.\n", |
| 160 | + "\n", |
| 161 | + "After we've constructed our `TensorFlow` object, we fit it using the data we uploaded to S3. Even though we're in local mode, using S3 as our data source makes sense because it maintains consistency with how SageMaker's distributed, managed training ingests data." |
| 162 | + ] |
| 163 | + }, |
| 164 | + { |
| 165 | + "cell_type": "code", |
| 166 | + "execution_count": null, |
| 167 | + "metadata": { |
| 168 | + "scrolled": false |
| 169 | + }, |
| 170 | + "outputs": [], |
| 171 | + "source": [ |
| 172 | + "from sagemaker.tensorflow import TensorFlow\n", |
| 173 | + "\n", |
| 174 | + "mnist_estimator = TensorFlow(entry_point='mnist.py',\n", |
| 175 | + " role=role,\n", |
| 176 | + " training_steps=10, \n", |
| 177 | + " evaluation_steps=10,\n", |
| 178 | + " train_instance_count=1,\n", |
| 179 | + " train_instance_type=instance_type)\n", |
| 180 | + "\n", |
| 181 | + "mnist_estimator.fit(inputs)" |
| 182 | + ] |
| 183 | + }, |
| 184 | + { |
| 185 | + "cell_type": "markdown", |
| 186 | + "metadata": {}, |
| 187 | + "source": [ |
| 188 | +     "The **```fit```** method runs the training job in a local container on this notebook instance (```train_instance_count=1```, so a single container). The logs above show the container doing training, evaluation, and incrementing the number of **training steps**.\n", |
| 189 | +     "\n", |
| 190 | +     "At the end of training, the job generates a saved model for TensorFlow Serving." |
| 191 | + ] |
| 192 | + }, |
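| 193 | +   { |
| 194 | +    "cell_type": "markdown", |
| 195 | +    "metadata": {}, |
| 196 | +    "source": [ |
| 197 | +     "If you want to see where that saved model was written, the estimator exposes the artifact location. This is a minimal check that assumes the SDK's ```model_data``` attribute, which points at the trained model archive on S3:" |
| 198 | +    ] |
| 199 | +   }, |
| 200 | +   { |
| 201 | +    "cell_type": "code", |
| 202 | +    "execution_count": null, |
| 203 | +    "metadata": {}, |
| 204 | +    "outputs": [], |
| 205 | +    "source": [ |
| 206 | +    "# S3 location of the model artifact produced by the training job\n", |
| 207 | +    "print(mnist_estimator.model_data)" |
| 208 | +    ] |
| 209 | +   }, |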
| 193 | + { |
| 194 | + "cell_type": "markdown", |
| 195 | + "metadata": { |
| 196 | + "collapsed": true |
| 197 | + }, |
| 198 | + "source": [ |
| 199 | + "# Deploy the trained model to prepare for predictions\n", |
| 200 | + "\n", |
| 201 | +     "The ```deploy()``` method creates an endpoint (in this case, a local one) that serves prediction requests in real time." |
| 202 | + ] |
| 203 | + }, |
| 204 | + { |
| 205 | + "cell_type": "code", |
| 206 | + "execution_count": null, |
| 207 | + "metadata": {}, |
| 208 | + "outputs": [], |
| 209 | + "source": [ |
| 210 | + "mnist_predictor = mnist_estimator.deploy(initial_instance_count=1,\n", |
| 211 | + " instance_type=instance_type)" |
| 212 | + ] |
| 213 | + }, |
| 214 | + { |
| 215 | + "cell_type": "markdown", |
| 216 | + "metadata": {}, |
| 217 | + "source": [ |
| 218 | + "# Invoking the endpoint" |
| 219 | + ] |
| 220 | + }, |
| 221 | + { |
| 222 | + "cell_type": "code", |
| 223 | + "execution_count": null, |
| 224 | + "metadata": {}, |
| 225 | + "outputs": [], |
| 226 | + "source": [ |
| 227 | +    "import numpy as np\n", |
| 228 | +    "import tensorflow as tf\n", |
| 229 | +    "from tensorflow.examples.tutorials.mnist import input_data\n", |
| 229 | + "\n", |
| 230 | + "mnist = input_data.read_data_sets(\"/tmp/data/\", one_hot=True)\n", |
| 231 | + "\n", |
| 232 | + "for i in range(10):\n", |
| 233 | + " data = mnist.test.images[i].tolist()\n", |
| 234 | + " tensor_proto = tf.make_tensor_proto(values=np.asarray(data), shape=[1, len(data)], dtype=tf.float32)\n", |
| 235 | + " predict_response = mnist_predictor.predict(tensor_proto)\n", |
| 236 | + " \n", |
| 237 | + " print(\"========================================\")\n", |
| 238 | + " label = np.argmax(mnist.test.labels[i])\n", |
| 239 | + " print(\"label is {}\".format(label))\n", |
| 240 | + " prediction = predict_response['outputs']['classes']['int64Val'][0]\n", |
| 241 | + " print(\"prediction is {}\".format(prediction))" |
| 242 | + ] |
| 243 | + }, |
| 244 | + { |
| 245 | + "cell_type": "markdown", |
| 246 | + "metadata": {}, |
| 247 | + "source": [ |
| 248 | + "# Clean-up\n", |
| 249 | + "\n", |
| 250 | + "Deleting the local endpoint when you're finished is important since you can only run one local endpoint at a time." |
| 251 | + ] |
| 252 | + }, |
| 253 | + { |
| 254 | + "cell_type": "code", |
| 255 | + "execution_count": null, |
| 256 | + "metadata": {}, |
| 257 | + "outputs": [], |
| 258 | + "source": [ |
| 259 | + "mnist_estimator.delete_endpoint()" |
| 260 | + ] |
| 261 | + } |
| 262 | + ], |
| 263 | + "metadata": { |
| 264 | + "kernelspec": { |
| 265 | + "display_name": "conda_tensorflow_p27", |
| 266 | + "language": "python", |
| 267 | + "name": "conda_tensorflow_p27" |
| 268 | + }, |
| 269 | + "language_info": { |
| 270 | + "codemirror_mode": { |
| 271 | + "name": "ipython", |
| 272 | + "version": 2 |
| 273 | + }, |
| 274 | + "file_extension": ".py", |
| 275 | + "mimetype": "text/x-python", |
| 276 | + "name": "python", |
| 277 | + "nbconvert_exporter": "python", |
| 278 | + "pygments_lexer": "ipython2", |
| 279 | + "version": "2.7.14" |
| 280 | + }, |
| 281 | + "notice": "Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License." |
| 282 | + }, |
| 283 | + "nbformat": 4, |
| 284 | + "nbformat_minor": 2 |
| 285 | +} |