
Commit a3230ca

Add EI documentation within README (#161)
1 parent 95472b5 commit a3230ca

1 file changed: +41 −5 lines changed

README.rst

Lines changed: 41 additions & 5 deletions
@@ -14,12 +14,11 @@ SDK <https://github.com/aws/sagemaker-python-sdk>`__.
 For notebook examples: `SageMaker Notebook
 Examples <https://github.com/awslabs/amazon-sagemaker-examples>`__.
 
+-----------------
 Table of Contents
 -----------------
-
-#. `Getting Started <#getting-started>`__
-#. `Building your Image <#building-your-image>`__
-#. `Running the tests <#running-the-tests>`__
+.. contents::
+   :local:
 
 Getting Started
 ---------------
@@ -143,7 +142,7 @@ Then run:
 ::
 
     # Example
-    docker build -t preprod-tensorflow:1.6.0-cpu-py2 --build-arg py_version=2
+    docker build -t preprod-tensorflow:1.6.0-cpu-py2 --build-arg py_version=2 \
     --build-arg framework_installable=tensorflow-1.6.0-cp27-cp27mu-manylinux1_x86_64.whl -f Dockerfile.cpu .
 
 The dockerfiles for 1.4 and 1.5 build from source instead, so when building those, you don't need to download the wheel beforehand:
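The build examples above follow a consistent tag convention (``preprod-tensorflow:<framework_version>-<device>-py<py_version>``). A minimal sketch of scripting that convention; the variable names are illustrative and not part of the repository, and the command is printed rather than executed:

```shell
#!/bin/sh
# Compose the image tag used by the build examples:
#   preprod-tensorflow:<framework_version>-<device>-py<py_version>
FRAMEWORK_VERSION=1.6.0
DEVICE=cpu        # cpu or gpu, matching Dockerfile.cpu / Dockerfile.gpu
PY_VERSION=2

IMAGE_TAG="preprod-tensorflow:${FRAMEWORK_VERSION}-${DEVICE}-py${PY_VERSION}"
WHEEL="tensorflow-${FRAMEWORK_VERSION}-cp27-cp27mu-manylinux1_x86_64.whl"

# Print the build command rather than running it, so the sketch is a safe dry run.
echo docker build -t "$IMAGE_TAG" \
  --build-arg py_version="$PY_VERSION" \
  --build-arg framework_installable="$WHEEL" \
  -f "Dockerfile.${DEVICE}" .
```

Changing ``DEVICE`` to ``gpu`` produces the GPU variant of the same command (built against ``Dockerfile.gpu``).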
@@ -168,6 +167,43 @@ The dockerfiles for 1.4 and 1.5 build from source instead, so when building thos
     # GPU
     docker build -t preprod-tensorflow:1.4.1-gpu-py2 -f Dockerfile.gpu .
 
+Amazon Elastic Inference with TensorFlow serving in SageMaker
+-------------------------------------------------------------
+`Amazon Elastic Inference <https://aws.amazon.com/machine-learning/elastic-inference/>`__ allows you to attach
+low-cost GPU-powered acceleration to Amazon EC2 and Amazon SageMaker instances to reduce the cost of running deep
+learning inference by up to 75%. Currently, Amazon Elastic Inference supports TensorFlow, Apache MXNet, and ONNX
+models, with more frameworks coming soon.
+
+Using TensorFlow serving with Amazon Elastic Inference in SageMaker is supported in the public SageMaker TensorFlow containers.
+
+* For information on how to use the Python SDK to create an endpoint with Amazon Elastic Inference and TensorFlow serving in SageMaker, see `Deploying from an Estimator <https://github.com/aws/sagemaker-python-sdk/blob/master/src/sagemaker/tensorflow/deploying_tensorflow_serving.rst#deploying-from-an-estimator>`__.
+* For information on how Amazon Elastic Inference works, see `How EI Works <https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html#ei-how-it-works>`__.
+* For more information about using Amazon Elastic Inference in SageMaker, see `Amazon SageMaker Elastic Inference <https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html>`__.
+* For notebook examples on how to use Amazon Elastic Inference with TensorFlow serving through the Python SDK in SageMaker, see `EI Sample Notebooks <https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html#ei-intro-sample-nb>`__.
+
+Building the SageMaker Elastic Inference TensorFlow serving container
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Amazon Elastic Inference is designed to be used with AWS-enhanced versions of TensorFlow serving or Apache MXNet. These enhanced
+versions of the frameworks are automatically built into containers when you use the Amazon SageMaker Python SDK, or you can
+download them as binary files and import them into your own Docker containers. The enhanced TensorFlow serving binaries are available on Amazon S3 at https://s3.console.aws.amazon.com/s3/buckets/amazonei-tensorflow.
+
+The SageMaker TensorFlow containers with Amazon Elastic Inference support are built from the
+`EI Dockerfile <https://github.com/aws/sagemaker-tensorflow-container/blob/master/docker/1.12.0/final/py2/Dockerfile.ei>`__, starting with TensorFlow 1.12.0.
+
+The instructions for building the SageMaker TensorFlow containers with Amazon Elastic Inference support are similar to the steps `above <https://github.com/aws/sagemaker-tensorflow-container#final-images>`__.
+
+The only difference is the additional ``tensorflow_model_server`` build-arg, which passes in the enhanced version of TensorFlow serving.
+
+::
+
+    # Example
+    docker build -t preprod-tensorflow-ei:1.12.0-cpu-py2 --build-arg py_version=2 \
+    --build-arg tensorflow_model_server=AmazonEI_TensorFlow_Serving_v1.12_v1 \
+    --build-arg framework_installable=tensorflow-1.12.0-cp27-cp27mu-manylinux1_x86_64.whl -f Dockerfile.cpu .
+
+
+* For information about downloading the enhanced versions of TensorFlow serving, see `Using TensorFlow Models with Amazon EI <https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/ei-tensorflow.html>`__.
+* For information on which versions of TensorFlow serving are supported for Elastic Inference within SageMaker, see `TensorFlow SageMaker Estimators and Models <https://github.com/aws/sagemaker-python-sdk/tree/master/src/sagemaker/tensorflow#tensorflow-sagemaker-estimators-and-models>`__.
 
 Running the tests
 -----------------
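The EI build command added in this commit differs from the standard build only by the ``tensorflow_model_server`` build-arg (note the ``=`` between the build-arg name and its value). A minimal sketch of composing that command as a dry run; the variable names are illustrative, and the serving binary name is the one from the commit's example:

```shell
#!/bin/sh
# Compose the EI build command from the commit's example; print it, don't run it.
FRAMEWORK_VERSION=1.12.0
PY_VERSION=2
# Enhanced TensorFlow serving binary name, taken from the example above.
EI_MODEL_SERVER="AmazonEI_TensorFlow_Serving_v1.12_v1"

IMAGE_TAG="preprod-tensorflow-ei:${FRAMEWORK_VERSION}-cpu-py${PY_VERSION}"
WHEEL="tensorflow-${FRAMEWORK_VERSION}-cp27-cp27mu-manylinux1_x86_64.whl"

# Note the '=' in tensorflow_model_server=...; docker build requires it
# when passing a value inline with --build-arg.
BUILD_CMD="docker build -t ${IMAGE_TAG} --build-arg py_version=${PY_VERSION} --build-arg tensorflow_model_server=${EI_MODEL_SERVER} --build-arg framework_installable=${WHEEL} -f Dockerfile.cpu ."

# Dry run: print the command instead of executing it.
echo "$BUILD_CMD"
```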
