Commit 4732545

Move EI docs from TFS Rest API docs to TFS Python Docs (#637)
1 parent 69c45ce commit 4732545

File tree

2 files changed: +9 −9 lines changed

src/sagemaker/tensorflow/deploying_python.rst

Lines changed: 9 additions & 0 deletions
@@ -25,6 +25,15 @@ like this:

 The code block above deploys a SageMaker Endpoint with one instance of the type 'ml.c4.xlarge'.

+TensorFlow Serving on SageMaker supports `Elastic Inference <https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html>`_, which provides inference acceleration for a hosted endpoint at a fraction of the cost of a full GPU instance. To attach an Elastic Inference accelerator to your endpoint, pass the accelerator type as the ``accelerator_type`` argument in your ``deploy`` call.
+
+.. code:: python
+
+    predictor = estimator.deploy(initial_instance_count=1,
+                                 instance_type='ml.c5.xlarge',
+                                 accelerator_type='ml.eia1.medium',
+                                 endpoint_type='tensorflow-serving-elastic-inference')
+
 What happens when deploy is called
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
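The endpoint deployed above speaks TensorFlow Serving's REST predict protocol, in which inputs are wrapped in a JSON object under an ``instances`` key. As a minimal sketch (the feature vector here is a made-up example, not from the commit), the payload the predictor sends might be built like this:

```python
import json

# TensorFlow Serving's REST API expects a JSON body with an "instances" key;
# each element is one input example (here, a single 4-feature vector).
payload = {"instances": [[1.0, 2.0, 3.0, 4.0]]}
body = json.dumps(payload)

# The serialized body is what gets POSTed to the endpoint; the response
# comes back as a JSON object with a "predictions" key.
print(body)
```

The round trip is symmetric: deserializing the body recovers the original payload structure.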

src/sagemaker/tensorflow/deploying_tensorflow_serving.rst

Lines changed: 0 additions & 9 deletions
@@ -34,15 +34,6 @@ estimator object to create a SageMaker Endpoint:

 The code block above deploys a SageMaker Endpoint with one instance of the type 'ml.c5.xlarge'.

-TensorFlow Serving on SageMaker supports `Elastic Inference <https://docs.aws.amazon.com/sagemaker/latest/dg/ei.html>`_, which provides inference acceleration for a hosted endpoint at a fraction of the cost of a full GPU instance. To attach an Elastic Inference accelerator to your endpoint, pass the accelerator type as the ``accelerator_type`` argument in your ``deploy`` call.
-
-.. code:: python
-
-    predictor = estimator.deploy(initial_instance_count=1,
-                                 instance_type='ml.c5.xlarge',
-                                 accelerator_type='ml.eia1.medium',
-                                 endpoint_type='tensorflow-serving-elastic-inference')
-
 What happens when deploy is called
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
