Commit 5980332

Author: Jonathan Esterhazy
Commit message: support CustomAttributes in local mode; doc fixes
Parent: 9bf0d3e

File tree

5 files changed (+19, -5 lines)

CHANGELOG.rst

Lines changed: 1 addition & 0 deletions
@@ -5,6 +5,7 @@ CHANGELOG
 1.14.2-dev
 ==========
 
+* bug-fix: support ``CustomAttributes`` argument in local mode ``invoke_endpoint`` requests
 * enhancement: add ``content_type`` parameter to ``sagemaker.tensorflow.serving.Predictor``
 * doc-fix: add TensorFlow Serving Container docs
 * doc-fix: fix rendering error in README.rst

src/sagemaker/local/local_session.py

Lines changed: 6 additions & 1 deletion
@@ -164,14 +164,19 @@ def __init__(self, config=None):
         self.config = config
         self.serving_port = get_config_value('local.serving_port', config) or 8080
 
-    def invoke_endpoint(self, Body, EndpointName, ContentType, Accept):  # pylint: disable=unused-argument
+    def invoke_endpoint(self, Body, EndpointName,  # pylint: disable=unused-argument
+                        ContentType, Accept, CustomAttributes):
         url = "http://localhost:%s/invocations" % self.serving_port
         headers = {
             'Content-type': ContentType
         }
+
         if Accept is not None:
             headers['Accept'] = Accept
 
+        if CustomAttributes is not None:
+            headers['X-Amzn-SageMaker-Custom-Attributes'] = CustomAttributes
+
         r = self.http.request('POST', url, body=Body, preload_content=False,
                               headers=headers)

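The header handling added in this patch can be exercised in isolation. Below is a minimal sketch; the standalone ``build_invocation_headers`` helper is hypothetical (the real logic lives inline in the patched ``invoke_endpoint`` method), but it mirrors how ``Accept`` and ``CustomAttributes`` are mapped onto HTTP headers for the local serving container:

```python
def build_invocation_headers(content_type, accept=None, custom_attributes=None):
    """Mirror the header logic of the patched local-mode invoke_endpoint."""
    headers = {'Content-type': content_type}
    if accept is not None:
        headers['Accept'] = accept
    if custom_attributes is not None:
        # forwarded to the container via the SageMaker custom-attributes header
        headers['X-Amzn-SageMaker-Custom-Attributes'] = custom_attributes
    return headers

print(build_invocation_headers('application/json',
                               custom_attributes='trace_id=abc123'))
```

Note that, as in the patch, an omitted ``CustomAttributes`` simply leaves the header out rather than sending an empty value.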
src/sagemaker/tensorflow/README.rst

Lines changed: 1 addition & 1 deletion
@@ -617,7 +617,7 @@ Note that TensorBoard is not supported when passing wait=False to ``fit``.
 Deploying TensorFlow Serving models
 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
-After a TensorFlow estimator has been fit, it saves a TensorFlow ``SavedModel`` in
+After a TensorFlow estimator has been fit, it saves a TensorFlow SavedModel in
 the S3 location defined by ``output_path``. You can call ``deploy`` on a TensorFlow
 estimator to create a SageMaker Endpoint.

src/sagemaker/tensorflow/deploying_python.rst

Lines changed: 3 additions & 2 deletions
@@ -94,8 +94,9 @@ The following code adds a prediction request to the previous code example:
 
 The ``predictor.predict`` method call takes one parameter, the input ``data`` for which you want the SageMaker Endpoint
 to provide inference. ``predict`` will serialize the input data, and send it as a request to the SageMaker Endpoint by
-an ``InvokeEndpoint`` SageMaker operation. ``InvokeEndpoint`` operation requests can be made by ``predictor.predict``, by
-boto3 ``sageMaker.runtime`` client or by AWS CLI.
+an ``InvokeEndpoint`` SageMaker operation. ``InvokeEndpoint`` operation requests can be made by ``predictor.predict``,
+by the boto3 `SageMakerRuntime <https://boto3.amazonaws.com/v1/documentation/api/latest/reference/services/sagemaker-runtime.html>`_
+client or by the AWS CLI.
 
 The SageMaker Endpoint web server will process the request, make an inference using the deployed model, and return a response.
 The ``result`` returned by ``predict`` is

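To make the boto3 route concrete, here is a hedged sketch of an ``InvokeEndpoint`` request. The endpoint name and payload are placeholders; the sketch only builds the keyword arguments, with the actual network call shown commented out since it requires configured AWS credentials and a live endpoint:

```python
import json

def invoke_endpoint_params(endpoint_name, data, content_type='application/json'):
    """Build keyword arguments for a boto3 sagemaker-runtime
    invoke_endpoint call (endpoint_name is a placeholder)."""
    return {
        'EndpointName': endpoint_name,
        'Body': json.dumps(data),
        'ContentType': content_type,
    }

params = invoke_endpoint_params('my-endpoint', {'instances': [1.0, 2.0]})

# With credentials configured, the request would be sent like this:
# import boto3
# runtime = boto3.client('sagemaker-runtime')
# response = runtime.invoke_endpoint(**params)
# result = response['Body'].read()
```

The same parameter names (``EndpointName``, ``Body``, ``ContentType``) map directly onto the ``aws sagemaker-runtime invoke-endpoint`` CLI options.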
src/sagemaker/tensorflow/deploying_tensorflow_serving.rst

Lines changed: 8 additions & 1 deletion
@@ -267,7 +267,7 @@ TensorFlow Serving Endpoints allow you to deploy multiple models to the same End
 To use this feature, you will need to:
 
 #. create a multi-model archive file
-#. create a SageMaker and deploy it to an Endpoint
+#. create a SageMaker Model and deploy it to an Endpoint
 #. create Predictor instances that direct requests to a specific model
 
 Creating a multi-model archive file
@@ -386,6 +386,13 @@ additional ``Predictor`` instances. Here's how:
         # get a predictor for 'model2'
         model2_predictor = Predictor(endpoint, model_name='model2')
 
+        # note: that will work for actual SageMaker endpoints, but if you are
+        # using local mode you need to create the new Predictor this way:
+        #
+        # model2_predictor = Predictor(endpoint, model_name='model2',
+        #                              sagemaker_session=predictor.sagemaker_session)
+
         # result is prediction from 'model2'
         result = model2_predictor.predict(...)

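The local-mode caveat in the hunk above comes down to session ownership: in local mode the serving container belongs to the session, so a second ``Predictor`` built with a fresh default session would not reach it. The stand-in classes below are hypothetical (not the real ``sagemaker`` API) and exist only to illustrate the session-sharing pattern:

```python
# Stand-in classes (illustrative only, not the sagemaker library) showing why
# local-mode predictors must share one session object.
class LocalSession:
    def __init__(self):
        # in real local mode, the serving container is tied to this session
        self.container = 'local serving container'

class Predictor:
    def __init__(self, endpoint, model_name, sagemaker_session=None):
        self.endpoint = endpoint
        self.model_name = model_name
        # without an explicit session, a new (non-local) one would be created
        self.sagemaker_session = sagemaker_session or object()

session = LocalSession()
model1 = Predictor('endpoint', 'model1', sagemaker_session=session)
# reuse the first predictor's session so requests reach the same container
model2 = Predictor('endpoint', 'model2',
                   sagemaker_session=model1.sagemaker_session)
print(model2.sagemaker_session is session)  # → True
```

Against a real SageMaker endpoint the explicit ``sagemaker_session`` argument is unnecessary, which is why the diff only recommends it for local mode.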