Model serving
^^^^^^^^^^^^^

After the SageMaker model server has loaded your model, by calling either the default ``model_fn`` or the implementation in your script, SageMaker will serve your model.

Model serving is the process of responding to inference requests received by SageMaker InvokeEndpoint API calls.

Defining how to handle these requests can be done in one of two ways:

- using ``input_fn``, ``predict_fn``, and ``output_fn``, some of which may be your own implementations
- writing your own ``transform_fn`` for handling input processing, prediction, and output processing

Using ``input_fn``, ``predict_fn``, and ``output_fn``
'''''''''''''''''''''''''''''''''''''''''''''''''''''

The SageMaker MXNet model server breaks request handling into three steps:

- input processing
- prediction
- output processing

Just like with ``model_fn``, you configure these steps by defining functions in your Python source file.

Each step has its own Python function, which takes in information about the request and the return value from the previous function in the chain. Inside the SageMaker MXNet model server, the process looks like:

.. code:: python

    # Deserialize the Invoke request body into an object we can perform prediction on
    input_object = input_fn(request_body, request_content_type)

    # Perform prediction on the deserialized object, with the loaded model
    prediction = predict_fn(input_object, model)

    # Serialize the prediction result into the desired response content type
    output = output_fn(prediction, response_content_type)

The above code sample shows the three function definitions that correlate to the three steps mentioned above:

- ``input_fn``: Takes request data and deserializes the data into an object for prediction.
- ``predict_fn``: Takes the deserialized request object and performs inference against the loaded model.
- ``output_fn``: Takes the result of prediction and serializes this according to the response content type.

The SageMaker MXNet model server provides default implementations of these functions.
These work with common content types, and Gluon API and Module API model objects.
You can also provide your own implementations for these functions in your training script.
If you omit any definition, the SageMaker MXNet model server will use its default implementation for that function.

If you rely solely on the SageMaker MXNet model server defaults, you get the default functionality for each of these steps.

In the following sections we describe the default implementations of ``input_fn``, ``predict_fn``, and ``output_fn``. We describe the input arguments and expected return types of each, so you can define your own implementations.

Input processing
""""""""""""""""

When an InvokeEndpoint operation is made against an Endpoint running a SageMaker MXNet model server, the model server receives two pieces of information:

- The request's content type, for example "application/json"
- The request data body as a byte array

The SageMaker MXNet model server will invoke ``input_fn``, passing in this information. If you define your own ``input_fn``, it should return an object that can be passed to ``predict_fn`` and have the following signature:
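
.. code:: python

    def input_fn(request_body, request_content_type)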

Where ``request_body`` is a byte buffer and ``request_content_type`` is the content type of the request.

The SageMaker MXNet model server provides a default implementation of ``input_fn``. This function deserializes JSON or CSV encoded data into an MXNet ``NDArrayIter`` `(external API docs) <https://mxnet.incubator.apache.org/api/python/io.html#mxnet.io.NDArrayIter>`__ multi-dimensional array iterator. This works with the default ``predict_fn`` implementation, which expects an ``NDArrayIter`` as input.

Default JSON deserialization requires ``request_body`` contain a single JSON list. Sending multiple JSON objects within the same ``request_body`` is not supported. The list must have a dimensionality compatible with the MXNet ``net`` or ``Module`` object. Specifically, after the list is loaded, it's either padded or split to fit the first dimension of the model input shape. The list's shape must be identical to the model's input shape, for all dimensions after the first.

Default CSV deserialization requires ``request_body`` contain one or more lines of CSV numerical data. The data is loaded into a two-dimensional array, where each line break defines the boundaries of the first dimension. This two-dimensional array is then reshaped to be compatible with the shape expected by the model object. Specifically, the first dimension is kept unchanged, but the second dimension is reshaped to be consistent with the shape of all dimensions in the model, following the first dimension.
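
As an illustration, for a hypothetical model whose input shape has three values per instance, either of the following request bodies would satisfy the default deserializers:

.. code:: python

    # "application/json": a single JSON list
    json_body = '[[1.0, 2.0, 3.0], [4.0, 5.0, 6.0]]'

    # "text/csv": one line of numerical data per instance
    csv_body = '1.0,2.0,3.0\n4.0,5.0,6.0'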

If you provide your own implementation of ``input_fn``, you should abide by the ``input_fn`` signature. If you want to use this with the default ``predict_fn``, then you should return an ``NDArrayIter``. The ``NDArrayIter`` should have a shape identical to the shape of the model being predicted on. The example below shows a custom ``input_fn`` for preparing pickled numpy arrays.

.. code:: python

    import numpy as np
    from six import StringIO

    def input_fn(request_body, request_content_type):
        """An input_fn that loads a pickled numpy array"""
        if request_content_type == 'application/python-pickle':
            array = np.load(StringIO(request_body))
            return array
        else:
            # Handle other content types here, or raise an exception
            # if the content type is not supported.
            pass

Prediction
""""""""""

After the inference request has been deserialized by ``input_fn``, the SageMaker MXNet model server invokes ``predict_fn``. As with ``input_fn``, you can define your own ``predict_fn`` or use the SageMaker MXNet default.

If you implement your own prediction function, you should take care to ensure that what it returns is compatible with your ``output_fn``; if you use the default ``output_fn``, this should be an ``NDArrayIter``.
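
For example, a minimal ``predict_fn`` for a Module API model might look like the following sketch, which simply forwards the ``NDArrayIter`` produced by the default ``input_fn`` to the loaded module:

.. code:: python

    def predict_fn(input_object, model):
        # input_object is the NDArrayIter returned by input_fn;
        # model is the Module API object returned by model_fn.
        return model.predict(input_object)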

Output processing
"""""""""""""""""

After invoking ``predict_fn``, the model server invokes ``output_fn``, passing in the return value from ``predict_fn`` and the InvokeEndpoint requested response content type.

The ``output_fn`` has the following signature:

.. code:: python

    def output_fn(prediction, content_type)

Where ``prediction`` is the result of invoking ``predict_fn`` and ``content_type`` is the InvokeEndpoint requested response content type. The function should return a byte array of data serialized to the expected content type.

The default implementation expects ``prediction`` to be an ``NDArray`` and can serialize the result to either JSON or CSV. It accepts response content types of "application/json" and "text/csv".
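
If you write your own ``output_fn``, a minimal sketch for a JSON-only response (assuming ``prediction`` is an MXNet ``NDArray``) might look like:

.. code:: python

    import json

    def output_fn(prediction, content_type):
        # Serialize the NDArray prediction to the requested content type.
        if content_type == 'application/json':
            return json.dumps(prediction.asnumpy().tolist())
        raise ValueError('Unsupported content type: {}'.format(content_type))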

Using ``transform_fn``
''''''''''''''''''''''

If you would rather not structure your code around the three methods described above, you can also define your own ``transform_fn`` to handle inference requests. This function has the following signature:
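
.. code:: python

    def transform_fn(model, request_body, content_type, accept_type)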

Where ``model`` is the model object loaded by ``model_fn``, ``request_body`` is the data from the inference request, ``content_type`` is the content type of the request, and ``accept_type`` is the requested content type for the response.

This one function should handle processing the input, performing a prediction, and processing the output.
The return object should be one of the following:

- a tuple with two items: the response data and ``accept_type`` (the content type of the response data), or
- a Flask response object: http://flask.pocoo.org/docs/1.0/api/#response-objects

You can find examples of hosting scripts using this structure in the example notebooks, such as the `mxnet_gluon_sentiment <https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/mxnet_gluon_sentiment/sentiment.py#L344-L387>`__ notebook.
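
As an illustration only, a minimal ``transform_fn`` that accepts and returns JSON for a Gluon model object might look like the following sketch:

.. code:: python

    import json
    import mxnet as mx

    def transform_fn(model, request_body, content_type, accept_type):
        # Input processing: deserialize the JSON request body into an NDArray.
        data = mx.nd.array(json.loads(request_body))

        # Prediction: call the loaded Gluon net on the input data.
        prediction = model(data)

        # Output processing: serialize the prediction and return it with its content type.
        response_body = json.dumps(prediction.asnumpy().tolist())
        return response_body, accept_type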

Working with existing model data and training jobs