doc/using_mxnet.rst
Model serving
^^^^^^^^^^^^^

After the SageMaker model server loads your model by calling either the default ``model_fn`` or the implementation in your script, SageMaker serves your model.
Model serving is the process of responding to inference requests received by SageMaker ``InvokeEndpoint`` API calls.
Defining how to handle these requests can be done in one of two ways:

- using ``input_fn``, ``predict_fn``, and ``output_fn``, some of which may be your own implementations
- writing your own ``transform_fn`` for handling input processing, prediction, and output processing

Using ``input_fn``, ``predict_fn``, and ``output_fn``
''''''''''''''''''''''''''''''''''''''''''''''''''''''''''''

The SageMaker MXNet model server breaks request handling into three steps:

- input processing
- prediction
- output processing

Just like with ``model_fn``, you configure these steps by defining functions in your Python source file.

Each step has its own Python function, which takes in information about the request and the return value from the previous function in the chain.
Inside the SageMaker MXNet model server, the process looks like:

.. code:: python

    # Deserialize the Invoke request body into an object we can perform prediction on
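    # (The rest of this block is a sketch, based on the function descriptions below.)
    input_object = input_fn(request_body, request_content_type)

    # Perform prediction on the deserialized object, with the loaded model
    prediction = predict_fn(input_object, model)

    # Serialize the prediction result into the desired response content type
    output = output_fn(prediction, response_content_type)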

The above code sample shows the three function definitions that correlate to the three steps mentioned above:

- ``input_fn``: Takes request data and deserializes the data into an
  object for prediction.
- ``predict_fn``: Takes the deserialized request data and performs
  inference against the loaded model.
- ``output_fn``: Takes the result of prediction and serializes this
  according to the response content type.

The SageMaker MXNet model server provides default implementations of these functions.
These work with both Gluon API and Module API model objects.
You can also provide your own implementations for these functions in your training script.
If you omit any definition then the SageMaker MXNet model server will use its default implementation for that function.

If you rely solely on the SageMaker MXNet model server defaults, you get the following functionality:

In the following sections we describe the default implementations of ``input_fn``, ``predict_fn``, and ``output_fn``. We describe the input arguments and expected return types of each, so you can define your own implementations.

Input processing
""""""""""""""""

When an ``InvokeEndpoint`` operation is made against an Endpoint running a SageMaker MXNet model server, the model server receives two pieces of information:

- The request's content type, for example "application/json"
- The request data body as a byte array

The SageMaker MXNet model server will invoke ``input_fn``, passing in this information. If you define your own ``input_fn``, it should return an object that can be passed to ``predict_fn`` and have the following signature:
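
A minimal sketch of that signature, assuming only the two arguments described next:

.. code:: python

    def input_fn(request_body, request_content_type):
        # Deserialize ``request_body`` (a byte buffer) according to ``request_content_type``
        # and return an object that ``predict_fn`` can consume.
        ...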

Where ``request_body`` is a byte buffer and ``request_content_type`` is the content type of the request.

The SageMaker MXNet model server provides a default implementation of ``input_fn``. This function deserializes JSON or CSV encoded data into an MXNet ``NDArrayIter`` `(external API docs) <https://mxnet.incubator.apache.org/api/python/io.html#mxnet.io.NDArrayIter>`__ multi-dimensional array iterator. This works with the default ``predict_fn`` implementation, which expects an ``NDArrayIter`` as input.

Default JSON deserialization requires ``request_body`` contain a single JSON list. Sending multiple JSON objects within the same ``request_body`` is not supported. The list must have a dimensionality compatible with the MXNet ``net`` or ``Module`` object. Specifically, after the list is loaded, it's either padded or split to fit the first dimension of the model input shape. The list's shape must be identical to the model's input shape, for all dimensions after the first.
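
For example, for a model whose input shape is ``(batch_size, 2)``, a JSON request body could be produced as follows (a sketch; the input shape is an assumption, not taken from this document):

.. code:: python

    import json

    import numpy as np

    # Two rows of two features each; the outer list maps to the first (batch) dimension
    data = np.array([[1.0, 2.0], [3.0, 4.0]])
    request_body = json.dumps(data.tolist())  # a single JSON list, as the default input_fn expects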

Default CSV deserialization requires ``request_body`` contain one or more lines of CSV numerical data. The data is loaded into a two-dimensional array, where each line break defines the boundaries of the first dimension. This two-dimensional array is then reshaped to be compatible with the shape expected by the model object. Specifically, the first dimension is kept unchanged, but the second dimension is reshaped to be consistent with the shape of all dimensions in the model, following the first dimension.

If you provide your own implementation of ``input_fn``, you should abide by the ``input_fn`` signature. If you want to use this with the default ``predict_fn``, then you should return an ``NDArrayIter``. The ``NDArrayIter`` should have a shape identical to the shape of the model being predicted on. The example below shows a custom ``input_fn`` for preparing pickled numpy arrays.

.. code:: python

    import numpy as np
    from io import BytesIO

    def input_fn(request_body, request_content_type):
        """An input_fn that loads a pickled numpy array"""
        if request_content_type == 'application/python-pickle':
            # request_body is a byte buffer, so wrap it before loading
            array = np.load(BytesIO(request_body))
            return array
        else:
            # Handle other content types here, or raise an exception if unsupported
            pass
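
On the client side, a matching request body could be built like this (a sketch; how the bytes are sent to the endpoint is left out):

.. code:: python

    from io import BytesIO

    import numpy as np

    buffer = BytesIO()
    np.save(buffer, np.ones((1, 2)))  # serialize the array into the buffer
    request_body = buffer.getvalue()  # bytes to send with content type 'application/python-pickle'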

Prediction
""""""""""

After the inference request has been deserialized by ``input_fn``, the SageMaker MXNet model server invokes ``predict_fn``. As with ``input_fn``, you can define your own ``predict_fn`` or use the SageMaker MXNet default.

If you implement your own prediction function, you should take care to ensure that its return value is compatible with ``output_fn``; if you use the default ``output_fn``, this should be an ``NDArrayIter``.
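
For illustration only (a Gluon model is an assumption, not something this section prescribes), a custom ``predict_fn`` might look like:

.. code:: python

    def predict_fn(input_object, model):
        # ``model`` is assumed to be a Gluon network returned by ``model_fn``;
        # calling it on the NDArray produced by a custom ``input_fn`` runs a forward pass.
        return model(input_object)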

Output processing
"""""""""""""""""

After invoking ``predict_fn``, the model server invokes ``output_fn``, passing in the return value from ``predict_fn`` and the ``InvokeEndpoint`` requested response content type.

The ``output_fn`` has the following signature:

.. code:: python

    def output_fn(prediction, content_type)

Where ``prediction`` is the result of invoking ``predict_fn`` and ``content_type`` is the requested response content type for ``InvokeEndpoint``.
The function should return an array of bytes serialized to the expected content type.

The default implementation expects ``prediction`` to be an ``NDArray`` and can serialize the result to either JSON or CSV. It accepts response content types of "application/json" and "text/csv".
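
As an illustration (not taken from this document), a custom ``output_fn`` that serializes an ``NDArray`` prediction to JSON might look like:

.. code:: python

    import json

    def output_fn(prediction, content_type):
        # ``prediction`` is assumed to be an mx.nd.NDArray
        if content_type == 'application/json':
            return json.dumps(prediction.asnumpy().tolist()).encode('utf-8')
        raise ValueError('Unsupported content type: {}'.format(content_type))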

Using ``transform_fn``
''''''''''''''''''''''

If you would rather not structure your code around the three methods described above, you can instead define your own ``transform_fn`` to handle inference requests.
This will override any implementation of ``input_fn``, ``predict_fn``, or ``output_fn``.
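
A minimal sketch of that signature, assuming only the four arguments described next:

.. code:: python

    def transform_fn(model, request_body, content_type, accept_type):
        # Handle input processing, prediction, and output processing in one call.
        ...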

Where ``model`` is the model object loaded by ``model_fn``, ``request_body`` is the data from the inference request, ``content_type`` is the content type of the request, and ``accept_type`` is the requested content type for the response.

This one function should handle processing the input, performing a prediction, and processing the output.
The return object should be one of the following:

- a tuple with two items: the response data and ``accept_type`` (the content type of the response data), or
- a Flask response object: http://flask.pocoo.org/docs/1.0/api/#response-objects

You can find examples of hosting scripts using this structure in the example notebooks, such as the `mxnet_gluon_sentiment <https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/mxnet_gluon_sentiment/sentiment.py#L344-L387>`__ notebook.
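
For illustration only, and assuming a Gluon model with JSON input and output (neither is prescribed here), a ``transform_fn`` returning the tuple form might look like:

.. code:: python

    import json

    import mxnet as mx

    def transform_fn(model, request_body, content_type, accept_type):
        # Input processing: assume the request body is a single JSON list
        data = mx.nd.array(json.loads(request_body))

        # Prediction: assume ``model`` is a Gluon network returned by ``model_fn``
        prediction = model(data)

        # Output processing: serialize the result as JSON
        # (assumes ``accept_type`` is 'application/json')
        response_body = json.dumps(prediction.asnumpy().tolist())
        return response_body, accept_type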

Working with existing model data and training jobs