@@ -32,13 +32,17 @@ Table of Contents
4. `TensorFlow SageMaker Estimators <#tensorflow-sagemaker-estimators>`__
5. `Chainer SageMaker Estimators <#chainer-sagemaker-estimators>`__
6. `PyTorch SageMaker Estimators <#pytorch-sagemaker-estimators>`__
- 7. `AWS SageMaker Estimators <#aws-sagemaker-estimators>`__
- 8. `BYO Docker Containers with SageMaker Estimators <#byo-docker-containers-with-sagemaker-estimators>`__
- 9. `SageMaker Automatic Model Tuning <#sagemaker-automatic-model-tuning>`__
- 10. `SageMaker Batch Transform <#sagemaker-batch-transform>`__
- 11. `Secure Training and Inference with VPC <#secure-training-and-inference-with-vpc>`__
- 12. `BYO Model <#byo-model>`__
- 13. `SageMaker Workflow <#sagemaker-workflow>`__
+ 7. `SageMaker SparkML Serving <#sagemaker-sparkml-serving>`__
+ 8. `AWS SageMaker Estimators <#aws-sagemaker-estimators>`__
+ 9. `Using SageMaker AlgorithmEstimators <#using-sagemaker-algorithmestimators>`__
+ 10. `Consuming SageMaker Model Packages <#consuming-sagemaker-model-packages>`__
+ 11. `BYO Docker Containers with SageMaker Estimators <#byo-docker-containers-with-sagemaker-estimators>`__
+ 12. `SageMaker Automatic Model Tuning <#sagemaker-automatic-model-tuning>`__
+ 13. `SageMaker Batch Transform <#sagemaker-batch-transform>`__
+ 14. `Secure Training and Inference with VPC <#secure-training-and-inference-with-vpc>`__
+ 15. `BYO Model <#byo-model>`__
+ 16. `Inference Pipelines <#inference-pipelines>`__
+ 17. `SageMaker Workflow <#sagemaker-workflow>`__

Installing the SageMaker Python SDK
@@ -342,7 +346,7 @@ Currently, the following algorithms support incremental training:
- Image Classification
- Object Detection
- - Semantics Segmentation
+ - Semantic Segmentation

MXNet SageMaker Estimators
@@ -374,7 +378,7 @@ For more information, see `TensorFlow SageMaker Estimators and Models`_.
Chainer SageMaker Estimators
- ------------------------------
+ ----------------------------

By using Chainer SageMaker ``Estimators``, you can train and host Chainer models on Amazon SageMaker.

@@ -390,7 +394,7 @@ For more information about Chainer SageMaker ``Estimators``, see `Chainer SageM
PyTorch SageMaker Estimators
- ------------------------------
+ ----------------------------

With PyTorch SageMaker ``Estimators``, you can train and host PyTorch models on Amazon SageMaker.

@@ -408,6 +412,39 @@ For more information about PyTorch SageMaker ``Estimators``, see `PyTorch SageMa
.. _PyTorch SageMaker Estimators and Models: src/sagemaker/pytorch/README.rst

+ SageMaker SparkML Serving
+ -------------------------
+
+ With SageMaker SparkML Serving, you can now perform predictions against a SparkML model in SageMaker.
+ To host a SparkML model in SageMaker, it must be serialized with the ``MLeap`` library.
+
+ For more information on MLeap, see https://github.com/combust/mleap.
+
+ Supported major version of Spark: 2.2 (MLeap version 0.9.6)
+
+ Here is an example of how to create an instance of the ``SparkMLModel`` class and use the ``deploy()`` method to create an
+ endpoint that can be used to perform predictions against your trained SparkML model.
+
+ .. code:: python
+
+     sparkml_model = SparkMLModel(model_data='s3://path/to/model.tar.gz', env={'SAGEMAKER_SPARKML_SCHEMA': schema})
+     model_name = 'sparkml-model'
+     endpoint_name = 'sparkml-endpoint'
+     predictor = sparkml_model.deploy(initial_instance_count=1, instance_type='ml.c4.xlarge', endpoint_name=endpoint_name)
+
+ Once the model is deployed, we can invoke the endpoint with a ``CSV`` payload like this:
+
+ .. code:: python
+
+     payload = 'field_1,field_2,field_3,field_4,field_5'
+     predictor.predict(payload)
+
+ For more information about the different ``content-type`` and ``Accept`` formats as well as the structure of the
+ ``schema`` that SageMaker SparkML Serving recognizes, please see `SageMaker SparkML Serving Container`_.
+
+ .. _SageMaker SparkML Serving Container: https://github.com/aws/sagemaker-sparkml-serving-container
+
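Since the endpoint takes a flat CSV string, the payload can be assembled from an in-memory feature list with plain string handling. A minimal illustrative sketch (the helper name and field values are hypothetical, not part of the SDK):

```python
def to_csv_payload(features):
    """Join feature values into the flat CSV string the endpoint expects."""
    return ','.join(str(f) for f in features)

# A hypothetical five-field record, matching the payload shape above.
payload = to_csv_payload([1.0, 'foo', 3, 'bar', 5.5])
print(payload)  # 1.0,foo,3,bar,5.5
```

Note that numeric values are rendered with Python's default ``str()`` formatting; apply your own formatting first if the schema expects a specific precision.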

AWS SageMaker Estimators
------------------------

Amazon SageMaker provides several built-in machine learning algorithms that you can use to solve a variety of problems.
@@ -421,6 +458,59 @@ For more information, see `AWS SageMaker Estimators and Models`_.
.. _AWS SageMaker Estimators and Models: src/sagemaker/amazon/README.rst

+ Using SageMaker AlgorithmEstimators
+ -----------------------------------
+
+ With the SageMaker Algorithm entities, you can create training jobs with just an ``algorithm_arn`` instead of
+ a training image. There is a dedicated ``AlgorithmEstimator`` class that accepts ``algorithm_arn`` as a
+ parameter; the rest of the arguments are similar to those of the other ``Estimator`` classes. This class also allows you to
+ consume algorithms that you have subscribed to in the AWS Marketplace. The ``AlgorithmEstimator`` performs
+ client-side validation on your inputs based on the algorithm's properties.
+
+ Here is an example:
+
+ .. code:: python
+
+     import sagemaker
+
+     algo = sagemaker.AlgorithmEstimator(
+         algorithm_arn='arn:aws:sagemaker:us-west-2:1234567:algorithm/some-algorithm',
+         role='SageMakerRole',
+         train_instance_count=1,
+         train_instance_type='ml.c4.xlarge')
+
+     train_input = algo.sagemaker_session.upload_data(path='/path/to/your/data')
+
+     algo.fit({'training': train_input})
+     algo.deploy(1, 'ml.m4.xlarge')
+
+     # When you are done using your endpoint
+     algo.delete_endpoint()
+
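The ``algorithm_arn`` in the example follows the standard ARN layout ``arn:aws:sagemaker:<region>:<account>:algorithm/<name>``. Under that assumption, a small pure-Python sketch (no AWS calls; the helper is hypothetical, useful e.g. for logging or pre-flight validation) of pulling those pieces apart:

```python
def parse_algorithm_arn(arn):
    """Split a SageMaker algorithm ARN into region, account, and algorithm name."""
    parts = arn.split(':', 5)  # arn, partition, service, region, account, resource
    if len(parts) != 6:
        raise ValueError('not an ARN: %s' % arn)
    resource_type, _, name = parts[5].partition('/')
    if parts[2] != 'sagemaker' or resource_type != 'algorithm':
        raise ValueError('not a SageMaker algorithm ARN: %s' % arn)
    return {'region': parts[3], 'account': parts[4], 'name': name}

info = parse_algorithm_arn('arn:aws:sagemaker:us-west-2:1234567:algorithm/some-algorithm')
print(info['name'])  # some-algorithm
```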
+ Consuming SageMaker Model Packages
+ ----------------------------------
+
+ SageMaker Model Packages are a way to specify and share information for how to create SageMaker Models.
+ With a SageMaker Model Package that you have created or subscribed to in the AWS Marketplace,
+ you can use the specified serving image and model data for Endpoints and Batch Transform jobs.
+
+ To work with a SageMaker Model Package, use the ``ModelPackage`` class.
+
+ Here is an example:
+
+ .. code:: python
+
+     import sagemaker
+
+     model = sagemaker.ModelPackage(
+         role='SageMakerRole',
+         model_package_arn='arn:aws:sagemaker:us-west-2:123456:model-package/my-model-package')
+     model.deploy(1, 'ml.m4.xlarge', endpoint_name='my-endpoint')
+
+     # When you are done using your endpoint
+     model.sagemaker_session.delete_endpoint('my-endpoint')
+

BYO Docker Containers with SageMaker Estimators
-----------------------------------------------
@@ -435,7 +525,7 @@ Please refer to the full example in the examples repo:
git clone https://github.com/awslabs/amazon-sagemaker-examples.git

- The example notebook is is located here:
+ The example notebook is located here:
``advanced_functionality/scikit_bring_your_own/scikit_bring_your_own.ipynb``
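As a hedged sketch of the container contract that notebook walks through: SageMaker starts your image with the argument ``train`` for training jobs and ``serve`` for hosting, so both executables must be on the container's ``PATH``. The base image, dependencies, and file names below are hypothetical, not taken from the notebook:

```dockerfile
FROM python:3.6

# Hypothetical dependencies for a scikit-learn-style container.
RUN pip install numpy scikit-learn flask

# SageMaker runs "docker run <image> train" and "docker run <image> serve";
# training data appears under /opt/ml/input/data and the model artifact is
# written to /opt/ml/model.
COPY train /usr/local/bin/train
COPY serve /usr/local/bin/serve
RUN chmod +x /usr/local/bin/train /usr/local/bin/serve

WORKDIR /opt/ml
```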
@@ -709,11 +799,45 @@ This returns a predictor the same way an ``Estimator`` does when ``deploy()`` is
A full example is available in the `Amazon SageMaker examples repository <https://github.com/awslabs/amazon-sagemaker-examples/tree/master/advanced_functionality/mxnet_mnist_byom>`__.

+ Inference Pipelines
+ -------------------
+
+ You can create a Pipeline for realtime or batch inference comprising one or more model containers. This helps
+ you deploy an ML pipeline behind a single endpoint: one API call performs pre-processing, model scoring,
+ and post-processing on your data before returning it as the response.
+
+ To do this, create a ``PipelineModel``, which takes a list of ``Model`` objects. Calling ``deploy()`` on the
+ ``PipelineModel`` provides you with an endpoint that can be invoked to run a data point through
+ the ML pipeline.
+
+ .. code:: python
+
+     xgb_image = get_image_uri(sess.boto_region_name, 'xgboost', repo_version="latest")
+     xgb_model = Model(model_data='s3://path/to/model.tar.gz', image=xgb_image)
+     sparkml_model = SparkMLModel(model_data='s3://path/to/model.tar.gz', env={'SAGEMAKER_SPARKML_SCHEMA': schema})
+
+     model_name = 'inference-pipeline-model'
+     endpoint_name = 'inference-pipeline-endpoint'
+     sm_model = PipelineModel(name=model_name, role=sagemaker_role, models=[sparkml_model, xgb_model])
+
+ This defines a ``PipelineModel`` consisting of a SparkML model and an XGBoost model stacked sequentially. For more
+ information about how to train an XGBoost model, please refer to the XGBoost notebook here_.
+
+ .. _here: https://docs.aws.amazon.com/sagemaker/latest/dg/xgboost.html#xgboost-sample-notebooks
+
+ .. code:: python
+
+     sm_model.deploy(initial_instance_count=1, instance_type='ml.c5.xlarge', endpoint_name=endpoint_name)
+
+ This returns a predictor the same way an ``Estimator`` does when ``deploy()`` is called. Whenever you make an inference
+ request using this predictor, you should pass the data that the first container expects; the predictor returns the
+ output from the last container.
+
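The container-chaining semantics described above (the first container sees the request, each container's output feeds the next, and the last container's output is the response) can be illustrated with a toy, pure-Python sketch. The stages here are hypothetical stand-ins for the SparkML and XGBoost containers, not real model invocations:

```python
def preprocess(csv_row):
    # Stand-in for the SparkML container: parse a CSV row into floats.
    return [float(v) for v in csv_row.split(',')]

def score(features):
    # Stand-in for the XGBoost container: a dummy "model" that sums features.
    return sum(features)

def pipeline_invoke(payload, stages):
    # Each stage's output becomes the next stage's input,
    # as with containers behind a PipelineModel endpoint.
    for stage in stages:
        payload = stage(payload)
    return payload

result = pipeline_invoke('1.0,2.0,3.5', [preprocess, score])
print(result)  # 6.5
```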
SageMaker Workflow
------------------

You can use Apache Airflow to author, schedule and monitor SageMaker workflows.

For more information, see `SageMaker Workflow in Apache Airflow`_.

- .. _SageMaker Workflow in Apache Airflow: src/sagemaker/workflow/README.rst
+ .. _SageMaker Workflow in Apache Airflow: src/sagemaker/workflow/README.rst