
Commit 425ecff

Merge branch 'master' into estimator-hyperparam-security-note
2 parents 6e1141a + 907f4ff commit 425ecff

80 files changed (+3296 lines, −1374 lines)


.github/PULL_REQUEST_TEMPLATE.md

Lines changed: 1 addition & 1 deletion

@@ -11,7 +11,7 @@ _Put an `x` in the boxes that apply. You can also fill these out after creating
 #### General

 - [ ] I have read the [CONTRIBUTING](https://github.com/aws/sagemaker-python-sdk/blob/master/CONTRIBUTING.md) doc
-- [ ] I certify that the changes I am introducing will be backword compatible, and I have discussed concerns about this, if any, with the Python SDK team
+- [ ] I certify that the changes I am introducing will be backward compatible, and I have discussed concerns about this, if any, with the Python SDK team
 - [ ] I used the commit message format described in [CONTRIBUTING](https://github.com/aws/sagemaker-python-sdk/blob/master/CONTRIBUTING.md#committing-your-change)
 - [ ] I have passed the region in to all S3 and STS clients that I've initialized as part of this change.
 - [ ] I have updated any necessary documentation, including [READMEs](https://github.com/aws/sagemaker-python-sdk/blob/master/README.rst) and [API docs](https://github.com/aws/sagemaker-python-sdk/tree/master/doc) (if appropriate)

CHANGELOG.md

Lines changed: 82 additions & 0 deletions

@@ -1,5 +1,87 @@
 # Changelog

+## v2.112.2 (2022-10-11)
+
+### Bug Fixes and Other Changes
+
+* Update Neo-TF2.x versions to TF2.9(.2)
+
+### Documentation Changes
+
+* fix typo in PR template
+
+## v2.112.1 (2022-10-10)
+
+### Bug Fixes and Other Changes
+
+* fix(local-mode): loosen docker requirement to allow 6.0.0
+* CreateModelPackage API error for Scikit-learn and XGBoost frameworks
+
+## v2.112.0 (2022-10-09)
+
+### Features
+
+* added monitor batch transform step (pipeline)
+
+### Bug Fixes and Other Changes
+
+* Add PipelineVariable annotation to framework estimators
+
+## v2.111.0 (2022-10-05)
+
+### Features
+
+* Edit test file for supporting TF 2.10 training
+
+### Bug Fixes and Other Changes
+
+* support kms key in processor pack local code
+* security issue by bumping apache-airflow from 2.3.4 to 2.4.0
+* instance count retrieval logic
+* Add regex for short-form sagemaker-xgboost tags
+* Upgrade attrs>=20.3.0,<23
+* Add PipelineVariable annotation to Amazon estimators
+
+### Documentation Changes
+
+* add context for pytorch
+
+## v2.110.0 (2022-09-27)
+
+### Features
+
+* Support KeepAlivePeriodInSeconds for Training APIs
+* added ANALYSIS_CONFIG_SCHEMA_V1_0 in clarify
+* add model monitor image accounts for ap-southeast-3
+
+### Bug Fixes and Other Changes
+
+* huggingface release test
+* Fixing the logic to return instanceCount for heterogeneousClusters
+* Disable type hints in doc signature and add PipelineVariable annotations in docstring
+* estimator hyperparameters in script mode
+
+### Documentation Changes
+
+* Added link to example notebook for Pipelines local mode
+
+## v2.109.0 (2022-09-09)
+
+### Features
+
+* add search filters
+
+### Bug Fixes and Other Changes
+
+* local pipeline step argument parsing bug
+* support fail_on_violation flag for check steps
+* fix links per app security scan
+* Add PipelineVariable annotation for all processor subclasses
+
+### Documentation Changes
+
+* the SageMaker model parallel library 1.11.0 release
+
 ## v2.108.0 (2022-09-02)

 ### Features

VERSION

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-2.108.1.dev0
+2.112.3.dev0

doc/conf.py

Lines changed: 7 additions & 0 deletions

@@ -96,6 +96,13 @@
 # Example configuration for intersphinx: refer to the Python standard library.
 intersphinx_mapping = {"http://docs.python.org/": None}

+# -- Options for autodoc ----------------------------------------------------
+# https://www.sphinx-doc.org/en/master/usage/extensions/autodoc.html#configuration
+
+# Automatically extract typehints when specified and place them in
+# descriptions of the relevant function/method.
+autodoc_typehints = "description"
+
 # autosummary
 autosummary_generate = True
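With ``autodoc_typehints = "description"``, Sphinx renders a function's annotations as parameter descriptions instead of inlining them in the signature. The sketch below uses a hypothetical annotated function to show the type hints autodoc reads; the function and its names are illustrative, not part of this commit.

```python
from typing import get_type_hints

# Hypothetical annotated function; with autodoc_typehints = "description",
# Sphinx moves these annotations out of the rendered signature and into the
# parameter descriptions in the generated docs.
def fit(epochs: int, learning_rate: float = 0.1) -> dict:
    """Illustrative training entry point."""
    return {"epochs": epochs, "lr": learning_rate}

# The hints autodoc extracts for the description section:
print(get_type_hints(fit))
```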

doc/frameworks/pytorch/using_pytorch.rst

Lines changed: 42 additions & 13 deletions

@@ -415,20 +415,25 @@ Before a model can be served, it must be loaded. The SageMaker PyTorch model ser

 .. code:: python

-    def model_fn(model_dir)
+    def model_fn(model_dir, context)
+
+``context`` is an optional argument that contains additional serving information, such as the GPU ID and batch size.
+If specified in the function declaration, the context will be created and passed to the function by SageMaker.
+For more information about ``context``, see the `Serving Context class <https://github.com/pytorch/serve/blob/master/ts/context.py>`_.

 SageMaker will inject the directory where your model files and sub-directories, saved by ``save``, have been mounted.
 Your model function should return a model object that can be used for model serving.

 The following code-snippet shows an example ``model_fn`` implementation.
-It loads the model parameters from a ``model.pth`` file in the SageMaker model directory ``model_dir``.
+It loads the model parameters from a ``model.pth`` file in the SageMaker model directory ``model_dir``. As explained in the preceding example,
+``context`` is an optional argument that passes additional information.

 .. code:: python

     import torch
     import os

-    def model_fn(model_dir):
+    def model_fn(model_dir, context):
         model = Your_Model()
         with open(os.path.join(model_dir, 'model.pth'), 'rb') as f:
             model.load_state_dict(torch.load(f))
@@ -482,13 +487,13 @@ function in the chain. Inside the SageMaker PyTorch model server, the process lo

 .. code:: python

     # Deserialize the Invoke request body into an object we can perform prediction on
-    input_object = input_fn(request_body, request_content_type)
+    input_object = input_fn(request_body, request_content_type, context)

     # Perform prediction on the deserialized object, with the loaded model
-    prediction = predict_fn(input_object, model)
+    prediction = predict_fn(input_object, model, context)

     # Serialize the prediction result into the desired response content type
-    output = output_fn(prediction, response_content_type)
+    output = output_fn(prediction, response_content_type, context)

 The above code sample shows the three function definitions:
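The handler chain in this hunk can be exercised end-to-end with stand-in implementations. The sketch below keeps only the three handler signatures from the docs; everything else (the lambda model, JSON payload, ``context=None`` defaults in place of the torchserve context, plain lists in place of torch tensors) is hypothetical scaffolding to keep it dependency-free.

```python
import json

# Stand-in handlers mirroring the input_fn -> predict_fn -> output_fn chain.
def input_fn(request_body, request_content_type, context=None):
    # Deserialize the request body; only JSON is handled in this sketch.
    assert request_content_type == "application/json"
    return json.loads(request_body)

def predict_fn(input_object, model, context=None):
    # "model" is just a callable here; a real model_fn returns an nn.Module.
    return [model(x) for x in input_object]

def output_fn(prediction, response_content_type, context=None):
    # Serialize the prediction to the requested content type.
    assert response_content_type == "application/json"
    return json.dumps(prediction).encode("utf-8")

# Wiring the chain the way the model server does:
model = lambda x: 2 * x  # stand-in model
body = json.dumps([1, 2, 3]).encode("utf-8")
obj = input_fn(body, "application/json")
pred = predict_fn(obj, model)
out = output_fn(pred, "application/json")
print(out)  # b'[2, 4, 6]'
```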

@@ -536,9 +541,13 @@ it should return an object that can be passed to ``predict_fn`` and have the fol

 .. code:: python

-    def input_fn(request_body, request_content_type)
+    def input_fn(request_body, request_content_type, context)

-Where ``request_body`` is a byte buffer and ``request_content_type`` is a Python string
+Where ``request_body`` is a byte buffer and ``request_content_type`` is a Python string.
+
+``context`` is an optional argument that contains additional serving information, such as the GPU ID and batch size.
+If specified in the function declaration, the context will be created and passed to the function by SageMaker.
+For more information about ``context``, see the `Serving Context class <https://github.com/pytorch/serve/blob/master/ts/context.py>`_.

 The SageMaker PyTorch model server provides a default implementation of ``input_fn``.
 This function deserializes JSON, CSV, or NPY encoded data into a torch.Tensor.
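The default deserialization behavior described above can be approximated as follows. This is a sketch, not the SDK's actual implementation: a NumPy array stands in for the ``torch.Tensor`` the real default returns, and only the JSON and NPY branches are shown.

```python
import io
import json
import numpy as np

# Hypothetical input_fn approximating the default content-type handling.
def input_fn(request_body, request_content_type, context=None):
    if request_content_type == "application/json":
        # JSON body -> nested lists -> array
        return np.array(json.loads(request_body))
    if request_content_type == "application/x-npy":
        # NPY body -> array, read from an in-memory buffer
        return np.load(io.BytesIO(request_body), allow_pickle=False)
    raise ValueError("Unsupported content type: " + request_content_type)

# NPY round trip:
buf = io.BytesIO()
np.save(buf, np.arange(4))
arr = input_fn(buf.getvalue(), "application/x-npy")
print(arr)  # [0 1 2 3]
```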
@@ -586,16 +595,19 @@ The ``predict_fn`` function has the following signature:

 .. code:: python

-    def predict_fn(input_object, model)
+    def predict_fn(input_object, model, context)

 Where ``input_object`` is the object returned from ``input_fn`` and
 ``model`` is the model loaded by ``model_fn``.
+If you are using multiple GPUs, then specify the ``context`` argument, which contains information such as the GPU ID for a dynamically selected GPU and the batch size.
+One of the examples below demonstrates how to configure ``predict_fn`` with the ``context`` argument to handle multiple GPUs. For more information about ``context``, see the `Serving Context class <https://github.com/pytorch/serve/blob/master/ts/context.py>`_.
+If you are using CPUs or a single GPU, then you do not need to specify the ``context`` argument.

 The default implementation of ``predict_fn`` invokes the loaded model's ``__call__`` function on ``input_object``,
 and returns the resulting value. The return-type should be a torch.Tensor to be compatible with the default
 ``output_fn``.

-The example below shows an overridden ``predict_fn``:
+The following example shows an overridden ``predict_fn``:

 .. code:: python
@@ -609,6 +621,20 @@ The example below shows an overridden ``predict_fn``:
         with torch.no_grad():
             return model(input_data.to(device))

+The following example is for use cases with multiple GPUs and shows an overridden ``predict_fn`` that uses the ``context`` argument to dynamically select a GPU device for making predictions:
+
+.. code:: python
+
+    import torch
+    import numpy as np
+
+    def predict_fn(input_data, model, context):
+        device = torch.device("cuda:" + str(context.system_properties.get("gpu_id")) if torch.cuda.is_available() else "cpu")
+        model.to(device)
+        model.eval()
+        with torch.no_grad():
+            return model(input_data.to(device))
+
 If you implement your own prediction function, you should take care to ensure that:

 - The first argument is expected to be the return value from input_fn.
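The ``gpu_id`` lookup in the multi-GPU ``predict_fn`` above can be isolated and checked without a GPU. ``FakeContext`` below is a hypothetical stand-in modeling only the ``system_properties`` attribute that the handler consults (the attribute name mirrors torchserve's ``ts/context.py``).

```python
# Hypothetical stand-in for the torchserve Context object.
class FakeContext:
    def __init__(self, gpu_id):
        self.system_properties = {"gpu_id": gpu_id}

def select_device(context, cuda_available):
    # Same conditional expression as in the handler: the string concatenation
    # binds before the ternary, so this yields either "cuda:<id>" or "cpu".
    return "cuda:" + str(context.system_properties.get("gpu_id")) if cuda_available else "cpu"

print(select_device(FakeContext(3), True))   # cuda:3
print(select_device(FakeContext(3), False))  # cpu
```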
@@ -664,11 +690,14 @@ The ``output_fn`` has the following signature:

 .. code:: python

-    def output_fn(prediction, content_type)
+    def output_fn(prediction, content_type, context)

 Where ``prediction`` is the result of invoking ``predict_fn`` and
-the content type for the response, as specified by the InvokeEndpoint request.
-The function should return a byte array of data serialized to content_type.
+the content type for the response, as specified by the InvokeEndpoint request. The function should return a byte array of data serialized to ``content_type``.
+
+``context`` is an optional argument that contains additional serving information, such as the GPU ID and batch size.
+If specified in the function declaration, the context will be created and passed to the function by SageMaker.
+For more information about ``context``, see the `Serving Context class <https://github.com/pytorch/serve/blob/master/ts/context.py>`_.

 The default implementation expects ``prediction`` to be a torch.Tensor and can serialize the result to JSON, CSV, or NPY.
 It accepts response content types of "application/json", "text/csv", and "application/x-npy".
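The default serialization behavior described above can be approximated as follows. Again a sketch rather than the SDK's implementation: a NumPy array stands in for the ``torch.Tensor`` result of ``predict_fn``, and only the JSON and CSV branches are shown.

```python
import json
import numpy as np

# Hypothetical output_fn approximating the default response serialization.
def output_fn(prediction, content_type, context=None):
    if content_type == "application/json":
        # array -> nested lists -> JSON text
        return json.dumps(prediction.tolist())
    if content_type == "text/csv":
        # one CSV row per prediction row; 1-D input is treated as a single row
        return "\n".join(",".join(str(v) for v in row) for row in np.atleast_2d(prediction))
    raise ValueError("Unsupported content type: " + content_type)

pred = np.array([[0.1, 0.9], [0.8, 0.2]])
print(output_fn(pred, "application/json"))  # [[0.1, 0.9], [0.8, 0.2]]
```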

doc/overview.rst

Lines changed: 2 additions & 0 deletions

@@ -1565,6 +1565,8 @@ For detailed examples of running Docker in local mode, see:
 - `TensorFlow local mode example notebook <https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/tensorflow_script_mode_using_shell_commands/tensorflow_script_mode_using_shell_commands.ipynb>`__.
 - `MXNet local mode example notebook <https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/mxnet_gluon_mnist/mxnet_mnist_with_gluon_local_mode.ipynb>`__.
 - `PyTorch local mode example notebook <https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker-python-sdk/pytorch_cnn_cifar10/pytorch_local_mode_cifar10.ipynb>`__.
+- `Pipelines local mode example notebook <https://github.com/aws/amazon-sagemaker-examples/blob/main/sagemaker-pipelines/tabular/local-mode/sagemaker-pipelines-local-mode.ipynb>`__.

 You can also find these notebooks in the **SageMaker Python SDK** section of the **SageMaker Examples** section in a notebook instance.
 For information about using sample notebooks in a SageMaker notebook instance, see `Use Example Notebooks <https://docs.aws.amazon.com/sagemaker/latest/dg/howitworks-nbexamples.html>`__

doc/workflows/pipelines/sagemaker.workflow.pipelines.rst

Lines changed: 2 additions & 0 deletions

@@ -46,6 +46,8 @@ Entities

 .. autoclass:: sagemaker.workflow.entities.Expression

+.. autoclass:: sagemaker.workflow.entities.PipelineVariable
+
 Execution Variables
 -------------------

Lines changed: 1 addition & 1 deletion

@@ -1,4 +1,4 @@
 urllib3==1.26.8
 docker-compose==1.29.2
-docker~=5.0.0
+docker>=5.0.2,<7.0.0
 PyYAML==5.4.1

requirements/extras/test_requirements.txt

Lines changed: 2 additions & 2 deletions

@@ -11,9 +11,9 @@ contextlib2==21.6.0
 awslogs==0.14.0
 black==22.3.0
 stopit==1.1.2
-apache-airflow==2.3.4
+apache-airflow==2.4.0
 apache-airflow-providers-amazon==4.0.0
-attrs==20.3.0
+attrs==22.1.0
 fabric==2.6.0
 requests==2.27.1
 sagemaker-experiments==0.1.35

setup.py

Lines changed: 2 additions & 1 deletion

@@ -47,7 +47,7 @@ def read_requirements(filename):

 # Declare minimal set for installation
 required_packages = [
-    "attrs>=20.3.0,<22",
+    "attrs>=20.3.0,<23",
     "boto3>=1.20.21,<2.0",
     "google-pasta",
     "numpy>=1.9.0,<2.0",
@@ -58,6 +58,7 @@ def read_requirements(filename):
     "packaging>=20.0",
     "pandas",
     "pathos",
+    "schema",
 ]

 # Specific use case dependencies
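The widened ``attrs`` pin can be checked against concrete versions with the ``packaging`` library (assumed available here; it ships alongside most setuptools installs). The version 22.1.0 matches the pin added to test_requirements.txt in this same commit.

```python
from packaging.specifiers import SpecifierSet

old_pin = SpecifierSet(">=20.3.0,<22")
new_pin = SpecifierSet(">=20.3.0,<23")

# attrs 22.1.0 is rejected by the old range and accepted by the new one,
# which is why both pins had to move together.
print(old_pin.contains("22.1.0"))  # False
print(new_pin.contains("22.1.0"))  # True
```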

0 commit comments
