
Commit d4a8eeb

Merge branch 'master' into trcomp-tf-2.6.3

2 parents: 68eacf2 + 7fa81b2

96 files changed: +2275 additions, -442 deletions

CHANGELOG.md

Lines changed: 58 additions & 0 deletions
@@ -1,5 +1,63 @@
 # Changelog
 
+## v2.86.0 (2022-04-12)
+
+### Features
+
+* Adds Spark Processing Notebook to Notebook Tests
+
+## v2.85.0 (2022-04-11)
+
+### Features
+
+* update lambda code on pipeline create/update/upsert for Lamb…
+* jumpstart model url
+* add serverless inference image_uri retrieve support
+
+### Bug Fixes and Other Changes
+
+* Add back the Fix for Pipeline variables related customer issues
+* Support file URIs in ProcessingStep's code parameter
+
+## v2.84.0 (2022-04-07)
+
+### Features
+
+* dependabot integ - move all deps to requirements.txt
+* add xgboost framework version 1.5-1
+
+## v2.83.0 (2022-04-04)
+
+### Features
+
+* Hugging Face Transformers 4.17 for TF 2.6
+
+### Bug Fixes and Other Changes
+
+* IOC image version select issue
+
+## v2.82.2 (2022-04-01)
+
+### Bug Fixes and Other Changes
+
+* Revert "fix: Fix Pipeline variables related customer issues (#2959)"
+* Refactor repack_model script injection, fixes tar.gz error
+
+## v2.82.1 (2022-03-31)
+
+### Bug Fixes and Other Changes
+
+* Update Inferentia Image URI Config
+* Fix Pipeline variables related customer issues
+* more logging info for static pipeline test data setup
+
+## v2.82.0 (2022-03-30)
+
+### Features
+
+* pluggable instance fallback mechanism, add CapacityError
+* support passing Env Vars to local mode training
+
 ## v2.81.1 (2022-03-29)
 
 ### Bug Fixes and Other Changes

MANIFEST.in

Lines changed: 1 addition & 0 deletions
@@ -1,6 +1,7 @@
 recursive-include src/sagemaker *.py
 
 include src/sagemaker/image_uri_config/*.json
+recursive-include requirements *
 
 include VERSION
 include LICENSE.txt
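The added ``recursive-include requirements *`` line presumably ensures the new ``requirements/`` files ship in source distributions, so the ``read_requirements()`` helper introduced in ``setup.py`` below can read them at install time.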

VERSION

Lines changed: 1 addition & 1 deletion
@@ -1 +1 @@
-2.81.2.dev0
+2.86.1.dev0

doc/_static/js/analytics.js

Lines changed: 0 additions & 2 deletions
This file was deleted.

doc/_static/js/datatable.js

Lines changed: 4 additions & 0 deletions
@@ -0,0 +1,4 @@
+$(document).ready( function () {
+    $('table.datatable').DataTable();
+    $('a.external').attr('target', '_blank');
+} );
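On DOM ready, the new script turns every ``table.datatable`` element into a jQuery DataTable and makes ``a.external`` links open in a new tab.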

doc/doc_utils/jumpstart_doc_utils.py

Lines changed: 6 additions & 1 deletion
@@ -67,6 +67,11 @@ def create_jumpstart_model_table():
         We highly suggest pinning an exact model version however.\n
     """
     )
+    file_content.append(
+        """
+    Each model id is linked to an external page that describes the model.\n
+    """
+    )
     file_content.append("\n")
     file_content.append(".. list-table:: Available Models\n")
     file_content.append("   :widths: 50 20 20 20\n")
@@ -80,7 +85,7 @@ def create_jumpstart_model_table():
 
     for model in sdk_manifest_top_versions_for_models.values():
         model_spec = get_jumpstart_sdk_spec(model["spec_key"])
-        file_content.append("   * - {}\n".format(model["model_id"]))
+        file_content.append("   * - `{} <{}>`_\n".format(model_spec["model_id"], model_spec["url"]))
         file_content.append("     - {}\n".format(model_spec["training_supported"]))
         file_content.append("     - {}\n".format(model["version"]))
         file_content.append("     - {}\n".format(model["min_version"]))
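The net effect: each table row now renders the model id as an RST external hyperlink instead of bare text. A minimal sketch of the new cell format, using hypothetical spec values (real ones come from ``get_jumpstart_sdk_spec()``):

```python
# Hypothetical spec values for illustration only.
model_spec = {"model_id": "example-model-id", "url": "https://example.com/model-page"}

# New row format: an RST external link for the list-table cell.
row = "   * - `{} <{}>`_\n".format(model_spec["model_id"], model_spec["url"])
# row == "   * - `example-model-id <https://example.com/model-page>`_\n"
```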

doc/frameworks/tensorflow/deploying_tensorflow_serving.rst

Lines changed: 2 additions & 2 deletions
@@ -272,8 +272,8 @@ More information on how to create ``export_outputs`` can be found in `specifying
 refer to TensorFlow's `Save and Restore <https://www.tensorflow.org/guide/saved_model>`_ documentation for other ways to control the
 inference-time behavior of your SavedModels.
 
-Providing Python scripts for pre/pos-processing
-~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+Providing Python scripts for pre/post-processing
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
 
 You can add your customized Python code to process your input and output data.
 This customized Python code must be named ``inference.py`` and specified through the ``entry_point`` parameter:

doc/overview.rst

Lines changed: 8 additions & 9 deletions
@@ -1226,28 +1226,28 @@ to configure or manage the underlying infrastructure. After you trained a model,
 Serverless endpoint and then invoke the endpoint with the model to get inference results back. More information about
 SageMaker Serverless Inference can be found in the `AWS documentation <https://docs.aws.amazon.com/sagemaker/latest/dg/serverless-endpoints.html>`__.
 
-For using SageMaker Serverless Inference, if you plan to use any of the SageMaker-provided container or Bring Your Own Container
-model, you will need to pass ``image_uri``. An example to use ``image_uri`` for creating MXNet model:
+For using SageMaker Serverless Inference, you can either use SageMaker-provided container or Bring Your Own Container model.
+A step by step example for using Serverless Inference with MXNet image :
+
+Firstly, create MXNet model
 
 .. code:: python
 
     from sagemaker.mxnet import MXNetModel
+    from sagemaker.serverless import ServerlessInferenceConfig
    import sagemaker
 
     role = sagemaker.get_execution_role()
 
     # create MXNet Model Class
-    mxnet_model = MXNetModel(
+    model = MXNetModel(
        model_data="s3://my_bucket/pretrained_model/model.tar.gz", # path to your trained sagemaker model
        role=role, # iam role with permissions to create an Endpoint
        entry_point="inference.py",
-       image_uri="763104351884.dkr.ecr.us-west-2.amazonaws.com/mxnet-inference:1.4.1-cpu-py3" # image wanted to use
+       py_version="py3", # Python version
+       framework_version="1.6.0", # MXNet framework version
    )
 
-For more Amazon SageMaker provided algorithms and containers image paths, please check this page: `Amazon SageMaker provided
-algorithms and Deep Learning Containers <https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html>`_.
-After creating model using ``image_uri``, you can then follow the steps below to create serverless endpoint.
-
 To deploy serverless endpoint, you will need to create a ``ServerlessInferenceConfig``.
 If you create ``ServerlessInferenceConfig`` without specifying its arguments, the default ``MemorySizeInMB`` will be **2048** and
 the default ``MaxConcurrency`` will be **5** :
@@ -1283,7 +1283,6 @@ Or directly using model's ``deploy()`` method to deploy a serverless endpoint:
 
     # Deploys the model to a SageMaker serverless endpoint
     serverless_predictor = model.deploy(serverless_inference_config=serverless_config)
 
-
 After deployment is complete, you can use predictor's ``predict()`` method to invoke the serverless endpoint just like
 real-time endpoints:
 
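Putting the updated snippets together, the flow this page now documents is roughly the following sketch. It assumes the ``ServerlessInferenceConfig`` constructor keywords mirror the documented ``MemorySizeInMB``/``MaxConcurrency`` fields, and ``data`` is a placeholder payload:

```python
from sagemaker.serverless import ServerlessInferenceConfig

# Matches the documented defaults: MemorySizeInMB=2048, MaxConcurrency=5.
serverless_config = ServerlessInferenceConfig(
    memory_size_in_mb=2048,
    max_concurrency=5,
)

# Deploy the MXNet model created above to a serverless endpoint...
serverless_predictor = model.deploy(serverless_inference_config=serverless_config)

# ...then invoke it like a real-time endpoint.
response = serverless_predictor.predict(data)  # `data` is a placeholder payload
```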
requirements/extras/local_requirements.txt (new file; path per the ``setup.py`` change below)

Lines changed: 4 additions & 0 deletions

@@ -0,0 +1,4 @@
+urllib3==1.26.8
+docker-compose==1.29.2
+docker~=5.0.0
+PyYAML==5.4.1
requirements/extras/scipy_requirements.txt (new file; path per the ``setup.py`` change below)

Lines changed: 1 addition & 0 deletions

@@ -0,0 +1 @@
+scipy==1.5.4
requirements/extras/test_requirements.txt (new file; path per the ``setup.py`` change below)

Lines changed: 20 additions & 0 deletions

@@ -0,0 +1,20 @@
+tox==3.24.5
+flake8==4.0.1
+pytest==6.0.2
+pytest-cov==3.0.0
+pytest-rerunfailures==10.2
+pytest-timeout==2.1.0
+pytest-xdist==2.4.0
+coverage>=5.2, <6.2
+mock==4.0.3
+contextlib2==21.6.0
+awslogs==0.14.0
+black==22.3.0
+stopit==1.1.2
+apache-airflow==2.2.4
+apache-airflow-providers-amazon==3.0.0
+attrs==20.3.0
+fabric==2.6.0
+requests==2.27.1
+sagemaker-experiments==0.1.35
+Jinja2==3.0.3
The remaining new requirements files (file names are collapsed in this view) pin per-tool dependencies:

Lines changed: 2 additions & 0 deletions

@@ -0,0 +1,2 @@
+doc8==0.10.1
+Pygments==2.11.2

Lines changed: 1 addition & 0 deletions

@@ -0,0 +1 @@
+pydocstyle==6.1.1

Lines changed: 2 additions & 0 deletions

@@ -0,0 +1,2 @@
+flake8==4.0.1
+flake8-future-import==0.4.6

Lines changed: 1 addition & 0 deletions

@@ -0,0 +1 @@
+mypy==0.942

Lines changed: 1 addition & 0 deletions

@@ -0,0 +1 @@
+pydocstyle==6.1.1

Lines changed: 2 additions & 0 deletions

@@ -0,0 +1,2 @@
+pylint==2.6.2
+astroid==2.4.2

Lines changed: 2 additions & 0 deletions

@@ -0,0 +1,2 @@
+pyenchant==3.2.2
+pylint==2.6.2

Lines changed: 1 addition & 0 deletions

@@ -0,0 +1 @@
+twine==3.8.0
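Per the changelog entry "dependabot integ - move all deps to requirements.txt" and the new ``setup.py`` comment below, splitting pins into small ``*_requirements.txt`` files appears intended to let dependabot track each tool's version individually.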

setup.py

Lines changed: 18 additions & 32 deletions
@@ -31,6 +31,20 @@ def read_version():
     return read("VERSION").strip()
 
 
+def read_requirements(filename):
+    """Reads requirements file which lists package dependencies.
+
+    Args:
+        filename: type(str) Relative file path of requirements.txt file
+
+    Returns:
+        list of dependencies extracted from file
+    """
+    with open(os.path.abspath(filename)) as fp:
+        deps = [line.strip() for line in fp.readlines()]
+    return deps
+
+
 # Declare minimal set for installation
 required_packages = [
     "attrs==20.3.0",
@@ -47,43 +61,15 @@ def read_version():
 ]
 
 # Specific use case dependencies
+# Keep format of *_requirements.txt to be tracked by dependabot
 extras = {
-    "local": [
-        "urllib3==1.26.8",
-        "docker-compose==1.29.2",
-        "docker~=5.0.0",
-        "PyYAML==5.4.1",  # PyYAML version has to match docker-compose requirements
-    ],
-    "scipy": ["scipy==1.5.4"],
+    "local": read_requirements("requirements/extras/local_requirements.txt"),
+    "scipy": read_requirements("requirements/extras/scipy_requirements.txt"),
 }
 # Meta dependency groups
 extras["all"] = [item for group in extras.values() for item in group]
 # Tests specific dependencies (do not need to be included in 'all')
-extras["test"] = (
-    [
-        extras["all"],
-        "tox==3.24.5",
-        "flake8==4.0.1",
-        "pytest==6.0.2",
-        "pytest-cov==3.0.0",
-        "pytest-rerunfailures==10.2",
-        "pytest-timeout==2.1.0",
-        "pytest-xdist==2.4.0",
-        "coverage>=5.2, <6.2",
-        "mock==4.0.3",
-        "contextlib2==21.6.0",
-        "awslogs==0.14.0",
-        "black==22.1.0",
-        "stopit==1.1.2",
-        "apache-airflow==2.2.3",
-        "apache-airflow-providers-amazon==3.0.0",
-        "attrs==20.3.0",
-        "fabric==2.6.0",
-        "requests==2.27.1",
-        "sagemaker-experiments==0.1.35",
-        "Jinja2==3.0.3",
-    ],
-)
+extras["test"] = (extras["all"] + read_requirements("requirements/extras/test_requirements.txt"),)
 
 setup(
     name="sagemaker",

src/sagemaker/chainer/model.py

Lines changed: 22 additions & 11 deletions
@@ -99,13 +99,10 @@ def __init__(
             file which should be executed as the entry point to model
             hosting. If ``source_dir`` is specified, then ``entry_point``
             must point to a file located at the root of ``source_dir``.
-        image_uri (str): A Docker image URI (default: None). In serverless
-            inferece, it is required. More image information can be found in
-            `Amazon SageMaker provided algorithms and Deep Learning Containers
-            <https://docs.aws.amazon.com/sagemaker/latest/dg/sagemaker-algo-docker-registry-paths.html>`_.
-            In instance based inference, if not specified, a default image for
-            Chainer will be used. If ``framework_version`` or ``py_version``
-            are ``None``, then ``image_uri`` is required. If also ``None``,
+        image_uri (str): A Docker image URI (default: None). If not specified,
+            a default image for Chainer will be used.
+            If ``framework_version`` or ``py_version``
+            are ``None``, then ``image_uri`` is required. If ``image_uri`` is also ``None``,
             then a ``ValueError`` will be raised.
         framework_version (str): Chainer version you want to use for
             executing your model training code. Defaults to ``None``. Required
@@ -143,7 +140,9 @@ def __init__(
 
         self.model_server_workers = model_server_workers
 
-    def prepare_container_def(self, instance_type=None, accelerator_type=None):
+    def prepare_container_def(
+        self, instance_type=None, accelerator_type=None, serverless_inference_config=None
+    ):
         """Return a container definition with framework configuration set in model environment.
 
         Args:
@@ -152,21 +151,27 @@ def prepare_container_def(self, instance_type=None, accelerator_type=None):
             accelerator_type (str): The Elastic Inference accelerator type to
                 deploy to the instance for loading and making inferences to the
                 model. For example, 'ml.eia1.medium'.
+            serverless_inference_config (sagemaker.serverless.ServerlessInferenceConfig):
+                Specifies configuration related to serverless endpoint. Instance type is
+                not provided in serverless inference. So this is used to find image URIs.
 
         Returns:
             dict[str, str]: A container definition object usable with the
             CreateModel API.
         """
         deploy_image = self.image_uri
         if not deploy_image:
-            if instance_type is None:
+            if instance_type is None and serverless_inference_config is None:
                 raise ValueError(
                     "Must supply either an instance type (for choosing CPU vs GPU) or an image URI."
                 )
 
             region_name = self.sagemaker_session.boto_session.region_name
             deploy_image = self.serving_image_uri(
-                region_name, instance_type, accelerator_type=accelerator_type
+                region_name,
+                instance_type,
+                accelerator_type=accelerator_type,
+                serverless_inference_config=serverless_inference_config,
             )
 
         deploy_key_prefix = model_code_key_prefix(self.key_prefix, self.name, deploy_image)
@@ -178,13 +183,18 @@ def prepare_container_def(self, instance_type=None, accelerator_type=None):
             deploy_env[MODEL_SERVER_WORKERS_PARAM_NAME.upper()] = str(self.model_server_workers)
         return sagemaker.container_def(deploy_image, self.model_data, deploy_env)
 
-    def serving_image_uri(self, region_name, instance_type, accelerator_type=None):
+    def serving_image_uri(
+        self, region_name, instance_type, accelerator_type=None, serverless_inference_config=None
+    ):
         """Create a URI for the serving image.
 
         Args:
             region_name (str): AWS region where the image is uploaded.
             instance_type (str): SageMaker instance type. Used to determine device type
                 (cpu/gpu/family-specific optimized).
+            serverless_inference_config (sagemaker.serverless.ServerlessInferenceConfig):
+                Specifies configuration related to serverless endpoint. Instance type is
+                not provided in serverless inference. So this is used to determine device type.
 
         Returns:
             str: The appropriate image URI based on the given parameters.
@@ -198,4 +208,5 @@ def serving_image_uri(self, region_name, instance_type, accelerator_type=None):
             instance_type=instance_type,
             accelerator_type=accelerator_type,
             image_scope="inference",
+            serverless_inference_config=serverless_inference_config,
         )
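A hedged sketch of how the new parameter flows through this class (all model values are placeholders; the point is that image URI resolution no longer requires an instance type when a serverless config is supplied):

```python
from sagemaker.chainer import ChainerModel
from sagemaker.serverless import ServerlessInferenceConfig

# Placeholder values; framework_version/py_version let the SDK resolve a default image.
model = ChainerModel(
    model_data="s3://my_bucket/model.tar.gz",
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder IAM role
    entry_point="inference.py",
    framework_version="5.0.0",
    py_version="py3",
)

# Before this change, image lookup raised ValueError without an instance type;
# now a ServerlessInferenceConfig is accepted in its place.
container_def = model.prepare_container_def(
    serverless_inference_config=ServerlessInferenceConfig()
)
```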
