Commit 152dec9

Merge branch 'master' into fix-processing-image-uri-param

2 parents: 03408a4 + c70e30c

File tree

8 files changed: +41 -5 lines changed


CHANGELOG.md

Lines changed: 27 additions & 0 deletions

@@ -1,5 +1,32 @@
 # Changelog
 
+## v2.98.0 (2022-07-05)
+
+### Features
+
+ * Adding deepar image
+
+### Documentation Changes
+
+ * edit to clarify how to use inference.py
+
+## v2.97.0 (2022-06-28)
+
+### Deprecations and Removals
+
+ * remove support for python 3.6
+
+### Features
+
+ * update prebuilt models documentation
+
+### Bug Fixes and Other Changes
+
+ * Skipping test_candidate_estimator_default_rerun_and_deploy
+ * Update model name from 'compiled.pt' to 'model.pth' for neo
+ * update pytest, skip hf integ temp
+ * Add override_pipeline_parameter_var decorator to give grace period to update invalid pipeline var args
+
 ## v2.96.0 (2022-06-20)
 
 ### Features

VERSION

Lines changed: 1 addition & 1 deletion

@@ -1 +1 @@
-2.96.1.dev0
+2.98.1.dev0

doc/frameworks/tensorflow/using_tf.rst

Lines changed: 3 additions & 1 deletion

@@ -759,7 +759,7 @@ Create Python Scripts for Custom Input and Output Formats
 ---------------------------------------------------------
 
 You can add your customized Python code to process your input and output data.
-This customized Python code must be named ``inference.py`` and specified through the ``entry_point`` parameter:
+This customized Python code must be named ``inference.py`` and is specified through the ``entry_point`` parameter:
 
 .. code::
 
@@ -769,6 +769,8 @@ This customized Python code must be named ``inference.py`` and specified through
         model_data='s3://mybucket/model.tar.gz',
         role='MySageMakerRole')
 
+In the example above, ``inference.py`` is assumed to be a file inside ``model.tar.gz``. If you want to use a local file instead, you must add the ``source_dir`` argument. See the documentation on `TensorFlowModel <https://sagemaker.readthedocs.io/en/stable/frameworks/tensorflow/sagemaker.tensorflow.html#sagemaker.tensorflow.model.TensorFlowModel>`_.
+
 How to implement the pre- and/or post-processing handler(s)
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
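The doc change above notes that, without ``source_dir``, ``inference.py`` is looked up inside ``model.tar.gz``. As a rough, stdlib-only sketch of what such an archive can contain, the snippet below builds one in memory; ``package_tf_model`` and the ``model/1/saved_model.pb`` placeholder bytes are illustrative assumptions, not code from this repo. The TensorFlow serving container conventionally reads the script from a ``code/`` directory in the archive.

```python
import io
import tarfile

def package_tf_model(inference_code: bytes, saved_model: bytes) -> bytes:
    """Build an in-memory model.tar.gz with an assumed serving layout:
    the SavedModel under model/1/ and inference.py under code/."""
    buf = io.BytesIO()
    with tarfile.open(fileobj=buf, mode="w:gz") as tar:
        def add_member(name: str, data: bytes) -> None:
            info = tarfile.TarInfo(name=name)
            info.size = len(data)
            tar.addfile(info, io.BytesIO(data))
        add_member("model/1/saved_model.pb", saved_model)  # placeholder bytes, not a real SavedModel
        add_member("code/inference.py", inference_code)    # pre/post-processing script
    return buf.getvalue()

archive = package_tf_model(b"def input_handler(data, context):\n    pass\n", b"\x00")
with tarfile.open(fileobj=io.BytesIO(archive), mode="r:gz") as tar:
    member_names = tar.getnames()
```

Unpacking the resulting archive reproduces the layout the serving container expects, with the handler script separate from the model weights.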

requirements/extras/test_requirements.txt

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 tox==3.24.5
 flake8==4.0.1
-pytest==6.0.2
+pytest==6.2.5
 pytest-cov==3.0.0
 pytest-rerunfailures==10.2
 pytest-timeout==2.1.0

src/sagemaker/image_uri_config/forecasting-deepar.json

Lines changed: 1 addition & 0 deletions

@@ -7,6 +7,7 @@
     "ap-east-1": "286214385809",
     "ap-northeast-1": "633353088612",
     "ap-northeast-2": "204372634319",
+    "ap-northeast-3": "867004704886",
     "ap-south-1": "991648021394",
     "ap-southeast-1": "475088953585",
     "ap-southeast-2": "514117268639",
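This JSON change registers the ap-northeast-3 registry account for the DeepAR image (the "Adding deepar image" feature in the changelog). In practice the SDK resolves these mappings through `sagemaker.image_uris.retrieve`; the sketch below only illustrates how such a region-to-account table typically becomes an ECR image URI. `deepar_image_uri` is a hypothetical helper and the `:1` tag is an assumption.

```python
# A few region -> registry-account entries copied from the diff above;
# the full mapping lives in forecasting-deepar.json.
DEEPAR_ACCOUNTS = {
    "ap-northeast-2": "204372634319",
    "ap-northeast-3": "867004704886",  # account added by this commit
}

def deepar_image_uri(region: str, tag: str = "1") -> str:
    """Format an ECR image URI using the standard
    <account>.dkr.ecr.<region>.amazonaws.com/<repo>:<tag> pattern."""
    account = DEEPAR_ACCOUNTS[region]
    return f"{account}.dkr.ecr.{region}.amazonaws.com/forecasting-deepar:{tag}"

uri = deepar_image_uri("ap-northeast-3")
```

Without the new JSON entry, a lookup for ap-northeast-3 would fail with a KeyError, which is why adding a region requires adding its account here.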

tests/data/pytorch_neo/code/inference.py

Lines changed: 2 additions & 2 deletions

@@ -71,8 +71,8 @@ def model_fn(model_dir):
     logger.info("model_fn")
     neopytorch.config(model_dir=model_dir, neo_runtime=True)
     device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
-    # The compiled model is saved as "compiled.pt"
-    model = torch.jit.load(os.path.join(model_dir, "compiled.pt"), map_location=device)
+    # The compiled model is saved as "model.pth"
+    model = torch.jit.load(os.path.join(model_dir, "model.pth"), map_location=device)
 
     # It is recommended to run warm-up inference during model load
     sample_input_path = os.path.join(model_dir, "sample_input.pkl")
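The rename from "compiled.pt" to "model.pth" means artifacts produced under the old name would no longer load. If a loader needed to tolerate both names during a transition, a small fallback like the following would do; `resolve_artifact` is a hypothetical illustration, not part of this test script.

```python
import os
import tempfile

def resolve_artifact(model_dir: str) -> str:
    """Prefer the new artifact name "model.pth", falling back to the
    legacy "compiled.pt" if that is all the archive contains."""
    for name in ("model.pth", "compiled.pt"):
        path = os.path.join(model_dir, name)
        if os.path.exists(path):
            return path
    raise FileNotFoundError(f"no compiled model found in {model_dir}")

with tempfile.TemporaryDirectory() as d:
    # Only the legacy file present: the fallback is used.
    open(os.path.join(d, "compiled.pt"), "wb").close()
    legacy = os.path.basename(resolve_artifact(d))
    # Once the new file appears, it takes precedence.
    open(os.path.join(d, "model.pth"), "wb").close()
    preferred = os.path.basename(resolve_artifact(d))
```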

tests/integ/test_auto_ml.py

Lines changed: 3 additions & 0 deletions

@@ -293,6 +293,9 @@ def test_deploy_best_candidate(sagemaker_session, cpu_instance_type):
     tests.integ.test_region() in tests.integ.NO_AUTO_ML_REGIONS,
     reason="AutoML is not supported in the region yet.",
 )
+@pytest.mark.skip(
+    reason="",
+)
 def test_candidate_estimator_default_rerun_and_deploy(sagemaker_session, cpu_instance_type):
     auto_ml_utils.create_auto_ml_job_if_not_exist(sagemaker_session)

tests/integ/test_huggingface.py

Lines changed: 3 additions & 0 deletions

@@ -116,6 +116,9 @@ def test_huggingface_training(
     and integ.test_region() in integ.TRAINING_NO_P3_REGIONS,
     reason="no ml.p2 or ml.p3 instances in this region",
 )
+@pytest.mark.skip(
+    reason="need to re enable it later t.corp:V609860141",
+)
 def test_huggingface_training_tf(
     sagemaker_session,
     gpu_instance_type,
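Both test changes stack an unconditional `@pytest.mark.skip` on top of the existing `@pytest.mark.skipif` decorators, so the tests are skipped regardless of region. A minimal sketch of how such a mark attaches (assuming a recent pytest; `test_demo` is illustrative, not a test from this repo):

```python
import pytest

@pytest.mark.skip(reason="demo: unconditionally skipped")
def test_demo():
    pass

# Decorator-applied marks accumulate on the function's ``pytestmark`` list,
# which is why an unconditional skip can be stacked on top of existing
# ``skipif`` decorators without disturbing them.
mark = test_demo.pytestmark[0]
```

Because the skip is unconditional, re-enabling the test later means deleting the decorator rather than flipping a condition.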
