Commit 638b756

Merge branch 'master' into min_df_fix
2 parents 84ccb0e + d003ef0 commit 638b756

File tree: 70 files changed (+4543, −4362 lines)


advanced_functionality/multi_model_bring_your_own/multi_model_endpoint_bring_your_own.ipynb

Lines changed: 5 additions & 1 deletion
@@ -11,6 +11,10 @@
 "\n",
 "For the inference container to serve multiple models in a multi-model endpoint, it must implement [additional APIs](https://docs.aws.amazon.com/sagemaker/latest/dg/build-multi-model-build-container.html) in order to load, list, get, unload and invoke specific models. This notebook demonstrates how to build your own inference container that implements these APIs.\n",
 "\n",
+"**Note**: Because this notebook builds a Docker container, it does not run in Amazon SageMaker Studio.\n",
+"\n",
+"This notebook was tested with the `conda_mxnet_p36` kernel running SageMaker Python SDK version 2.15.3 on an Amazon SageMaker notebook instance.\n",
+"\n",
 "---\n",
 "\n",
 "### Contents\n",
@@ -553,7 +557,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.6.5"
+"version": "3.7.6"
 }
 },
 "nbformat": 4,

autopilot/autopilot_customer_churn_high_level_with_evaluation.ipynb

Lines changed: 12 additions & 12 deletions
@@ -22,7 +22,7 @@
 "\n",
 "---\n",
 "\n",
-"This notebook works with sagemaker python sdk >= 1.65.1.\n",
+"This notebook works with sagemaker python sdk 2.x\n",
 "\n",
 "## Contents\n",
 "\n",
@@ -794,17 +794,19 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from sagemaker.predictor import RealTimePredictor\n",
-"from sagemaker.content_types import CONTENT_TYPE_CSV\n",
+"from sagemaker.predictor import Predictor\n",
+"from sagemaker.serializers import CSVSerializer\n",
+"from sagemaker.deserializers import CSVDeserializer\n",
 "\n",
 "predictor = automl.deploy(initial_instance_count=1,\n",
 "                          instance_type='ml.m5.2xlarge',\n",
 "                          candidate=candidates[best_candidate_idx],\n",
 "                          inference_response_keys=inference_response_keys,\n",
-"                          predictor_cls=RealTimePredictor)\n",
-"predictor.content_type = CONTENT_TYPE_CSV\n",
+"                          predictor_cls=Predictor,\n",
+"                          serializer=CSVSerializer(),\n",
+"                          deserializer=CSVDeserializer())\n",
 "\n",
-"print(\"Created endpoint: {}\".format(predictor.endpoint))\n"
+"print(\"Created endpoint: {}\".format(predictor.endpoint_name))\n"
 ]
 },
 {
@@ -829,11 +831,9 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"from io import StringIO\n",
-"\n",
-"prediction = predictor.predict(test_data_no_target.to_csv(sep=',', header=False, index=False)).decode('utf-8')\n",
-"prediction_df = pd.read_csv(StringIO(prediction), header=None, names=inference_response_keys)\n",
-"custom_predicted_labels = prediction_df.iloc[:,1].values >= best_candidate_threshold\n",
+"prediction = predictor.predict(test_data_no_target.to_csv(sep=',', header=False, index=False))\n",
+"prediction_df = pd.DataFrame(prediction, columns=inference_response_keys)\n",
+"custom_predicted_labels = prediction_df.iloc[:,1].astype(float).values >= best_candidate_threshold\n",
 "prediction_df['custom_predicted_label'] = custom_predicted_labels\n",
 "prediction_df['custom_predicted_label'] = prediction_df['custom_predicted_label'].map({False: target_attribute_values[0], True: target_attribute_values[1]})\n",
 "prediction_df"
@@ -906,4 +906,4 @@
 },
 "nbformat": 4,
 "nbformat_minor": 4
-}
+}
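The diff above migrates this notebook from the SDK v1 `RealTimePredictor` (with a manually assigned `content_type`) to the v2 `Predictor` with explicit `CSVSerializer`/`CSVDeserializer` objects. As a rough, stdlib-only sketch of what that serializer pair does on either side of a `predict()` call (the function names here are illustrative, not SageMaker APIs):

```python
# Simplified stand-ins for what sagemaker.serializers.CSVSerializer and
# sagemaker.deserializers.CSVDeserializer do around an endpoint call.
import csv
import io

def csv_serialize(rows):
    """Turn a list of rows into the CSV payload sent to the endpoint."""
    buf = io.StringIO()
    csv.writer(buf).writerows(rows)
    return buf.getvalue()

def csv_deserialize(payload):
    """Parse the CSV body returned by the endpoint into a list of rows."""
    return [row for row in csv.reader(io.StringIO(payload)) if row]

payload = csv_serialize([[0.1, 0.9], [0.7, 0.3]])  # "0.1,0.9\r\n0.7,0.3\r\n"
rows = csv_deserialize(payload)
```

Because the v2 deserializer already parses the response, the manual `StringIO`/`pd.read_csv`/`decode('utf-8')` round-trip could be dropped, as the second hunk of the diff shows.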

autopilot/model-explainability/explaining_customer_churn_model.ipynb

Lines changed: 13 additions & 12 deletions
@@ -8,7 +8,7 @@
 "\n",
 "Kernel `Python 3 (Data Science)` works well with this notebook.\n",
 "\n",
-"_This notebook was created and tested on an ml.m5.large notebook instance._\n",
+"_This notebook was created and tested on an ml.m5.xlarge notebook instance._\n",
 "\n",
 "## Table of Contents\n",
 "\n",
@@ -101,9 +101,8 @@
 "source": [
 "import shap\n",
 "\n",
-"from kernel_explainer_wrapper import KernelExplainerWrapper\n",
+"from shap import KernelExplainer\n",
 "from shap import sample\n",
-"from shap.common import LogitLink, IdentityLink\n",
 "from scipy.special import expit\n",
 "\n",
 "# Initialize plugin to make plots interactive.\n",
@@ -235,7 +234,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"churn_data = pd.read_csv('./Data sets/churn.txt')\n",
+"churn_data = pd.read_csv('../Data sets/churn.txt')\n",
 "data_without_target = churn_data.drop(columns=['Churn?'])\n",
 "\n",
 "background_data = sample(data_without_target, 50)"
@@ -252,7 +251,10 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Next, we create the `KernelExplainer`. Note that since it's a black box explainer, `KernelExplainer` only requires a handle to the predict (or predict_proba) function and does not require any other information about the model. For classification it is recommended to derive feature importance scores in the log-odds space since additivity is a more natural assumption there thus we use `LogitLink`. For regression `IdentityLink` should be used."
+"Next, we create the `KernelExplainer`. Note that since it's a black box explainer, `KernelExplainer` only requires a handle to the\n",
+"predict (or predict_proba) function and does not require any other information about the model. For classification it is recommended to\n",
+"derive feature importance scores in the log-odds space since additivity is a more natural assumption there thus we use `logit`. For\n",
+"regression `identity` should be used."
 ]
 },
 {
@@ -263,17 +265,16 @@
 "source": [
 "# Derive link function \n",
 "problem_type = automl_job.describe_auto_ml_job(job_name=automl_job_name)['ResolvedAttributes']['ProblemType'] \n",
-"link_fn = IdentityLink if problem_type == 'Regression' else LogitLink \n",
+"link = \"identity\" if problem_type == 'Regression' else \"logit\"\n",
 "\n",
-"# the handle to predict_proba is passed to KernelExplainerWrapper since KernelSHAP requires the class probability\n",
-"explainer = KernelExplainerWrapper(automl_estimator.predict_proba, background_data, link=link_fn())"
+"# the handle to predict_proba is passed to KernelExplainer since KernelSHAP requires the class probability\n",
+"explainer = KernelExplainer(automl_estimator.predict_proba, background_data, link=link)"
 ]
 },
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Currently, `shap.KernelExplainer` only supports numeric data. A version of SHAP that supports text will become available soon. A workaround is provided by our wrapper `KernelExplainerWrapper`. Once a new version of SHAP is released, `shap.KernelExplainer` should be used instead of `KernelExplainerWrapper`.\n",
 "\n",
 "By analyzing the background data `KernelExplainer` provides us with `explainer.expected_value` which is the model prediction with all features missing. Considering a customer for which we have no data at all (i.e. all features are missing) this should theoretically be the model prediction."
 ]
@@ -326,7 +327,7 @@
 "outputs": [],
 "source": [
 "# Since shap_values are provided in the log-odds space, we convert them back to the probability space by using LogitLink\n",
-"shap.force_plot(explainer.expected_value, shap_values, x, link=link_fn())"
+"shap.force_plot(explainer.expected_value, shap_values, x, link=link)"
 ]
 },
@@ -348,7 +349,7 @@
 "source": [
 "with ManagedEndpoint(ep_name) as mep:\n",
 "    shap_values = explainer.shap_values(x, nsamples='auto', l1_reg='num_features(5)')\n",
-"shap.force_plot(explainer.expected_value, shap_values, x, link=link_fn())"
+"shap.force_plot(explainer.expected_value, shap_values, x, link=link)"
 ]
 },
 {
@@ -396,7 +397,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"shap.force_plot(explainer.expected_value, shap_values, X, link=link_fn())"
+"shap.force_plot(explainer.expected_value, shap_values, X, link=link)"
 ]
 },
 {
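The migration above replaces SHAP's removed `LogitLink`/`IdentityLink` classes with the string arguments `"logit"`/`"identity"` accepted by newer SHAP releases. The reason classification uses the logit link is that SHAP's additivity assumption holds more naturally for log-odds than for raw probabilities. A stdlib-only sketch of the link pair (helper names and the numbers are illustrative, not SHAP APIs):

```python
# Logit link and its inverse: SHAP values computed with link="logit" add up
# in log-odds space and are mapped back to probabilities for display.
import math

def logit(p):
    """Map a probability to log-odds (the 'logit' link space)."""
    return math.log(p / (1 - p))

def expit(z):
    """Inverse of logit: map log-odds back to a probability."""
    return 1 / (1 + math.exp(-z))

base_value = logit(0.5)            # expected value with all features missing: 0.0
feature_effects = [0.8, -0.3]      # hypothetical per-feature contributions
probability = expit(base_value + sum(feature_effects))
```

`shap.force_plot(..., link=link)` applies the same kind of inverse transform when rendering, which is roughly why the notebook can keep displaying results in probability space.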

autopilot/model-explainability/kernel_explainer_wrapper.py

Lines changed: 0 additions & 57 deletions
This file was deleted.

aws_marketplace/creating_marketplace_products/Bring_Your_Own-Creating_Algorithm_and_Model_Package.ipynb

Lines changed: 3 additions & 21 deletions
@@ -258,8 +258,6 @@
 "\n",
 "# Get the region defined in the current configuration (default to us-west-2 if none defined)\n",
 "region=$(aws configure get region)\n",
-"# specifically setting to us-east-2 since during the pre-release period, we support only that region.\n",
-"region=${region:-us-east-2}\n",
 "\n",
 "fullname=\"${account}.dkr.ecr.${region}.amazonaws.com/${algorithm_name}:latest\"\n",
 "\n",
@@ -595,14 +593,6 @@
 "Now that you have verified that the algorithm code works for training, live inference and batch inference in the above sections, you can start packaging it up as an Amazon SageMaker Algorithm."
 ]
 },
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"#### Region Limitation\n",
-"Seller onboarding is limited to us-east-2 region (CMH) only. The client we are creating below will be hard-coded to talk to our us-east-2 endpoint only."
-]
-},
 {
 "cell_type": "code",
 "execution_count": null,
@@ -611,7 +601,7 @@
 "source": [
 "import boto3\n",
 "\n",
-"smmp = boto3.client('sagemaker', region_name='us-east-2', endpoint_url=\"https://sagemaker.us-east-2.amazonaws.com\")"
+"smmp = boto3.client('sagemaker')"
 ]
 },
 {
@@ -807,21 +797,13 @@
 "A Model Package is a reusable model artifacts abstraction that packages all ingredients necessary for inference. It consists of an inference specification that defines the inference image to use along with an optional model weights location.\n"
 ]
 },
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": [
-"#### Region Limitation\n",
-"Seller onboarding is limited to us-east-2 region (CMH) only. The client we are creating below will be hard-coded to talk to our us-east-2 endpoint only. (Note: You may have previous done this step in Part 3. Repeating here to keep Part 4 self contained.)"
-]
-},
 {
 "cell_type": "code",
 "execution_count": null,
 "metadata": {},
 "outputs": [],
 "source": [
-"smmp = boto3.client('sagemaker', region_name='us-east-2', endpoint_url=\"https://sagemaker.us-east-2.amazonaws.com\")"
+"smmp = boto3.client('sagemaker')"
 ]
 },
 {
@@ -982,7 +964,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.6.5"
+"version": "3.6.10"
 }
 },
 "nbformat": 4,
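The deleted `region=${region:-us-east-2}` line relied on shell default-value expansion; with the pre-release region limitation gone, the region now comes solely from `aws configure get region`. For reference, the `${var:-default}` idiom works like this (the region name below is just an example):

```shell
#!/bin/sh
# ${var:-default} expands to $var if it is set and non-empty,
# otherwise to the fallback on the right-hand side.
region=""
region=${region:-us-west-2}
echo "$region"   # prints us-west-2 because region was empty
```

Note that `${var-default}` (without the colon) is subtly different: it falls back only when the variable is unset, not when it is empty.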

aws_marketplace/curating_aws_marketplace_listing_and_sample_notebook/Algorithm/Sample_Notebook_Template/title_of_your_product-Algorithm.ipynb

Lines changed: 4 additions & 3 deletions
@@ -13,6 +13,7 @@
 "\n",
 "This sample notebook shows you how to train a custom ML model using <font color='red'> For Seller to update:[Title_of_your_Algorithm](Provide link to your marketplace listing of your product)</font> from AWS Marketplace.\n",
 "\n",
+"> **Note**: This is a reference notebook and it cannot run unless you make changes suggested in the notebook.\n",
 "\n",
 "#### Pre-requisites:\n",
 "1. **Note**: This notebook contains elements which render correctly in Jupyter interface. Open this notebook from an Amazon SageMaker Notebook Instance or Amazon SageMaker Studio.\n",
@@ -844,9 +845,9 @@
 ],
 "metadata": {
 "kernelspec": {
-"display_name": "Python 3",
+"display_name": "conda_python3",
 "language": "python",
-"name": "python3"
+"name": "conda_python3"
 },
 "language_info": {
 "codemirror_mode": {
@@ -858,7 +859,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.8.4"
+"version": "3.6.10"
 }
 },
 "nbformat": 4,

aws_marketplace/curating_aws_marketplace_listing_and_sample_notebook/ModelPackage/Sample_Notebook_Template/title_of_your_product-Model.ipynb

Lines changed: 4 additions & 3 deletions
Original file line numberDiff line numberDiff line change
@@ -11,6 +11,7 @@
1111
"\n",
1212
"This sample notebook shows you how to deploy <font color='red'> For Seller to update:[Title_of_your_ML Model](Provide link to your marketplace listing of your product)</font> using Amazon SageMaker.\n",
1313
"\n",
14+
"> **Note**: This is a reference notebook and it cannot run unless you make changes suggested in the notebook.\n",
1415
"\n",
1516
"#### Pre-requisites:\n",
1617
"1. **Note**: This notebook contains elements which render correctly in Jupyter interface. Open this notebook from an Amazon SageMaker Notebook Instance or Amazon SageMaker Studio.\n",
@@ -416,9 +417,9 @@
416417
],
417418
"metadata": {
418419
"kernelspec": {
419-
"display_name": "Python 3",
420+
"display_name": "conda_python3",
420421
"language": "python",
421-
"name": "python3"
422+
"name": "conda_python3"
422423
},
423424
"language_info": {
424425
"codemirror_mode": {
@@ -430,7 +431,7 @@
430431
"name": "python",
431432
"nbconvert_exporter": "python",
432433
"pygments_lexer": "ipython3",
433-
"version": "3.8.4"
434+
"version": "3.6.10"
434435
}
435436
},
436437
"nbformat": 4,

aws_marketplace/using_model_packages/auto_insurance/src/model_package_arns.py

Lines changed: 6 additions & 1 deletion
@@ -27,6 +27,8 @@ def get_vehicle_damage_detection_model_package_arn(current_region):
 def get_vehicle_recognition_model_package_arn(current_region):
     mapping = {
         "us-east-1" : "arn:aws:sagemaker:us-east-1:865070037744:model-package/vehicle-5bbb43353155de115c9fabdde5167c06",
+        "us-east-2" : "arn:aws:sagemaker:us-east-2:057799348421:model-package/vehicle-5bbb43353155de115c9fabdde5167c06",
+
         "ap-northeast-1" : "arn:aws:sagemaker:ap-northeast-1:977537786026:model-package/vehicle-5bbb43353155de115c9fabdde5167c06",
         "ap-northeast-2" : "arn:aws:sagemaker:ap-northeast-2:745090734665:model-package/vehicle-5bbb43353155de115c9fabdde5167c06",
         "ap-southeast-1" : "arn:aws:sagemaker:ap-southeast-1:192199979996:model-package/vehicle-5bbb43353155de115c9fabdde5167c06",
@@ -35,6 +37,9 @@ def get_vehicle_recognition_model_package_arn(current_region):
         "ap-south-1": "arn:aws:sagemaker:ap-south-1:077584701553:model-package/vehicle-5bbb43353155de115c9fabdde5167c06",
         "ca-central-1":"arn:aws:sagemaker:ca-central-1:470592106596:model-package/vehicle-5bbb43353155de115c9fabdde5167c06",
         "eu-west-1" : "arn:aws:sagemaker:eu-west-1:985815980388:model-package/vehicle-5bbb43353155de115c9fabdde5167c06",
-        "eu-west-2" : "arn:aws:sagemaker:eu-west-2:856760150666:model-package/vehicle-5bbb43353155de115c9fabdde5167c06"
+        "eu-west-2" : "arn:aws:sagemaker:eu-west-2:856760150666:model-package/vehicle-5bbb43353155de115c9fabdde5167c06",
+        "us-west-2" : "arn:aws:sagemaker:us-west-2:594846645681:model-package/vehicle-5bbb43353155de115c9fabdde5167c06",
+        "us-west-1" : "arn:aws:sagemaker:us-west-1:382657785993:model-package/vehicle-5bbb43353155de115c9fabdde5167c06"
+
     }
     return mapping[current_region]
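The commit extends this region-to-ARN mapping, but an unsupported region still raises a bare `KeyError`. A common hardening (hypothetical here, not part of this commit) is to fail with a message that lists the supported regions; abbreviated two-entry mapping for illustration:

```python
# Hypothetical hardened lookup over an abbreviated copy of the mapping
# from model_package_arns.py (two entries shown for brevity).
VEHICLE_MODEL_ARNS = {
    "us-east-1": "arn:aws:sagemaker:us-east-1:865070037744:model-package/vehicle-5bbb43353155de115c9fabdde5167c06",
    "us-east-2": "arn:aws:sagemaker:us-east-2:057799348421:model-package/vehicle-5bbb43353155de115c9fabdde5167c06",
}

def get_vehicle_recognition_model_package_arn(current_region):
    """Return the model package ARN, with a clear error for unknown regions."""
    try:
        return VEHICLE_MODEL_ARNS[current_region]
    except KeyError:
        raise ValueError(
            f"No vehicle-recognition model package is published in "
            f"{current_region!r}; supported regions: {sorted(VEHICLE_MODEL_ARNS)}"
        ) from None
```

Callers then get an actionable error instead of a traceback ending in `KeyError: 'eu-north-1'`.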

aws_marketplace/using_model_packages/generic_sample_notebook/A_generic_sample_notebook_to_perform_inference_on_ML_model_packages_from_AWS_Marketplace.ipynb

Lines changed: 3 additions & 1 deletion
@@ -1,6 +1,7 @@
 {
 "cells": [
 {
+"attachments": {},
 "cell_type": "markdown",
 "metadata": {},
 "source": [
@@ -10,6 +11,7 @@
 "\n",
 "If such a sample notebook does not exist and you want to deploy and try an ML model package via code written in python language, this generic notebook can guide you on how to deploy and perform inference on an ML model package from AWS Marketplace.\n",
 "\n",
+"> **Note**: This is a reference notebook and it cannot run unless you make changes suggested in the notebook.\n",
 "\n",
 "> **Note**:If you are facing technical issues while trying an ML model package from AWS Marketplace and need help, please open a support ticket or write to the team on [email protected] for additional assistance.\n",
 "\n",
@@ -935,7 +937,7 @@
 "name": "python",
 "nbconvert_exporter": "python",
 "pygments_lexer": "ipython3",
-"version": "3.6.5"
+"version": "3.6.10"
 }
 },
 "nbformat": 4,
