
Commit d1b0932

Merge pull request #48 from awslabs/arpin_kmeans_markdown
Arpin kmeans markdown
2 parents 9316c98 + 19bf3eb commit d1b0932

5 files changed (+54 -60 lines)


.DS_Store

-6 KB
Binary file not shown.

README.md

Lines changed: 29 additions & 9 deletions
@@ -1,31 +1,51 @@
 # Amazon SageMaker Examples
 
-This repository contains example notebooks that show how to apply machine learning and deep learning in Amazon SageMaker(https://aws.amazon.com/amazon-ai/).
+This repository contains example notebooks that show how to apply machine learning and deep learning in [Amazon SageMaker](https://aws.amazon.com/machine-learning/platforms/sagemaker).
 
 ## Examples
 
 ### Introduction to Applying Machine Learning
 
-- [XGBoost for Direct Marketing](xgboost_direct_marketing) targets potential customers that are most likely to convert based on customer and aggregate level metrics.
-- [PCA and k-means for Movie Clustering](pca_kmeans_movie_clustering) creates clusters of movies based on genre, ratings, and other characteristics.
+These examples provide a gentle introduction to machine learning concepts as they are applied in practical use cases across a variety of sectors.
+
+- [Targeted Direct Marketing](introduction_to_applying_machine_learning/xgboost_direct_marketing) predicts potential customers that are most likely to convert based on customer and aggregate level metrics, using Amazon SageMaker's implementation of [XGBoost](https://github.com/dmlc/xgboost).
+- [Predicting Customer Churn](introduction_to_applying_machine_learning/xgboost_customer_churn) uses customer interaction and service usage data to find those most likely to churn, and then walks through the cost/benefit trade-offs of providing retention incentives. This uses Amazon SageMaker's implementation of [XGBoost](https://github.com/dmlc/xgboost) to create a highly predictive model.
+- [Time-series Forecasting](introduction_to_applying_machine_learning/linear_time_series_forecast) generates a forecast for topline product demand using Amazon SageMaker's Linear Learner algorithm.
+- [Cancer Prediction](introduction_to_applying_machine_learning/breast_cancer_prediction) predicts breast cancer based on features derived from images, using SageMaker's Linear Learner.
 
 ### Introduction to Amazon Algorithms
 
+These examples provide quick walkthroughs to get you up and running with Amazon SageMaker's custom developed algorithms. Most of these algorithms can train on distributed hardware, scale incredibly well, and are faster and cheaper than popular alternatives.
+
+- [k-means](introduction_to_amazon_algorithms/1P_kmeans_highlevel) is our introductory example for Amazon SageMaker. It walks through the process of clustering MNIST images of handwritten digits using Amazon SageMaker k-means.
+- [Factorization Machines](introduction_to_amazon_algorithms/factorization_machines_mnist) showcases Amazon SageMaker's implementation of the algorithm to predict whether a handwritten digit from the MNIST dataset is a 0 or not using a binary classifier.
+- [Latent Dirichlet Allocation (LDA)](introduction_to_amazon_algorithms/lda_topic_modeling) introduces topic modeling using Amazon SageMaker Latent Dirichlet Allocation (LDA) on a synthetic dataset.
+- [Linear Learner](introduction_to_amazon_algorithms/linear_learner_mnist) predicts whether a handwritten digit from the MNIST dataset is a 0 or not using a binary classifier from Amazon SageMaker Linear Learner.
+- [Neural Topic Model (NTM)](introduction_to_amazon_algorithms/ntm_synthetic) uses Amazon SageMaker Neural Topic Model (NTM) to uncover topics in documents from a synthetic data source, where topic distributions are known.
+- [Principal Components Analysis (PCA)](introduction_to_amazon_algorithms/pca_mnist) uses Amazon SageMaker PCA to calculate eigendigits from MNIST.
+- [Seq2Seq](introduction_to_amazon_algorithms/seq2seq) uses the Amazon SageMaker Seq2Seq algorithm that's built on top of [Sockeye](https://github.com/awslabs/sockeye), which is a sequence-to-sequence framework for Neural Machine Translation based on MXNet. Seq2Seq implements state-of-the-art encoder-decoder architectures which can also be used for tasks like Abstractive Summarization in addition to Machine Translation. This notebook shows translation from English to German text.
+- [XGBoost for regression](introduction_to_amazon_algorithms/xgboost_abalone) predicts the age of abalone ([Abalone dataset](https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/regression.html)) using regression from Amazon SageMaker's implementation of [XGBoost](https://github.com/dmlc/xgboost).
+- [XGBoost for multi-class classification](introduction_to_amazon_algorithms/xgboost_mnist) uses Amazon SageMaker's implementation of [XGBoost](https://github.com/dmlc/xgboost) to classify handwritten digits from the MNIST dataset as one of the ten digits using a multi-class classifier. Both single machine and distributed use cases are presented.
+
 ### Scientific Details of Algorithms
 
+These examples provide more thorough mathematical treatment of a select group of algorithms.
+
+- [Latent Dirichlet Allocation (LDA)](scientific_details_of_algorithms/lda_topic_modeling) dives into Amazon SageMaker's spectral decomposition approach to LDA.
+
 ### Advanced Amazon SageMaker Functionality
 
-- [Installing the R Kernel](install_r_kernel) shows how to install the R kernel into an Amazon SageMaker Notebook Instance.
-- [Bring Your Own Model for k-means](kmeans_bring_your_own_model) shows how to take a model that's been fit elsewhere and use Amazon SageMaker containers to host.
-- [Bring Your Own Algorithm with R](r_bring_your_own) shows how to bring your own algorithm container to Amazon SageMaker using the R language.
+- [Installing the R Kernel](advanced_functionality/install_r_kernel) shows how to install the R kernel into an Amazon SageMaker Notebook Instance.
+- [Bring Your Own Model for k-means](advanced_functionality/kmeans_bring_your_own_model) shows how to take a model that's been fit elsewhere and use Amazon SageMaker Algorithms containers to host it.
+- [Bring Your Own Algorithm with R](advanced_functionality/r_bring_your_own) shows how to bring your own algorithm container to Amazon SageMaker using the R language.
 - [Bring Your Own Tensorflow Model](sagemaker-python-sdk/tensorflow_iris_byom) shows how to bring a model trained anywhere into Amazon SageMaker
 
 ## FAQ
 
-*Will these example work outside of Amazon SageMaker?*
+*Will these examples work outside of Amazon SageMaker?*
 
 - Although most examples utilize key Amazon SageMaker functionality like distributed, managed training or real-time hosted endpoints, these notebooks can be run outside of Amazon SageMaker Notebook Instances with minimal modification (updating IAM role definition and installing the necessary libraries).
 
-*How do I contribute my own example notebook?"
+*How do I contribute my own example notebook?*
 
-- Although we're extremely excited to receive contributions from the community, we're still working on the best mechanism to take in examples from and external source. Please bear will us in the short-term if pull requests take longer than expected or are closed.
+- Although we're extremely excited to receive contributions from the community, we're still working on the best mechanism to take in examples from an external source. Please bear with us in the short-term if pull requests take longer than expected or are closed.
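
Regarding the FAQ answer above about running these notebooks outside of Amazon SageMaker Notebook Instances: the typical modification is to install the SDK and supply an IAM role explicitly instead of calling `get_execution_role()`. A minimal, hedged sketch of that change; the role ARN and bucket handling below are placeholders, not values from this repository:

```python
# Illustrative adaptation for running a notebook outside a SageMaker Notebook Instance.
# Assumes `pip install sagemaker boto3` and locally configured AWS credentials.
import sagemaker

# Inside a Notebook Instance you would normally do:
#   from sagemaker import get_execution_role
#   role = get_execution_role()
# Outside of one, pass an IAM role ARN you have created (placeholder value):
role = "arn:aws:iam::123456789012:role/MySageMakerExecutionRole"

session = sagemaker.Session()        # picks up your local credentials and region
bucket = session.default_bucket()    # or any S3 bucket the role can access
```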

introduction_to_amazon_algorithms/README.md

Lines changed: 1 addition & 0 deletions
@@ -3,6 +3,7 @@
 This directory includes introductory examples to Amazon SageMaker Algorithms that we have developed so far. It seeks to provide guidance and examples on basic functionality rather than a detailed scientific review or an implementation on complex, real-world data.
 
 Example Notebooks include:
+- *1P_kmeans_highlevel*: Our introduction to Amazon SageMaker which walks through the process of clustering MNIST images of handwritten digits.
 - *factorization_machines_mnist*: Predicts whether a handwritten digit from the MNIST dataset is a 0 or not using a binary classifier from Amazon SageMaker Factorization Machines.
 - *lda_topic_modeling*: Topic modeling using Amazon SageMaker Latent Dirichlet Allocation (LDA) on a synthetic dataset.
 - *linear_mnist*: Predicts whether a handwritten digit from the MNIST dataset is a 0 or not using a binary classifier from Amazon SageMaker Linear Learner.

sagemaker-python-sdk/1P_kmeans_highlevel/kmeans_mnist.ipynb

Lines changed: 11 additions & 23 deletions
@@ -41,13 +41,10 @@
 "\n",
 "### Permissions and environment variables\n",
 "\n",
-"Here we set up the linkage and authentication to AWS services. There are three parts to this:\n",
+"Here we set up the linkage and authentication to AWS services. There are two parts to this:\n",
 "\n",
-"1. The credentials and region for the account that's running training. Upload the credentials in the normal AWS credentials file format using the jupyter upload feature.\n",
-"2. The roles used to give learning and hosting access to your data. See the documentation for how to specify these.\n",
-"3. The S3 bucket that you want to use for training and model data.\n",
-"\n",
-"_Note:_ Credentials for hosted notebooks will be automated before the final release."
+"1. The role(s) used to give learning and hosting access to your data. See the documentation for how to specify these.\n",
+"1. The S3 bucket name and locations that you want to use for training and model data."
 ]
},
{
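
The setup this markdown cell describes amounts to fetching the notebook's execution role and choosing an S3 bucket. A minimal sketch with the SageMaker Python SDK, assuming a placeholder bucket name and illustrative S3 prefixes (the notebook's actual cell may differ):

```python
# Illustrative setup sketch (not the notebook's exact cell).
# Assumes this runs inside a SageMaker Notebook Instance with the sagemaker SDK installed.
from sagemaker import get_execution_role

role = get_execution_role()              # IAM role granting training/hosting access to your data
bucket = "<your-s3-bucket-name>"         # placeholder: bucket for training input and model artifacts
data_location = f"s3://{bucket}/kmeans_example/data"
output_location = f"s3://{bucket}/kmeans_example/output"
```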
@@ -82,7 +79,9 @@
 "source": [
 "### Data ingestion\n",
 "\n",
-"Next, we read the dataset from the existing repository into memory, for preprocessing prior to training. This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets."
+"Next, we read the dataset from the existing repository into memory, for preprocessing prior to training. In this case we'll use the MNIST dataset, which contains 70K 28 x 28 pixel images of handwritten digits. For more details, please see [here](http://yann.lecun.com/exdb/mnist/).\n",
+"\n",
+"This processing could be done *in situ* by Amazon Athena, Apache Spark in Amazon EMR, Amazon Redshift, etc., assuming the dataset is present in the appropriate location. Then, the next step would be to transfer the data to S3 for use in training. For small datasets, such as this one, reading into memory isn't onerous, though it would be for larger datasets."
 ]
},
{
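
As a rough illustration of the ingestion step, here is one way to read MNIST into memory. The local filename and pickle layout are assumptions for the sketch, not necessarily what the notebook's download cell does:

```python
# Illustrative MNIST load (assumes a local copy of the widely mirrored mnist.pkl.gz;
# the notebook's actual download/ingestion step may differ).
import gzip
import pickle

with gzip.open("mnist.pkl.gz", "rb") as f:
    # Three (images, labels) pairs; together they cover the 70K 28x28 digit images,
    # flattened to 784-dimensional float vectors.
    train_set, valid_set, test_set = pickle.load(f, encoding="latin1")

print(train_set[0].shape)  # (50000, 784) for the standard training split
```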
@@ -137,7 +136,7 @@
 "source": [
 "## Training the K-Means model\n",
 "\n",
-"Once we have the data preprocessed and available in the correct format for training, the next step is to actually train the model using the data. Since this data is relatively small, it isn't meant to show off the performance of the kmeans training algorithm - we will visit that in another example.\n",
+"Once we have the data preprocessed and available in the correct format for training, the next step is to actually train the model using the data. Since this data is relatively small, it isn't meant to show off the performance of the k-means training algorithm. But Amazon SageMaker's k-means has been tested on, and scales well with, multi-terabyte datasets.\n",
 "\n",
 "After setting training parameters, we kick off training, and poll for status until training is completed, which in this example, takes between 7 and 11 minutes."
 ]
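
For orientation, a minimal training sketch with the high-level KMeans estimator from the SageMaker Python SDK, using the 1.x-era argument names from around the time of this commit (newer SDK releases rename `train_instance_*` to `instance_*`); instance type, count, and `k` are illustrative, and `role`, `data_location`, and `train_set` come from the setup and ingestion sketches above:

```python
# Illustrative k-means training sketch (SageMaker Python SDK 1.x-style arguments).
from sagemaker import KMeans

kmeans = KMeans(role=role,
                train_instance_count=2,
                train_instance_type="ml.c4.xlarge",
                output_path=output_location,
                k=10,                                  # one cluster per digit
                data_location=data_location)

# record_set() converts the numpy array to protobuf recordIO and stages it in S3;
# fit() launches the managed training job and polls its status until completion.
kmeans.fit(kmeans.record_set(train_set[0]))
```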
@@ -174,12 +173,7 @@
 "metadata": {},
 "source": [
 "## Set up hosting for the model\n",
-"In order to set up hosting, we have to import the model from training to hosting. A common question would be, why wouldn't we automatically go from training to hosting? As we worked through examples of what customers were looking to do with hosting, we realized that the Amazon ML model of hosting was unlikely to be sufficient for all customers.\n",
-"\n",
-"As a result, we have introduced some flexibility with respect to model deployment, with the goal of additional model deployment targets after launch. In the short term, that introduces some complexity, but we are actively working on making that easier for customers, even before GA.\n",
-"\n",
-"### Import model into hosting\n",
-"Next, you register the model with hosting. This allows you the flexibility of importing models trained elsewhere, as well as the choice of not importing models if the target of model creation is AWS Lambda, AWS Greengrass, Amazon Redshift, Amazon Athena, or other deployment target."
+"Now, we can deploy the model we just trained behind a real-time hosted endpoint. This next step can take, on average, 7 to 11 minutes to complete."
 ]
},
{
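
The deployment the new cell describes is a single SDK call on the trained estimator. A hedged sketch, with illustrative instance type and count, building on the `kmeans` estimator from the training sketch above:

```python
# Illustrative deployment sketch: stand up a real-time hosted endpoint
# behind the model that was just trained.
kmeans_predictor = kmeans.deploy(initial_instance_count=1,
                                 instance_type="ml.m4.xlarge")
```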
@@ -199,7 +193,7 @@
 "metadata": {},
 "source": [
 "## Validate the model for use\n",
-"Finally, the customer can now validate the model for use. They can obtain the endpoint from the client library using the result from previous operations, and generate classifications from the trained model using that endpoint."
+"Finally, we'll validate the model for use. Let's generate a classification for a single observation from the trained model using the endpoint we just created."
 ]
},
{
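
A correspondingly small validation sketch, sending one observation to the endpoint created above; the record index is arbitrary, and `kmeans_predictor` and `train_set` come from the earlier sketches:

```python
# Illustrative validation sketch: classify a single observation against the endpoint.
result = kmeans_predictor.predict(train_set[0][30:31])
print(result)  # each record reports its closest cluster and the distance to it
```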
@@ -268,7 +262,8 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"### (Optional) Delete the Endpoint"
+"### (Optional) Delete the Endpoint\n",
+"If you're ready to be done with this notebook, make sure to run the cell below. This will remove the hosted endpoint you created and avoid any charges from a stray instance being left on."
 ]
},
{
@@ -291,13 +286,6 @@
 "#import sagemaker\n",
 "#sagemaker.Session().delete_endpoint(kmeans_predictor.endpoint)"
 ]
-},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": []
 }
 ],
 "metadata": {
