
Commit 2ceecb2

Modified license year, classify to cluster, and warning on running times
1 parent 9b40034 commit 2ceecb2

File tree

4 files changed (+30 lines, -22 lines)


sagemaker-spark/pyspark_mnist/pyspark_mnist_custom_estimator.ipynb

Lines changed: 4 additions & 4 deletions
@@ -15,7 +15,7 @@
 "7. [More on SageMaker Spark](#More-on-SageMaker-Spark)\n",
 "\n",
 "## Introduction\n",
-"This notebook will show how to classify handwritten digits through the SageMaker PySpark library. \n",
+"This notebook will show how to cluster handwritten digits through the SageMaker PySpark library. \n",
 "\n",
 "We will manipulate data through Spark using a SparkSession, and then use the SageMaker Spark library to interact with SageMaker for training and inference. \n",
 "We will use a custom estimator to perform the classification task, and train and infer using that custom estimator.\n",
@@ -31,7 +31,7 @@
 "source": [
 "## Setup\n",
 "\n",
-"First, we import the necessary modules and create the SparkSession and `SparkSession` with the SageMaker-Spark dependencies attached. "
+"First, we import the necessary modules and create the `SparkSession` with the SageMaker-Spark dependencies attached. "
 ]
 },
 {
@@ -194,7 +194,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Let's train this estimator by calling fit on it with the training data."
+"Let's train this estimator by calling fit on it with the training data. Please note the below code will take several minutes to run and create all the resources needed for this model. "
 ]
 },
 {
@@ -335,7 +335,7 @@
 "pygments_lexer": "ipython3",
 "version": "3.6.4"
 },
-"notice": "Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
+"notice": "Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
 },
 "nbformat": 4,
 "nbformat_minor": 2
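The "calling fit" wording in the hunk above reflects Spark ML's Estimator/Model contract, which SageMaker Spark estimators follow: `fit()` trains and returns a fitted model, and the model's `transform()` runs inference. A minimal pure-Python sketch of that contract (toy names, not the `sagemaker_pyspark` API):

```python
# Spark ML's Estimator/Model contract in miniature. Hypothetical toy
# classes for illustration only -- not the sagemaker_pyspark API.

class MeanEstimator:
    """Toy estimator: 'training' just computes the mean of the data."""
    def fit(self, data):
        return MeanModel(sum(data) / len(data))

class MeanModel:
    """Toy fitted model: 'inference' flags values above the learned mean."""
    def __init__(self, mean):
        self.mean = mean
    def transform(self, data):
        return [x > self.mean for x in data]

model = MeanEstimator().fit([1.0, 2.0, 3.0, 4.0])  # training step
flags = model.transform([0.5, 9.9])                # inference step
```

With a real SageMaker Spark estimator the same two calls stand for much heavier work (launching a training job, then serving from an endpoint), which is why the commit adds the running-time warnings.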

sagemaker-spark/pyspark_mnist/pyspark_mnist_kmeans.ipynb

Lines changed: 3 additions & 3 deletions
@@ -16,7 +16,7 @@
 "10. [More on SageMaker Spark](#More-on-SageMaker-Spark)\n",
 "\n",
 "## Introduction\n",
-"This notebook will show how to classify handwritten digits through the SageMaker PySpark library. \n",
+"This notebook will show how to cluster handwritten digits through the SageMaker PySpark library. \n",
 "\n",
 "We will manipulate data through Spark using a SparkSession, and then use the SageMaker Spark library to interact with SageMaker for training and inference. \n",
 "We will first train on SageMaker using K-Means clustering on the MNIST dataset. Then, we will see how to re-use models from existing endpoints and from a model stored on S3 in order to only run inference. \n",
@@ -32,7 +32,7 @@
 "source": [
 "## Setup\n",
 "\n",
-"First, we import the necessary modules and create the SparkSession and `SparkSession` with the SageMaker-Spark dependencies attached. "
+"First, we import the necessary modules and create the `SparkSession` with the SageMaker-Spark dependencies attached. "
 ]
 },
 {
@@ -540,7 +540,7 @@
 "pygments_lexer": "ipython3",
 "version": "3.6.4"
 },
-"notice": "Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
+"notice": "Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
 },
 "nbformat": 4,
 "nbformat_minor": 2
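The classify-to-cluster wording fix above is the point of this commit: K-Means is unsupervised, so it groups digits by distance to centroids without ever seeing labels. A pure-Python sketch of one run of Lloyd's algorithm on toy 1-D data (illustration only, not the SageMaker `KMeansSageMakerEstimator`):

```python
# Minimal Lloyd's-algorithm K-Means on 1-D points. Toy sketch: no labels
# are involved, which is why the notebook "clusters" rather than
# "classifies" the digits.

def kmeans(points, centroids, iterations=10):
    """Return final centroids and the cluster index of each point."""
    assignments = []
    for _ in range(iterations):
        # Assignment step: attach each point to its nearest centroid.
        assignments = [
            min(range(len(centroids)), key=lambda c: abs(p - centroids[c]))
            for p in points
        ]
        # Update step: move each centroid to the mean of its members.
        for c in range(len(centroids)):
            members = [p for p, a in zip(points, assignments) if a == c]
            if members:
                centroids[c] = sum(members) / len(members)
    return centroids, assignments

points = [0.9, 1.1, 1.0, 8.9, 9.1, 9.0]  # two obvious groups
centroids, assignments = kmeans(points, centroids=[0.0, 5.0])
# assignments -> [0, 0, 0, 1, 1, 1]; centroids -> [1.0, 9.0]
```

The SageMaker version does the same assignment/update loop at scale, on 784-dimensional MNIST vectors, inside a managed training job.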

sagemaker-spark/pyspark_mnist/pyspark_mnist_pca_kmeans.ipynb

Lines changed: 11 additions & 9 deletions
@@ -15,7 +15,7 @@
 "7. [More on SageMaker Spark](#More-on-SageMaker-Spark)\n",
 "\n",
 "## Introduction\n",
-"This notebook will show how to classify handwritten digits through the SageMaker PySpark library. \n",
+"This notebook will show how to cluster handwritten digits through the SageMaker PySpark library. \n",
 "\n",
 "We will manipulate data through Spark using a SparkSession, and then use the SageMaker Spark library to interact with SageMaker for training and inference. \n",
 "We will create a pipeline consisting of a first step to reduce the dimensionality using SageMaker's PCA algorithm, followed by the final K-Means clustering step on SageMaker. \n",
@@ -31,7 +31,7 @@
 "source": [
 "## Setup\n",
 "\n",
-"First, we import the necessary modules and create the SparkSession and `SparkSession` with the SageMaker-Spark dependencies attached. "
+"First, we import the necessary modules and create the `SparkSession` with the SageMaker-Spark dependencies attached. "
 ]
 },
 {
@@ -180,16 +180,11 @@
 "pipelineSM = Pipeline(stages=[pcaSageMakerEstimator, kMeansSageMakerEstimator])"
 ]
 },
-{
-"cell_type": "markdown",
-"metadata": {},
-"source": []
-},
 {
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Now that we've defined the `Pipeline`, we can call fit on the training data. "
+"Now that we've defined the `Pipeline`, we can call fit on the training data. Please note the below code will take several minutes to run and create all the resources needed for this pipeline. "
 ]
 },
 {
@@ -231,6 +226,13 @@
 "![PCA and KMeans on SageMaker](img/sagemaker-spark-pca-kmeans-architecture.png)"
 ]
 },
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"Please note the below code will take several minutes to run and create the final K-Means endpoint needed for this pipeline. "
+]
+},
 {
 "cell_type": "code",
 "execution_count": null,
@@ -354,7 +356,7 @@
 "pygments_lexer": "ipython3",
 "version": "3.6.4"
 },
-"notice": "Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
+"notice": "Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
 },
 "nbformat": 4,
 "nbformat_minor": 2
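The `Pipeline(stages=[pcaSageMakerEstimator, kMeansSageMakerEstimator])` in the hunk above chains stages so each one trains on the previous stage's transformed output, which is why a single `fit` call can take several minutes here. A miniature pure-Python sketch of that chaining (hypothetical toy stages, not Spark ML itself):

```python
# A miniature version of Spark ML's Pipeline chaining. Hypothetical toy
# classes for illustration only -- not the pyspark.ml API.

class MiniPipeline:
    def __init__(self, stages):
        self.stages = stages
    def fit(self, data):
        models = []
        for stage in self.stages:
            model = stage.fit(data)
            data = model.transform(data)  # next stage trains on this output
            models.append(model)
        return MiniPipelineModel(models)

class MiniPipelineModel:
    def __init__(self, models):
        self.models = models
    def transform(self, data):
        for model in self.models:  # inference replays every stage in order
            data = model.transform(data)
        return data

class Scale:
    """Toy stage: learns nothing, just scales inputs by a constant."""
    def __init__(self, factor):
        self.factor = factor
    def fit(self, data):
        return self
    def transform(self, data):
        return [x * self.factor for x in data]

model = MiniPipeline(stages=[Scale(2), Scale(10)]).fit([1, 2, 3])
result = model.transform([1, 2, 3])  # -> [20, 40, 60]
```

In the notebook the two stages stand for a SageMaker PCA training job and a SageMaker K-Means training job, and the fitted pipeline model replays PCA then K-Means against their endpoints at inference time.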

sagemaker-spark/pyspark_mnist/pyspark_mnist_pca_mllib_kmeans.ipynb

Lines changed: 12 additions & 6 deletions
@@ -15,14 +15,18 @@
 "7. [More on SageMaker Spark](#More-on-SageMaker-Spark)\n",
 "\n",
 "## Introduction\n",
-"This notebook will show how to classify handwritten digits through the SageMaker PySpark library. \n",
+"This notebook will show how to cluster handwritten digits through the SageMaker PySpark library. \n",
 "\n",
 "We will manipulate data through Spark using a SparkSession, and then use the SageMaker Spark library to interact with SageMaker for training and inference. \n",
 "We will create a pipeline consisting of a first step to reduce the dimensionality using Spark MLLib PCA algorithm, followed by the final K-Means clustering step on SageMaker. \n",
 "\n",
 "You can visit SageMaker Spark's GitHub repository at https://github.com/aws/sagemaker-spark to learn more about SageMaker Spark.\n",
 "\n",
-"This notebook was created and tested on an ml.m4.xlarge notebook instance."
+"This notebook was created and tested on an ml.m4.xlarge notebook instance.\n",
+"\n",
+"## Why use Spark MLLib algorithms? \n",
+"\n",
+"The use of Spark MLLib PCA in this notebook is meant to showcase how you can use different pre-processing steps, ranging from data transformers to algorithms, with tools such as Spark MLLib that are well suited for data pre-processing. You can then use SageMaker algorithms and features through the SageMaker-Spark SDK. In our case, PCA is in charge of reducing the feature vector as a pre-processing step, and K-Means is responsible for clustering the data. "
@@ -31,7 +35,7 @@
 "source": [
 "## Setup\n",
 "\n",
-"First, we import the necessary modules and create the SparkSession and `SparkSession` with the SageMaker-Spark dependencies attached. "
+"First, we import the necessary modules and create the `SparkSession` with the SageMaker-Spark dependencies attached. "
 ]
 },
 {
@@ -191,7 +195,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Now that we've defined the `Pipeline`, we can call fit on the training data. "
+"Now that we've defined the `Pipeline`, we can call fit on the training data. Please note the below code will take several minutes to run and create all the resources needed for this pipeline. "
 ]
 },
 {
@@ -219,7 +223,9 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"## Inference"
+"## Inference\n",
+"\n",
+"Let's use our test data on our pipeline by calling `transform`. Please note the below code will take several minutes to run and create the endpoints needed in order to serve this pipeline. "
 ]
 },
 {
@@ -350,7 +356,7 @@
 "pygments_lexer": "ipython3",
 "version": "3.6.4"
 },
-"notice": "Copyright 2017 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
+"notice": "Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. Licensed under the Apache License, Version 2.0 (the \"License\"). You may not use this file except in compliance with the License. A copy of the License is located at http://aws.amazon.com/apache2.0/ or in the \"license\" file accompanying this file. This file is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
 },
 "nbformat": 4,
 "nbformat_minor": 2
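The new "Why use Spark MLLib algorithms?" section above describes PCA as a pre-processing step that shrinks the feature vector before K-Means clusters it. A pure-Python sketch of that reduction, projecting 2-D points onto their top principal component via power iteration (toy data and code, not the Spark MLLib PCA API):

```python
# Dimensionality reduction as a pre-processing step: project 2-D points
# onto their top principal component. Toy power-iteration sketch, not the
# Spark MLLib PCA implementation.

def top_component(points, iterations=50):
    """Power iteration on the 2x2 covariance matrix of centered points."""
    n = len(points)
    mx = sum(p[0] for p in points) / n
    my = sum(p[1] for p in points) / n
    centered = [(x - mx, y - my) for x, y in points]
    # Covariance matrix entries (symmetric 2x2).
    cxx = sum(x * x for x, _ in centered) / n
    cxy = sum(x * y for x, y in centered) / n
    cyy = sum(y * y for _, y in centered) / n
    vx, vy = 1.0, 1.0
    for _ in range(iterations):
        # Multiply by the covariance matrix, then renormalize.
        vx, vy = cxx * vx + cxy * vy, cxy * vx + cyy * vy
        norm = (vx * vx + vy * vy) ** 0.5
        vx, vy = vx / norm, vy / norm
    return (vx, vy), centered

def project(points):
    """Reduce each 2-D point to its 1-D coordinate along the top component."""
    (vx, vy), centered = top_component(points)
    return [x * vx + y * vy for x, y in centered]

# Points lying near the line y = x: one direction carries almost all the
# variance, so a single coordinate preserves most of the structure.
reduced = project([(0, 0), (1, 1.1), (2, 1.9), (3, 3.0)])
```

In the notebook this role is played by Spark MLLib's PCA on 784-dimensional MNIST vectors, after which the smaller vectors are handed to SageMaker K-Means through the SageMaker-Spark SDK.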
