To use it, clone the example Python files to your gateway node.
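
For example, assuming the example ships in the ``kubeflow/pipelines`` repository (the repository URL and sample path below are assumptions; substitute whatever location your setup documents)::

    # Clone the Kubeflow Pipelines repository on the gateway node
    git clone https://github.com/kubeflow/pipelines.git

    # Change into the example folder (assumed path within the repository)
    cd pipelines/samples/contrib/aws-samples/mnist-kmeans-sagemaker
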
Prepare datasets
~~~~~~~~~~~~~~~~

To run the pipelines, you need to upload the data extraction pre-processing script to an S3 bucket. This bucket and all resources for this example must be located in the ``us-east-1`` AWS Region. If you don’t have a bucket, create one.
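
As a minimal sketch, you can create the bucket with the AWS CLI (this assumes the CLI is installed and configured with credentials for your account; ``<bucket-name>`` is a placeholder you choose)::

    # Create the bucket in us-east-1; bucket names must be globally unique
    aws s3 mb s3://<bucket-name> --region us-east-1
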
From the ``mnist-kmeans-sagemaker`` folder of the Kubeflow repository you cloned on your gateway node, run the following command to upload the ``kmeans_preprocessing.py`` file to your S3 bucket. Change ``<bucket-name>`` to the name of the S3 bucket you created.
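
An illustrative sketch with the AWS CLI (the ``mnist_kmeans_example/processing_code/`` destination prefix is an assumption; check the pipeline definition for the exact key it expects)::

    # Upload the pre-processing script to your bucket
    aws s3 cp kmeans_preprocessing.py s3://<bucket-name>/mnist_kmeans_example/processing_code/kmeans_preprocessing.py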