To use it, clone the example Python files to your gateway node.
Prepare datasets
~~~~~~~~~~~~~~~~
To run the pipelines, you need to upload the data extraction pre-processing script to an S3 bucket. This bucket and all resources for this example must be located in the ``us-east-1`` Amazon Region. If you don’t have a bucket, create one.

From the ``mnist-kmeans-sagemaker`` folder of the Kubeflow repository you cloned on your gateway node, run the following command to upload the ``kmeans_preprocessing.py`` file to your S3 bucket. Change ``<bucket-name>`` to the name of the S3 bucket you created.
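For example, assuming the AWS CLI is configured for your account and that the example expects the script under a ``mnist_kmeans_example/processing_code/`` prefix (the prefix here is an assumption):

::

    # Upload the pre-processing script; replace <bucket-name> with your bucket.
    aws s3 cp kmeans_preprocessing.py s3://<bucket-name>/mnist_kmeans_example/processing_code/kmeans_preprocessing.py
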

- **Bucket**: The name of the S3 bucket that you uploaded the ``kmeans_preprocessing.py`` file to.

You can adjust any of the input parameters using the KFP UI and trigger
your run again.
The KFP command line interface currently does not support specifying input
parameters while creating the run. You need to update your parameters in the
Python pipeline file before compiling. Replace ``<experiment-name>`` and
``<job-name>`` with any names. Replace ``<pipeline-id>`` with the ID of your
submitted pipeline. Replace ``<your-role-arn>`` with the ARN of
``kfp-example-pod-role``. Replace ``<your-bucket-name>`` with the name of the
S3 bucket you created.
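
Compiling can be done with the KFP SDK's DSL compiler; a minimal sketch, assuming the v1 SDK is installed and using illustrative file names:

::

    # Compile the Python pipeline definition into a deployable archive
    # (file names are illustrative, not from this example).
    dsl-compile --py mnist-classification-pipeline.py --output mnist-classification-pipeline.tar.gz
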

::

    kfp run submit --experiment-name <experiment-name> --run-name <job-name> --pipeline-id <pipeline-id> role_arn="<your-role-arn>" bucket_name="<your-bucket-name>"
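
After submitting, you can check the status of your runs from the same CLI (a sketch; the output columns vary by KFP version):

::

    # List recent runs and their current statuses
    kfp run list
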