For an in-depth look, please see the `Scikit-learn Data Processing and Model Evaluation`_ example notebook.

.. _Scikit-learn Data Processing and Model Evaluation: https://github.com/awslabs/amazon-sagemaker-examples/blob/master/sagemaker_processing/scikit_learn_data_processing_and_model_evaluation/scikit_learn_data_processing_and_model_evaluation.ipynb


Data Processing with Spark
==========================

SageMaker provides two classes for customers to run Spark applications: :class:`sagemaker.processing.PySparkProcessor` and :class:`sagemaker.processing.SparkJarProcessor`.

PySparkProcessor
----------------

You can use the :class:`sagemaker.processing.PySparkProcessor` class to run PySpark scripts as processing jobs.

This example shows how you can take an existing PySpark script and run it as a processing job with the :class:`sagemaker.processing.PySparkProcessor` class and the pre-built SageMaker Spark container.

First, create a :class:`PySparkProcessor` object:

.. code:: python

    from sagemaker.processing import PySparkProcessor, ProcessingInput

    spark_processor = PySparkProcessor(
        base_job_name="sm-spark",
        framework_version="2.4",
        py_version="py37",
        container_version="1",
        role="[Your SageMaker-compatible IAM role]",
        instance_count=2,
        instance_type="ml.c5.xlarge",
        max_runtime_in_seconds=1200,
        image_uri="your-image-uri"
    )

The ``framework_version`` is the Spark version your script will run on.
``py_version`` and ``container_version`` are two new parameters you can specify in the constructor. They give you more flexibility to select the container version, so you can avoid backward incompatibilities and unnecessary dependency upgrades.

If you specify only ``framework_version``, SageMaker defaults to a Python version and the latest container version. To pin an exact version of the SageMaker Spark container, you need to specify all three parameters: ``framework_version``, ``py_version``, and ``container_version``.

You can also specify ``image_uri``, which overrides all three of these parameters.

Note that the ``command`` option is not supported on either :class:`PySparkProcessor` or :class:`SparkJarProcessor`. If you want to run your script in your own container, use :class:`ScriptProcessor` instead.

Then you can run your existing Spark script ``preprocessing.py`` in a processing job.
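
Below is a minimal sketch of that ``run()`` call. The bucket names, key prefixes, and the four ``s3_*`` argument names are hypothetical placeholders; substitute whatever ``preprocessing.py`` actually expects.

.. code:: python

    spark_processor.run(
        submit_app="preprocessing.py",  # local relative path (or S3 path) of your PySpark script
        arguments=[
            "s3_input_bucket", "your-input-bucket",  # hypothetical argument names and values
            "s3_input_key_prefix", "spark/input",
            "s3_output_bucket", "your-output-bucket",
            "s3_output_key_prefix", "spark/output",
        ],
        spark_event_logs_s3_uri="s3://your-output-bucket/spark-event-logs",  # optional, enables the history server
    )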

``submit_app`` is the local relative path or S3 path of your Python script; in this case it's ``preprocessing.py``.

You can also specify any Python or jar dependencies or files that your script depends on with ``submit_py_files``, ``submit_jars``, and ``submit_files``.

``submit_py_files`` is a list of .zip, .egg, or .py files to place on the ``PYTHONPATH`` for Python apps. ``submit_jars`` is a list of jars to include on the driver and executor classpaths. ``submit_files`` is a list of files to be placed in the working directory of each executor. The paths of these files on the executors can be accessed via ``SparkFiles.get(fileName)``.

Each item in these lists can be either an S3 path or a local path, so if you have dependencies stored both in S3 and locally, you can put all of them in ``submit_py_files``, ``submit_jars``, and ``submit_files``.
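
As a sketch of how mixed local and S3 dependencies might be passed (all of the file names and S3 paths below are made up):

.. code:: python

    spark_processor.run(
        submit_app="preprocessing.py",
        submit_py_files=["utils.zip", "s3://your-bucket/deps/helpers.py"],  # placed on the PYTHONPATH
        submit_jars=["s3://your-bucket/deps/custom-udfs.jar"],              # added to driver and executor classpaths
        submit_files=["config.json"],                                       # copied to each executor's working directory
    )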

Just like with the :class:`ScriptProcessor`, you can pass arguments to your script by specifying the ``arguments`` parameter. In this example, four arguments are passed to the script so that it can download its input data from and upload its output data to S3.

To support the Spark history server, you can specify the ``spark_event_logs_s3_uri`` parameter when you invoke the ``run()`` method to continuously upload Spark events to S3. Note that performance is slightly impacted if you decide to publish Spark events to S3.

Spark History Server
--------------------

While the script is running, or after it has run, you can view the Spark UI by running the history server locally or in the notebook. By default, the S3 URI you provided in the previous ``run()`` call is used as the Spark event source, but you can also specify a different URI. Last but not least, you can terminate the history server with ``terminate_history_server()``. Note that only one history server process runs at a time.

Here's an example of starting and terminating the history server:

.. code:: python

    spark_processor.start_history_server()
    spark_processor.terminate_history_server()

You don't always have to run the script first to start the history server; you can also specify an S3 URI where Spark event logs are already stored.
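
For example (the S3 URI below is a placeholder for a location that already contains Spark event logs):

.. code:: python

    spark_processor.start_history_server(
        spark_event_logs_s3_uri="s3://your-output-bucket/spark-event-logs"
    )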

To successfully run the history server, first make sure ``docker`` is installed on your machine. Then configure your AWS credentials with S3 read permission. Last but not least, you need to either invoke the ``run()`` method with ``spark_event_logs_s3_uri`` first, or specify ``spark_event_logs_s3_uri`` in the ``start_history_server()`` call; otherwise starting the history server will fail.

SparkJarProcessor
-----------------

:class:`SparkJarProcessor` is very similar to :class:`PySparkProcessor`, except that its ``run()`` method takes a jar file path, configured by the ``submit_app`` parameter, and a ``submit_class`` parameter, which is equivalent to the ``--class`` option of the ``spark-submit`` command.

Suppose that you have the jar file ``preprocessing.jar`` stored in your current directory, and that the Java class is ``com.path.to.your.class.PreProcessing``.
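
A sketch of what this could look like follows; it mirrors the constructor arguments used above, and the job name, role, and instance settings are placeholders rather than required values.

.. code:: python

    from sagemaker.processing import SparkJarProcessor

    spark_processor = SparkJarProcessor(
        base_job_name="sm-spark-java",
        framework_version="2.4",
        role="[Your SageMaker-compatible IAM role]",
        instance_count=2,
        instance_type="ml.c5.xlarge",
        max_runtime_in_seconds=1200,
    )

    spark_processor.run(
        submit_app="preprocessing.jar",                       # local (or S3) path of the application jar
        submit_class="com.path.to.your.class.PreProcessing",  # equivalent to spark-submit's --class option
    )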

Configuration Override
----------------------

Overriding Spark configuration is crucial for a number of tasks, such as tuning your Spark application or configuring the Hive metastore. Using the Python SDK, you can easily override the Spark, Hive, and Hadoop configuration.

An example usage would be overriding Spark executor memory/cores as demonstrated in the following code snippet:
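
The snippet below is a sketch: it assumes the EMR-style classification format accepted by the ``configuration`` parameter of ``run()``, and the memory and core values are arbitrary examples.

.. code:: python

    configuration = [
        {
            "Classification": "spark-defaults",
            "Properties": {
                "spark.executor.memory": "2g",  # memory per executor
                "spark.executor.cores": "1",    # cores per executor
            },
        }
    ]

    spark_processor.run(
        submit_app="preprocessing.py",
        configuration=configuration,
    )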