doc/amazon_sagemaker_processing.rst (5 additions, 5 deletions)
@@ -69,15 +69,15 @@ For an in-depth look, please see the `Scikit-learn Data Processing and Model Eva
 
 Data Processing with Spark
 ============================================
-SageMaker provides two classes for customers to run Spark applications: :class:`sagemaker.processing.PySparkProcessor` and :class:`sagemaker.processing.SparkJarProcessor`
+SageMaker provides two classes for customers to run Spark applications: :class:`sagemaker.spark.processing.PySparkProcessor` and :class:`sagemaker.spark.processing.SparkJarProcessor`
 
 
 PySparkProcessor
 ---------------------
 
-You can use the :class:`sagemaker.processing.PySparkProcessor` class to run PySpark scripts as processing jobs.
+You can use the :class:`sagemaker.spark.processing.PySparkProcessor` class to run PySpark scripts as processing jobs.
 
-This example shows how you can take an existing PySpark script and run a processing job with the :class:`sagemaker.processing.PySparkProcessor` class and the pre-built SageMaker Spark container.
+This example shows how you can take an existing PySpark script and run a processing job with the :class:`sagemaker.spark.processing.PySparkProcessor` class and the pre-built SageMaker Spark container.
 
 First you need to create a :class:`PySparkProcessor` object
 
@@ -230,8 +230,8 @@ Processing class documentation
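
The code example that follows "First you need to create a :class:`PySparkProcessor` object" is truncated in this diff. As a rough sketch of the kind of snippet the corrected import path implies (the role ARN, instance settings, and ``preprocess.py`` script name below are illustrative placeholders, not taken from the documentation), creating and running the processor might look like:

.. code:: python

    from sagemaker.spark.processing import PySparkProcessor

    # Configure the processor; the role ARN and instance settings are hypothetical.
    spark_processor = PySparkProcessor(
        base_job_name="spark-preprocessor",
        framework_version="2.4",
        role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role ARN
        instance_count=2,
        instance_type="ml.c5.xlarge",
        max_runtime_in_seconds=1200,
    )

    # Submit an existing PySpark script as a processing job on the
    # pre-built SageMaker Spark container.
    spark_processor.run(
        submit_app="preprocess.py",  # placeholder script path
        arguments=["--input", "s3://my-bucket/raw", "--output", "s3://my-bucket/prepared"],
    )

Note the import path used above: both processor classes live under ``sagemaker.spark.processing``, which is exactly what this change corrects in the prose.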