@@ -45,8 +45,8 @@ def feature_processor(
 
     If the decorated function is executed without arguments then the decorated function's arguments
     are automatically loaded from the input data sources. Outputs are ingested to the output Feature
-    Group. If arguments are provided to this function, then arguments are not automatically
-    loaded (for testing).
+    Group. If arguments are provided to this function, then arguments are not automatically loaded
+    (for testing).
 
     Decorated functions must conform to the expected signature. Parameters: one parameter of type
     pyspark.sql.DataFrame for each DataSource in 'inputs'; followed by the optional parameters with
@@ -82,9 +82,9 @@ def transform(input_feature_group, input_csv):
         inputs (Sequence[Union[FeatureGroupDataSource, CSVDataSource, ParquetDataSource,
             BaseDataSource]]): A list of data sources.
         output (str): A Feature Group ARN to write results of this function to.
-        target_stores (Optional[list[str]], optional): A list containing at least one
-            of 'OnlineStore' or 'OfflineStore'. If unspecified, data will be ingested to the
-            enabled stores of the output feature group. Defaults to None.
+        target_stores (Optional[list[str]], optional): A list containing at least one of
+            'OnlineStore' or 'OfflineStore'. If unspecified, data will be ingested to the enabled
+            stores of the output feature group. Defaults to None.
 
         parameters (Optional[Dict[str, Union[str, Dict]]], optional): Parameters to be provided to
             the decorated function, available as the 'params' argument. Useful for parameterized
             functions. The params argument also contains the set of system provided parameters
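The behavior the first hunk documents — auto-loading the decorated function's arguments from its configured data sources when it is called with no arguments, but passing through any explicitly provided arguments so the function can be unit-tested with hand-built inputs — can be sketched as a plain decorator. This is an illustrative simplification, not the SageMaker implementation; the `load_*` callables below are hypothetical stand-ins for real data sources, and plain lists of dicts stand in for pyspark DataFrames:

```python
from functools import wraps


def auto_loading_processor(inputs):
    """Sketch: auto-load arguments from 'inputs' when none are given."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            if not args and not kwargs:
                # No arguments supplied: load one argument per data source.
                args = tuple(source() for source in inputs)
            # Arguments supplied (e.g. in a test): skip auto-loading.
            return func(*args, **kwargs)
        return wrapper
    return decorator


# Hypothetical loaders standing in for Feature Group / CSV data sources.
def load_feature_group():
    return [{"id": 1, "value": 10}]


def load_csv():
    return [{"id": 1, "extra": "a"}]


@auto_loading_processor(inputs=[load_feature_group, load_csv])
def transform(feature_rows, csv_rows):
    # Combine the two inputs row by row (a stand-in for a DataFrame join).
    return [{**f, **c} for f, c in zip(feature_rows, csv_rows)]


# Called without arguments: both inputs are loaded automatically.
print(transform())  # [{'id': 1, 'value': 10, 'extra': 'a'}]

# Called with explicit arguments (for testing): auto-loading is skipped.
print(transform([{"id": 2, "value": 5}], [{"id": 2, "extra": "b"}]))
```

The real decorator additionally handles ingestion of the return value into the output Feature Group; this sketch only shows the argument-loading contract.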