<pclass="firstline">Lists pipelines. Returns a "NOT_FOUND" error if the list is empty. Returns a "FORBIDDEN" error if the caller doesn't have permission to access it.</p>
87
+
<pclass="firstline">Lists pipelines. Returns a "FORBIDDEN" error if the caller doesn't have permission to access it.</p>
<pre>Lists pipelines. Returns a "NOT_FOUND" error if the list is empty. Returns a "FORBIDDEN" error if the caller doesn't have permission to access it.
99
+
<pre>Lists pipelines. Returns a "FORBIDDEN" error if the caller doesn't have permission to access it.
100
100
101
101
Args:
102
102
parent: string, Required. The location name. For example: `projects/PROJECT_ID/locations/LOCATION_ID`. (required)
103
-
filter: string, An expression for filtering the results of the request. If unspecified, all pipelines will be returned. Multiple filters can be applied and must be comma separated. Fields eligible for filtering are: + `type`: The type of the pipeline (streaming or batch). Allowed values are `ALL`, `BATCH`, and `STREAMING`. + `executor_type`: The type of pipeline execution layer. This is always Dataflow for now, but more executors may be added later. Allowed values are `ALL` and `DATAFLOW`. + `status`: The activity status of the pipeline. Allowed values are `ALL`, `ACTIVE`, `ARCHIVED`, and `PAUSED`. For example, to limit results to active batch processing pipelines: type:BATCH,status:ACTIVE
103
+
filter: string, An expression for filtering the results of the request. If unspecified, all pipelines will be returned. Multiple filters can be applied and must be comma separated. Fields eligible for filtering are: + `type`: The type of the pipeline (streaming or batch). Allowed values are `ALL`, `BATCH`, and `STREAMING`. + `status`: The activity status of the pipeline. Allowed values are `ALL`, `ACTIVE`, `ARCHIVED`, and `PAUSED`. For example, to limit results to active batch processing pipelines: type:BATCH,status:ACTIVE
104
104
pageSize: integer, The maximum number of entities to return. The service may return fewer than this value, even if there are additional pages. If unspecified, the max limit is yet to be determined by the backend implementation.
105
105
pageToken: string, A page token, received from a previous `ListPipelines` call. Provide this to retrieve the subsequent page. When paginating, all other parameters provided to `ListPipelines` must match the call that provided the page token.
106
106
x__xgafv: string, V1 error format.
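The parameters in the hunk above map directly onto the generated Python client's `list()` call. A minimal sketch, assuming application-default credentials and the API enabled; the project and location IDs are placeholders, and the filter value reuses the example from the `filter` description:

```python
# Minimal sketch: list active batch pipelines with the generated client.
# Assumes application-default credentials; project/location IDs are placeholders.
from googleapiclient.discovery import build

service = build("datapipelines", "v1")

response = (
    service.projects()
    .locations()
    .pipelines()
    .list(
        parent="projects/my-project/locations/us-central1",  # placeholder parent
        filter="type:BATCH,status:ACTIVE",  # same syntax as the filter arg above
        pageSize=25,
    )
    .execute()
)

for pipeline in response.get("pipelines", []):
    print(pipeline["name"], pipeline.get("displayName", ""))
```

Note that `pageSize` only caps a single page; paging through the full result set is sketched after the response hunk below.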
@@ -114,12 +114,12 @@ <h3>Method Details</h3>
     { # Response message for ListPipelines.
   "nextPageToken": "A String", # A token, which can be sent as `page_token` to retrieve the next page. If this field is omitted, there are no subsequent pages.
   "pipelines": [ # Results that matched the filter criteria and were accessible to the caller. Results are always in descending order of pipeline creation date.
-    { # The main pipeline entity and all the needed metadata to launch and manage linked jobs.
+    { # The main pipeline entity and all the necessary metadata for launching and managing linked jobs.
       "createTime": "A String", # Output only. Immutable. The timestamp when the pipeline was initially created. Set by the Data Pipelines service.
       "displayName": "A String", # Required. The display name of the pipeline. It can contain only letters ([A-Za-z]), numbers ([0-9]), hyphens (-), and underscores (_).
       "jobCount": 42, # Output only. Number of jobs.
       "lastUpdateTime": "A String", # Output only. Immutable. The timestamp when the pipeline was last modified. Set by the Data Pipelines service.
-      "name": "A String", # The pipeline name. For example: `projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID`. * `PROJECT_ID` can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see [Identifying projects](https://cloud.google.com/resource-manager/docs/creating-managing-projects#identifying_projects) * `LOCATION_ID` is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling ListLocations. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in [App Engine regions](https://cloud.google.com/about/locations#region). * `PIPELINE_ID` is the ID of the pipeline. Must be unique for the selected project and location.
+      "name": "A String", # The pipeline name. For example: `projects/PROJECT_ID/locations/LOCATION_ID/pipelines/PIPELINE_ID`. * `PROJECT_ID` can contain letters ([A-Za-z]), numbers ([0-9]), hyphens (-), colons (:), and periods (.). For more information, see [Identifying projects](https://cloud.google.com/resource-manager/docs/creating-managing-projects#identifying_projects). * `LOCATION_ID` is the canonical ID for the pipeline's location. The list of available locations can be obtained by calling `google.cloud.location.Locations.ListLocations`. Note that the Data Pipelines service is not available in all regions. It depends on Cloud Scheduler, an App Engine application, so it's only available in [App Engine regions](https://cloud.google.com/about/locations#region). * `PIPELINE_ID` is the ID of the pipeline. Must be unique for the selected project and location.
       "pipelineSources": { # Immutable. The sources of the pipeline (for example, Dataplex). The keys and values are set by the corresponding sources during pipeline creation.