docs/dyn/dlp_v2.organizations.locations.discoveryConfigs.html (+6 −6)
```diff
@@ -116,7 +116,7 @@ <h3>Method Details</h3>
   "actions": [ # Actions to execute at the completion of scanning.
     { # A task to execute when a data profile has been generated.
       "exportData": { # If set, the detailed data profiles will be persisted to the location of your choice whenever updated. # Export data profiles into a provided location.
-        "profileTable": { # Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: `<project_id>:<dataset_id>.<table_id>` or `<project_id>.<dataset_id>.<table_id>`. # Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in a new row in BigQuery.
+        "profileTable": { # Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: `<project_id>:<dataset_id>.<table_id>` or `<project_id>.<dataset_id>.<table_id>`. # Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in new rows in BigQuery. Data is inserted using [streaming insert](https://cloud.google.com/blog/products/bigquery/life-of-a-bigquery-streaming-insert) and so data may be in the buffer for a period of time after the profile has finished. The Pub/Sub notification is sent before the streaming buffer is guaranteed to be written, so data may not be instantly visible to queries by the time your topic receives the Pub/Sub notification.
           "datasetId": "A String", # Dataset ID of the table.
           "projectId": "A String", # The Google Cloud Platform project ID of the project containing the table. If omitted, project ID is inferred from the API call.
           "tableId": "A String", # Name of the table.
@@ -278,7 +278,7 @@ <h3>Method Details</h3>
   "actions": [ # Actions to execute at the completion of scanning.
     { # A task to execute when a data profile has been generated.
       "exportData": { # If set, the detailed data profiles will be persisted to the location of your choice whenever updated. # Export data profiles into a provided location.
-        "profileTable": { # Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: `<project_id>:<dataset_id>.<table_id>` or `<project_id>.<dataset_id>.<table_id>`. # Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in a new row in BigQuery.
+        "profileTable": { # Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: `<project_id>:<dataset_id>.<table_id>` or `<project_id>.<dataset_id>.<table_id>`. # Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in new rows in BigQuery. Data is inserted using [streaming insert](https://cloud.google.com/blog/products/bigquery/life-of-a-bigquery-streaming-insert) and so data may be in the buffer for a period of time after the profile has finished. The Pub/Sub notification is sent before the streaming buffer is guaranteed to be written, so data may not be instantly visible to queries by the time your topic receives the Pub/Sub notification.
           "datasetId": "A String", # Dataset ID of the table.
           "projectId": "A String", # The Google Cloud Platform project ID of the project containing the table. If omitted, project ID is inferred from the API call.
           "tableId": "A String", # Name of the table.
@@ -464,7 +464,7 @@ <h3>Method Details</h3>
   "actions": [ # Actions to execute at the completion of scanning.
     { # A task to execute when a data profile has been generated.
       "exportData": { # If set, the detailed data profiles will be persisted to the location of your choice whenever updated. # Export data profiles into a provided location.
-        "profileTable": { # Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: `<project_id>:<dataset_id>.<table_id>` or `<project_id>.<dataset_id>.<table_id>`. # Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in a new row in BigQuery.
+        "profileTable": { # Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: `<project_id>:<dataset_id>.<table_id>` or `<project_id>.<dataset_id>.<table_id>`. # Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in new rows in BigQuery. Data is inserted using [streaming insert](https://cloud.google.com/blog/products/bigquery/life-of-a-bigquery-streaming-insert) and so data may be in the buffer for a period of time after the profile has finished. The Pub/Sub notification is sent before the streaming buffer is guaranteed to be written, so data may not be instantly visible to queries by the time your topic receives the Pub/Sub notification.
           "datasetId": "A String", # Dataset ID of the table.
           "projectId": "A String", # The Google Cloud Platform project ID of the project containing the table. If omitted, project ID is inferred from the API call.
           "tableId": "A String", # Name of the table.
@@ -637,7 +637,7 @@ <h3>Method Details</h3>
   "actions": [ # Actions to execute at the completion of scanning.
     { # A task to execute when a data profile has been generated.
       "exportData": { # If set, the detailed data profiles will be persisted to the location of your choice whenever updated. # Export data profiles into a provided location.
-        "profileTable": { # Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: `<project_id>:<dataset_id>.<table_id>` or `<project_id>.<dataset_id>.<table_id>`. # Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in a new row in BigQuery.
+        "profileTable": { # Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: `<project_id>:<dataset_id>.<table_id>` or `<project_id>.<dataset_id>.<table_id>`. # Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in new rows in BigQuery. Data is inserted using [streaming insert](https://cloud.google.com/blog/products/bigquery/life-of-a-bigquery-streaming-insert) and so data may be in the buffer for a period of time after the profile has finished. The Pub/Sub notification is sent before the streaming buffer is guaranteed to be written, so data may not be instantly visible to queries by the time your topic receives the Pub/Sub notification.
           "datasetId": "A String", # Dataset ID of the table.
           "projectId": "A String", # The Google Cloud Platform project ID of the project containing the table. If omitted, project ID is inferred from the API call.
           "tableId": "A String", # Name of the table.
@@ -818,7 +818,7 @@ <h3>Method Details</h3>
   "actions": [ # Actions to execute at the completion of scanning.
     { # A task to execute when a data profile has been generated.
      "exportData": { # If set, the detailed data profiles will be persisted to the location of your choice whenever updated. # Export data profiles into a provided location.
-        "profileTable": { # Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: `<project_id>:<dataset_id>.<table_id>` or `<project_id>.<dataset_id>.<table_id>`. # Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in a new row in BigQuery.
+        "profileTable": { # Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: `<project_id>:<dataset_id>.<table_id>` or `<project_id>.<dataset_id>.<table_id>`. # Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in new rows in BigQuery. Data is inserted using [streaming insert](https://cloud.google.com/blog/products/bigquery/life-of-a-bigquery-streaming-insert) and so data may be in the buffer for a period of time after the profile has finished. The Pub/Sub notification is sent before the streaming buffer is guaranteed to be written, so data may not be instantly visible to queries by the time your topic receives the Pub/Sub notification.
           "datasetId": "A String", # Dataset ID of the table.
           "projectId": "A String", # The Google Cloud Platform project ID of the project containing the table. If omitted, project ID is inferred from the API call.
           "tableId": "A String", # Name of the table.
@@ -981,7 +981,7 @@ <h3>Method Details</h3>
   "actions": [ # Actions to execute at the completion of scanning.
     { # A task to execute when a data profile has been generated.
       "exportData": { # If set, the detailed data profiles will be persisted to the location of your choice whenever updated. # Export data profiles into a provided location.
-        "profileTable": { # Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: `<project_id>:<dataset_id>.<table_id>` or `<project_id>.<dataset_id>.<table_id>`. # Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in a new row in BigQuery.
+        "profileTable": { # Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: `<project_id>:<dataset_id>.<table_id>` or `<project_id>.<dataset_id>.<table_id>`. # Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in new rows in BigQuery. Data is inserted using [streaming insert](https://cloud.google.com/blog/products/bigquery/life-of-a-bigquery-streaming-insert) and so data may be in the buffer for a period of time after the profile has finished. The Pub/Sub notification is sent before the streaming buffer is guaranteed to be written, so data may not be instantly visible to queries by the time your topic receives the Pub/Sub notification.
           "datasetId": "A String", # Dataset ID of the table.
           "projectId": "A String", # The Google Cloud Platform project ID of the project containing the table. If omitted, project ID is inferred from the API call.
           "tableId": "A String", # Name of the table.
```
docs/dyn/dlp_v2.organizations.locations.dlpJobs.html (+1 −0)
```diff
@@ -3244,6 +3244,7 @@ <h3>Method Details</h3>
         },
       },
     ],
+    "numRowsProcessed": "A String", # Number of rows scanned post sampling and time filtering (Applicable for row based stores such as BigQuery).
     "processedBytes": "A String", # Total size in bytes that were processed.
     "totalEstimatedBytes": "A String", # Estimate of the number of bytes to process.
```
docs/dyn/dlp_v2.organizations.locations.tableDataProfiles.html (+4 −4)
```diff
@@ -133,7 +133,7 @@ <h3>Method Details</h3>
   "dataProfileActions": [ # Actions to execute at the completion of the job.
     { # A task to execute when a data profile has been generated.
       "exportData": { # If set, the detailed data profiles will be persisted to the location of your choice whenever updated. # Export data profiles into a provided location.
-        "profileTable": { # Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: `<project_id>:<dataset_id>.<table_id>` or `<project_id>.<dataset_id>.<table_id>`. # Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in a new row in BigQuery.
+        "profileTable": { # Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: `<project_id>:<dataset_id>.<table_id>` or `<project_id>.<dataset_id>.<table_id>`. # Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in new rows in BigQuery. Data is inserted using [streaming insert](https://cloud.google.com/blog/products/bigquery/life-of-a-bigquery-streaming-insert) and so data may be in the buffer for a period of time after the profile has finished. The Pub/Sub notification is sent before the streaming buffer is guaranteed to be written, so data may not be instantly visible to queries by the time your topic receives the Pub/Sub notification.
           "datasetId": "A String", # Dataset ID of the table.
           "projectId": "A String", # The Google Cloud Platform project ID of the project containing the table. If omitted, project ID is inferred from the API call.
           "tableId": "A String", # Name of the table.
@@ -170,7 +170,7 @@ <h3>Method Details</h3>
   "actions": [ # Actions to execute at the completion of scanning.
     { # A task to execute when a data profile has been generated.
       "exportData": { # If set, the detailed data profiles will be persisted to the location of your choice whenever updated. # Export data profiles into a provided location.
-        "profileTable": { # Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: `<project_id>:<dataset_id>.<table_id>` or `<project_id>.<dataset_id>.<table_id>`. # Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in a new row in BigQuery.
+        "profileTable": { # Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: `<project_id>:<dataset_id>.<table_id>` or `<project_id>.<dataset_id>.<table_id>`. # Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in new rows in BigQuery. Data is inserted using [streaming insert](https://cloud.google.com/blog/products/bigquery/life-of-a-bigquery-streaming-insert) and so data may be in the buffer for a period of time after the profile has finished. The Pub/Sub notification is sent before the streaming buffer is guaranteed to be written, so data may not be instantly visible to queries by the time your topic receives the Pub/Sub notification.
           "datasetId": "A String", # Dataset ID of the table.
           "projectId": "A String", # The Google Cloud Platform project ID of the project containing the table. If omitted, project ID is inferred from the API call.
           "tableId": "A String", # Name of the table.
@@ -596,7 +596,7 @@ <h3>Method Details</h3>
   "dataProfileActions": [ # Actions to execute at the completion of the job.
     { # A task to execute when a data profile has been generated.
       "exportData": { # If set, the detailed data profiles will be persisted to the location of your choice whenever updated. # Export data profiles into a provided location.
-        "profileTable": { # Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: `<project_id>:<dataset_id>.<table_id>` or `<project_id>.<dataset_id>.<table_id>`. # Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in a new row in BigQuery.
+        "profileTable": { # Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: `<project_id>:<dataset_id>.<table_id>` or `<project_id>.<dataset_id>.<table_id>`. # Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in new rows in BigQuery. Data is inserted using [streaming insert](https://cloud.google.com/blog/products/bigquery/life-of-a-bigquery-streaming-insert) and so data may be in the buffer for a period of time after the profile has finished. The Pub/Sub notification is sent before the streaming buffer is guaranteed to be written, so data may not be instantly visible to queries by the time your topic receives the Pub/Sub notification.
           "datasetId": "A String", # Dataset ID of the table.
           "projectId": "A String", # The Google Cloud Platform project ID of the project containing the table. If omitted, project ID is inferred from the API call.
           "tableId": "A String", # Name of the table.
@@ -633,7 +633,7 @@ <h3>Method Details</h3>
   "actions": [ # Actions to execute at the completion of scanning.
     { # A task to execute when a data profile has been generated.
       "exportData": { # If set, the detailed data profiles will be persisted to the location of your choice whenever updated. # Export data profiles into a provided location.
-        "profileTable": { # Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: `<project_id>:<dataset_id>.<table_id>` or `<project_id>.<dataset_id>.<table_id>`. # Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in a new row in BigQuery.
+        "profileTable": { # Message defining the location of a BigQuery table. A table is uniquely identified by its project_id, dataset_id, and table_name. Within a query a table is often referenced with a string in the format of: `<project_id>:<dataset_id>.<table_id>` or `<project_id>.<dataset_id>.<table_id>`. # Store all table and column profiles in an existing table or a new table in an existing dataset. Each re-generation will result in new rows in BigQuery. Data is inserted using [streaming insert](https://cloud.google.com/blog/products/bigquery/life-of-a-bigquery-streaming-insert) and so data may be in the buffer for a period of time after the profile has finished. The Pub/Sub notification is sent before the streaming buffer is guaranteed to be written, so data may not be instantly visible to queries by the time your topic receives the Pub/Sub notification.
           "datasetId": "A String", # Dataset ID of the table.
           "projectId": "A String", # The Google Cloud Platform project ID of the project containing the table. If omitted, project ID is inferred from the API call.
           "tableId": "A String", # Name of the table.
```