
Commit 4801251

feat(contentwarehouse): update the api
#### contentwarehouse:v1

The following keys were deleted:
- schemas.CompositeDocQualitySignals.properties.oldnessInfo.$ref (Total Keys: 1)
- schemas.GoogleCloudContentwarehouseV1ExportToCdwPipeline.properties.documentIds (Total Keys: 2)
- schemas.GoogleCloudContentwarehouseV1ExportToCdwPipeline.properties.processorInfo.$ref (Total Keys: 1)
- schemas.GoogleCloudContentwarehouseV1GcsIngestWithDocAiProcessorsPipeline.properties.classifySplitProcessorInfos.$ref (Total Keys: 1)
- schemas.QualityTimebasedOldnessInfo (Total Keys: 3)
- schemas.RepositoryWebrefSimplifiedCompositeDoc.properties.obsoleteAnchorsWithoutInterwiki.$ref (Total Keys: 1)

The following keys were added:
- schemas.AssistantApiCapabilitiesHomeAppCapabilities (Total Keys: 3)
- schemas.AssistantApiSoftwareCapabilities.properties.homeAppCapabilities.$ref (Total Keys: 1)
- schemas.AssistantApiSupportedFeatures.properties.confirmationBeforeReadingMultipleMessagesSupported.type (Total Keys: 1)
- schemas.AssistantLogsProviderAnnotationLog.properties.lang.type (Total Keys: 1)
- schemas.AssistantLogsProviderAnnotationLog.properties.localizedNames (Total Keys: 2)
- schemas.AssistantPrefulfillmentRankerPrefulfillmentSignals.properties.isFullyGrounded.type (Total Keys: 1)
- schemas.AssistantPrefulfillmentRankerPrefulfillmentSignals.properties.isPlayGenericMusic.type (Total Keys: 1)
- schemas.CompressedQualitySignals.properties.productReviewPReviewPage (Total Keys: 2)
- schemas.CompressedQualitySignals.properties.productReviewPUhqPage (Total Keys: 2)
- schemas.GoogleCloudContentwarehouseV1CreateDocumentResponse.properties.longRunningOperations (Total Keys: 2)
- schemas.GoogleCloudContentwarehouseV1ExportToCdwPipeline.properties.docAiDataset.type (Total Keys: 1)
- schemas.GoogleCloudContentwarehouseV1ExportToCdwPipeline.properties.documents (Total Keys: 2)
- schemas.GoogleCloudContentwarehouseV1ExportToCdwPipeline.properties.trainingSplitRatio (Total Keys: 2)
- schemas.GoogleCloudContentwarehouseV1GcsIngestPipeline.properties.processorResultsFolderPath.type (Total Keys: 1)
- schemas.GoogleCloudContentwarehouseV1GcsIngestWithDocAiProcessorsPipeline.properties.splitClassifyProcessorInfo.$ref (Total Keys: 1)
- schemas.GoogleCloudContentwarehouseV1ProcessWithDocAi (Total Keys: 7)
- schemas.GoogleCloudContentwarehouseV1RunPipelineRequest.properties.processWithDocAiPipeline.$ref (Total Keys: 1)
- schemas.ImageRepositoryContentBasedVideoMetadata.properties.s3Asr.$ref (Total Keys: 1)
- schemas.ImageSafesearchContentOCRAnnotation.properties.ocrAnnotationVersion.type (Total Keys: 1)
- schemas.KnowledgeAnswersIntentQueryFunctionCallSignals.properties.responseMeaningSignals.$ref (Total Keys: 1)
- schemas.KnowledgeAnswersIntentQueryResponseMeaningSignalsResponseMeaningSignals (Total Keys: 4)
- schemas.NlpSemanticParsingLocalEvChargingStationConnectorConstraint (Total Keys: 3)
- schemas.NlpSemanticParsingLocalLocationConstraint.properties.evcsConnectorConstraint.$ref (Total Keys: 1)
- schemas.PerDocData.properties.s3AudioLanguage.$ref (Total Keys: 1)
- schemas.S3AudioLanguageS3AudioLanguage (Total Keys: 5)
- schemas.ShoppingWebentityShoppingAnnotationInferredImage.properties.inferredImageSource.type (Total Keys: 1)
- schemas.SocialGraphApiProtoImageReference.properties.contentVersion (Total Keys: 2)
- schemas.StorageGraphBfgLivegraphProvenanceMetadata.properties.directWriteRecordIds (Total Keys: 2)
1 parent 911b255 commit 4801251

3 files changed: +1098 −699 lines changed

docs/dyn/contentwarehouse_v1.projects.locations.documents.html

Lines changed: 21 additions & 0 deletions
@@ -2043,6 +2043,27 @@ <h3>Method Details</h3>
       "updateTime": "A String", # Output only. The time when the document is last updated.
       "updater": "A String", # The user who lastly updates the document.
     },
+    "longRunningOperations": [ # post-processing LROs
+      { # This resource represents a long-running operation that is the result of a network API call.
+        "done": True or False, # If the value is `false`, it means the operation is still in progress. If `true`, the operation is completed, and either `error` or `response` is available.
+        "error": { # The `Status` type defines a logical error model that is suitable for different programming environments, including REST APIs and RPC APIs. It is used by [gRPC](https://github.com/grpc). Each `Status` message contains three pieces of data: error code, error message, and error details. You can find out more about this error model and how to work with it in the [API Design Guide](https://cloud.google.com/apis/design/errors). # The error result of the operation in case of failure or cancellation.
+          "code": 42, # The status code, which should be an enum value of google.rpc.Code.
+          "details": [ # A list of messages that carry the error details. There is a common set of message types for APIs to use.
+            {
+              "a_key": "", # Properties of the object. Contains field @type with type URL.
+            },
+          ],
+          "message": "A String", # A developer-facing error message, which should be in English. Any user-facing error message should be localized and sent in the google.rpc.Status.details field, or localized by the client.
+        },
+        "metadata": { # Service-specific metadata associated with the operation. It typically contains progress information and common metadata such as create time. Some services might not provide such metadata. Any method that returns a long-running operation should document the metadata type, if any.
+          "a_key": "", # Properties of the object. Contains field @type with type URL.
+        },
+        "name": "A String", # The server-assigned name, which is only unique within the same service that originally returns it. If you use the default HTTP mapping, the `name` should be a resource name ending with `operations/{unique_id}`.
+        "response": { # The normal response of the operation in case of success. If the original method returns no data on success, such as `Delete`, the response is `google.protobuf.Empty`. If the original method is standard `Get`/`Create`/`Update`, the response should be the resource. For other methods, the response should have the type `XxxResponse`, where `Xxx` is the original method name. For example, if the original method name is `TakeSnapshot()`, the inferred response type is `TakeSnapshotResponse`.
+          "a_key": "", # Properties of the object. Contains field @type with type URL.
+        },
+      },
+    ],
     "metadata": { # Additional information returned to client, such as debugging information. # Additional information for the API invocation, such as the request tracking id.
       "requestId": "A String", # A unique id associated with this call. This id is logged for tracking purpose.
     },
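For orientation, here is a minimal google-api-python-client sketch of how the new longRunningOperations field might be read after creating a document. It is only a sketch under assumptions: the project number, "us" location, schema id, user id, and document payload are placeholders, not values taken from this commit.

# Hedged sketch: inspect post-processing LROs on a CreateDocumentResponse.
# PROJECT_NUMBER, SCHEMA_ID, the "us" location, and the document payload are placeholders.
import google.auth
from googleapiclient.discovery import build

PROJECT_NUMBER = "123456789"   # placeholder
SCHEMA_ID = "my-schema-id"     # placeholder

credentials, _ = google.auth.default()
service = build("contentwarehouse", "v1", credentials=credentials)

parent = f"projects/{PROJECT_NUMBER}/locations/us"
request_body = {
    "document": {
        "displayName": "example-doc",
        "documentSchemaName": f"{parent}/documentSchemas/{SCHEMA_ID}",
        "plainText": "example contents",
    },
    "requestMetadata": {"userInfo": {"id": "user:someone@example.com"}},
}

response = (
    service.projects()
    .locations()
    .documents()
    .create(parent=parent, body=request_body)
    .execute()
)

# New in this revision: any post-processing long-running operations are surfaced here.
for op in response.get("longRunningOperations", []):
    if not op.get("done"):
        print("in progress:", op.get("name"))
    elif "error" in op:
        print("failed:", op["error"].get("code"), op["error"].get("message"))
    else:
        print("done:", op.get("name"))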

docs/dyn/contentwarehouse_v1.projects.locations.html

Lines changed: 23 additions & 13 deletions
@@ -169,28 +169,21 @@ <h3>Method Details</h3>
     The object takes the form of:

 { # Request message for DocumentService.RunPipeline.
-  "exportCdwPipeline": { # The configuration of exporting documents from the Document Warehouse to CDW pipeline. # Export docuemnts from Document Warehouseing to CDW for training purpose.
-    "documentIds": [ # The list of all the documents to be processed.
+  "exportCdwPipeline": { # The configuration of exporting documents from the Document Warehouse to CDW pipeline. # Export docuemnts from Document Warehouse to CDW for training purpose.
+    "docAiDataset": "A String", # The CDW dataset resource name. Format: projects/{project}/locations/{location}/processors/{processor}/dataset
+    "documents": [ # The list of all the resource names of the documents to be processed. Format: projects/{project_number}/locations/{location}/documents/{document_id}.
       "A String",
     ],
     "exportFolderPath": "A String", # The Cloud Storage folder path used to store the exported documents before being sent to CDW. Format: gs:///.
-    "processorInfo": { # The DocAI processor information. # The CDW processor information.
-      "documentType": "A String", # The processor will process the documents with this document type.
-      "processorName": "A String", # The processor resource name. Format is `projects/{project}/locations/{location}/processors/{processor}`, or `projects/{project}/locations/{location}/processors/{processor}/processorVersions/{processorVersion}`
-      "schemaName": "A String", # The Document schema resource name. All documents processed by this processor will use this schema. Format: projects/{project_number}/locations/{location}/documentSchemas/{document_schema_id}.
-    },
+    "trainingSplitRatio": 3.14, # Ratio of training dataset split. When importing into Document AI Workbench, documents will be automatically split into training and test split category with the specified ratio.
   },
   "gcsIngestPipeline": { # The configuration of the Cloud Storage ingestion pipeline. # Cloud Storage ingestion pipeline.
     "inputPath": "A String", # The input Cloud Storage folder. All files under this folder will be imported to Document Warehouse. Format: gs:///.
+    "processorResultsFolderPath": "A String", # The Cloud Storage folder path used to store the raw results from processors. Format: gs:///.
     "schemaName": "A String", # The Document Warehouse schema resource name. All documents processed by this pipeline will use this schema. Format: projects/{project_number}/locations/{location}/documentSchemas/{document_schema_id}.
   },
   "gcsIngestWithDocAiProcessorsPipeline": { # The configuration of the document classify/split and entity/kvp extraction pipeline. # Use DocAI processors to process documents in Cloud Storage and ingest them to Document Warehouse.
-    "classifySplitProcessorInfos": { # The DocAI processor information. # The classify or split processor information.
-      "documentType": "A String", # The processor will process the documents with this document type.
-      "processorName": "A String", # The processor resource name. Format is `projects/{project}/locations/{location}/processors/{processor}`, or `projects/{project}/locations/{location}/processors/{processor}/processorVersions/{processorVersion}`
-      "schemaName": "A String", # The Document schema resource name. All documents processed by this processor will use this schema. Format: projects/{project_number}/locations/{location}/documentSchemas/{document_schema_id}.
-    },
-    "extractProcessorInfos": [ # The entity or key-value pair extracting processor information.
+    "extractProcessorInfos": [ # The extract processors information. One matched extract processor will be used to process documents based on the classify processor result. If no classify processor is specificied, the first extract processor will be used.
       { # The DocAI processor information.
         "documentType": "A String", # The processor will process the documents with this document type.
         "processorName": "A String", # The processor resource name. Format is `projects/{project}/locations/{location}/processors/{processor}`, or `projects/{project}/locations/{location}/processors/{processor}/processorVersions/{processorVersion}`
@@ -199,6 +192,23 @@ <h3>Method Details</h3>
     ],
     "inputPath": "A String", # The input Cloud Storage folder. All files under this folder will be imported to Document Warehouse. Format: gs:///.
     "processorResultsFolderPath": "A String", # The Cloud Storage folder path used to store the raw results from processors. Format: gs:///.
+    "splitClassifyProcessorInfo": { # The DocAI processor information. # The split and classify processor information. The split and classify result will be used to find a matched extract processor.
+      "documentType": "A String", # The processor will process the documents with this document type.
+      "processorName": "A String", # The processor resource name. Format is `projects/{project}/locations/{location}/processors/{processor}`, or `projects/{project}/locations/{location}/processors/{processor}/processorVersions/{processorVersion}`
+      "schemaName": "A String", # The Document schema resource name. All documents processed by this processor will use this schema. Format: projects/{project_number}/locations/{location}/documentSchemas/{document_schema_id}.
+    },
+  },
+  "processWithDocAiPipeline": { # The configuration of processing documents in Document Warehouse with DocAi processors pipeline. # Use a DocAI processor to process documents in Document Warehouse, and re-ingest the updated results into Document Warehouse.
+    "documents": [ # The list of all the resource names of the documents to be processed. Format: projects/{project_number}/locations/{location}/documents/{document_id}.
+      "A String",
+    ],
+    "exportFolderPath": "A String", # The Cloud Storage folder path used to store the exported documents before being sent to CDW. Format: gs:///.
+    "processorInfo": { # The DocAI processor information. # The CDW processor information.
+      "documentType": "A String", # The processor will process the documents with this document type.
+      "processorName": "A String", # The processor resource name. Format is `projects/{project}/locations/{location}/processors/{processor}`, or `projects/{project}/locations/{location}/processors/{processor}/processorVersions/{processorVersion}`
+      "schemaName": "A String", # The Document schema resource name. All documents processed by this processor will use this schema. Format: projects/{project_number}/locations/{location}/documentSchemas/{document_schema_id}.
+    },
+    "processorResultsFolderPath": "A String", # The Cloud Storage folder path used to store the raw results from processors. Format: gs:///.
   },
 }
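Similarly, a hedged sketch of a runPipeline call that uses the revised exportCdwPipeline shape (docAiDataset, documents, and trainingSplitRatio in place of documentIds/processorInfo). Every resource name, the "us" location, and the Cloud Storage folder below are illustrative assumptions, and the sketch assumes the runPipeline method documented in this file takes the location resource name and returns a long-running operation.

# Hedged sketch: run the export-to-CDW pipeline with the updated request fields.
# All ids, the "us" location, and the gs:// folder are placeholders for illustration.
import google.auth
from googleapiclient.discovery import build

credentials, _ = google.auth.default()
service = build("contentwarehouse", "v1", credentials=credentials)

location = "projects/123456789/locations/us"
request_body = {
    "exportCdwPipeline": {
        "docAiDataset": "projects/my-project/locations/us/processors/my-processor/dataset",
        "documents": [
            f"{location}/documents/doc-id-1",
            f"{location}/documents/doc-id-2",
        ],
        "exportFolderPath": "gs://my-bucket/cdw-export",
        "trainingSplitRatio": 0.8,  # 80% training / 20% test when imported into Document AI Workbench
    },
}

operation = (
    service.projects()
    .locations()
    .runPipeline(name=location, body=request_body)
    .execute()
)
print("pipeline operation:", operation.get("name"))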
