
Commit 1ea109f

feat(dialogflow): update the api
#### dialogflow:v2

The following keys were added:
- schemas.GoogleCloudDialogflowV2SuggestConversationSummaryRequest.properties.assistQueryParams.$ref (Total Keys: 1)

#### dialogflow:v2beta1

The following keys were added:
- schemas.GoogleCloudDialogflowV2beta1BargeInConfig (Total Keys: 6)
- schemas.GoogleCloudDialogflowV2beta1InputAudioConfig.properties.bargeInConfig.$ref (Total Keys: 1)
- schemas.GoogleCloudDialogflowV2beta1SuggestConversationSummaryRequest.properties.assistQueryParams.$ref (Total Keys: 1)

#### dialogflow:v3

The following keys were added:
- schemas.GoogleCloudDialogflowCxV3AdvancedSettings.properties.audioExportGcsDestination.$ref (Total Keys: 1)
- schemas.GoogleCloudDialogflowCxV3Agent.properties.textToSpeechSettings.$ref (Total Keys: 1)
- schemas.GoogleCloudDialogflowCxV3GcsDestination (Total Keys: 3)
- schemas.GoogleCloudDialogflowCxV3TextToSpeechSettings (Total Keys: 4)

#### dialogflow:v3beta1

The following keys were added:
- schemas.GoogleCloudDialogflowCxV3beta1AdvancedSettings.properties.audioExportGcsDestination.$ref (Total Keys: 1)
- schemas.GoogleCloudDialogflowCxV3beta1Agent.properties.textToSpeechSettings.$ref (Total Keys: 1)
- schemas.GoogleCloudDialogflowCxV3beta1GcsDestination (Total Keys: 3)
- schemas.GoogleCloudDialogflowCxV3beta1TextToSpeechSettings (Total Keys: 4)
1 parent 7692294 commit 1ea109f

16 files changed: +371 / -4 lines changed

docs/dyn/dialogflow_v2.projects.conversations.suggestions.html

Lines changed: 5 additions & 0 deletions
@@ -96,6 +96,11 @@ <h3>Method Details</h3>
     The object takes the form of:

     { # The request message for Conversations.SuggestConversationSummary.
+      "assistQueryParams": { # Represents the parameters of human assist query. # Parameters for a human assist query.
+        "documentsMetadataFilters": { # Key-value filters on the metadata of documents returned by article suggestion. If specified, article suggestion only returns suggested documents that match all filters in their Document.metadata. Multiple values for a metadata key should be concatenated by comma. For example, filters to match all documents that have 'US' or 'CA' in their market metadata values and 'agent' in their user metadata values will be ``` documents_metadata_filters { key: "market" value: "US,CA" } documents_metadata_filters { key: "user" value: "agent" } ```
+          "a_key": "A String",
+        },
+      },
       "contextSize": 42, # Max number of messages prior to and including [latest_message] to use as context when compiling the suggestion. By default 500 and at most 1000.
       "latestMessage": "A String", # The name of the latest conversation message used as context for compiling suggestion. If empty, the latest message of the conversation will be used. Format: `projects//locations//conversations//messages/`.
     }
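For orientation, a minimal, hypothetical call that exercises the new assistQueryParams field with the generated v2 Python client might look like the sketch below; the project and conversation IDs and the metadata values are placeholders rather than values from this commit, and the filter map simply mirrors the "market"/"user" example in the field description above.

# Hypothetical sketch of the new assistQueryParams field on SuggestConversationSummaryRequest.
# Project/conversation IDs and metadata values are placeholders.
from googleapiclient.discovery import build

service = build("dialogflow", "v2")

parent = "projects/my-project/conversations/my-conversation"  # placeholder

body = {
    "contextSize": 500,
    # New in this commit: restrict article suggestion to documents whose metadata
    # matches all of these filters.
    "assistQueryParams": {
        "documentsMetadataFilters": {
            "market": "US,CA",
            "user": "agent",
        },
    },
}

response = (
    service.projects()
    .conversations()
    .suggestions()
    .suggestConversationSummary(parent=parent, body=body)
    .execute()
)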

docs/dyn/dialogflow_v2.projects.locations.conversations.suggestions.html

Lines changed: 5 additions & 0 deletions
@@ -96,6 +96,11 @@ <h3>Method Details</h3>
     The object takes the form of:

     { # The request message for Conversations.SuggestConversationSummary.
+      "assistQueryParams": { # Represents the parameters of human assist query. # Parameters for a human assist query.
+        "documentsMetadataFilters": { # Key-value filters on the metadata of documents returned by article suggestion. If specified, article suggestion only returns suggested documents that match all filters in their Document.metadata. Multiple values for a metadata key should be concatenated by comma. For example, filters to match all documents that have 'US' or 'CA' in their market metadata values and 'agent' in their user metadata values will be ``` documents_metadata_filters { key: "market" value: "US,CA" } documents_metadata_filters { key: "user" value: "agent" } ```
+          "a_key": "A String",
+        },
+      },
       "contextSize": 42, # Max number of messages prior to and including [latest_message] to use as context when compiling the suggestion. By default 500 and at most 1000.
       "latestMessage": "A String", # The name of the latest conversation message used as context for compiling suggestion. If empty, the latest message of the conversation will be used. Format: `projects//locations//conversations//messages/`.
     }

docs/dyn/dialogflow_v2beta1.projects.agent.environments.users.sessions.html

Lines changed: 4 additions & 0 deletions
@@ -148,6 +148,10 @@ <h3>Method Details</h3>
   "queryInput": { # Represents the query input. It can contain either: 1. An audio config which instructs the speech recognizer how to process the speech audio. 2. A conversational query in the form of text. 3. An event that specifies which intent to trigger. # Required. The input specification. It can be set to: 1. an audio config which instructs the speech recognizer how to process the speech audio, 2. a conversational query in the form of text, or 3. an event that specifies which intent to trigger.
     "audioConfig": { # Instructs the speech recognizer on how to process the audio content. # Instructs the speech recognizer how to process the speech audio.
       "audioEncoding": "A String", # Required. Audio encoding of the audio content to process.
+      "bargeInConfig": { # Configuration of the barge-in behavior. Barge-in instructs the API to return a detected utterance at a proper time while the client is playing back the response audio from a previous request. When the client sees the utterance, it should stop the playback and immediately get ready for receiving the responses for the current request. The barge-in handling requires the client to start streaming audio input as soon as it starts playing back the audio from the previous response. The playback is modeled into two phases: * No barge-in phase: which goes first and during which speech detection should not be carried out. * Barge-in phase: which follows the no barge-in phase and during which the API starts speech detection and may inform the client that an utterance has been detected. Note that no-speech event is not expected in this phase. The client provides this configuration in terms of the durations of those two phases. The durations are measured in terms of the audio length fromt the the start of the input audio. The flow goes like below: --> Time without speech detection | utterance only | utterance or no-speech event | | +-------------+ | +------------+ | +---------------+ ----------+ no barge-in +-|-+ barge-in +-|-+ normal period +----------- +-------------+ | +------------+ | +---------------+ No-speech event is a response with END_OF_UTTERANCE without any transcript following up. # Configuration of barge-in behavior during the streaming of input audio.
+        "noBargeInDuration": "A String", # Duration that is not eligible for barge-in at the beginning of the input audio.
+        "totalDuration": "A String", # Total duration for the playback at the beginning of the input audio.
+      },
       "disableNoSpeechRecognizedEvent": True or False, # Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If `false` and recognition doesn't return any result, trigger `NO_SPEECH_RECOGNIZED` event to Dialogflow agent.
       "enableWordInfo": True or False, # If `true`, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
       "languageCode": "A String", # Required. The language of the supplied audio. Dialogflow does not do translations. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
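To show where the new field sits in a request, here is a minimal, hypothetical detectIntent call with the v2beta1 client. The session path, encoding and durations are placeholders, and barge-in is described for streaming audio input, so this only illustrates the shape of the request body.

# Hypothetical sketch showing where bargeInConfig sits in a v2beta1 query input.
# Session path, encoding and durations are placeholders; barge-in is described for
# streaming input audio, so this only illustrates the request shape.
import base64
from googleapiclient.discovery import build

service = build("dialogflow", "v2beta1")

session = "projects/my-project/agent/environments/draft/users/-/sessions/my-session"  # placeholder

with open("utterance.wav", "rb") as f:
    input_audio = base64.b64encode(f.read()).decode("ascii")

body = {
    "queryInput": {
        "audioConfig": {
            "audioEncoding": "AUDIO_ENCODING_LINEAR_16",
            "sampleRateHertz": 16000,
            "languageCode": "en-US",
            # New in this commit: no speech detection for the first 5s of playback,
            # barge-in allowed for the rest of a 30s playback window.
            "bargeInConfig": {
                "noBargeInDuration": "5s",
                "totalDuration": "30s",
            },
        },
    },
    "inputAudio": input_audio,
}

response = (
    service.projects()
    .agent()
    .environments()
    .users()
    .sessions()
    .detectIntent(session=session, body=body)
    .execute()
)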

docs/dyn/dialogflow_v2beta1.projects.agent.sessions.html

Lines changed: 4 additions & 0 deletions
@@ -148,6 +148,10 @@ <h3>Method Details</h3>
   "queryInput": { # Represents the query input. It can contain either: 1. An audio config which instructs the speech recognizer how to process the speech audio. 2. A conversational query in the form of text. 3. An event that specifies which intent to trigger. # Required. The input specification. It can be set to: 1. an audio config which instructs the speech recognizer how to process the speech audio, 2. a conversational query in the form of text, or 3. an event that specifies which intent to trigger.
     "audioConfig": { # Instructs the speech recognizer on how to process the audio content. # Instructs the speech recognizer how to process the speech audio.
       "audioEncoding": "A String", # Required. Audio encoding of the audio content to process.
+      "bargeInConfig": { # Configuration of the barge-in behavior. Barge-in instructs the API to return a detected utterance at a proper time while the client is playing back the response audio from a previous request. When the client sees the utterance, it should stop the playback and immediately get ready for receiving the responses for the current request. The barge-in handling requires the client to start streaming audio input as soon as it starts playing back the audio from the previous response. The playback is modeled into two phases: * No barge-in phase: which goes first and during which speech detection should not be carried out. * Barge-in phase: which follows the no barge-in phase and during which the API starts speech detection and may inform the client that an utterance has been detected. Note that no-speech event is not expected in this phase. The client provides this configuration in terms of the durations of those two phases. The durations are measured in terms of the audio length fromt the the start of the input audio. The flow goes like below: --> Time without speech detection | utterance only | utterance or no-speech event | | +-------------+ | +------------+ | +---------------+ ----------+ no barge-in +-|-+ barge-in +-|-+ normal period +----------- +-------------+ | +------------+ | +---------------+ No-speech event is a response with END_OF_UTTERANCE without any transcript following up. # Configuration of barge-in behavior during the streaming of input audio.
+        "noBargeInDuration": "A String", # Duration that is not eligible for barge-in at the beginning of the input audio.
+        "totalDuration": "A String", # Total duration for the playback at the beginning of the input audio.
+      },
       "disableNoSpeechRecognizedEvent": True or False, # Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If `false` and recognition doesn't return any result, trigger `NO_SPEECH_RECOGNIZED` event to Dialogflow agent.
       "enableWordInfo": True or False, # If `true`, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
       "languageCode": "A String", # Required. The language of the supplied audio. Dialogflow does not do translations. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.

docs/dyn/dialogflow_v2beta1.projects.conversations.participants.html

Lines changed: 4 additions & 0 deletions
@@ -120,6 +120,10 @@ <h3>Method Details</h3>
   "audio": "A String", # Required. The natural language speech audio to be processed. A single request can contain up to 1 minute of speech audio data. The transcribed text cannot contain more than 256 bytes for virtual agent interactions.
   "config": { # Instructs the speech recognizer on how to process the audio content. # Required. Instructs the speech recognizer how to process the speech audio.
     "audioEncoding": "A String", # Required. Audio encoding of the audio content to process.
+    "bargeInConfig": { # Configuration of the barge-in behavior. Barge-in instructs the API to return a detected utterance at a proper time while the client is playing back the response audio from a previous request. When the client sees the utterance, it should stop the playback and immediately get ready for receiving the responses for the current request. The barge-in handling requires the client to start streaming audio input as soon as it starts playing back the audio from the previous response. The playback is modeled into two phases: * No barge-in phase: which goes first and during which speech detection should not be carried out. * Barge-in phase: which follows the no barge-in phase and during which the API starts speech detection and may inform the client that an utterance has been detected. Note that no-speech event is not expected in this phase. The client provides this configuration in terms of the durations of those two phases. The durations are measured in terms of the audio length fromt the the start of the input audio. The flow goes like below: --> Time without speech detection | utterance only | utterance or no-speech event | | +-------------+ | +------------+ | +---------------+ ----------+ no barge-in +-|-+ barge-in +-|-+ normal period +----------- +-------------+ | +------------+ | +---------------+ No-speech event is a response with END_OF_UTTERANCE without any transcript following up. # Configuration of barge-in behavior during the streaming of input audio.
+      "noBargeInDuration": "A String", # Duration that is not eligible for barge-in at the beginning of the input audio.
+      "totalDuration": "A String", # Total duration for the playback at the beginning of the input audio.
+    },
     "disableNoSpeechRecognizedEvent": True or False, # Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If `false` and recognition doesn't return any result, trigger `NO_SPEECH_RECOGNIZED` event to Dialogflow agent.
     "enableWordInfo": True or False, # If `true`, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
     "languageCode": "A String", # Required. The language of the supplied audio. Dialogflow does not do translations. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
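The same bargeInConfig also hangs off the audio config of Participants.AnalyzeContent. A minimal, hypothetical sketch follows; the participant path, audio file and durations are placeholders.

# Hypothetical sketch of bargeInConfig inside an AnalyzeContent audio input (v2beta1).
# Participant path, audio payload and durations are placeholders.
import base64
from googleapiclient.discovery import build

service = build("dialogflow", "v2beta1")

participant = "projects/my-project/conversations/my-conversation/participants/my-participant"  # placeholder

with open("caller_audio.wav", "rb") as f:
    audio_bytes = base64.b64encode(f.read()).decode("ascii")

body = {
    "audioInput": {
        "audio": audio_bytes,
        "config": {
            "audioEncoding": "AUDIO_ENCODING_LINEAR_16",
            "sampleRateHertz": 16000,
            "languageCode": "en-US",
            # New in this commit:
            "bargeInConfig": {
                "noBargeInDuration": "3s",
                "totalDuration": "20s",
            },
        },
    },
}

response = (
    service.projects()
    .conversations()
    .participants()
    .analyzeContent(participant=participant, body=body)
    .execute()
)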

docs/dyn/dialogflow_v2beta1.projects.conversations.suggestions.html

Lines changed: 5 additions & 0 deletions
@@ -96,6 +96,11 @@ <h3>Method Details</h3>
     The object takes the form of:

     { # The request message for Conversations.SuggestConversationSummary.
+      "assistQueryParams": { # Represents the parameters of human assist query. # Parameters for a human assist query.
+        "documentsMetadataFilters": { # Key-value filters on the metadata of documents returned by article suggestion. If specified, article suggestion only returns suggested documents that match all filters in their Document.metadata. Multiple values for a metadata key should be concatenated by comma. For example, filters to match all documents that have 'US' or 'CA' in their market metadata values and 'agent' in their user metadata values will be ``` documents_metadata_filters { key: "market" value: "US,CA" } documents_metadata_filters { key: "user" value: "agent" } ```
+          "a_key": "A String",
+        },
+      },
       "contextSize": 42, # Max number of messages prior to and including [latest_message] to use as context when compiling the suggestion. By default 500 and at most 1000.
       "latestMessage": "A String", # The name of the latest conversation message used as context for compiling suggestion. If empty, the latest message of the conversation will be used. Format: `projects//locations//conversations//messages/`.
     }

docs/dyn/dialogflow_v2beta1.projects.locations.agent.environments.users.sessions.html

Lines changed: 4 additions & 0 deletions
@@ -148,6 +148,10 @@ <h3>Method Details</h3>
   "queryInput": { # Represents the query input. It can contain either: 1. An audio config which instructs the speech recognizer how to process the speech audio. 2. A conversational query in the form of text. 3. An event that specifies which intent to trigger. # Required. The input specification. It can be set to: 1. an audio config which instructs the speech recognizer how to process the speech audio, 2. a conversational query in the form of text, or 3. an event that specifies which intent to trigger.
     "audioConfig": { # Instructs the speech recognizer on how to process the audio content. # Instructs the speech recognizer how to process the speech audio.
       "audioEncoding": "A String", # Required. Audio encoding of the audio content to process.
+      "bargeInConfig": { # Configuration of the barge-in behavior. Barge-in instructs the API to return a detected utterance at a proper time while the client is playing back the response audio from a previous request. When the client sees the utterance, it should stop the playback and immediately get ready for receiving the responses for the current request. The barge-in handling requires the client to start streaming audio input as soon as it starts playing back the audio from the previous response. The playback is modeled into two phases: * No barge-in phase: which goes first and during which speech detection should not be carried out. * Barge-in phase: which follows the no barge-in phase and during which the API starts speech detection and may inform the client that an utterance has been detected. Note that no-speech event is not expected in this phase. The client provides this configuration in terms of the durations of those two phases. The durations are measured in terms of the audio length fromt the the start of the input audio. The flow goes like below: --> Time without speech detection | utterance only | utterance or no-speech event | | +-------------+ | +------------+ | +---------------+ ----------+ no barge-in +-|-+ barge-in +-|-+ normal period +----------- +-------------+ | +------------+ | +---------------+ No-speech event is a response with END_OF_UTTERANCE without any transcript following up. # Configuration of barge-in behavior during the streaming of input audio.
+        "noBargeInDuration": "A String", # Duration that is not eligible for barge-in at the beginning of the input audio.
+        "totalDuration": "A String", # Total duration for the playback at the beginning of the input audio.
+      },
       "disableNoSpeechRecognizedEvent": True or False, # Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If `false` and recognition doesn't return any result, trigger `NO_SPEECH_RECOGNIZED` event to Dialogflow agent.
       "enableWordInfo": True or False, # If `true`, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
       "languageCode": "A String", # Required. The language of the supplied audio. Dialogflow does not do translations. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
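The v3 and v3beta1 entries in the commit message (TextToSpeechSettings, GcsDestination, Agent.textToSpeechSettings, AdvancedSettings.audioExportGcsDestination) are not covered by the hunks excerpted above. As a rough, hypothetical illustration of where they would surface, the sketch below patches a CX agent with the v3 client; the resource name is a placeholder, and the synthesizeSpeechConfigs, voice, and uri field names are assumptions about the new schemas rather than details confirmed by this commit.

# Hypothetical sketch of the new v3 agent-level fields added in this commit.
# The agent name is a placeholder; synthesizeSpeechConfigs/voice/uri are assumed field names.
from googleapiclient.discovery import build

service = build("dialogflow", "v3")

agent_name = "projects/my-project/locations/global/agents/my-agent"  # placeholder

body = {
    # Added in this commit: Agent.textToSpeechSettings (GoogleCloudDialogflowCxV3TextToSpeechSettings).
    "textToSpeechSettings": {
        # Assumed shape: map of language code to a synthesize-speech config.
        "synthesizeSpeechConfigs": {
            "en": {"voice": {"name": "en-US-Neural2-A"}},
        },
    },
    # Added in this commit: AdvancedSettings.audioExportGcsDestination (GoogleCloudDialogflowCxV3GcsDestination).
    "advancedSettings": {
        "audioExportGcsDestination": {"uri": "gs://my-bucket/audio-export/"},  # assumed 'uri' field
    },
}

response = (
    service.projects()
    .locations()
    .agents()
    .patch(
        name=agent_name,
        updateMask="textToSpeechSettings,advancedSettings.audioExportGcsDestination",
        body=body,
    )
    .execute()
)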
