docs/dyn/dialogflow_v2.projects.conversations.participants.html
29 additions & 0 deletions
@@ -116,6 +116,35 @@ <h3>Method Details</h3>
       "a_key": "A String",
     },
   },
+  "audioInput": { # Represents the natural language speech audio to be processed. # The natural language speech audio to be processed.
+    "audio": "A String", # Required. The natural language speech audio to be processed. A single request can contain up to 2 minutes of speech audio data. The transcribed text cannot contain more than 256 bytes for virtual agent interactions.
+    "config": { # Instructs the speech recognizer how to process the audio content. # Required. Instructs the speech recognizer how to process the speech audio.
+      "audioEncoding": "A String", # Required. Audio encoding of the audio content to process.
+      "disableNoSpeechRecognizedEvent": True or False, # Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If `false` and recognition doesn't return any result, trigger `NO_SPEECH_RECOGNIZED` event to Dialogflow agent.
+      "enableAutomaticPunctuation": True or False, # Enable automatic punctuation option at the speech backend.
+      "enableWordInfo": True or False, # If `true`, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
+      "languageCode": "A String", # Required. The language of the supplied audio. Dialogflow does not do translations. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
+      "model": "A String", # Optional. Which Speech model to select for the given request. For more information, see [Speech models](https://cloud.google.com/dialogflow/es/docs/speech-models).
+      "modelVariant": "A String", # Which variant of the Speech model to use.
+      "optOutConformerModelMigration": True or False, # If `true`, the request will opt out for STT conformer model migration. This field will be deprecated once force migration takes place in June 2024. Please refer to [Dialogflow ES Speech model migration](https://cloud.google.com/dialogflow/es/docs/speech-model-migration).
+      "phraseHints": [ # A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See [the Cloud Speech documentation](https://cloud.google.com/speech-to-text/docs/basics#phrase-hints) for more details. This field is deprecated. Please use [`speech_contexts`]() instead. If you specify both [`phrase_hints`]() and [`speech_contexts`](), Dialogflow will treat the [`phrase_hints`]() as a single additional [`SpeechContext`]().
+        "A String",
+      ],
+      "phraseSets": [ # A collection of phrase set resources to use for speech adaptation.
+        "A String",
+      ],
+      "sampleRateHertz": 42, # Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to [Cloud Speech API documentation](https://cloud.google.com/speech-to-text/docs/basics) for more details.
+      "singleUtterance": True or False, # If `false` (default), recognition does not cease until the client closes the stream. If `true`, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.
+      "speechContexts": [ # Context information to assist speech recognition. See [the Cloud Speech documentation](https://cloud.google.com/speech-to-text/docs/basics#phrase-hints) for more details.
+        { # Hints for the speech recognizer to help with recognition in a specific conversation state.
+          "boost": 3.14, # Optional. Boost for this context compared to other contexts: * If the boost is positive, Dialogflow will increase the probability that the phrases in this context are recognized over similar sounding phrases. * If the boost is unspecified or non-positive, Dialogflow will not apply any boost. Dialogflow recommends that you use boosts in the range (0, 20] and that you find a value that fits your use case with binary search.
+          "phrases": [ # Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. This list can be used to: * improve accuracy for words and phrases you expect the user to say, e.g. typical commands for your Dialogflow agent * add additional words to the speech recognizer vocabulary * ... See the [Cloud Speech documentation](https://cloud.google.com/speech-to-text/quotas) for usage limits.
+            "A String",
+          ],
+        },
+      ],
+    },
+  },
   "cxParameters": { # Additional parameters to be put into Dialogflow CX session parameters. To remove a parameter from the session, clients should explicitly set the parameter value to null. Note: this field should only be used if you are connecting to a Dialogflow CX agent.
     "a_key": "", # Properties of the object.
docs/dyn/dialogflow_v2.projects.locations.conversations.participants.html
29 additions & 0 deletions
@@ -116,6 +116,35 @@ <h3>Method Details</h3>
       "a_key": "A String",
     },
   },
+  "audioInput": { # Represents the natural language speech audio to be processed. # The natural language speech audio to be processed.
+    "audio": "A String", # Required. The natural language speech audio to be processed. A single request can contain up to 2 minutes of speech audio data. The transcribed text cannot contain more than 256 bytes for virtual agent interactions.
+    "config": { # Instructs the speech recognizer how to process the audio content. # Required. Instructs the speech recognizer how to process the speech audio.
+      "audioEncoding": "A String", # Required. Audio encoding of the audio content to process.
+      "disableNoSpeechRecognizedEvent": True or False, # Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If `false` and recognition doesn't return any result, trigger `NO_SPEECH_RECOGNIZED` event to Dialogflow agent.
+      "enableAutomaticPunctuation": True or False, # Enable automatic punctuation option at the speech backend.
+      "enableWordInfo": True or False, # If `true`, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
+      "languageCode": "A String", # Required. The language of the supplied audio. Dialogflow does not do translations. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
+      "model": "A String", # Optional. Which Speech model to select for the given request. For more information, see [Speech models](https://cloud.google.com/dialogflow/es/docs/speech-models).
+      "modelVariant": "A String", # Which variant of the Speech model to use.
+      "optOutConformerModelMigration": True or False, # If `true`, the request will opt out for STT conformer model migration. This field will be deprecated once force migration takes place in June 2024. Please refer to [Dialogflow ES Speech model migration](https://cloud.google.com/dialogflow/es/docs/speech-model-migration).
+      "phraseHints": [ # A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. See [the Cloud Speech documentation](https://cloud.google.com/speech-to-text/docs/basics#phrase-hints) for more details. This field is deprecated. Please use [`speech_contexts`]() instead. If you specify both [`phrase_hints`]() and [`speech_contexts`](), Dialogflow will treat the [`phrase_hints`]() as a single additional [`SpeechContext`]().
+        "A String",
+      ],
+      "phraseSets": [ # A collection of phrase set resources to use for speech adaptation.
+        "A String",
+      ],
+      "sampleRateHertz": 42, # Required. Sample rate (in Hertz) of the audio content sent in the query. Refer to [Cloud Speech API documentation](https://cloud.google.com/speech-to-text/docs/basics) for more details.
+      "singleUtterance": True or False, # If `false` (default), recognition does not cease until the client closes the stream. If `true`, the recognizer will detect a single spoken utterance in input audio. Recognition ceases when it detects the audio's voice has stopped or paused. In this case, once a detected intent is received, the client should close the stream and start a new request with a new stream as needed. Note: This setting is relevant only for streaming methods. Note: When specified, InputAudioConfig.single_utterance takes precedence over StreamingDetectIntentRequest.single_utterance.
+      "speechContexts": [ # Context information to assist speech recognition. See [the Cloud Speech documentation](https://cloud.google.com/speech-to-text/docs/basics#phrase-hints) for more details.
+        { # Hints for the speech recognizer to help with recognition in a specific conversation state.
+          "boost": 3.14, # Optional. Boost for this context compared to other contexts: * If the boost is positive, Dialogflow will increase the probability that the phrases in this context are recognized over similar sounding phrases. * If the boost is unspecified or non-positive, Dialogflow will not apply any boost. Dialogflow recommends that you use boosts in the range (0, 20] and that you find a value that fits your use case with binary search.
+          "phrases": [ # Optional. A list of strings containing words and phrases that the speech recognizer should recognize with higher likelihood. This list can be used to: * improve accuracy for words and phrases you expect the user to say, e.g. typical commands for your Dialogflow agent * add additional words to the speech recognizer vocabulary * ... See the [Cloud Speech documentation](https://cloud.google.com/speech-to-text/quotas) for usage limits.
+            "A String",
+          ],
+        },
+      ],
+    },
+  },
   "cxParameters": { # Additional parameters to be put into Dialogflow CX session parameters. To remove a parameter from the session, clients should explicitly set the parameter value to null. Note: this field should only be used if you are connecting to a Dialogflow CX agent.
     "a_key": "", # Properties of the object.
docs/dyn/dialogflow_v3.projects.locations.agents.environments.sessions.html
7 additions & 3 deletions
@@ -983,6 +983,7 @@ <h3>Method Details</h3>
   { # A data store connection. It represents a data store in Discovery Engine and the type of the contents it contains.
     "dataStore": "A String", # The full name of the referenced data store. Formats: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{data_store}` `projects/{project}/locations/{location}/dataStores/{data_store}`
     "dataStoreType": "A String", # The type of the connected data store.
+    "documentProcessingMode": "A String", # The document processing mode for the data store connection. Should only be set for PUBLIC_WEB and UNSTRUCTURED data stores. If not set it is considered as DOCUMENTS, as this is the legacy mode.
   },
 ],
 "enabled": True or False, # Whether Knowledge Connector is enabled or not.
@@ -1302,7 +1303,7 @@ <h3>Method Details</h3>
     },
   ],
 },
-"dataStoreConnectionSignals": { # Data store connection feature output signals. Might be only partially field if processing stop before the final answer. Reasons for this can be, but are not limited to: empty UCS search results, positive RAI check outcome, grounding failure, ... # Optional. Data store connection feature output signals. Filled only when data stores are involved in serving the query and DetectIntentRequest.populate_data_store_connection_signals is set to true in the request.
+"dataStoreConnectionSignals": { # Data store connection feature output signals. Might be only partially field if processing stop before the final answer. Reasons for this can be, but are not limited to: empty UCS search results, positive RAI check outcome, grounding failure, ... # Optional. Data store connection feature output signals. Filled only when data stores are involved in serving the query.
   "answer": "A String", # Optional. The final compiled answer.
   "answerGenerationModelCallSignals": { # Diagnostic info related to the answer generation model call. # Optional. Diagnostic info related to the answer generation model call.
     "model": "A String", # Name of the generative model. For example, "gemini-ultra", "gemini-pro", "gemini-1.5-flash" etc. Defaults to "Other" if the model is unknown.
@@ -2435,6 +2436,7 @@ <h3>Method Details</h3>
   { # A data store connection. It represents a data store in Discovery Engine and the type of the contents it contains.
     "dataStore": "A String", # The full name of the referenced data store. Formats: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{data_store}` `projects/{project}/locations/{location}/dataStores/{data_store}`
     "dataStoreType": "A String", # The type of the connected data store.
+    "documentProcessingMode": "A String", # The document processing mode for the data store connection. Should only be set for PUBLIC_WEB and UNSTRUCTURED data stores. If not set it is considered as DOCUMENTS, as this is the legacy mode.
   },
 ],
 "enabled": True or False, # Whether Knowledge Connector is enabled or not.
@@ -2754,7 +2756,7 @@ <h3>Method Details</h3>
     },
   ],
 },
-"dataStoreConnectionSignals": { # Data store connection feature output signals. Might be only partially field if processing stop before the final answer. Reasons for this can be, but are not limited to: empty UCS search results, positive RAI check outcome, grounding failure, ... # Optional. Data store connection feature output signals. Filled only when data stores are involved in serving the query and DetectIntentRequest.populate_data_store_connection_signals is set to true in the request.
+"dataStoreConnectionSignals": { # Data store connection feature output signals. Might be only partially field if processing stop before the final answer. Reasons for this can be, but are not limited to: empty UCS search results, positive RAI check outcome, grounding failure, ... # Optional. Data store connection feature output signals. Filled only when data stores are involved in serving the query.
   "answer": "A String", # Optional. The final compiled answer.
   "answerGenerationModelCallSignals": { # Diagnostic info related to the answer generation model call. # Optional. Diagnostic info related to the answer generation model call.
     "model": "A String", # Name of the generative model. For example, "gemini-ultra", "gemini-pro", "gemini-1.5-flash" etc. Defaults to "Other" if the model is unknown.
@@ -3785,6 +3787,7 @@ <h3>Method Details</h3>
   { # A data store connection. It represents a data store in Discovery Engine and the type of the contents it contains.
     "dataStore": "A String", # The full name of the referenced data store. Formats: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{data_store}` `projects/{project}/locations/{location}/dataStores/{data_store}`
     "dataStoreType": "A String", # The type of the connected data store.
+    "documentProcessingMode": "A String", # The document processing mode for the data store connection. Should only be set for PUBLIC_WEB and UNSTRUCTURED data stores. If not set it is considered as DOCUMENTS, as this is the legacy mode.
   },
 ],
 "enabled": True or False, # Whether Knowledge Connector is enabled or not.
@@ -5035,6 +5038,7 @@ <h3>Method Details</h3>
   { # A data store connection. It represents a data store in Discovery Engine and the type of the contents it contains.
     "dataStore": "A String", # The full name of the referenced data store. Formats: `projects/{project}/locations/{location}/collections/{collection}/dataStores/{data_store}` `projects/{project}/locations/{location}/dataStores/{data_store}`
     "dataStoreType": "A String", # The type of the connected data store.
+    "documentProcessingMode": "A String", # The document processing mode for the data store connection. Should only be set for PUBLIC_WEB and UNSTRUCTURED data stores. If not set it is considered as DOCUMENTS, as this is the legacy mode.
   },
 ],
 "enabled": True or False, # Whether Knowledge Connector is enabled or not.
@@ -5354,7 +5358,7 @@ <h3>Method Details</h3>
     },
   ],
 },
-"dataStoreConnectionSignals": { # Data store connection feature output signals. Might be only partially field if processing stop before the final answer. Reasons for this can be, but are not limited to: empty UCS search results, positive RAI check outcome, grounding failure, ... # Optional. Data store connection feature output signals. Filled only when data stores are involved in serving the query and DetectIntentRequest.populate_data_store_connection_signals is set to true in the request.
+"dataStoreConnectionSignals": { # Data store connection feature output signals. Might be only partially field if processing stop before the final answer. Reasons for this can be, but are not limited to: empty UCS search results, positive RAI check outcome, grounding failure, ... # Optional. Data store connection feature output signals. Filled only when data stores are involved in serving the query.
   "answer": "A String", # Optional. The final compiled answer.
   "answerGenerationModelCallSignals": { # Diagnostic info related to the answer generation model call. # Optional. Diagnostic info related to the answer generation model call.
     "model": "A String", # Name of the generative model. For example, "gemini-ultra", "gemini-pro", "gemini-1.5-flash" etc. Defaults to "Other" if the model is unknown.