docs/dyn/dialogflow_v2.projects.agent.environments.users.sessions.html (1 addition, 0 deletions)

@@ -149,6 +149,7 @@ <h3>Method Details</h3>
   "audioConfig": { # Instructs the speech recognizer how to process the audio content. # Instructs the speech recognizer how to process the speech audio.
     "audioEncoding": "A String", # Required. Audio encoding of the audio content to process.
     "disableNoSpeechRecognizedEvent": True or False, # Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If `false` and recognition doesn't return any result, trigger `NO_SPEECH_RECOGNIZED` event to Dialogflow agent.
+    "enableAutomaticPunctuation": True or False, # Enable automatic punctuation option at the speech backend.
     "enableWordInfo": True or False, # If `true`, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
     "languageCode": "A String", # Required. The language of the supplied audio. Dialogflow does not do translations. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
     "model": "A String", # Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to [Cloud Speech API documentation](https://cloud.google.com/speech-to-text/docs/basics#select-model) for more details. If you specify a model, the following models typically have the best performance: - phone_call (best for Agent Assist and telephony) - latest_short (best for Dialogflow non-telephony) - command_and_search (best for very short utterances and commands)
docs/dyn/dialogflow_v2.projects.agent.sessions.html (1 addition, 0 deletions)

@@ -149,6 +149,7 @@ <h3>Method Details</h3>
   "audioConfig": { # Instructs the speech recognizer how to process the audio content. # Instructs the speech recognizer how to process the speech audio.
     "audioEncoding": "A String", # Required. Audio encoding of the audio content to process.
     "disableNoSpeechRecognizedEvent": True or False, # Only used in Participants.AnalyzeContent and Participants.StreamingAnalyzeContent. If `false` and recognition doesn't return any result, trigger `NO_SPEECH_RECOGNIZED` event to Dialogflow agent.
+    "enableAutomaticPunctuation": True or False, # Enable automatic punctuation option at the speech backend.
     "enableWordInfo": True or False, # If `true`, Dialogflow returns SpeechWordInfo in StreamingRecognitionResult with information about the recognized speech words, e.g. start and end time offsets. If false or unspecified, Speech doesn't return any word-level information.
     "languageCode": "A String", # Required. The language of the supplied audio. Dialogflow does not do translations. See [Language Support](https://cloud.google.com/dialogflow/docs/reference/language) for a list of the currently supported language codes. Note that queries in the same session do not necessarily need to specify the same language.
     "model": "A String", # Which Speech model to select for the given request. Select the model best suited to your domain to get best results. If a model is not explicitly specified, then we auto-select a model based on the parameters in the InputAudioConfig. If enhanced speech model is enabled for the agent and an enhanced version of the specified model for the language does not exist, then the speech is recognized using the standard version of the specified model. Refer to [Cloud Speech API documentation](https://cloud.google.com/speech-to-text/docs/basics#select-model) for more details. If you specify a model, the following models typically have the best performance: - phone_call (best for Agent Assist and telephony) - latest_short (best for Dialogflow non-telephony) - command_and_search (best for very short utterances and commands)
docs/dyn/dialogflow_v2.projects.answerRecords.html (3 additions, 3 deletions)

@@ -148,7 +148,7 @@ <h3>Method Details</h3>
     },
   },
   "clickTime": "A String", # Time when the answer/item was clicked.
-  "clicked": True or False, # Indicates whether the answer/item was clicked by the human agent or not. Default to false.
+  "clicked": True or False, # Indicates whether the answer/item was clicked by the human agent or not. Default to false. For knowledge search and knowledge assist, the answer record is considered to be clicked if the answer was copied or any URI was clicked.
   "correctnessLevel": "A String", # The correctness level of the specific answer.
   "displayTime": "A String", # Time when the answer/item was displayed.
   "displayed": True or False, # Indicates whether the answer/item was displayed to the human agent in the agent desktop UI. Default to false.
@@ -220,7 +220,7 @@ <h3>Method Details</h3>
     },
   },
   "clickTime": "A String", # Time when the answer/item was clicked.
-  "clicked": True or False, # Indicates whether the answer/item was clicked by the human agent or not. Default to false.
+  "clicked": True or False, # Indicates whether the answer/item was clicked by the human agent or not. Default to false. For knowledge search and knowledge assist, the answer record is considered to be clicked if the answer was copied or any URI was clicked.
   "correctnessLevel": "A String", # The correctness level of the specific answer.
   "displayTime": "A String", # Time when the answer/item was displayed.
   "displayed": True or False, # Indicates whether the answer/item was displayed to the human agent in the agent desktop UI. Default to false.
@@ -274,7 +274,7 @@ <h3>Method Details</h3>
     },
   },
   "clickTime": "A String", # Time when the answer/item was clicked.
-  "clicked": True or False, # Indicates whether the answer/item was clicked by the human agent or not. Default to false.
+  "clicked": True or False, # Indicates whether the answer/item was clicked by the human agent or not. Default to false. For knowledge search and knowledge assist, the answer record is considered to be clicked if the answer was copied or any URI was clicked.
   "correctnessLevel": "A String", # The correctness level of the specific answer.
   "displayTime": "A String", # Time when the answer/item was displayed.
   "displayed": True or False, # Indicates whether the answer/item was displayed to the human agent in the agent desktop UI. Default to false.