specification/inference/chat_completion_unified/UnifiedRequest.ts (+3 −3: 3 additions & 3 deletions)
@@ -23,10 +23,10 @@ import { Id } from '@_types/common'
 import { Duration } from '@_types/Time'
 /**
  * Perform chat completion inference
- *
- * The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation.
+ *
+ * The chat completion inference API enables real-time responses for chat completion tasks by delivering answers incrementally, reducing response times during computation.
  * It only works with the `chat_completion` task type for `openai` and `elastic` inference services.
- *
+ *
  * NOTE: The `chat_completion` task type is only available within the _stream API and only supports streaming.
  * The Chat completion inference API and the Stream inference API differ in their response structure and capabilities.
  * The Chat completion inference API provides more comprehensive customization options through more fields and function calling support.
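For context, here is a minimal sketch of consuming the endpoint this doc comment describes. It assumes an existing `chat_completion` inference endpoint named `my-chat-model` (hypothetical) backed by the `openai` or `elastic` service, a cluster reachable at `ES_URL` with an API key, and the documented request shape of `POST /_inference/chat_completion/{inference_id}/_stream` with a `messages` array; it is not the official client API.

```ts
// Sketch: stream a chat completion from the unified inference API (Node 18+).
// "my-chat-model", ES_URL and ES_API_KEY are placeholder assumptions.
const ES_URL = process.env.ES_URL ?? 'http://localhost:9200'
const ES_API_KEY = process.env.ES_API_KEY ?? ''

async function streamChatCompletion(prompt: string): Promise<void> {
  const response = await fetch(
    `${ES_URL}/_inference/chat_completion/my-chat-model/_stream`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: `ApiKey ${ES_API_KEY}`
      },
      body: JSON.stringify({
        messages: [{ role: 'user', content: prompt }]
      })
    }
  )
  if (!response.ok || response.body == null) {
    throw new Error(`chat completion request failed: ${response.status}`)
  }

  // The answer arrives incrementally as a streamed body (server-sent events),
  // so print each chunk as it is received instead of waiting for the whole reply.
  const reader = response.body.getReader()
  const decoder = new TextDecoder()
  while (true) {
    const { done, value } = await reader.read()
    if (done) break
    process.stdout.write(decoder.decode(value, { stream: true }))
  }
}

streamChatCompletion('Say hello in one sentence.').catch(console.error)
```

Reading the response body as a stream, rather than awaiting the full payload, is what delivers the incremental answers the comment refers to, and it matches the note that `chat_completion` is only available through the _stream API.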