
Commit 1f6d4f9

Wrap finish reason in an enum

1 parent f5a4e97

2 files changed: +43 −4 lines changed


src/main/kotlin/com/cjcrafter/openai/chat/ChatChoice.kt

Lines changed: 12 additions & 4 deletions
@@ -3,21 +3,29 @@ package com.cjcrafter.openai.chat

 import com.google.gson.JsonObject

 /**
- * Holds the data for 1 generated text completion. For most use cases, only 1
- * [ChatChoice] is generated.
+ * The OpenAI API returns a list of [ChatChoice]. Each chat choice has a
+ * generated message ([ChatChoice.message]) and a finish reason
+ * ([ChatChoice.finishReason]). For most use cases, you only need the generated
+ * message.
+ *
+ * By default, only 1 [ChatChoice] is generated (since [ChatRequest.n] == 1).
+ * When you increase `n`, more options are generated. The more options you
+ * generate, the more tokens you use. In general, it is best to **ONLY**
+ * generate 1 response, and to let the user regenerate the response.
  *
  * @param index The index in the array... 0 if [ChatRequest.n]=1.
  * @param message The generated text.
  * @param finishReason Why did the bot stop generating tokens?
+ * @see FinishReason
  */
-class ChatChoice(val index: Int, val message: ChatMessage, val finishReason: String) {
+class ChatChoice(val index: Int, val message: ChatMessage, val finishReason: FinishReason?) {

     /**
      * JSON constructor for internal use.
      */
     constructor(json: JsonObject) : this(
         json["index"].asInt,
         ChatMessage(json["message"].asJsonObject),
-        json["finish_reason"].toString()
+        if (json["finish_reason"].isJsonNull) null else FinishReason.valueOf(json["finish_reason"].asString.uppercase())
     )
 }
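The constructor change above maps the raw `finish_reason` JSON value onto the new enum, with JSON `null` (a choice that is still streaming) becoming a Kotlin `null`. A minimal sketch of that mapping, using a plain nullable string in place of Gson's `JsonObject` so it stands alone:

```kotlin
enum class FinishReason { STOP, LENGTH, TEMPERATURE }

// The API reports finish_reason in lowercase ("stop", "length", ...),
// or null while a streamed message is still being generated.
fun parseFinishReason(raw: String?): FinishReason? =
    raw?.let { FinishReason.valueOf(it.uppercase()) }
```

Note that `valueOf` throws `IllegalArgumentException` for any string the enum does not list, so this mapping (like the commit's constructor) assumes the API only ever returns these three values.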
Lines changed: 31 additions & 0 deletions
Original file line numberDiff line numberDiff line change
@@ -0,0 +1,31 @@
1+
package com.cjcrafter.openai.chat
2+
3+
/**
4+
* [FinishReason] wraps the possible reasons that a generation model may stop
5+
* generating tokens. For most **PROPER** use cases (see [best practices]()),
6+
* the finish reason will be [STOP]. When working with streams, finish reason
7+
* will be `null` since it has not completed the message yet.
8+
*/
9+
enum class FinishReason {
10+
11+
/**
12+
* [STOP] is the most common finish reason, and it occurs when the model
13+
* completely generates its entire message, and has nothing else to add.
14+
* Ideally, you always want your finish reason to be [STOP].
15+
*/
16+
STOP,
17+
18+
/**
19+
* [LENGTH] occurs when the bot is not able to finish the response within
20+
* its token limit. When it reaches the token limit, it sends the
21+
* incomplete message with finish reason [LENGTH]
22+
*/
23+
LENGTH,
24+
25+
/**
26+
* [TEMPERATURE] is a rare occurrence, and only happens when the
27+
* [ChatRequest.temperature] is low enough that it is impossible for the
28+
* model to continue generating text.
29+
*/
30+
TEMPERATURE
31+
}
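Because the property is now a nullable enum rather than a free-form string, callers can branch on it exhaustively with `when`. A hypothetical handler (the `describe` function is made up for illustration; it is not part of the commit):

```kotlin
enum class FinishReason { STOP, LENGTH, TEMPERATURE }

// Hypothetical helper: turn each finish reason into a human-readable status.
// A null reason means the streamed message is not finished yet.
fun describe(reason: FinishReason?): String = when (reason) {
    FinishReason.STOP -> "response complete"
    FinishReason.LENGTH -> "response truncated by the token limit"
    FinishReason.TEMPERATURE -> "generation stopped: temperature too low"
    null -> "still streaming"
}
```

The compiler enforces that every enum constant plus `null` is handled, which is the main type-safety win over comparing raw strings.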
