
Commit c332b1c

Improve kdoc for GenerationConfig (#6261)
1 parent 6e7e04e commit c332b1c

File tree

1 file changed: +64 −18

firebase-vertexai/src/main/kotlin/com/google/firebase/vertexai/type/GenerationConfig.kt

64 additions, 18 deletions

@@ -19,18 +19,55 @@ package com.google.firebase.vertexai.type
 /**
  * Configuration parameters to use for content generation.
  *
- * @property temperature The degree of randomness in token selection, typically between 0 and 1
- * @property topK The sum of probabilities to collect to during token selection
- * @property topP How many tokens to select amongst the highest probabilities
- * @property candidateCount The max *unique* responses to return
- * @property maxOutputTokens The max tokens to generate per response
- * @property stopSequences A list of strings to stop generation on occurrence of
- * @property responseMimeType Response MIME type for the generated candidate text. For a list of
- * supported response MIME types, see the
- * [Vertex AI documentation](https://cloud.google.com/vertex-ai/docs/reference/rest/v1beta1/GenerationConfig#FIELDS.response_mime_type)
- * for a list of supported types.
- * @property responseSchema A schema that the response must adhere to, used with the
- * `application/json` MIME type.
+ * @property temperature A parameter controlling the degree of randomness in token selection. A
+ * temperature of 0 means that the highest probability tokens are always selected. In this case,
+ * responses for a given prompt are mostly deterministic, but a small amount of variation is still
+ * possible.
+ *
+ * @property topK The `topK` parameter changes how the model selects tokens for output. A `topK` of
+ * 1 means the selected token is the most probable among all the tokens in the model's vocabulary,
+ * while a `topK` of 3 means that the next token is selected from among the 3 most probable using
+ * the `temperature`. For each token selection step, the `topK` tokens with the highest
+ * probabilities are sampled. Tokens are then further filtered based on `topP` with the final token
+ * selected using `temperature` sampling. Defaults to 40 if unspecified.
+ *
+ * @property topP The `topP` parameter changes how the model selects tokens for output. Tokens are
+ * selected from the most to least probable until the sum of their probabilities equals the `topP`
+ * value. For example, if tokens A, B, and C have probabilities of 0.3, 0.2, and 0.1 respectively
+ * and the topP value is 0.5, then the model will select either A or B as the next token by using
+ * the `temperature` and exclude C as a candidate. Defaults to 0.95 if unset.
+ *
+ * @property candidateCount The maximum number of generated response messages to return. This value
+ * must be between [1, 8], inclusive. If unset, this will default to 1.
+ *
+ * - Note: Only unique candidates are returned. Higher temperatures are more likely to produce
+ * unique candidates. Setting `temperature` to 0 will always produce exactly one candidate
+ * regardless of the `candidateCount`.
+ *
+ * @property maxOutputTokens Specifies the maximum number of tokens that can be generated in the
+ * response. The number of tokens per word varies depending on the language outputted. Defaults to 0
+ * (unbounded).
+ *
+ * @property stopSequences A set of up to 5 `String`s that will stop output generation. If
+ * specified, the API will stop at the first appearance of a stop sequence. The stop sequence will
+ * not be included as part of the response.
+ *
+ * @property responseMimeType Output response MIME type of the generated candidate text (IANA
+ * standard).
+ *
+ * Supported MIME types depend on the model used, but could include:
+ * - `text/plain`: Text output; the default behavior if unspecified.
+ * - `application/json`: JSON response in the candidates.
+ *
+ * @property responseSchema Output schema of the generated candidate text. If set, a compatible
+ * [responseMimeType] must also be set.
+ *
+ * Compatible MIME types:
+ * - `application/json`: Schema for JSON response.
+ *
+ * Refer to the
+ * [Control generated output](https://cloud.google.com/vertex-ai/generative-ai/docs/multimodal/control-generated-output)
+ * guide for more details.
  */
 class GenerationConfig
 private constructor(
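The token-selection pipeline the new KDoc describes (keep the `topK` most probable tokens, then accumulate from most to least probable until the mass reaches `topP`) can be sketched as a toy model. This is illustrative only, not the SDK's or the model's actual sampler; the token names and probabilities come from the KDoc's own example.

```kotlin
// Toy model of the documented candidate filtering: topK cut, then topP accumulation.
fun nucleusCandidates(
    probs: Map<String, Double>, // token -> probability
    topK: Int,
    topP: Double,
): List<String> {
    // Step 1: keep only the topK most probable tokens.
    val kFiltered = probs.entries.sortedByDescending { it.value }.take(topK)
    // Step 2: walk from most to least probable, stopping once the
    // accumulated probability mass reaches topP.
    val result = mutableListOf<String>()
    var mass = 0.0
    for ((token, p) in kFiltered) {
        if (mass >= topP) break
        result += token
        mass += p
    }
    return result
}

fun main() {
    // The KDoc's example: with topP = 0.5, A (0.3) and B (0.2) remain
    // eligible and C (0.1) is excluded from temperature sampling.
    val candidates =
        nucleusCandidates(mapOf("A" to 0.3, "B" to 0.2, "C" to 0.1), topK = 40, topP = 0.5)
    println(candidates) // [A, B]
}
```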
@@ -50,12 +87,21 @@ private constructor(
  * Mainly intended for Java interop. Kotlin consumers should use [generationConfig] for a more
  * idiomatic experience.
  *
- * @property temperature The degree of randomness in token selection, typically between 0 and 1
- * @property topK The sum of probabilities to collect to during token selection
- * @property topP How many tokens to select amongst the highest probabilities
- * @property candidateCount The max *unique* responses to return
- * @property maxOutputTokens The max tokens to generate per response
- * @property stopSequences A list of strings to stop generation on occurrence of
+ * @property temperature See [GenerationConfig.temperature].
+ *
+ * @property topK See [GenerationConfig.topK].
+ *
+ * @property topP See [GenerationConfig.topP].
+ *
+ * @property candidateCount See [GenerationConfig.candidateCount].
+ *
+ * @property maxOutputTokens See [GenerationConfig.maxOutputTokens].
+ *
+ * @property stopSequences See [GenerationConfig.stopSequences].
+ *
+ * @property responseMimeType See [GenerationConfig.responseMimeType].
+ *
+ * @property responseSchema See [GenerationConfig.responseSchema].
  * @see [generationConfig]
  */
 class Builder {
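For context, the properties documented in this file are set through the `generationConfig` builder DSL that the KDoc points Kotlin consumers toward. A minimal usage sketch (all values are illustrative, and it assumes the `firebase-vertexai` library is on the classpath):

```kotlin
import com.google.firebase.vertexai.type.generationConfig

// Sketch of configuring content generation via the Kotlin DSL.
val config = generationConfig {
    temperature = 0.7f          // moderate randomness in token selection
    topK = 40                   // the documented default
    topP = 0.95f                // the documented default
    candidateCount = 1          // between 1 and 8, inclusive
    maxOutputTokens = 1024      // cap on generated tokens
    stopSequences = listOf("END") // up to 5 strings; stops at first occurrence
    responseMimeType = "application/json"
}
```

Java consumers would use the `GenerationConfig.Builder` documented in the second hunk instead of the DSL.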
