General improvement to VertexAI kdocs #6370

Merged: 19 commits, Oct 11, 2024
@@ -33,14 +33,16 @@ import kotlinx.coroutines.flow.onEach
/**
* Representation of a multi-turn interaction with a model.
*
* Handles the capturing and storage of the communication with the model, providing methods for
* further interaction.
* Captures and stores the history of communication in memory, and provides it as context with each
* new message.
*
* **Note:** This object is not thread-safe, and calling [sendMessage] multiple times without
* waiting for a response will throw an [InvalidStateException].
*
* @param model The model to use for the interaction
* @property history The previous interactions with the model
* @param model The model to use for the interaction.
* @property history The previous content from the chat that has been successfully sent and received
* from the model. This will be provided to the model for each message sent (as context for the
* discussion).
*/
public class Chat(
private val model: GenerativeModel,
@@ -49,11 +51,15 @@ public class Chat(
private var lock = Semaphore(1)
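To make the new history semantics concrete, here is a minimal sketch of seeding a [Chat] with prior turns. It assumes the SDK's `Firebase.vertexAI` accessor and `content` builder; the model name and the example turns are illustrative, not part of this diff:

```kotlin
import com.google.firebase.Firebase
import com.google.firebase.vertexai.type.content
import com.google.firebase.vertexai.vertexAI

suspend fun greetWithHistory() {
  // Model name is illustrative; any supported Gemini model should work.
  val model = Firebase.vertexAI.generativeModel("gemini-1.5-flash")

  // Seed the chat with prior turns; Chat sends this history as
  // context with every new message.
  val chat = model.startChat(
    history = listOf(
      content(role = "user") { text("Hello, I have 2 dogs in my house.") },
      content(role = "model") { text("Great to meet you. What would you like to know?") },
    )
  )

  // Not thread-safe: await each response before sending the next message.
  val response = chat.sendMessage("How many paws are in my house?")
  println(response.text)
}
```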

/**
* Generates a response from the backend with the provided [Content], and any previous ones
* sent/returned from this chat.
* Sends a message using the provided [prompt], automatically providing the existing [history] as
* context.
*
* @param prompt A [Content] to send to the model.
* @throws InvalidStateException if the prompt is not coming from the 'user' role
* If successful, the message and response will be added to the [history]. If unsuccessful,
* [history] will remain unchanged.
*
* @param prompt The input that, together with the history, will be given to the model as the
* prompt.
* @throws InvalidStateException if [prompt] is not coming from the 'user' role.
* @throws InvalidStateException if the [Chat] instance has an active request.
*/
public suspend fun sendMessage(prompt: Content): GenerateContentResponse {
@@ -70,9 +76,15 @@
}
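A short sketch of this [Content] overload, showing a prompt explicitly built with the 'user' role the kdoc requires. The builder calls (`content`, `image`, `text`) are from this SDK; the helper function and its arguments are hypothetical:

```kotlin
import android.graphics.Bitmap
import com.google.firebase.vertexai.Chat
import com.google.firebase.vertexai.type.Content
import com.google.firebase.vertexai.type.content

suspend fun askAboutImage(chat: Chat, image: Bitmap) {
  // The prompt must come from the "user" role, otherwise sendMessage
  // throws InvalidStateException.
  val prompt: Content = content(role = "user") {
    image(image)
    text("Describe what is in this picture.")
  }
  val response = chat.sendMessage(prompt)
  println(response.text)
}
```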

/**
* Generates a response from the backend with the provided text prompt.
* Sends a message using the provided [text prompt][prompt], automatically providing the existing
* [history] as context.
*
* If successful, the message and response will be added to the [history]. If unsuccessful,
* [history] will remain unchanged.
*
* @param prompt The text to be converted into a single piece of [Content] to send to the model.
* @param prompt The input that, together with the history, will be given to the model as the
* prompt.
* @throws InvalidStateException if [prompt] is not coming from the 'user' role.
* @throws InvalidStateException if the [Chat] instance has an active request.
*/
public suspend fun sendMessage(prompt: String): GenerateContentResponse {
@@ -81,9 +93,15 @@
}
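Since the new docs call out that a second in-flight request throws [InvalidStateException], callers sending from several coroutines may want to serialize sends themselves. A hedged sketch; the `SerializedChat` wrapper is hypothetical, not part of the SDK:

```kotlin
import com.google.firebase.vertexai.Chat
import com.google.firebase.vertexai.type.InvalidStateException
import kotlinx.coroutines.sync.Mutex
import kotlinx.coroutines.sync.withLock

// Chat rejects overlapping requests, so callers that may send from
// several coroutines can serialize access with their own Mutex.
class SerializedChat(private val chat: Chat) {
  private val mutex = Mutex()

  suspend fun send(message: String): String? =
    try {
      mutex.withLock { chat.sendMessage(message).text }
    } catch (e: InvalidStateException) {
      // With sends serialized, this indicates a prompt that did not
      // come from the "user" role.
      null
    }
}
```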

/**
* Generates a response from the backend with the provided image prompt.
* Sends a message using the existing history of this chat as context and the provided image
* prompt.
*
* If successful, the message and response will be added to the history. If unsuccessful, history
* will remain unchanged.
*
* @param prompt The image to be converted into a single piece of [Content] to send to the model.
* @param prompt The input that, together with the history, will be given to the model as the
* prompt.
* @throws InvalidStateException if [prompt] is not coming from the 'user' role.
* @throws InvalidStateException if the [Chat] instance has an active request.
*/
public suspend fun sendMessage(prompt: Bitmap): GenerateContentResponse {
@@ -92,11 +110,17 @@
}

/**
* Generates a streaming response from the backend with the provided [Content].
* Sends a message using the existing history of this chat as context and the provided [Content]
* prompt.
*
* The response from the model is returned as a stream.
*
* If successful, the message and response will be added to the history. If unsuccessful, history
* will remain unchanged.
*
* @param prompt A [Content] to send to the model.
* @return A [Flow] which will emit responses as they are returned from the model.
* @throws InvalidStateException if the prompt is not coming from the 'user' role
* @param prompt The input that, together with the history, will be given to the model as the
* prompt.
* @throws InvalidStateException if [prompt] is not coming from the 'user' role.
* @throws InvalidStateException if the [Chat] instance has an active request.
*/
public fun sendMessageStream(prompt: Content): Flow<GenerateContentResponse> {
@@ -146,10 +170,17 @@ public class Chat(
}
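One way to consume the streamed variant, per the docs above: collect the [Flow] and accumulate partial text, remembering that [history] is only updated once the stream completes successfully. A sketch; the prompt text and helper function are illustrative:

```kotlin
import com.google.firebase.vertexai.Chat
import com.google.firebase.vertexai.type.content
import kotlinx.coroutines.flow.collect

suspend fun streamReply(chat: Chat) {
  val prompt = content(role = "user") { text("Tell me a short story.") }
  val reply = StringBuilder()
  // Each emission carries a partial response; history is only updated
  // after the full stream completes without error.
  chat.sendMessageStream(prompt).collect { chunk ->
    chunk.text?.let { reply.append(it) }
  }
  println(reply)
}
```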

/**
* Generates a streaming response from the backend with the provided text prompt.
* Sends a message using the existing history of this chat as context and the provided text
* prompt.
*
* @param prompt a text to be converted into a single piece of [Content] to send to the model
* @return A [Flow] which will emit responses as they are returned from the model.
* The response from the model is returned as a stream.
*
* If successful, the message and response will be added to the history. If unsuccessful, history
* will remain unchanged.
*
* @param prompt The input(s) that, together with the history, will be given to the model as the
* prompt.
* @throws InvalidStateException if [prompt] is not coming from the 'user' role.
* @throws InvalidStateException if the [Chat] instance has an active request.
*/
public fun sendMessageStream(prompt: String): Flow<GenerateContentResponse> {
@@ -158,10 +189,17 @@
}

/**
* Generates a streaming response from the backend with the provided image prompt.
* Sends a message using the existing history of this chat as context and the provided image
* prompt.
*
* The response from the model is returned as a stream.
*
* If successful, the message and response will be added to the history. If unsuccessful, history
* will remain unchanged.
*
* @param prompt A [Content] to send to the model.
* @return A [Flow] which will emit responses as they are returned from the model.
* @param prompt The input that, together with the history, will be given to the model as the
* prompt.
* @throws InvalidStateException if [prompt] is not coming from the 'user' role.
* @throws InvalidStateException if the [Chat] instance has an active request.
*/
public fun sendMessageStream(prompt: Bitmap): Flow<GenerateContentResponse> {
@@ -42,13 +42,15 @@ internal constructor(
/**
* Instantiates a new [GenerativeModel] given the provided parameters.
*
* @param modelName name of the model in the backend
* @param generationConfig configuration parameters to use for content generation
* @param safetySettings safety bounds to use during alongside prompts during content generation
* @param requestOptions configuration options to utilize during backend communication
* @param tools list of tools to make available to the model
* @param toolConfig configuration that defines how the model handles the tools provided
* @param systemInstruction contains a [Content] that directs the model to behave a certain way
* @param modelName The name of the model to use, for example "gemini-1.5-pro".
* @param generationConfig The configuration parameters to use for content generation.
* @param safetySettings The safety bounds the model will abide by during content generation.
* @param tools A list of [Tool]s the model may use to generate content.
* @param toolConfig The [ToolConfig] that defines how the model handles the tools provided.
* @param systemInstruction [Content] instructions that direct the model to behave a certain way.
* Currently only text content is supported.
* @param requestOptions Configuration options for sending requests to the backend.
* @return The initialized [GenerativeModel] instance.
*/
@JvmOverloads
public fun generativeModel(
@@ -86,10 +88,11 @@ internal constructor(
@JvmStatic public fun getInstance(app: FirebaseApp): FirebaseVertexAI = getInstance(app)

/**
* Returns the [FirebaseVertexAI] instance for the provided [FirebaseApp] and [location]
* Returns the [FirebaseVertexAI] instance for the provided [FirebaseApp] and [location].
*
* @param location location identifier, defaults to `us-central1`; see available
* [Vertex AI regions](https://firebase.google.com/docs/vertex-ai/locations?platform=android#available-locations).
*/
@JvmStatic
@JvmOverloads
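Putting the builder parameters and the location parameter together, a sketch of creating a configured model. It assumes the `Firebase.vertexAI(location = ...)` accessor and the `generationConfig` DSL from this SDK; the region, model name, and config values are illustrative:

```kotlin
import com.google.firebase.Firebase
import com.google.firebase.vertexai.type.generationConfig
import com.google.firebase.vertexai.vertexAI

// Location and model name are illustrative; see the linked list of
// available Vertex AI regions for valid locations.
val model = Firebase.vertexAI(location = "us-central1").generativeModel(
  modelName = "gemini-1.5-flash",
  generationConfig = generationConfig {
    temperature = 0.7f
    maxOutputTokens = 512
  },
)
```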
@@ -22,7 +22,11 @@ import com.google.firebase.appcheck.interop.InteropAppCheckTokenProvider
import com.google.firebase.auth.internal.InternalAuthProvider
import com.google.firebase.inject.Provider

/** Multi-resource container for Firebase Vertex AI */
/**
* Multi-resource container for Firebase Vertex AI.
*
* @hide
*/
internal class FirebaseVertexAIMultiResourceComponent(
private val app: FirebaseApp,
private val appCheckProvider: Provider<InteropAppCheckTokenProvider>,
@@ -48,7 +48,8 @@ import kotlinx.coroutines.flow.map
import kotlinx.coroutines.tasks.await

/**
* A controller for communicating with the API of a given multimodal model (for example, Gemini).
* Represents a multimodal model (like Gemini), capable of generating content based on various input
* types.
*/
public class GenerativeModel
internal constructor(
@@ -122,11 +123,12 @@ internal constructor(
)

/**
* Generates a [GenerateContentResponse] from the backend with the provided [Content].
* Generates new content from the input [Content] given to the model as a prompt.
*
* @param prompt [Content] to send to the model.
* @return A [GenerateContentResponse]. Function should be called within a suspend context to
* properly manage concurrency.
* @param prompt The input(s) given to the model as a prompt.
* @return The content generated by the model.
* @throws [FirebaseVertexAIException] if the request failed.
* @see [FirebaseVertexAIException] for types of errors.
*/
public suspend fun generateContent(vararg prompt: Content): GenerateContentResponse =
try {
@@ -136,10 +138,12 @@
}
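A sketch of the vararg form, with the failure mode the kdoc documents handled explicitly; the helper function and its prompt text are illustrative:

```kotlin
import com.google.firebase.vertexai.GenerativeModel
import com.google.firebase.vertexai.type.FirebaseVertexAIException
import com.google.firebase.vertexai.type.content

suspend fun summarize(model: GenerativeModel, notes: String) {
  try {
    // Multiple Content arguments are sent together as one prompt.
    val response = model.generateContent(
      content { text("Summarize the following notes in one paragraph.") },
      content { text(notes) },
    )
    println(response.text)
  } catch (e: FirebaseVertexAIException) {
    // Request failures surface as FirebaseVertexAIException subtypes.
    println("Generation failed: ${e.message}")
  }
}
```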

/**
* Generates a streaming response from the backend with the provided [Content].
* Generates new content as a stream from the input [Content] given to the model as a prompt.
*
* @param prompt [Content] to send to the model.
* @return A [Flow] which will emit responses as they are returned from the model.
* @param prompt The input(s) given to the model as a prompt.
* @return A [Flow] which will emit responses as they are returned by the model.
* @throws [FirebaseVertexAIException] if the request failed.
* @see [FirebaseVertexAIException] for types of errors.
*/
public fun generateContentStream(vararg prompt: Content): Flow<GenerateContentResponse> =
controller
@@ -148,52 +152,60 @@
.map { it.toPublic().validate() }
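For the streaming form, failures arrive through the [Flow], so a `catch` operator is a natural place to handle [FirebaseVertexAIException]. A sketch with illustrative prompt text:

```kotlin
import com.google.firebase.vertexai.GenerativeModel
import com.google.firebase.vertexai.type.FirebaseVertexAIException
import com.google.firebase.vertexai.type.content
import kotlinx.coroutines.flow.catch
import kotlinx.coroutines.flow.collect

suspend fun streamPoem(model: GenerativeModel) {
  model.generateContentStream(content { text("Write a short poem about Kotlin.") })
    .catch { e ->
      // Request failures are delivered through the Flow itself.
      if (e is FirebaseVertexAIException) println("Failed: ${e.message}") else throw e
    }
    .collect { chunk -> chunk.text?.let(::print) }
}
```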

/**
* Generates a [GenerateContentResponse] from the backend with the provided text prompt.
* Generates new content from the text input given to the model as a prompt.
*
* @param prompt The text to be converted into a single piece of [Content] to send to the model.
* @return A [GenerateContentResponse] after some delay. Function should be called within a
* suspend context to properly manage concurrency.
* @param prompt The text to be sent to the model as a prompt.
* @return The content generated by the model.
* @throws [FirebaseVertexAIException] if the request failed.
* @see [FirebaseVertexAIException] for types of errors.
*/
public suspend fun generateContent(prompt: String): GenerateContentResponse =
generateContent(content { text(prompt) })

/**
* Generates a streaming response from the backend with the provided text prompt.
* Generates new content as a stream from the text input given to the model as a prompt.
*
* @param prompt The text to be converted into a single piece of [Content] to send to the model.
* @return A [Flow] which will emit responses as they are returned from the model.
* @param prompt The text to be sent to the model as a prompt.
* @return A [Flow] which will emit responses as they are returned by the model.
* @throws [FirebaseVertexAIException] if the request failed.
* @see [FirebaseVertexAIException] for types of errors.
*/
public fun generateContentStream(prompt: String): Flow<GenerateContentResponse> =
generateContentStream(content { text(prompt) })

/**
* Generates a [GenerateContentResponse] from the backend with the provided image prompt.
* Generates new content from the image input given to the model as a prompt.
*
* @param prompt The image to be converted into a single piece of [Content] to send to the model.
* @return A [GenerateContentResponse] after some delay. Function should be called within a
* suspend context to properly manage concurrency.
* @return The content generated by the model.
* @throws [FirebaseVertexAIException] if the request failed.
* @see [FirebaseVertexAIException] for types of errors.
*/
public suspend fun generateContent(prompt: Bitmap): GenerateContentResponse =
generateContent(content { image(prompt) })
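A sketch of the [Bitmap] overload; decoding the image from raw bytes is illustrative, and any [Bitmap] source works:

```kotlin
import android.graphics.Bitmap
import android.graphics.BitmapFactory
import com.google.firebase.vertexai.GenerativeModel

suspend fun describeImage(model: GenerativeModel, bytes: ByteArray) {
  // The Bitmap overload wraps the image in a single piece of Content
  // before sending it to the model.
  val bitmap: Bitmap = BitmapFactory.decodeByteArray(bytes, 0, bytes.size)
  val response = model.generateContent(bitmap)
  println(response.text)
}
```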

/**
* Generates a streaming response from the backend with the provided image prompt.
* Generates new content as a stream from the image input given to the model as a prompt.
*
* @param prompt The image to be converted into a single piece of [Content] to send to the model.
* @return A [Flow] which will emit responses as they are returned from the model.
* @return A [Flow] which will emit responses as they are returned by the model.
* @throws [FirebaseVertexAIException] if the request failed.
* @see [FirebaseVertexAIException] for types of errors.
*/
public fun generateContentStream(prompt: Bitmap): Flow<GenerateContentResponse> =
generateContentStream(content { image(prompt) })

/** Creates a [Chat] instance which internally tracks the ongoing conversation with the model */
/** Creates a [Chat] instance using this model with the optionally provided history. */
public fun startChat(history: List<Content> = emptyList()): Chat =
Chat(this, history.toMutableList())

/**
* Counts the amount of tokens in a prompt.
* Counts the number of tokens in a prompt using the model's tokenizer.
*
* @param prompt A group of [Content] to count tokens of.
* @return A [CountTokensResponse] containing the amount of tokens in the prompt.
* @param prompt The input(s) given to the model as a prompt.
* @return The [CountTokensResponse] of running the model's tokenizer on the input.
* @throws [FirebaseVertexAIException] if the request failed.
* @see [FirebaseVertexAIException] for types of errors.
*/
public suspend fun countTokens(vararg prompt: Content): CountTokensResponse {
try {
@@ -204,20 +216,24 @@
}
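A sketch of checking prompt size before sending, assuming the `totalTokens` property on [CountTokensResponse]:

```kotlin
import com.google.firebase.vertexai.GenerativeModel

suspend fun logPromptSize(model: GenerativeModel, prompt: String) {
  // Runs the model's tokenizer over the prompt without generating content.
  val response = model.countTokens(prompt)
  println("Prompt uses ${response.totalTokens} tokens")
}
```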

/**
* Counts the amount of tokens in the text prompt.
* Counts the number of tokens in a text prompt using the model's tokenizer.
*
* @param prompt The text to be converted to a single piece of [Content] to count the tokens of.
* @return A [CountTokensResponse] containing the amount of tokens in the prompt.
* @param prompt The text given to the model as a prompt.
* @return The [CountTokensResponse] of running the model's tokenizer on the input.
* @throws [FirebaseVertexAIException] if the request failed.
* @see [FirebaseVertexAIException] for types of errors.
*/
public suspend fun countTokens(prompt: String): CountTokensResponse {
return countTokens(content { text(prompt) })
}

/**
* Counts the amount of tokens in the image prompt.
* Counts the number of tokens in an image prompt using the model's tokenizer.
*
* @param prompt The image to be converted to a single piece of [Content] to count the tokens of.
* @return A [CountTokensResponse] containing the amount of tokens in the prompt.
* @param prompt The image given to the model as a prompt.
* @return The [CountTokensResponse] of running the model's tokenizer on the input.
* @throws [FirebaseVertexAIException] if the request failed.
* @see [FirebaseVertexAIException] for types of errors.
*/
public suspend fun countTokens(prompt: Bitmap): CountTokensResponse {
return countTokens(content { image(prompt) })
@@ -25,28 +25,43 @@ import kotlinx.coroutines.reactive.asPublisher
import org.reactivestreams.Publisher

/**
* Helper method for interacting with a [Chat] from Java.
* Wrapper class providing Java-compatible methods for [Chat].
*
* @see from
* @see [Chat]
*/
public abstract class ChatFutures internal constructor() {

/**
* Generates a response from the backend with the provided [Content], and any previous ones
* sent/returned from this chat.
* Sends a message using the existing history of this chat as context and the provided [Content]
* prompt.
*
* @param prompt A [Content] to send to the model.
* If successful, the message and response will be added to the history. If unsuccessful, history
* will remain unchanged.
*
* @param prompt The input(s) that, together with the history, will be given to the model as the
* prompt.
* @throws InvalidStateException if [prompt] is not coming from the 'user' role
* @throws InvalidStateException if the [Chat] instance has an active request
*/
public abstract fun sendMessage(prompt: Content): ListenableFuture<GenerateContentResponse>

/**
* Generates a streaming response from the backend with the provided [Content].
* Sends a message using the existing history of this chat as context and the provided [Content]
* prompt.
*
* The response from the model is returned as a stream.
*
* If successful, the message and response will be added to the history. If unsuccessful, history
* will remain unchanged.
*
* @param prompt A [Content] to send to the model.
* @param prompt The input(s) that, together with the history, will be given to the model as the
* prompt.
* @throws InvalidStateException if [prompt] is not coming from the 'user' role
* @throws InvalidStateException if the [Chat] instance has an active request
*/
public abstract fun sendMessageStream(prompt: Content): Publisher<GenerateContentResponse>

/** Returns the [Chat] instance that was used to create this instance */
/** Returns the [Chat] object wrapped by this object. */
public abstract fun getChat(): Chat

private class FuturesImpl(private val chat: Chat) : ChatFutures() {
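Although [ChatFutures] targets Java callers, the wrapper can be exercised from Kotlin as well. A sketch assuming the `ChatFutures.from` factory referenced above, the SDK's `java` subpackage, and Guava's `Futures.addCallback`:

```kotlin
import com.google.common.util.concurrent.FutureCallback
import com.google.common.util.concurrent.Futures
import com.google.common.util.concurrent.MoreExecutors
import com.google.firebase.vertexai.Chat
import com.google.firebase.vertexai.java.ChatFutures
import com.google.firebase.vertexai.type.GenerateContentResponse
import com.google.firebase.vertexai.type.content

fun sendWithFutures(chat: Chat) {
  // Wrap the Kotlin Chat so callers without coroutines get a ListenableFuture.
  val futures: ChatFutures = ChatFutures.from(chat)
  val prompt = content(role = "user") { text("Hello!") }

  Futures.addCallback(
    futures.sendMessage(prompt),
    object : FutureCallback<GenerateContentResponse> {
      override fun onSuccess(result: GenerateContentResponse?) {
        println(result?.text)
      }
      override fun onFailure(t: Throwable) {
        println("Failed: ${t.message}")
      }
    },
    MoreExecutors.directExecutor(),
  )
}
```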