Commit ac671a7: README - Update token count examples (1 parent: dc24c2b)

File tree: 2 files changed, +55 / -13 lines

README.md (37 additions & 4 deletions)

````diff
@@ -365,24 +365,57 @@ For this to work you need to use `OpenAIServiceStreamedFactory` from `openai-sca
 
 - 🔥 **New**: Count the expected number of tokens before calling `createChatCompletions` or `createChatFunCompletions`. This helps you select a proper model, e.g. `gpt-3.5-turbo` or `gpt-3.5-turbo-16k`, and reduce costs. This is an experimental feature and it may not work for all models. Requires the `openai-scala-count-tokens` lib.
 
+An example how to count message tokens:
+```scala
+import io.cequence.openaiscala.service.OpenAICountTokensHelper
+import io.cequence.openaiscala.domain.{AssistantMessage, BaseMessage, ModelId, SystemMessage, UserMessage}
+
+class MyCompletionService extends OpenAICountTokensHelper {
+  def exec = {
+    val model = ModelId.gpt_4_turbo_2024_04_09
+
+    // messages to be sent to OpenAI
+    val messages: Seq[BaseMessage] = Seq(
+      SystemMessage("You are a helpful assistant."),
+      UserMessage("Who won the world series in 2020?"),
+      AssistantMessage("The Los Angeles Dodgers won the World Series in 2020."),
+      UserMessage("Where was it played?")
+    )
+
+    val tokens = countMessageTokens(model, messages)
+  }
+}
+```
+
+An example how to count message tokens when a function is involved:
 ```scala
 import io.cequence.openaiscala.service.OpenAICountTokensHelper
-import io.cequence.openaiscala.domain.{ChatRole, FunMessageSpec, FunctionSpec}
+import io.cequence.openaiscala.domain.{BaseMessage, FunctionSpec, ModelId, SystemMessage, UserMessage}
 
 class MyCompletionService extends OpenAICountTokensHelper {
   def exec = {
-    val messages: Seq[FunMessageSpec] = ??? // messages to be sent to OpenAI
+    val model = ModelId.gpt_4_turbo_2024_04_09
+
+    // messages to be sent to OpenAI
+    val messages: Seq[BaseMessage] =
+      Seq(
+        SystemMessage("You are a helpful assistant."),
+        UserMessage("What's the weather like in San Francisco, Tokyo, and Paris?")
+      )
+
     // function to be called
     val function: FunctionSpec = FunctionSpec(
       name = "getWeather",
       parameters = Map(
         "type" -> "object",
-        "properties" -> ListMap(
-          "location" -> ListMap(
+        "properties" -> Map(
+          "location" -> Map(
             "type" -> "string",
             "description" -> "The city to get the weather for"
           ),
-          "unit" -> ListMap("type" -> "string", "enum" -> List("celsius", "fahrenheit"))
+          "unit" -> Map("type" -> "string", "enum" -> List("celsius", "fahrenheit"))
         )
       )
     )
````

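The feature bullet above pitches token counting as a way to pick a model with a large-enough context window and keep costs down. A minimal, self-contained sketch of that selection step (the `ModelSelector` object, its context-window sizes, and the `replyBudget` default are illustrative assumptions, not part of the library):

```scala
// Hypothetical helper (NOT part of openai-scala): given a prompt token
// count (e.g. the result of countMessageTokens), pick the first listed
// model whose context window fits the prompt plus a reply budget.
object ModelSelector {
  // models ordered cheapest-first, with assumed context window sizes
  private val contextWindows: Seq[(String, Int)] = Seq(
    "gpt-3.5-turbo" -> 4096,
    "gpt-3.5-turbo-16k" -> 16384
  )

  def pick(promptTokens: Int, replyBudget: Int = 512): Option[String] =
    contextWindows.collectFirst {
      case (model, window) if promptTokens + replyBudget <= window => model
    }
}
```

Under these assumed window sizes, `ModelSelector.pick(3000)` stays on the 4k model, `ModelSelector.pick(5000)` falls through to the 16k variant, and an oversized prompt yields `None` instead of a model that would be truncated.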
openai-count-tokens/README.md (18 additions & 9 deletions)

````diff
@@ -27,16 +27,25 @@ or to *pom.xml* (if you use maven)
 
 ## Usage
 
+An example how to count message tokens:
 ```scala
 import io.cequence.openaiscala.service.OpenAICountTokensHelper
-import io.cequence.openaiscala.domain.{ChatRole, FunMessageSpec, FunctionSpec}
-
-val messages: Seq[FunMessageSpec] = ??? // messages to be sent to OpenAI
-val function: FunctionSpec = ??? // function to be called
-
-val service = new OpenAICountTokensService()
-
-val tokens = service.countFunMessageTokens(messages, List(function), Some(function.name))
+import io.cequence.openaiscala.domain.{AssistantMessage, BaseMessage, ModelId, SystemMessage, UserMessage}
+
+class MyCompletionService extends OpenAICountTokensHelper {
+  def exec = {
+    val model = ModelId.gpt_4_turbo_2024_04_09
+
+    // messages to be sent to OpenAI
+    val messages: Seq[BaseMessage] = Seq(
+      SystemMessage("You are a helpful assistant."),
+      UserMessage("Who won the world series in 2020?"),
+      AssistantMessage("The Los Angeles Dodgers won the World Series in 2020."),
+      UserMessage("Where was it played?")
+    )
+
+    val tokens = countMessageTokens(model, messages)
+  }
+}
 ```
````

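Both READMEs frame token counting as a cost lever, and turning a count into a rough prompt-cost figure is one concrete use. A sketch with a placeholder price table (the `PromptCost` object and every number in it are made up for illustration; substitute current pricing for your models):

```scala
// Rough prompt-cost estimate from a token count. The price map is a
// PLACEHOLDER: none of these numbers are real OpenAI prices.
object PromptCost {
  // hypothetical USD price per 1,000 prompt tokens
  private val pricePer1k: Map[String, Double] = Map(
    "gpt-3.5-turbo" -> 0.0005,
    "gpt-4-turbo-2024-04-09" -> 0.01
  )

  def estimateUsd(model: String, promptTokens: Int): Option[Double] =
    pricePer1k.get(model).map(p => promptTokens / 1000.0 * p)
}
```

`estimateUsd` returns `None` for a model missing from the table rather than guessing a price.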