add (docs): add some newer fields to the gpt file reference #683

Merged
12 changes: 6 additions & 6 deletions docs/docs/02-examples/01-cli.md
@@ -157,7 +157,7 @@ Agents: k8s-agent, github-agent
Context: shared-context
Chat: true

Help the user acomplish their tasks using the tools you have. When the user starts this chat, just say hello and ask what you can help with. You donlt need to start off by guiding them.
Help the user acomplish their tasks using the tools you have. When the user starts this chat, just say hello and ask what you can help with. You don't need to start off by guiding them.
```

By being at the top of the file, this tool will serve as the script's entrypoint. Here are the parts of this tool that are worth additional explanation:
@@ -201,14 +201,14 @@ Context: shared-context
Agents: k8s-agent, github-agent
Chat: true

Help the user acomplish their tasks using the tools you have. When the user starts this chat, just say hello and ask what you can help with. You donlt need to start off by guiding them.
Help the user acomplish their tasks using the tools you have. When the user starts this chat, just say hello and ask what you can help with. You don't need to start off by guiding them.

---
Name: k8s-agent
Description: An agent that can help you with your Kubernetes cluster by executing kubectl commands
Context: shared-context
Tools: sys.exec
Parameter: task: The kubectl releated task to accomplish
Parameter: task: The kubectl related task to accomplish
Chat: true

You have the kubectl cli available to you. Use it to accomplish the tasks that the user asks of you.
@@ -268,15 +268,15 @@ By now you should notice a simple pattern emerging that you can follow to add yo
```
Name: {your cli}-agent
Description: An agent to help you with {your taks} related tasks using the gh cli
Context: {here's your biggest decsion to make}, shared-context
Context: {here's your biggest decision to make}, shared-context
Tools: sys.exec
Parameter: task: The {your task}The GitHub task to accomplish
Parameter: task: The {your task} to accomplish
Chat: true

You have the {your cli} cli available to you. Use it to accomplish the tasks that the user asks of you.
```

You can drop in your task and CLI and have a fairly functional CLI-based chat agent. The biggest decision you'll need to make is what and how much context to give your agent. For well-known for CLIs/technologies like kubectl and Kubernetes, you probably won't need a custom context. For custom CLIs, you'll definitely need to help the LLM out. The best approach is to experiment and see what works best.
You can drop in your task and CLI and have a fairly functional CLI-based chat agent. The biggest decision you'll need to make is what and how much context to give your agent. For well-known CLIs/technologies like kubectl and Kubernetes, you probably won't need a custom context. For custom CLIs, you'll definitely need to help the LLM out. The best approach is to experiment and see what works best.
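As a hypothetical instance of the pattern (not taken from the docs — the CLI, wording, and task are illustrative), a `docker-agent` might look like this:

```
Name: docker-agent
Description: An agent to help you with container-related tasks using the docker cli
Context: shared-context
Tools: sys.exec
Parameter: task: The container task to accomplish
Chat: true

You have the docker cli available to you. Use it to accomplish the tasks that the user asks of you.
```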

## Next steps

4 changes: 2 additions & 2 deletions docs/docs/02-examples/04-local-files.md
@@ -45,11 +45,11 @@ This is actually the entirety of the script. We're packing a lot of power into j

The **Tools: ...** stanza pulls two useful tools into this assistant.

The [structured-data-querier](https://github.com/gptscript-ai/structured-data-querier) makes it possible to query csv, xlsx, and json files as though they SQL databases (using an application called [DuckDB](https://duckdb.org/)). This is extremely powerful when combined with the power of LLMs because it let's you ask natural language questions that the LLM can then translate to SQL.
The [structured-data-querier](https://github.com/gptscript-ai/structured-data-querier) makes it possible to query csv, xlsx, and json files as though they were SQL databases (using an application called [DuckDB](https://duckdb.org/)). This is extremely powerful when combined with the power of LLMs because it let's you ask natural language questions that the LLM can then translate to SQL.
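To make the CSV-as-database idea concrete, here is a minimal standard-library sketch of what DuckDB does natively against a `.csv` file — loading rows into an in-memory SQL engine and querying them. The tool itself uses DuckDB, not SQLite; this is only an illustration of the concept, and the data is made up:

```python
import csv
import io
import sqlite3

# A small in-memory stand-in for a CSV file on disk.
CSV_DATA = """city,visitors
Edinburgh,2200000
Glasgow,1500000
Inverness,400000
"""

def query_csv(csv_text: str, sql: str):
    """Load CSV rows into an in-memory SQLite table and run a SQL query,
    mimicking what DuckDB does directly against a .csv file."""
    rows = list(csv.reader(io.StringIO(csv_text)))
    header, data = rows[0], rows[1:]
    conn = sqlite3.connect(":memory:")
    conn.execute(f"CREATE TABLE t ({', '.join(header)})")
    conn.executemany(f"INSERT INTO t VALUES ({', '.join('?' * len(header))})", data)
    return conn.execute(sql).fetchall()

# The LLM's job is translating "which city gets the most visitors?" into SQL like this:
result = query_csv(CSV_DATA, "SELECT city FROM t ORDER BY CAST(visitors AS INT) DESC LIMIT 1")
print(result)  # [('Edinburgh',)]
```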

The [pdf-reader](https://github.com/gptscript-ai/pdf-reader) isn't quite as exciting, but still useful. It parses and reads PDFs and returns the contents to the LLM. This will put the entire contents in your chat context, so it's not appropriate for extremely large PDFs, but it's handy for smaller ones.

**Context: github.com/gptscript-ai/context/workspace** introduces a context tool makes this assistant "workspace" aware. It's description reads:
**Context: github.com/gptscript-ai/context/workspace** introduces a context tool that makes this assistant "workspace" aware. Its description reads:
> Adds the workspace and tools needed to access the workspace to the current context

That translates to telling the LLM what the workspace directory is and instructing it to use that directory for reading and writing files. As we saw above, you can specify a workspace like this:
3 changes: 2 additions & 1 deletion docs/docs/03-tools/03-openapi.md
@@ -1,6 +1,6 @@
# OpenAPI Tools

GPTScript can treat OpenAPI v3 definition files as though they were tool files.
GPTScript can treat OpenAPI v2 and v3 definition files as though they were tool files.
Each operation (a path and HTTP method) in the file will become a simple tool that makes an HTTP request.
GPTScript will automatically and internally generate the necessary code to make the request and parse the response.
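As an illustration (this example is not from the docs), a minimal OpenAPI v3 definition like the following would yield a single tool for the `listPets` operation, which issues `GET https://api.example.com/v1/pets`:

```yaml
openapi: 3.0.0
info:
  title: Pet API
  version: 1.0.0
servers:
  - url: https://api.example.com/v1
paths:
  /pets:
    get:
      operationId: listPets
      summary: List all pets
      parameters:
        - name: limit
          in: query
          schema:
            type: integer
      responses:
        "200":
          description: A list of pets
```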

@@ -44,6 +44,7 @@ Will be resolved as `https://api.example.com/v1`.
:::warning
All authentication options will be completely ignored if the server uses HTTP and not HTTPS.
This is to protect users from accidentally sending credentials in plain text.
HTTP is only OK, if it's on localhost/127.0.0.1.
:::
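The rule in the warning above can be sketched as a small predicate — credentials are attached only over HTTPS, or over HTTP when the host is local. The function name is hypothetical; this is not GPTScript's actual code:

```python
from urllib.parse import urlparse

def should_send_credentials(url: str) -> bool:
    """Sketch of the rule above: only send credentials over HTTPS,
    or over plain HTTP when the server is localhost."""
    parts = urlparse(url)
    if parts.scheme == "https":
        return True
    host = parts.hostname or ""
    return parts.scheme == "http" and host in ("localhost", "127.0.0.1")

print(should_send_credentials("https://api.example.com/v1"))  # True
print(should_send_credentials("http://api.example.com/v1"))   # False
print(should_send_credentials("http://127.0.0.1:8080/v1"))    # True
```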

### 1. Security Schemes
6 changes: 3 additions & 3 deletions docs/docs/03-tools/05-context.md
@@ -45,7 +45,7 @@ Here is a simple example of a context provider tool that provides additional con

```yaml
# my-search-context-tool.gpt
export: sys.http.html2text?
share tools: sys.http.html2text?

#!/bin/bash
echo You are an expert web researcher with access to the Search tool.If the search tool fails to return any information stop execution of the script with message "Sorry! Search did not return any results". Feel free to get the contents of the returned URLs in order to get more information. Provide as much detail as you can. Also return the source of the search results.
@@ -71,7 +71,7 @@ Here is an example of a context provider tool that uses args to decide which sea

```yaml
# context_with_arg.gpt
export: github.com/gptscript-ai/search/duckduckgo, github.com/gptscript-ai/search/brave, sys.http.html2text?
share tools: github.com/gptscript-ai/search/duckduckgo, github.com/gptscript-ai/search/brave, sys.http.html2text?
args: search_tool: tool to search with

#!/bin/bash
@@ -84,7 +84,7 @@ Continuing with the above example, this is how you can use it in a script:
```yaml
# my_context_with_arg.gpt
context: ./context_with_arg.gpt with ${search} as search_tool
Args: search: Search tool to use
args: search: Search tool to use
Contributor Author: Only for consistency within this page

Member: I'm probably going to go through and capitalize everything (I think that's the more canonical way now) as I work on my docs revamp, so this is fine either way for now.

What are some of the most popular tourist destinations in Scotland, and how many people visit them each year?

3 changes: 1 addition & 2 deletions docs/docs/03-tools/06-how-it-works.md
@@ -1,7 +1,6 @@
# How it works

**_GPTScript is composed of tools._** Each tool performs a series of actions similar to a function. Tools have available
to them other tools that can be invoked similar to a function call. While similar to a function, the tools are
**_GPTScript is composed of tools._** Each tool performs a series of actions similar to a function. Tools have other tools available to them that can be invoked similar to a function call. While similar to a function, the tools are
primarily implemented with a natural language prompt. **_The interaction of the tools is determined by the AI model_**,
the model determines if the tool needs to be invoked and what arguments to pass. Tools are intended to be implemented
with a natural language prompt but can also be implemented with a command or HTTP call.
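The function-call analogy can be sketched as follows — the model emits a "call" naming a tool and its arguments, and the runtime dispatches it like a function call. This is a deliberately simplified, hypothetical sketch, not GPTScript's actual internals:

```python
# Hypothetical, simplified sketch of the tool-call loop described above.

def k8s_agent(task: str) -> str:
    # Stand-in for a tool implemented with a natural language prompt or command.
    return f"ran kubectl for: {task}"

# Tools available to the current tool, keyed by name.
TOOLS = {"k8s-agent": k8s_agent}

def dispatch(model_output: dict) -> str:
    """model_output stands in for the AI model's decision: which tool
    to invoke and with what arguments."""
    tool = TOOLS[model_output["tool"]]
    return tool(**model_output["arguments"])

print(dispatch({"tool": "k8s-agent", "arguments": {"task": "list pods"}}))
# ran kubectl for: list pods
```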
34 changes: 19 additions & 15 deletions docs/docs/03-tools/07-gpt-file-reference.md
@@ -43,21 +43,25 @@ Tool instructions go here.

Tool parameters are key-value pairs defined at the beginning of a tool block, before any instructional text. They are specified in the format `key: value`. The parser recognizes the following keys (case-insensitive and spaces are ignored):

| Key | Description |
|--------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|
| `Name` | The name of the tool. |
| `Model Name` | The LLM model to use, by default it uses "gpt-4-turbo". |
| `Global Model Name`| The LLM model to use for all the tools. |
| `Description` | The description of the tool. It is important that this properly describes the tool's purpose as the description is used by the LLM. |
| `Internal Prompt` | Setting this to `false` will disable the built-in system prompt for this tool. |
| `Tools` | A comma-separated list of tools that are available to be called by this tool. |
| `Global Tools` | A comma-separated list of tools that are available to be called by all tools. |
| `Credentials` | A comma-separated list of credential tools to run before the main tool. |
| `Args` | Arguments for the tool. Each argument is defined in the format `arg-name: description`. |
| `Max Tokens` | Set to a number if you wish to limit the maximum number of tokens that can be generated by the LLM. |
| `JSON Response` | Setting to `true` will cause the LLM to respond in a JSON format. If you set true you must also include instructions in the tool. |
| `Temperature` | A floating-point number representing the temperature parameter. By default, the temperature is 0. Set to a higher number for more creativity. |
| `Chat` | Setting it to `true` will enable an interactive chat session for the tool. |
| Key | Description |
|----------------------|-----------------------------------------------------------------------------------------------------------------------------------------------|
| `Name` | The name of the tool. |
| `Model Name` | The LLM model to use, by default it uses "gpt-4-turbo". |
| `Global Model Name` | The LLM model to use for all the tools. |
| `Description` | The description of the tool. It is important that this properly describes the tool's purpose as the description is used by the LLM. |
| `Internal Prompt` | Setting this to `false` will disable the built-in system prompt for this tool. |
| `Tools` | A comma-separated list of tools that are available to be called by this tool. |
| `Global Tools` | A comma-separated list of tools that are available to be called by all tools. |
| `Parameter` / `Args` | Arguments for the tool. Each argument is defined in the format `arg-name: description`. |
| `Max Tokens` | Set to a number if you wish to limit the maximum number of tokens that can be generated by the LLM. |
| `JSON Response` | Setting to `true` will cause the LLM to respond in a JSON format. If you set true you must also include instructions in the tool. |
| `Temperature` | A floating-point number representing the temperature parameter. By default, the temperature is 0. Set to a higher number for more creativity. |
| `Chat` | Setting it to `true` will enable an interactive chat session for the tool. |
| `Credential` | Credential tool to call to set credentials as environment variables before doing anything else. One per line. |
| `Agents` | A comma-separated list of agents that are available to the tool. |
| `Share Tools` | A comma-separated list of tools that are shared by the tool. |
| `Context` | A comma-separated list of context tools available to the tool. |
| `Share Context` | A comma-separated list of context tools shared by this tool with any tool including this tool in its context. |
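A hypothetical tool block exercising several of the newly documented keys might look like this (all names and tool references are illustrative, not from the docs):

```
Name: my-assistant
Description: A hypothetical assistant illustrating the newer keys above
Agents: k8s-agent, github-agent
Context: shared-context
Tools: sys.exec
Credential: github.com/example/credential-tool
Chat: true

Help the user accomplish their tasks using the tools you have.
```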

Contributor Author: There's more here, but I'm not sure if we should put everything (e.g. temperature and max_tokens) here

6 changes: 4 additions & 2 deletions docs/docs/05-alternative-model-providers.md
@@ -12,9 +12,11 @@ model: mistral-large-latest from https://api.mistral.ai/v1
Say hello world
```

#### Note
Mistral's La Plateforme has an OpenAI compatible API, but the model does not behave identically to gpt-4. For that reason, we also have a provider for it that might get better results in some cases.
:::note

Mistral's La Plateforme has an OpenAI compatible API, but the model does not behave identically to gpt-4. For that reason, we also have a provider for it that might get better results in some cases.

:::

### Using a model that requires a provider
```gptscript