Agent SDK Go is an open-source framework for building AI agents in Go, with support for multiple LLM providers, function calling, agent handoffs, and more.
Cloud Waitlist • License
Inspired by OpenAI's Assistants API and OpenAI's Python Agent SDK.
- Overview
- Features
- Installation
- Quick Start
- Provider Setup
- Key Components
- Advanced Features
- Examples
- Cloud Support
- Development
- Contributing
- License
- Acknowledgements
Agent SDK Go provides a comprehensive framework for building AI agents in Go. It allows you to create agents that can use tools, perform handoffs to other specialized agents, and produce structured output - all while supporting multiple LLM providers.
Visit go-agent.org for comprehensive documentation, examples, and cloud service waitlist.
- ✅ Multiple LLM Provider Support - Support for OpenAI, Anthropic Claude, and LM Studio
- ✅ Tool Integration - Call Go functions directly from your LLM
- ✅ Agent Handoffs - Create complex multi-agent workflows with specialized agents
- ✅ Structured Output - Parse responses into Go structs
- ✅ Streaming - Get real-time streaming responses
- ✅ Tracing & Monitoring - Debug your agent flows
- ✅ OpenAI Compatibility - Compatible with OpenAI tool definitions and API
- ✅ Workflow State Management - Persist and manage state between agent executions
There are several ways to add this module to your project:

1. Fetch the module directly:

   ```bash
   go get github.com/pontus-devoteam/agent-sdk-go
   ```

2. Add imports to your Go files:

   ```go
   import (
       "github.com/pontus-devoteam/agent-sdk-go/pkg/agent"
       "github.com/pontus-devoteam/agent-sdk-go/pkg/model/providers/lmstudio"
       "github.com/pontus-devoteam/agent-sdk-go/pkg/runner"
       "github.com/pontus-devoteam/agent-sdk-go/pkg/tool"
       // Import other packages as needed
   )
   ```

   Then run `go mod tidy` to automatically fetch dependencies:

   ```bash
   go mod tidy
   ```

3. Add the following line to your `go.mod` file:

   ```
   require github.com/pontus-devoteam/agent-sdk-go latest
   ```

   Then run:

   ```bash
   go mod tidy
   ```
If you're starting a new project:

1. Create and navigate to your project directory:

   ```bash
   mkdir my-agent-project
   cd my-agent-project
   ```

2. Initialize a new Go module:

   ```bash
   go mod init github.com/yourusername/my-agent-project
   ```

3. Install the Agent SDK:

   ```bash
   go get github.com/pontus-devoteam/agent-sdk-go
   ```

4. If you encounter version conflicts, you can specify a version:

   ```bash
   go get github.com/pontus-devoteam/[email protected] # Replace with the desired version
   ```

5. For private repositories or local development, consider using Go workspaces or `replace` directives in your `go.mod` file (see the sketch below).
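As a minimal sketch of the `replace`-directive approach, assuming the SDK is checked out next to your project at `../agent-sdk-go` (the path and version here are illustrative):

```
module github.com/yourusername/my-agent-project

go 1.23

require github.com/pontus-devoteam/agent-sdk-go v0.1.0

// Point the module at a local checkout instead of the published version.
replace github.com/pontus-devoteam/agent-sdk-go => ../agent-sdk-go
```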
Note: Requires Go 1.23 or later.
```go
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/pontus-devoteam/agent-sdk-go/pkg/agent"
	"github.com/pontus-devoteam/agent-sdk-go/pkg/model/providers/openai" // or providers/lmstudio or providers/anthropic
	"github.com/pontus-devoteam/agent-sdk-go/pkg/runner"
	"github.com/pontus-devoteam/agent-sdk-go/pkg/tool"
)

func main() {
	// Create a provider (OpenAI example)
	provider := openai.NewProvider("your-openai-api-key")
	provider.SetDefaultModel("gpt-3.5-turbo")

	// Or use Anthropic Claude (example)
	// provider := anthropic.NewProvider("your-anthropic-api-key")
	// provider.SetDefaultModel("claude-3-haiku-20240307")

	// Or use LM Studio (local model example)
	// provider := lmstudio.NewProvider()
	// provider.SetBaseURL("http://127.0.0.1:1234/v1")
	// provider.SetDefaultModel("gemma-3-4b-it")

	// Create a function tool
	getWeather := tool.NewFunctionTool(
		"get_weather",
		"Get the weather for a city",
		func(ctx context.Context, params map[string]interface{}) (interface{}, error) {
			// Check the type assertion so a malformed argument cannot panic
			city, ok := params["city"].(string)
			if !ok {
				return nil, fmt.Errorf("city must be a string")
			}
			return fmt.Sprintf("The weather in %s is sunny.", city), nil
		},
	).WithSchema(map[string]interface{}{
		"type": "object",
		"properties": map[string]interface{}{
			"city": map[string]interface{}{
				"type":        "string",
				"description": "The city to get weather for",
			},
		},
		"required": []string{"city"},
	})

	// Create an agent
	assistant := agent.NewAgent("Assistant")
	assistant.SetModelProvider(provider)
	assistant.WithModel("gpt-3.5-turbo") // or "gemma-3-4b-it" for LM Studio or "claude-3-haiku-20240307" for Anthropic
	assistant.SetSystemInstructions("You are a helpful assistant.")
	assistant.WithTools(getWeather)

	// Create a runner
	r := runner.NewRunner()
	r.WithDefaultProvider(provider)

	// Run the agent
	result, err := r.RunSync(assistant, &runner.RunOptions{
		Input: "What's the weather in Tokyo?",
	})
	if err != nil {
		log.Fatalf("Error running agent: %v", err)
	}

	// Print the result
	fmt.Println(result.FinalOutput)
}
```
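Hardcoding an API key is fine for a quick test, but in practice you will likely read it from the environment, as the example commands later in this README do. A minimal sketch, assuming the same constructor as above:

```go
// Assumes OPENAI_API_KEY is exported in your shell; add "os" to your imports.
provider := openai.NewProvider(os.Getenv("OPENAI_API_KEY"))
provider.SetDefaultModel("gpt-3.5-turbo")
```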
To use the OpenAI provider:

1. Get an API key
   - Sign up at OpenAI
   - Create an API key in your account settings

2. Configure the provider:

   ```go
   provider := openai.NewProvider()
   provider.SetAPIKey("your-openai-api-key")
   provider.SetDefaultModel("gpt-3.5-turbo") // or any other OpenAI model
   ```
To use the Anthropic provider:

1. Get an API key
   - Sign up at the Anthropic Console
   - Create an API key in your account settings

2. Configure the provider:

   ```go
   provider := anthropic.NewProvider("your-anthropic-api-key")
   provider.SetDefaultModel("claude-3-haiku-20240307") // or claude-3-sonnet/opus

   // Optional rate limiting configuration
   provider.WithRateLimit(40, 80000) // 40 requests/min, 80,000 tokens/min

   // Optional retry configuration
   provider.WithRetryConfig(3, 2*time.Second) // 3 retries with exponential backoff
   ```
To use the LM Studio provider:

1. Install LM Studio
   - Download from lmstudio.ai
   - Install and run the application

2. Load a model
   - Download a model in LM Studio (such as Gemma-3-4B-It, Llama 3, or another compatible model)
   - Load the model

3. Start the server
   - Go to the "Local Server" tab
   - Click "Start Server"
   - Note the server URL (default: http://127.0.0.1:1234)

4. Configure the provider:

   ```go
   provider := lmstudio.NewProvider()
   provider.SetBaseURL("http://127.0.0.1:1234/v1")
   provider.SetDefaultModel("gemma-3-4b-it") // Replace with your model
   ```
The Agent is the core component that encapsulates the LLM with instructions, tools, and other configuration.
```go
// Create a new agent
agent := agent.NewAgent("Assistant")
agent.SetSystemInstructions("You are a helpful assistant.")
agent.WithModel("gemma-3-4b-it")
agent.WithTools(tool1, tool2) // Add multiple tools at once
```
The Runner executes agents, handling the agent loop, tool calls, and handoffs.
```go
// Create a runner (named r so it does not shadow the runner package)
r := runner.NewRunner()
r.WithDefaultProvider(provider)

// Run the agent
result, err := r.RunSync(agent, &runner.RunOptions{
	Input:    "Hello, world!",
	MaxTurns: 10, // Optional: limit the number of turns
})
```
Tools allow agents to perform actions using your Go functions.
```go
// Create a function tool (named weatherTool so it does not shadow the tool package)
weatherTool := tool.NewFunctionTool(
	"get_weather",
	"Get the weather for a city",
	func(ctx context.Context, params map[string]interface{}) (interface{}, error) {
		// Check the type assertion so a malformed argument cannot panic
		city, ok := params["city"].(string)
		if !ok {
			return nil, fmt.Errorf("city must be a string")
		}
		return fmt.Sprintf("The weather in %s is sunny.", city), nil
	},
).WithSchema(map[string]interface{}{
	"type": "object",
	"properties": map[string]interface{}{
		"city": map[string]interface{}{
			"type":        "string",
			"description": "The city to get weather for",
		},
	},
	"required": []string{"city"},
})
```
Model providers allow you to use different LLM providers.
```go
// Create a provider for OpenAI
openaiProvider := openai.NewProvider("your-openai-api-key")
openaiProvider.SetDefaultModel("gpt-4")

// Create a provider for Anthropic Claude
anthropicProvider := anthropic.NewProvider("your-anthropic-api-key")
anthropicProvider.SetDefaultModel("claude-3-haiku-20240307")

// Create a provider for LM Studio
lmStudioProvider := lmstudio.NewProvider()
lmStudioProvider.SetBaseURL("http://127.0.0.1:1234/v1")
lmStudioProvider.SetDefaultModel("gemma-3-4b-it")

// Set a provider as the runner's default
r := runner.NewRunner()
r.WithDefaultProvider(openaiProvider) // or anthropicProvider or lmStudioProvider
```
Create specialized agents that collaborate on complex tasks
```go
// Create specialized agents
mathAgent := agent.NewAgent("Math Agent")
mathAgent.SetModelProvider(provider)
mathAgent.WithModel("gemma-3-4b-it")
mathAgent.SetSystemInstructions("You are a specialized math agent.")
mathAgent.WithTools(calculatorTool)

weatherAgent := agent.NewAgent("Weather Agent")
weatherAgent.SetModelProvider(provider)
weatherAgent.WithModel("gemma-3-4b-it")
weatherAgent.SetSystemInstructions("You provide weather information.")
weatherAgent.WithTools(weatherTool)

// Create a frontend agent that coordinates tasks
frontendAgent := agent.NewAgent("Frontend Agent")
frontendAgent.SetModelProvider(provider)
frontendAgent.WithModel("gemma-3-4b-it")
frontendAgent.SetSystemInstructions(`You coordinate requests by delegating to specialized agents.
For math calculations, delegate to the Math Agent.
For weather information, delegate to the Weather Agent.`)
frontendAgent.WithHandoffs(mathAgent, weatherAgent)

// Run the frontend agent (r is a runner configured as shown earlier)
result, err := r.RunSync(frontendAgent, &runner.RunOptions{
	Input:    "What is 42 divided by 6 and what's the weather in Paris?",
	MaxTurns: 20,
})
```
See the complete example in examples/multi_agent_example.
Create agents that can hand off tasks and receive results back
Bidirectional agent flow allows agents to delegate tasks to other agents and receive results back once the tasks are complete. This enables more complex workflows with proper task context management.
```go
// Create specialized agents
orchestratorAgent := agent.NewAgent("Orchestrator")
orchestratorAgent.SetModelProvider(provider)
orchestratorAgent.WithModel("gpt-4")
orchestratorAgent.SetSystemInstructions("You coordinate tasks and analyze results.")

workerAgent := agent.NewAgent("Worker")
workerAgent.SetModelProvider(provider)
workerAgent.WithModel("gpt-3.5-turbo")
workerAgent.SetSystemInstructions("You process data and return results.")
workerAgent.WithTools(processingTool)

// Set up bidirectional handoffs
orchestratorAgent.WithHandoffs(workerAgent)
workerAgent.WithHandoffs(orchestratorAgent) // Allow the worker to return to the orchestrator

// Run the orchestrator agent (r is a runner configured as shown earlier)
result, err := r.RunSync(orchestratorAgent, &runner.RunOptions{
	Input:    "Analyze this data: [complex data]",
	MaxTurns: 10,
})
```
Key components of bidirectional flow:

- `TaskID`: Unique identifier for tracking tasks across agents
- `ReturnToAgent`: Specifies which agent to return to after task completion
- `IsTaskComplete`: Flag indicating whether the task is complete
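As a rough mental model only (the SDK's actual type names and layout may differ), these fields travel alongside a delegated task:

```go
// Hypothetical illustration of the fields listed above; not the SDK's real type.
type taskContext struct {
	TaskID         string // unique identifier for tracking the task across agents
	ReturnToAgent  string // agent to hand results back to when the task finishes
	IsTaskComplete bool   // set once the delegated work is done
}
```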
See the complete example in examples/bidirectional_flow_example.
Debug your agent workflows with tracing
```go
// Run with tracing enabled (r is a runner configured as shown earlier)
result, err := r.RunSync(agent, &runner.RunOptions{
	Input: "Hello, world!",
	RunConfig: &runner.RunConfig{
		TracingDisabled: false,
		TracingConfig: &runner.TracingConfig{
			WorkflowName: "my_workflow",
		},
	},
})
```
Parse responses into Go structs
```go
// Define an output type
type WeatherReport struct {
	City        string  `json:"city"`
	Temperature float64 `json:"temperature"`
	Condition   string  `json:"condition"`
}

// Create an agent with structured output (requires the "reflect" import)
agent := agent.NewAgent("Weather Agent")
agent.SetSystemInstructions("You provide weather reports")
agent.SetOutputType(reflect.TypeOf(WeatherReport{}))
```
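When the run completes, the final output should match the registered type. A hedged usage sketch, assuming the runner `r` from earlier sections and that `FinalOutput` carries the decoded struct (the SDK may instead return raw JSON for you to unmarshal):

```go
result, err := r.RunSync(agent, &runner.RunOptions{Input: "Weather in Tokyo?"})
if err != nil {
	log.Fatalf("Error running agent: %v", err)
}
// Type-assert the structured result; the concrete type (value vs. pointer)
// is an assumption here.
if report, ok := result.FinalOutput.(WeatherReport); ok {
	fmt.Printf("%s: %.1f degrees, %s\n", report.City, report.Temperature, report.Condition)
}
```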
Get real-time streaming responses
```go
// Run the agent with streaming (r is a runner configured as shown earlier)
streamedResult, err := r.RunStreaming(context.Background(), agent, &runner.RunOptions{
	Input: "Hello, world!",
})
if err != nil {
	log.Fatalf("Error running agent: %v", err)
}

// Process streaming events (event types come from the pkg/model package)
for event := range streamedResult.Stream {
	switch event.Type {
	case model.StreamEventTypeContent:
		fmt.Print(event.Content)
	case model.StreamEventTypeToolCall:
		fmt.Printf("\nCalling tool: %s\n", event.ToolCall.Name)
	case model.StreamEventTypeDone:
		fmt.Println("\nDone!")
	}
}
```
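Because `RunStreaming` takes a `context.Context`, you can bound or cancel a stream. A minimal sketch using a timeout (requires the "time" import), assuming the SDK closes the `Stream` channel when the context ends:

```go
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

streamedResult, err := r.RunStreaming(ctx, agent, &runner.RunOptions{
	Input: "Hello, world!",
})
if err != nil {
	log.Fatalf("Error running agent: %v", err)
}
for event := range streamedResult.Stream {
	// Consume events as above; the loop exits once the context is
	// cancelled or the run finishes (assumption: the SDK closes Stream).
	_ = event
}
```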
Work with OpenAI-compatible tool definitions
```go
// Auto-generate OpenAI-compatible tool definitions from Go functions
getCurrentTimeTool := tool.NewFunctionTool(
	"get_current_time",
	"Get the current time in a specified format",
	func(ctx context.Context, params map[string]interface{}) (interface{}, error) {
		return time.Now().Format(time.RFC3339), nil
	},
)

// Convert it to OpenAI format (handled automatically when added to an agent)
openAITool := tool.ToOpenAITool(getCurrentTimeTool)

// Add an OpenAI-compatible tool definition directly to an agent
agent := agent.NewAgent("My Agent")
agent.AddToolFromDefinition(openAITool)

// Add multiple tool definitions at once
toolDefinitions := []map[string]interface{}{
	tool.ToOpenAITool(tool1),
	tool.ToOpenAITool(tool2),
}
agent.AddToolsFromDefinitions(toolDefinitions)
```
Manage state between agent executions
```go
// Create a state store (in-memory here; use a persistent store in production)
stateStore := mocks.NewInMemoryStateStore()

// Create the workflow configuration
workflowConfig := &runner.WorkflowConfig{
	RetryConfig: &runner.RetryConfig{
		MaxRetries:         2,
		RetryDelay:         time.Second,
		RetryBackoffFactor: 2.0,
	},
	StateManagement: &runner.StateManagementConfig{
		PersistState:        true,
		StateStore:          stateStore,
		CheckpointFrequency: time.Second * 5,
	},
	ValidationConfig: &runner.ValidationConfig{
		PreHandoffValidation: []runner.ValidationRule{
			{
				Name: "StateValidation",
				Validate: func(data interface{}) (bool, error) {
					state, ok := data.(*runner.WorkflowState)
					return ok && state != nil, nil
				},
				ErrorMessage: "Invalid workflow state",
				Severity:     runner.ValidationWarning,
			},
		},
	},
}

// Create the workflow runner (baseRunner is a runner configured as shown earlier)
workflowRunner := runner.NewWorkflowRunner(baseRunner, workflowConfig)

// Initialize the workflow state
state := &runner.WorkflowState{
	CurrentPhase:    "",
	CompletedPhases: make([]string, 0),
	Artifacts:       make(map[string]interface{}),
	LastCheckpoint:  time.Now(),
	Metadata:        make(map[string]interface{}),
}

// Run the workflow with state management (runConfig and agent as defined earlier)
result, err := workflowRunner.RunWorkflow(context.Background(), agent, &runner.RunOptions{
	MaxTurns:       10,
	RunConfig:      runConfig,
	WorkflowConfig: workflowConfig,
	Input:          state,
})
```
See the complete example in examples/workflow_example.
The repository includes several examples to help you get started:
| Example | Description |
|---|---|
| Multi-Agent Example | Demonstrates how to create a system of specialized agents that collaborate on complex tasks using a local LLM via LM Studio |
| OpenAI Example | Shows how to use the OpenAI provider with function calling capabilities |
| OpenAI Multi-Agent Example | Illustrates multi-agent functionality using OpenAI models, with proper tool calling and streaming support |
| Anthropic Example | Demonstrates how to use the Anthropic Claude API with tool calling capabilities |
| Anthropic Handoff Example | Shows how to implement agent handoffs with Anthropic Claude models |
| Bidirectional Flow Example | Demonstrates bidirectional agent communication with task delegation and return handoffs |
| TypeScript Code Review Example | Shows a practical application with specialized code review agents that collaborate using bidirectional handoffs |
| Workflow Example | Demonstrates advanced workflow management with state persistence between agent executions |
To run the LM Studio examples:

1. Make sure LM Studio is running with a server at `http://127.0.0.1:1234/v1`
2. Navigate to the example directory:

   ```bash
   cd examples/multi_agent_example # or any other example using LM Studio
   ```

3. Run the example:

   ```bash
   go run .
   ```
To run the OpenAI examples:

1. Set your OpenAI API key as an environment variable:

   ```bash
   export OPENAI_API_KEY=your-api-key
   ```

2. Navigate to the example directory:

   ```bash
   cd examples/openai_example # or openai_multi_agent_example
   ```

3. Run the example:

   ```bash
   go run .
   ```
To run the Anthropic examples:

1. Set your Anthropic API key as an environment variable:

   ```bash
   export ANTHROPIC_API_KEY=your-anthropic-api-key
   ```

2. Navigate to the example directory:

   ```bash
   cd examples/anthropic_example # or anthropic_handoff_example
   ```

3. Run the example:

   ```bash
   go run .
   ```
You can enable debug output for various components by setting the appropriate environment variable.

For general debugging (runner and core components):

```bash
DEBUG=1 go run examples/bidirectional_flow_example/main.go
```

For provider-specific debugging:

```bash
# OpenAI provider debugging
OPENAI_DEBUG=1 go run examples/openai_multi_agent_example/main.go

# Anthropic provider debugging
ANTHROPIC_DEBUG=1 go run examples/anthropic_example/main.go

# LM Studio provider debugging
LMSTUDIO_DEBUG=1 go run examples/multi_agent_example/main.go
```

You can also combine multiple debug flags:

```bash
DEBUG=1 OPENAI_DEBUG=1 go run examples/typescript_code_review_example/main.go
```
Development setup and workflows
- Go 1.23 or later
- Clone the repository
- Run the setup script to install required tools:
```bash
./scripts/ci_setup.sh
```
The project includes several scripts to help with development:
- `./scripts/lint.sh`: Runs formatting and linting checks
- `./scripts/security_check.sh`: Runs security checks with gosec
- `./scripts/check_all.sh`: Runs all checks, including tests
- `./scripts/version.sh`: Helps with versioning (run with the `bump` argument to bump the version)
Tests are located in the `test` directory and can be run with:

```bash
cd test && make test
```

Or use the `check_all` script to run all checks, including tests:

```bash
./scripts/check_all.sh
```
The project uses GitHub Actions for CI/CD. The workflow is defined in `.github/workflows/ci.yml`.
Contributions are welcome! Please see CONTRIBUTING.md for details.
This project is licensed under the MIT License - see the LICENSE file for details.
This project is inspired by OpenAI's Assistants API and OpenAI's Python Agent SDK, with the goal of providing similar capabilities in Go while being compatible with local LLMs.
For production deployments, we're developing a fully managed cloud service. Join our waitlist to be among the first to access:
- Managed Agent Deployment - Deploy agents without infrastructure hassle
- Horizontal Scaling - Handle any traffic volume
- Observability & Monitoring - Track performance and usage
- Cost Optimization - Pay only for what you use
- Enterprise Security - SOC2 compliance and data protection
Sign up for the Cloud Waitlist →
- Website: go-agent.org
- GitHub Issues: Report bugs or request features
- Discussions: Join the conversation
- Waitlist: Join the cloud service waitlist