Agent SDK Go

Build, deploy, and scale AI agents with ease

Agent SDK Go is an open-source framework for building powerful AI agents in Go, with support for multiple LLM providers, function calling, agent handoffs, and more.


☁️ Cloud Waitlist • 📜 License

Inspired by OpenAI's Assistants API and OpenAI's Python Agent SDK.


πŸ” Overview

Agent SDK Go provides a comprehensive framework for building AI agents in Go. It allows you to create agents that can use tools, perform handoffs to other specialized agents, and produce structured output - all while supporting multiple LLM providers.

Visit go-agent.org for comprehensive documentation, examples, and the cloud service waitlist.

🌟 Features

  • ✅ Multiple LLM Provider Support - Support for OpenAI, Anthropic Claude, and LM Studio
  • ✅ Tool Integration - Call Go functions directly from your LLM
  • ✅ Agent Handoffs - Create complex multi-agent workflows with specialized agents
  • ✅ Structured Output - Parse responses into Go structs
  • ✅ Streaming - Get real-time streaming responses
  • ✅ Tracing & Monitoring - Debug your agent flows
  • ✅ OpenAI Compatibility - Compatible with OpenAI tool definitions and API
  • ✅ Workflow State Management - Persist and manage state between agent executions

📦 Installation

There are several ways to add this module to your project:

Option 1: Using go get (Recommended)

go get github.com/pontus-devoteam/agent-sdk-go

Option 2: Add to your imports and use go mod tidy

  1. Add imports to your Go files:

    import (
        "github.com/pontus-devoteam/agent-sdk-go/pkg/agent"
        "github.com/pontus-devoteam/agent-sdk-go/pkg/model/providers/lmstudio"
        "github.com/pontus-devoteam/agent-sdk-go/pkg/runner"
        "github.com/pontus-devoteam/agent-sdk-go/pkg/tool"
        // Import other packages as needed
    )
  2. Run go mod tidy to automatically fetch dependencies:

    go mod tidy

Option 3: Manually edit your go.mod file

Add the following line to your go.mod file:

require github.com/pontus-devoteam/agent-sdk-go latest

Then run:

go mod tidy

New Project Setup

If you're starting a new project:

  1. Create and navigate to your project directory:

    mkdir my-agent-project
    cd my-agent-project
  2. Initialize a new Go module:

    go mod init github.com/yourusername/my-agent-project
  3. Install the Agent SDK:

    go get github.com/pontus-devoteam/agent-sdk-go

Troubleshooting

  • If you encounter version conflicts, you can specify a version:

    go get github.com/pontus-devoteam/agent-sdk-go@vX.Y.Z  # Replace with the desired version
  • For private repositories or local development, consider using Go workspaces or replace directives in your go.mod file.
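
    For example, a replace directive pointing at a local checkout might look like this in your go.mod (the ../agent-sdk-go path is a placeholder for wherever your clone lives):

    // Use a local clone of the SDK while developing against it
    replace github.com/pontus-devoteam/agent-sdk-go => ../agent-sdk-go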

Note: Requires Go 1.23 or later.

🚀 Quick Start

package main

import (
    "context"
    "fmt"
    "log"

    "github.com/pontus-devoteam/agent-sdk-go/pkg/agent"
    "github.com/pontus-devoteam/agent-sdk-go/pkg/model/providers/openai"  // or providers/lmstudio or providers/anthropic
    "github.com/pontus-devoteam/agent-sdk-go/pkg/runner"
    "github.com/pontus-devoteam/agent-sdk-go/pkg/tool"
)

func main() {
    // Create a provider (OpenAI example)
    provider := openai.NewProvider("your-openai-api-key")
    provider.SetDefaultModel("gpt-3.5-turbo")

    // Or use Anthropic Claude (example)
    // provider := anthropic.NewProvider("your-anthropic-api-key")
    // provider.SetDefaultModel("claude-3-haiku-20240307")

    // Or use LM Studio (local model example)
    // provider := lmstudio.NewProvider()
    // provider.SetBaseURL("http://127.0.0.1:1234/v1")
    // provider.SetDefaultModel("gemma-3-4b-it")

    // Create a function tool
    getWeather := tool.NewFunctionTool(
        "get_weather",
        "Get the weather for a city",
        func(ctx context.Context, params map[string]interface{}) (interface{}, error) {
            // Validate the parameter instead of panicking on a bad type
            city, ok := params["city"].(string)
            if !ok {
                return nil, fmt.Errorf("city parameter must be a string")
            }
            return fmt.Sprintf("The weather in %s is sunny.", city), nil
        },
    ).WithSchema(map[string]interface{}{
        "type": "object",
        "properties": map[string]interface{}{
            "city": map[string]interface{}{
                "type": "string",
                "description": "The city to get weather for",
            },
        },
        "required": []string{"city"},
    })

    // Create an agent
    assistant := agent.NewAgent("Assistant")
    assistant.SetModelProvider(provider)
    assistant.WithModel("gpt-3.5-turbo")  // or "gemma-3-4b-it" for LM Studio or "claude-3-haiku-20240307" for Anthropic
    assistant.SetSystemInstructions("You are a helpful assistant.")
    assistant.WithTools(getWeather)

    // Create a runner
    r := runner.NewRunner()
    r.WithDefaultProvider(provider)

    // Run the agent
    result, err := r.RunSync(assistant, &runner.RunOptions{
        Input: "What's the weather in Tokyo?",
    })
    if err != nil {
        log.Fatalf("Error running agent: %v", err)
    }

    // Print the result
    fmt.Println(result.FinalOutput)
}

🖥️ Provider Setup

OpenAI Setup

To use the OpenAI provider:

  1. Get an API Key

    • Sign up at OpenAI
    • Create an API key in your account settings
  2. Configure the Provider

    provider := openai.NewProvider()
    provider.SetAPIKey("your-openai-api-key")
    provider.SetDefaultModel("gpt-3.5-turbo")  // or any other OpenAI model

Anthropic Setup

To use the Anthropic provider:

  1. Get an API Key

    • Sign up at Anthropic and create an API key in the console

  2. Configure the Provider

    provider := anthropic.NewProvider("your-anthropic-api-key")
    provider.SetDefaultModel("claude-3-haiku-20240307")  // or claude-3-sonnet/opus
    
    // Optional rate limiting configuration
    provider.WithRateLimit(40, 80000) // 40 requests/min, 80,000 tokens/min
    
    // Optional retry configuration
    provider.WithRetryConfig(3, 2*time.Second) // 3 retries with exponential backoff

LM Studio Setup

To use the LM Studio provider:

  1. Install LM Studio

    • Download from lmstudio.ai
    • Install and run the application
  2. Load a Model

    • Download a model in LM Studio (such as Gemma-3-4B-It, Llama 3, or other compatible models)
    • Load the model
  3. Start the Server

    • Start LM Studio's local server from the server tab (it listens on http://127.0.0.1:1234 by default)

  4. Configure the Provider

    provider := lmstudio.NewProvider()
    provider.SetBaseURL("http://127.0.0.1:1234/v1")
    provider.SetDefaultModel("gemma-3-4b-it") // Replace with your model

🧩 Key Components

Agent

The Agent is the core component that encapsulates the LLM with instructions, tools, and other configuration.

// Create a new agent
assistant := agent.NewAgent("Assistant")
assistant.SetSystemInstructions("You are a helpful assistant.")
assistant.WithModel("gemma-3-4b-it")
assistant.WithTools(tool1, tool2) // Add multiple tools at once

Runner

The Runner executes agents, handling the agent loop, tool calls, and handoffs.

// Create a runner
r := runner.NewRunner()
r.WithDefaultProvider(provider)

// Run the agent
result, err := r.RunSync(assistant, &runner.RunOptions{
    Input:    "Hello, world!",
    MaxTurns: 10, // Optional: limit the number of turns
})

Tools

Tools allow agents to perform actions using your Go functions.

// Create a function tool
weatherTool := tool.NewFunctionTool(
    "get_weather",
    "Get the weather for a city",
    func(ctx context.Context, params map[string]interface{}) (interface{}, error) {
        // Validate the parameter instead of panicking on a bad type
        city, ok := params["city"].(string)
        if !ok {
            return nil, fmt.Errorf("city parameter must be a string")
        }
        return fmt.Sprintf("The weather in %s is sunny.", city), nil
    },
).WithSchema(map[string]interface{}{
    "type": "object",
    "properties": map[string]interface{}{
        "city": map[string]interface{}{
            "type": "string",
            "description": "The city to get weather for",
        },
    },
    "required": []string{"city"},
})

Model Providers

Model providers allow you to use different LLM providers.

// Create a provider for OpenAI
openaiProvider := openai.NewProvider("your-openai-api-key")
openaiProvider.SetDefaultModel("gpt-4")

// Create a provider for Anthropic Claude
anthropicProvider := anthropic.NewProvider("your-anthropic-api-key")
anthropicProvider.SetDefaultModel("claude-3-haiku-20240307")

// Create a provider for LM Studio
lmStudioProvider := lmstudio.NewProvider()
lmStudioProvider.SetBaseURL("http://127.0.0.1:1234/v1")
lmStudioProvider.SetDefaultModel("gemma-3-4b-it")

// Set a provider as the default provider
r := runner.NewRunner()
r.WithDefaultProvider(openaiProvider) // or anthropicProvider or lmStudioProvider

🔧 Advanced Features

Multi-Agent Workflows

Create specialized agents that collaborate on complex tasks:

// Create specialized agents
mathAgent := agent.NewAgent("Math Agent")
mathAgent.SetModelProvider(provider)
mathAgent.WithModel("gemma-3-4b-it")
mathAgent.SetSystemInstructions("You are a specialized math agent.")
mathAgent.WithTools(calculatorTool)

weatherAgent := agent.NewAgent("Weather Agent")
weatherAgent.SetModelProvider(provider)
weatherAgent.WithModel("gemma-3-4b-it")
weatherAgent.SetSystemInstructions("You provide weather information.")
weatherAgent.WithTools(weatherTool)

// Create a frontend agent that coordinates tasks
frontendAgent := agent.NewAgent("Frontend Agent")
frontendAgent.SetModelProvider(provider)
frontendAgent.WithModel("gemma-3-4b-it")
frontendAgent.SetSystemInstructions(`You coordinate requests by delegating to specialized agents.
For math calculations, delegate to the Math Agent.
For weather information, delegate to the Weather Agent.`)
frontendAgent.WithHandoffs(mathAgent, weatherAgent)

// Run the frontend agent (r is the *runner.Runner created earlier)
result, err := r.RunSync(frontendAgent, &runner.RunOptions{
    Input:    "What is 42 divided by 6 and what's the weather in Paris?",
    MaxTurns: 20,
})
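
The calculatorTool and weatherTool referenced above are ordinary function tools. As a minimal sketch, a calculator tool built with the same tool.NewFunctionTool API might look like this (the schema and supported operations are illustrative, not taken from the example):

// calculatorTool evaluates a basic arithmetic operation on two numbers
calculatorTool := tool.NewFunctionTool(
    "calculator",
    "Perform a basic arithmetic operation on two numbers",
    func(ctx context.Context, params map[string]interface{}) (interface{}, error) {
        a, aOK := params["a"].(float64)
        b, bOK := params["b"].(float64)
        op, opOK := params["op"].(string)
        if !aOK || !bOK || !opOK {
            return nil, fmt.Errorf("expected numeric a and b and a string op")
        }
        switch op {
        case "add":
            return a + b, nil
        case "subtract":
            return a - b, nil
        case "multiply":
            return a * b, nil
        case "divide":
            if b == 0 {
                return nil, fmt.Errorf("division by zero")
            }
            return a / b, nil
        }
        return nil, fmt.Errorf("unsupported op: %s", op)
    },
).WithSchema(map[string]interface{}{
    "type": "object",
    "properties": map[string]interface{}{
        "a":  map[string]interface{}{"type": "number"},
        "b":  map[string]interface{}{"type": "number"},
        "op": map[string]interface{}{"type": "string", "description": "One of add, subtract, multiply, divide"},
    },
    "required": []string{"a", "b", "op"},
})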

See the complete example in examples/multi_agent_example.

Bidirectional Agent Flow

Create agents that can hand off tasks and receive results back

Bidirectional agent flow allows agents to delegate tasks to other agents and receive results back once the tasks are complete. This enables more complex workflows with proper task context management.

// Create specialized agents
orchestratorAgent := agent.NewAgent("Orchestrator")
orchestratorAgent.SetModelProvider(provider)
orchestratorAgent.WithModel("gpt-4")
orchestratorAgent.SetSystemInstructions("You coordinate tasks and analyze results.")

workerAgent := agent.NewAgent("Worker")
workerAgent.SetModelProvider(provider)
workerAgent.WithModel("gpt-3.5-turbo")
workerAgent.SetSystemInstructions("You process data and return results.")
workerAgent.WithTools(processingTool)

// Set up bidirectional handoffs
orchestratorAgent.WithHandoffs(workerAgent)
workerAgent.WithHandoffs(orchestratorAgent)  // Allow worker to return to orchestrator

// Run the orchestrator agent (r is the *runner.Runner created earlier)
result, err := r.RunSync(orchestratorAgent, &runner.RunOptions{
    Input:    "Analyze this data: [complex data]",
    MaxTurns: 10,
})

Key components of bidirectional flow:

  • TaskID: Unique identifier for tracking tasks across agents
  • ReturnToAgent: Specifies which agent to return to after task completion
  • IsTaskComplete: Flag indicating whether the task is complete
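
As a rough illustration only, these fields might be grouped as in the sketch below; the struct is hypothetical, not the SDK's actual type (see the linked example for the real API):

// Hypothetical grouping of handoff task metadata; the field names come from
// the list above, but the struct itself is illustrative.
type taskHandoff struct {
    TaskID         string // unique identifier for tracking the task across agents
    ReturnToAgent  string // which agent to return to after the task completes
    IsTaskComplete bool   // whether the task is complete
}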

See the complete example in examples/bidirectional_flow_example.

Tracing

Debug your agent workflows with tracing:

// Run with tracing enabled (r is the *runner.Runner created earlier)
result, err := r.RunSync(assistant, &runner.RunOptions{
    Input: "Hello, world!",
    RunConfig: &runner.RunConfig{
        TracingDisabled: false,
        TracingConfig: &runner.TracingConfig{
            WorkflowName: "my_workflow",
        },
    },
})

Structured Output

Parse responses into Go structs:

// Define an output type
type WeatherReport struct {
    City        string  `json:"city"`
    Temperature float64 `json:"temperature"`
    Condition   string  `json:"condition"`
}

// Create an agent with structured output
weatherAgent := agent.NewAgent("Weather Agent")
weatherAgent.SetSystemInstructions("You provide weather reports")
weatherAgent.SetOutputType(reflect.TypeOf(WeatherReport{}))
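
To consume the structured result, decode the final output into the struct. A minimal sketch, assuming result.FinalOutput carries the report as JSON text (encoding/json is from the standard library; r is the *runner.Runner created earlier):

// Run the agent and decode the structured result
result, err := r.RunSync(weatherAgent, &runner.RunOptions{
    Input: "What's the weather in Tokyo?",
})
if err != nil {
    log.Fatalf("Error running agent: %v", err)
}

var report WeatherReport
if err := json.Unmarshal([]byte(fmt.Sprint(result.FinalOutput)), &report); err != nil {
    log.Fatalf("Error decoding output: %v", err)
}
fmt.Printf("%s: %.1f, %s\n", report.City, report.Temperature, report.Condition)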

Streaming

Get real-time streaming responses:

// Run the agent with streaming (r is the *runner.Runner created earlier)
streamedResult, err := r.RunStreaming(context.Background(), assistant, &runner.RunOptions{
    Input: "Hello, world!",
})
if err != nil {
    log.Fatalf("Error running agent: %v", err)
}

// Process streaming events (the event types are defined in the pkg/model package)
for event := range streamedResult.Stream {
    switch event.Type {
    case model.StreamEventTypeContent:
        fmt.Print(event.Content)
    case model.StreamEventTypeToolCall:
        fmt.Printf("\nCalling tool: %s\n", event.ToolCall.Name)
    case model.StreamEventTypeDone:
        fmt.Println("\nDone!")
    }
}
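
Because RunStreaming takes a context.Context, you can bound a long-running stream with a standard-library timeout; a small sketch:

// Cancel the stream automatically after 30 seconds
ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
defer cancel()

streamedResult, err := r.RunStreaming(ctx, assistant, &runner.RunOptions{
    Input: "Hello, world!",
})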

OpenAI Tool Definitions

Work with OpenAI-compatible tool definitions:

// Auto-generate OpenAI-compatible tool definitions from Go functions
getCurrentTimeTool := tool.NewFunctionTool(
    "get_current_time",
    "Get the current time in a specified format",
    func(ctx context.Context, params map[string]interface{}) (interface{}, error) {
        return time.Now().Format(time.RFC3339), nil
    },
)

// Convert it to OpenAI format (handled automatically when added to an agent)
openAITool := tool.ToOpenAITool(getCurrentTimeTool)

// Add an OpenAI-compatible tool definition directly to an agent
myAgent := agent.NewAgent("My Agent")
myAgent.AddToolFromDefinition(openAITool)

// Add multiple tool definitions at once
toolDefinitions := []map[string]interface{}{
    tool.ToOpenAITool(tool1),
    tool.ToOpenAITool(tool2),
}

myAgent.AddToolsFromDefinitions(toolDefinitions)

Workflow State Management

Manage state between agent executions:

// Create a state store
stateStore := mocks.NewInMemoryStateStore()

// Create workflow configuration
workflowConfig := &runner.WorkflowConfig{
    RetryConfig: &runner.RetryConfig{
        MaxRetries:         2,
        RetryDelay:         time.Second,
        RetryBackoffFactor: 2.0,
    },
    StateManagement: &runner.StateManagementConfig{
        PersistState:        true,
        StateStore:          stateStore,
        CheckpointFrequency: time.Second * 5,
    },
    ValidationConfig: &runner.ValidationConfig{
        PreHandoffValidation: []runner.ValidationRule{
            {
                Name:         "StateValidation",
                Validate:     func(data interface{}) (bool, error) {
                    state, ok := data.(*runner.WorkflowState)
                    return ok && state != nil, nil
                },
                ErrorMessage: "Invalid workflow state",
                Severity:     runner.ValidationWarning,
            },
        },
    },
}

// Create the workflow runner (baseRunner is a *runner.Runner created earlier)
workflowRunner := runner.NewWorkflowRunner(baseRunner, workflowConfig)

// Initialize workflow state
state := &runner.WorkflowState{
    CurrentPhase:    "",
    CompletedPhases: make([]string, 0),
    Artifacts:       make(map[string]interface{}),
    LastCheckpoint:  time.Now(),
    Metadata:        make(map[string]interface{}),
}

// Run workflow with state management
result, err := workflowRunner.RunWorkflow(context.Background(), agent, &runner.RunOptions{
    MaxTurns:       10,
    RunConfig:      runConfig,
    WorkflowConfig: workflowConfig,
    Input:          state,
})

See the complete example in examples/workflow_example.

📚 Examples

The repository includes several examples to help you get started:

  • Multi-Agent Example - Demonstrates how to create a system of specialized agents that can collaborate on complex tasks using a local LLM via LM Studio
  • OpenAI Example - Shows how to use the OpenAI provider with function calling capabilities
  • OpenAI Multi-Agent Example - Illustrates multi-agent functionality using OpenAI models, with proper tool calling and streaming support
  • Anthropic Example - Demonstrates how to use the Anthropic Claude API with tool calling capabilities
  • Anthropic Handoff Example - Shows how to implement agent handoffs with Anthropic Claude models
  • Bidirectional Flow Example - Demonstrates bidirectional agent communication with task delegation and return handoffs
  • TypeScript Code Review Example - Shows a practical application with specialized code review agents that collaborate using bidirectional handoffs
  • Workflow Example - Demonstrates advanced workflow management with state persistence between agent executions

Running Examples with a Local LLM

  1. Make sure LM Studio is running with a server at http://127.0.0.1:1234/v1
  2. Navigate to the example directory
    cd examples/multi_agent_example # or any other example using LM Studio
  3. Run the example
    go run .

Running Examples with OpenAI

  1. Set your OpenAI API key as an environment variable
    export OPENAI_API_KEY=your-api-key
  2. Navigate to the example directory
    cd examples/openai_example # or openai_multi_agent_example
  3. Run the example
    go run .

Running Examples with Anthropic

  1. Set your Anthropic API key as an environment variable
    export ANTHROPIC_API_KEY=your-anthropic-api-key
  2. Navigate to the example directory
    cd examples/anthropic_example # or anthropic_handoff_example
  3. Run the example
    go run .

Debugging

You can enable debug output for various components by setting the appropriate environment variable:

For general debugging (runner and core components):

DEBUG=1 go run examples/bidirectional_flow_example/main.go

For provider-specific debugging:

# OpenAI provider debugging
OPENAI_DEBUG=1 go run examples/openai_multi_agent_example/main.go

# Anthropic provider debugging
ANTHROPIC_DEBUG=1 go run examples/anthropic_example/main.go

# LM Studio provider debugging
LMSTUDIO_DEBUG=1 go run examples/multi_agent_example/main.go

You can also combine multiple debug flags:

DEBUG=1 OPENAI_DEBUG=1 go run examples/typescript_code_review_example/main.go

πŸ› οΈ Development

Development setup and workflows

Requirements

  • Go 1.23 or later

Setup

  1. Clone the repository
  2. Run the setup script to install required tools:

    ./scripts/ci_setup.sh

Development Workflow

The project includes several scripts to help with development:

  • ./scripts/lint.sh: Runs formatting and linting checks
  • ./scripts/security_check.sh: Runs security checks with gosec
  • ./scripts/check_all.sh: Runs all checks including tests
  • ./scripts/version.sh: Helps with versioning (run with bump argument to bump version)

Running Tests

Tests are located in the test directory and can be run with:

cd test && make test

Or use the check_all script to run all checks including tests:

./scripts/check_all.sh

CI/CD

The project uses GitHub Actions for CI/CD. The workflow is defined in .github/workflows/ci.yml.

👥 Contributing

Contributions are welcome! Please see CONTRIBUTING.md for details.

📄 License

This project is licensed under the MIT License - see the LICENSE file for details.

πŸ™ Acknowledgements

This project is inspired by OpenAI's Assistants API and OpenAI's Python Agent SDK, with the goal of providing similar capabilities in Go while being compatible with local LLMs.

☁️ Cloud Support

For production deployments, we're developing a fully managed cloud service. Join our waitlist to be among the first to access:

  • Managed Agent Deployment - Deploy agents without infrastructure hassle
  • Horizontal Scaling - Handle any traffic volume
  • Observability & Monitoring - Track performance and usage
  • Cost Optimization - Pay only for what you use
  • Enterprise Security - SOC2 compliance and data protection

Sign up for the Cloud Waitlist β†’
