docs: act-via-code #105

Merged 9 commits on Jan 26, 2025.
2 changes: 1 addition & 1 deletion .gitignore
graph-sitter-types/out/**
graph-sitter-types/typings/**
coverage.json
tests/integration/verified_codemods/codemod_data/repo_commits.json

.codegen/*
50 changes: 17 additions & 33 deletions docs/blog/act-via-code.mdx
iconType: "solid"
description: "The path to advanced code manipulation agents"
---

<Frame caption="Voyager (Jim Fan)">
<Frame caption="Voyager (2023) solved agentic tasks with code execution">
<img src="/images/nether-portal.png" />
</Frame>


# Act via Code

Two and a half years since the launch of the GPT-3 API, code assistants have emerged as the most powerful and practically useful applications of LLMs. The rapid adoption of AI-powered IDEs and prototype builders isn't surprising — code is structured, deterministic, and rich with patterns, making it an ideal domain for machine learning. As model capabilities continue to scale, we're seeing compounding improvements in code understanding and generation.

Yet there's a striking gap between understanding and action. Today's AI agents can analyze enterprise codebases and propose sophisticated improvements—eliminating tech debt, untangling dependencies, improving modularity. But ask them to actually implement these changes across millions of lines of code, and they hit a wall. Their ceiling isn't intelligence—it's the ability to safely and reliably execute large-scale modifications on real, enterprise codebases.

The bottleneck, in other words, is tooling. By giving AI models the ability to write and execute code that modifies code, we're about to unlock an entire class of tasks that agents already understand but can't yet perform. Code execution environments are the most expressive tool we could offer an agent, enabling composition, abstraction, and systematic manipulation of complex systems. When paired with ever-improving language models, this will drive another step function improvement in AI capabilities.

## Beating Minecraft with Code Execution

In mid-2023, a research project called [Voyager](https://voyager.minedojo.org) made waves: it effectively solved Minecraft, outperforming the prior SOTA several times over. This was a massive breakthrough, as previous reinforcement learning systems had struggled for years with even basic Minecraft tasks.

While the AI community was focused on scaling intelligence, Voyager demonstrated something more fundamental: the right tools can unlock entirely new tiers of capability. The same GPT-4 model that struggled with Minecraft using standard agentic frameworks (like [ReAct](https://klu.ai/glossary/react-agent-model)) achieved remarkable results when allowed to write and execute code. This wasn't about raw intelligence—it was about giving the agent a more expressive way to act.

<Frame>
<img src="/images/voyager-performance.png" />
</Frame>

The breakthrough came from a simple yet powerful insight: let the AI write code. Instead of limiting the agent to primitive "tools," Voyager allowed GPT-4 to write and execute [JS programs](https://github.com/MineDojo/Voyager/tree/main/skill_library/trial2/skill/code) that controlled Minecraft actions through a clean API:
The breakthrough came from a simple yet powerful insight: let the AI write code. Instead of limiting the agent to primitive "tools," Voyager allowed GPT-4 to write and execute [JS programs](https://github.com/MineDojo/Voyager/tree/main/skill_library/trial2/skill/code) that controlled Minecraft actions through a clean API.

```javascript
// Example "action program" from Voyager, 2023
// written by gpt-4
async function chopSpruceLogs(bot) {
  const spruceLogCount = bot.inventory.count(mcData.itemsByName.spruce_log.id);
  const logsToMine = 3 - spruceLogCount;
  // ... (middle of the program is collapsed in the diff; it locates and mines
  // nearby spruce logs until the inventory holds at least three) ...
}
```

This approach transformed the agent's capabilities. Rather than being constrained to atomic actions like `equipItem(...)`, it could create higher-level operations like [`craftShieldWithFurnace()`](https://github.com/MineDojo/Voyager/blob/main/skill_library/trial2/skill/code/craftShieldWithFurnace.js) through composing JS APIs. Furthermore, Wang et al. implemented a memory mechanism, in which successful "action programs" could later be recalled, copied, and built upon, effectively enabling the agent to accumulate experience.
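
To make the memory mechanism concrete, here is a minimal sketch of a Voyager-style skill library. It is written in Python for brevity rather than the paper's actual JavaScript pipeline, and the `embed` callable and all names are assumptions, not Voyager's real implementation:

```python
import numpy as np

class SkillLibrary:
    """Sketch of Voyager-style skill memory: store successful action programs,
    retrieve the most relevant ones for a new task by embedding similarity."""

    def __init__(self, embed):
        self.embed = embed   # assumed: any text-embedding model, text -> np.ndarray
        self.skills = []     # (description, program_source, embedding) triples

    def add(self, description: str, program_source: str) -> None:
        # Called after a program runs successfully in the environment.
        self.skills.append((description, program_source, self.embed(description)))

    def retrieve(self, task: str, k: int = 3) -> list[str]:
        # Rank stored programs by cosine similarity to the new task description.
        q = self.embed(task)

        def cosine(v):
            return float(np.dot(q, v) / (np.linalg.norm(q) * np.linalg.norm(v)))

        ranked = sorted(self.skills, key=lambda s: cosine(s[2]), reverse=True)
        return [source for _, source, _ in ranked[:k]]  # programs to reuse or adapt
```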

<Frame>
<img src="/images/voyager-retrieval.png" />
Expand All @@ -56,23 +56,21 @@ As the Voyager authors noted:

## Code is an Ideal Action Space

The implications of code as an action space extend far beyond gaming. This architectural insight — letting AI act through code rather than atomic commands — will lead to a step change in the capabilities of AI systems. Nowhere is this more apparent than in software engineering, where agents already understand complex transformations but lack the tools to execute them effectively.

When an agent writes code, it gains several critical advantages over traditional atomic tools (see the sketch after this list):

- **Composability**: Agents can build their own tools by combining simpler operations. This aligns perfectly with LLMs' demonstrated ability to compose and interpolate between examples to create novel solutions.

- **Constrained Action Space**: Well-designed APIs act as guardrails, making invalid operations impossible to express. The type system becomes a powerful tool for preventing entire classes of errors before they happen.

- **Objective Feedback**: Code execution provides immediate, unambiguous feedback through stack traces and error messages—not just confidence scores. This concrete error signal is invaluable for learning.

- **Natural Collaboration**: Programs are a shared language between humans and agents. Code explicitly encodes reasoning in a reviewable format, making actions transparent, debuggable, and easily re-runnable.
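
Here is a minimal sketch of these properties, using an entirely hypothetical API rather than any real library's interface: the primitives constrain what the agent can express, failures surface as exceptions, and the agent composes primitives into higher-level skills.

```python
# Hypothetical action API: the agent can only express operations defined here.
class CodebaseActions:
    def __init__(self, files: dict[str, str]):
        self.files = files  # path -> source text

    def read(self, path: str) -> str:
        return self.files[path]  # KeyError: immediate, objective feedback

    def replace(self, path: str, old: str, new: str) -> None:
        source = self.files[path]
        if old not in source:
            raise ValueError(f"{old!r} not found in {path}")  # unambiguous error signal
        self.files[path] = source.replace(old, new)

# Composability: an agent-written "skill" built from the primitives above.
def rename_everywhere(actions: CodebaseActions, old: str, new: str) -> None:
    for path in list(actions.files):
        if old in actions.read(path):
            actions.replace(path, old, new)
```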

## For Software Engineering

Software engineering tasks are inherently programmatic and graph-based — dependency analysis, refactors, control flow analysis, etc. Yet today's AI agents interface with code primarily through string manipulation, missing the rich structure that developers and their tools rely on. By giving agents APIs that operate on the codebase's underlying graph structure rather than raw text, we can unlock a new tier of capabilities. Imagine agents that can rapidly traverse dependency trees, analyze control flow, and perform complex refactors while maintaining perfect awareness of the codebase's structure.

Consider how a developer thinks about refactoring: it's rarely about direct text manipulation. Instead, we think in terms of high-level operations: "move this function," "rename this variable everywhere," "split this module." These operations can be encoded into a powerful Python API:

Expand All @@ -84,17 +82,3 @@ for component in codebase.jsx_components:
# powerful edit APIs that handle edge cases
component.rename(component.name + 'Page')
```

5 changes: 5 additions & 0 deletions docs/blog/codemod-frameworks.mdx
icon: "code-compare"
iconType: "solid"
---

# Others to add
- [Abracadabra](https://github.com/nicoespeon/abracadabra)
- [Rope](https://rope.readthedocs.io/en/latest/overview.html#rope-overview)
- [Grit](https://github.com/getgrit/gritql)

Code transformation tools have evolved significantly over the years, each offering unique approaches to programmatic code manipulation. Let's explore the strengths and limitations of major frameworks in this space.

## Python's AST Module
29 changes: 3 additions & 26 deletions docs/blog/posts.mdx
icon: "clock"
iconType: "solid"
---


<Update label="2024-01-24" description="Acting via Code">

## Act via Code

Why code as an action space will lead to a step function improvement in agent capabilities.

<Card
img="/images/voyager.png"
img="/images/nether-portal.png"
title="Act via Code"
href="https://codegen.com"
href="/blog/act-via-code"
/>

</Update>
6 changes: 3 additions & 3 deletions docs/building-with-codegen/at-a-glance.mdx
Learn how to use Codegen's core APIs to analyze and transform code.
Understand function call patterns and manipulate call sites.
</Card>
<Card
title="Imports & Exports"
title="Imports"
icon="file-import"
href="/building-with-codegen/imports-and-exports"
href="/building-with-codegen/imports"
>
Work with module imports and manage dependencies.
</Card>
<Card
title="Traversing the Call Graph"
180 changes: 180 additions & 0 deletions docs/building-with-codegen/exports.mdx
---
title: "The Export API"
sidebarTitle: "Exports"
icon: "file-export"
iconType: "solid"
---

The [Export](/api-reference/core/Export) API provides tools for managing exports and module boundaries in TypeScript codebases.

## Export Statements vs Exports

Similar to imports, Codegen provides two levels of abstraction for working with exports:

- [ExportStatement](/api-reference/core/ExportStatement) - Represents a complete export statement
- [Export](/api-reference/core/Export) - Represents individual exported symbols

```typescript
// One ExportStatement containing multiple Export objects
export { foo, bar as default, type User };
// Creates:
// - Export for 'foo'
// - Export for 'bar' as default
// - Export for 'User' as a type

// Direct exports create one ExportStatement per export
export const value = 42;
export function process() {}
```

You can access these through your file's collections:

```python
# Access all export statements
for stmt in file.export_statements:
    print(f"Statement: {stmt.source}")

    # Access individual exports in the statement
    for exp in stmt.exports:
        print(f" Export: {exp.name}")
```

<Note>
ExportStatement inherits from [Statement](/building-with-codegen/statements-and-code-blocks), providing operations like `remove()` and `insert_before()`. This is particularly useful when you want to manipulate the entire export declaration.
</Note>
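
For instance, statement-level operations let a codemod act on whole declarations at once. A small sketch (the `legacy` naming convention here is hypothetical, and this assumes `insert_before` accepts raw source text):

```python
# Remove export statements that only expose deprecated symbols,
# and annotate the ones we keep.
for stmt in file.export_statements:
    if stmt.exports and all(exp.name.startswith("legacy") for exp in stmt.exports):
        stmt.remove()  # drops the entire export declaration
    else:
        stmt.insert_before("// kept after the legacy cleanup")  # assumed raw-source API
```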

## Export Types

Codegen supports several types of exports:

```typescript
// Direct exports
export const value = 42; // Value export
export function myFunction() {} // Function export
export class MyClass {} // Class export
export type MyType = string; // Type export
export interface MyInterface {} // Interface export
export enum MyEnum {} // Enum export

// Re-exports
export { foo, bar } from './other-file'; // Named re-exports
export type { Type } from './other-file'; // Type re-exports
export * from './other-file'; // Wildcard re-exports
export * as utils from './other-file'; // Namespace re-exports

// Aliased exports
export { foo as foop }; // Basic alias
export { foo as default }; // Default export alias
export { bar as baz } from './other-file'; // Re-export with alias
```

## Working with Exports

The Export API provides methods to identify and filter exports:

```python
# Check export types
for exp in file.exports:
    if exp.is_type_export():
        print(f"Type export: {exp.name}")
    elif exp.is_default_export():
        print(f"Default export: {exp.name}")
    elif exp.is_wildcard_export():
        print(f"Wildcard export from: {exp.from_file.filepath}")

# Work with re-exports
for exp in file.exports:
    if exp.is_reexport():
        if exp.is_external_export:
            print(f"External re-export: {exp.name} from {exp.from_file.filepath}")
        else:
            print(f"Internal re-export: {exp.name}")
```

## Export Resolution

You can trace exports to their original symbols:

```python
for exp in file.exports:
    if exp.is_reexport():
        # Get original and current symbols
        current = exp.exported_symbol
        original = exp.resolved_symbol

        print(f"Re-exporting {original.name} from {exp.from_file.filepath}")
        print(f"Through: {' -> '.join(e.file.filepath for e in exp.export_chain)}")
```

## Common Operations

Here are common operations for working with exports:

```python
# Add new export
file.add_export("MyComponent")

# Add export with alias
file.add_export("MyComponent", alias="default")

# Convert to type export
export = file.get_export("MyType")
export.make_type_export()

# Remove export
export.remove() # Removes export but keeps symbol

# Update export properties
export.update(
    name="NewName",
    is_type=True,
    is_default=False
)
```

## Managing Re-exports

Common patterns for working with re-exports:

```python
# Create public API
index_file = codebase.get_file("index.ts")

# Re-export from internal files
for internal_file in codebase.files:
    if internal_file.name != "index":
        for symbol in internal_file.symbols:
            if symbol.is_public:
                index_file.add_export(
                    symbol,
                    from_file=internal_file
                )

# Convert default to named exports
for exp in file.exports:
    if exp.is_default_export():
        exp.make_named_export()

# Consolidate re-exports
from collections import defaultdict

file_exports = defaultdict(list)
for exp in file.exports:
    if exp.is_reexport():
        file_exports[exp.from_file].append(exp)

for from_file, exports in file_exports.items():
    if len(exports) > 1:
        # Create consolidated re-export
        names = [exp.name for exp in exports]
        file.add_export_from_source(
            f"export {{ {', '.join(names)} }} from '{from_file.filepath}'"
        )
        # Remove individual exports
        for exp in exports:
            exp.remove()
```

<Note>
When managing exports, consider the impact on your module's public API. Not all symbols that can be exported should be exported.
</Note>