Add additional MCP examples #430


Merged
merged 6 commits on Feb 12, 2025
6 changes: 3 additions & 3 deletions docs/introduction/ide-usage.mdx
@@ -59,8 +59,8 @@ it will allow an agent to:
- improve a codemod
- get setup instructions

### Configuration
#### Usage with Cline:
### IDE Configuration
#### Cline
Add this to your cline_mcp_settings.json:
```
{
@@ -79,7 +79,7 @@ Add this to your cline_mcp_settings.json:
```


#### Usage with Cursor:
#### Cursor:
Under the `Settings` > `Feature` > `MCP Servers` section, click "Add New MCP Server" and add the following:

```
3 changes: 2 additions & 1 deletion docs/mint.json
@@ -100,7 +100,8 @@
"tutorials/sqlalchemy-1.6-to-2.0",
"tutorials/fixing-import-loops-in-pytorch",
"tutorials/python2-to-python3",
"tutorials/flask-to-fastapi"
"tutorials/flask-to-fastapi",
"tutorials/build-mcp"
]
},
{
75 changes: 75 additions & 0 deletions docs/tutorials/build-mcp.mdx
@@ -0,0 +1,75 @@
---
title: "Building a Model Context Protocol server with Codegen"
sidebarTitle: "MCP Server"
icon: "boxes-stacked"
iconType: "solid"
---

Learn how to build a Model Context Protocol (MCP) server that enables AI models to understand and manipulate code using Codegen's powerful tools.

This guide walks you through creating an MCP server that provides semantic code search.

<Info>View the full code in our [examples repository](https://github.com/codegen-sh/codegen-sdk/tree/develop/src/codegen/extensions/mcp)</Info>


## Setup
Install the MCP Python library:
```
uv pip install mcp
```

## Step 1: Setting Up Your MCP Server

First, let's create a basic MCP server using Codegen's MCP tools:

server.py
```python
from codegen import Codebase
from mcp.server.fastmcp import FastMCP
from typing import Annotated
# Initialize the codebase
codebase = Codebase.from_repo(".")

# create the MCP server using FastMCP
mcp = FastMCP(name="demo-mcp", instructions="Use this server for semantic search of codebases")


if __name__ == "__main__":
    # Initialize and run the server
    print("Starting demo MCP server...")
    mcp.run(transport="stdio")

```

## Step 2: Create the search tool

Let's implement the semantic search tool.

server.py
```python
from codegen.extensions.tools.semantic_search import semantic_search

....

@mcp.tool("codebase_semantic_search", "search codebase with the provided query")
def search(query: Annotated[str, "search query to run against codebase"]):
    codebase = Codebase("provide location to codebase", programming_language="provide codebase Language")
    # use the semantic search tool from codegen.extensions.tools OR write your own
    results = semantic_search(codebase=codebase, query=query)
    return results

....
```

## Run Your MCP Server

You can run and inspect your MCP server with:

```
mcp dev server.py
```

If you'd like to integrate this into an IDE, check out this [setup guide](/introduction/ide-usage#mcp-server-setup).
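For reference, IDE clients typically register a stdio MCP server with a small JSON entry along these lines (the command and path here are hypothetical placeholders; see the setup guide for the exact fields your IDE expects):

```
{
  "mcpServers": {
    "demo-mcp": {
      "command": "uv",
      "args": ["run", "python", "server.py"]
    }
  }
}
```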

And that's a wrap! Chime in on our [community Slack](https://community.codegen.com) if you have questions or ideas for additional MCP tools/capabilities.
1 change: 1 addition & 0 deletions pyproject.toml
@@ -68,6 +68,7 @@ dependencies = [
"langchain_core",
"langchain_openai",
"numpy>=2.2.2",
"mcp[cli]",
]

license = { text = "Apache-2.0" }
63 changes: 63 additions & 0 deletions src/codegen/cli/mcp/agent/docs_expert.py
@@ -0,0 +1,63 @@
"""Demo implementation of an agent with Codegen tools."""

from langchain_core.messages import BaseMessage
from langchain_core.runnables.history import RunnableWithMessageHistory

from codegen.extensions.langchain.agent import create_codebase_agent
from codegen.sdk.core.codebase import Codebase

AGENT_INSTRUCTIONS = """
Instruction Set for Codegen SDK Expert Agent

Overview:
This instruction set is designed for an agent that is an expert on the Codegen SDK, specifically the Python library. The agent will be asked questions about the SDK, including classes, utilities, properties, and how to accomplish tasks using the SDK. The goal is to provide helpful responses that assist users in achieving their tasks with the SDK.

Key Responsibilities:
1. Expertise in Codegen SDK:
- The agent is an expert on the Codegen SDK, with a deep understanding of its components and functionalities.
- It should be able to provide detailed explanations of classes, utilities, and properties defined in the SDK.

2. Answering Questions:
- The agent will be asked questions about the Codegen SDK, such as:
- "Find all imports"
- "How do I add an import for a symbol?"
- "What is a statement object?"
- Responses should be clear, concise, and directly address the user's query.

3. Task-Oriented Responses:
- The user is typically accomplishing a task using the Codegen SDK.
- Responses should be helpful toward that goal, providing guidance and solutions that facilitate task completion.

4. Python Library Focus:
- Assume that questions are related to the Codegen SDK Python library.
- Provide Python-specific examples and explanations when applicable.

Use the provided agent tools to look up additional information if needed.
By following this instruction set, the agent will be well-equipped to assist users in effectively utilizing the Codegen SDK for their projects.
"""


def create_sdk_expert_agent(
    codebase: Codebase,
    model_name: str = "gpt-4o",
    temperature: float = 0,
    verbose: bool = True,
) -> RunnableWithMessageHistory:
    """Create an agent with all codebase tools.

    Args:
        codebase: The codebase to operate on
        model_name: Name of the model to use (default: gpt-4o)
        temperature: Model temperature (default: 0)
        verbose: Whether to print agent's thought process (default: True)

    Returns:
        Initialized agent with message history
    """
    # Seed the agent's chat history with the expert instruction set
    system_message: BaseMessage = BaseMessage(content=AGENT_INSTRUCTIONS, type="SYSTEM")

    agent = create_codebase_agent(chat_history=[system_message], codebase=codebase, model_name=model_name, temperature=temperature, verbose=verbose)

    return agent
15 changes: 15 additions & 0 deletions src/codegen/cli/mcp/server.py
@@ -3,8 +3,10 @@
from mcp.server.fastmcp import Context, FastMCP

from codegen.cli.api.client import RestAPI
from codegen.cli.mcp.agent.docs_expert import create_sdk_expert_agent
from codegen.cli.mcp.resources.system_prompt import SYSTEM_PROMPT
from codegen.cli.mcp.resources.system_setup_instructions import SETUP_INSTRUCTIONS
from codegen.sdk.core.codebase import Codebase
from codegen.shared.enums.programming_language import ProgrammingLanguage

# Initialize FastMCP server
@@ -39,6 +41,19 @@ def get_service_config() -> dict[str, Any]:
# ----- TOOLS -----


@mcp.tool()
def ask_codegen_sdk(query: Annotated[str, "Ask a question to an expert agent for details about any aspect of the codegen sdk core set of classes and utilities"]):
    codebase = Codebase("../../sdk/core")
    agent = create_sdk_expert_agent(codebase=codebase)

    result = agent.invoke(
        {"input": query},
        config={"configurable": {"session_id": "demo"}},
    )

    return result["output"]


@mcp.tool()
def generate_codemod(
title: Annotated[str, "The title of the codemod (hyphenated)"],
54 changes: 0 additions & 54 deletions src/codegen/extensions/langchain/__init__.py
Contributor Author
remove this file because it causes import loops if you import any of the tools elsewhere

This file was deleted.

72 changes: 68 additions & 4 deletions src/codegen/extensions/langchain/agent.py
@@ -1,9 +1,10 @@
"""Demo implementation of an agent with Codegen tools."""

from langchain import hub
from langchain.agents import AgentExecutor
from langchain.agents.openai_functions_agent.base import OpenAIFunctionsAgent
from langchain_community.chat_message_histories import ChatMessageHistory
from langchain.hub import pull
from langchain_core.chat_history import InMemoryChatMessageHistory
from langchain_core.messages import BaseMessage
from langchain_core.runnables.history import RunnableWithMessageHistory
from langchain_openai import ChatOpenAI

@@ -30,6 +31,7 @@ def create_codebase_agent(
    model_name: str = "gpt-4o",
    temperature: float = 0,
    verbose: bool = True,
    chat_history: list[BaseMessage] = [],
) -> RunnableWithMessageHistory:
"""Create an agent with all codebase tools.

@@ -65,7 +67,7 @@ ]
]

    # Get the prompt to use
    prompt = hub.pull("hwchase17/openai-functions-agent")
    prompt = pull("hwchase17/openai-functions-agent")

    # Create the agent
    agent = OpenAIFunctionsAgent(
@@ -82,7 +84,69 @@ )
)

    # Create message history handler
    message_history = ChatMessageHistory()
    message_history = InMemoryChatMessageHistory(messages=chat_history)

    # Wrap with message history
    return RunnableWithMessageHistory(
        agent_executor,
        lambda session_id: message_history,
        input_messages_key="input",
        history_messages_key="chat_history",
    )


def create_codebase_inspector_agent(
    codebase: Codebase,
    model_name: str = "gpt-4o",
    temperature: float = 0,
    verbose: bool = True,
    chat_history: list[BaseMessage] = [],
) -> RunnableWithMessageHistory:
    """Create an inspector agent with codebase tools.

    Args:
        codebase: The codebase to operate on
        model_name: Name of the model to use (default: gpt-4o)
        temperature: Model temperature (default: 0)
        verbose: Whether to print agent's thought process (default: True)

    Returns:
        Initialized agent with message history
    """
    # Initialize language model
    llm = ChatOpenAI(
        model_name=model_name,
        temperature=temperature,
    )

    # Get all codebase tools
    tools = [
        ViewFileTool(codebase),
        ListDirectoryTool(codebase),
        SearchTool(codebase),
        DeleteFileTool(codebase),
        RevealSymbolTool(codebase),
    ]

    # Get the prompt to use
    prompt = pull("codegen-agent/codebase-agent")

    # Create the agent
    agent = OpenAIFunctionsAgent(
        llm=llm,
        tools=tools,
        prompt=prompt,
    )

    # Create the agent executor
    agent_executor = AgentExecutor(
        agent=agent,
        tools=tools,
        verbose=verbose,
    )

    # Create message history handler
    message_history = InMemoryChatMessageHistory(messages=chat_history)

    # Wrap with message history
    return RunnableWithMessageHistory(