File Context is a tool that helps developers work with their codebase in context: it provides a web interface for exploring, searching, and understanding your code, enhanced with AI capabilities. The system is designed to work with multiple AI providers, including Ollama, llama.cpp, and Together AI, giving you flexibility in choosing your preferred AI backend.
The project consists of two main components:
- Client - A frontend application built with React/TypeScript that provides a user interface for interacting with your codebase.
- File Context MCP (Main Control Program) - A backend service that handles file operations, AI processing, and serves the API for the client.
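To give a sense of how one backend can serve several AI providers, here is a minimal TypeScript sketch of the kind of provider abstraction such a service might use. The names (`ModelProvider`, `OllamaProvider`, `complete`) are illustrative assumptions, not the project's actual API; only the Ollama `/api/generate` endpoint and request shape reflect Ollama's documented interface.

```typescript
// Illustrative sketch only; the real backend may be structured differently.
interface ModelProvider {
  complete(prompt: string, model: string): Promise<string>;
}

// Example implementation for Ollama, which exposes a /api/generate endpoint.
class OllamaProvider implements ModelProvider {
  constructor(private baseUrl: string) {}

  async complete(prompt: string, model: string): Promise<string> {
    const res = await fetch(`${this.baseUrl}/api/generate`, {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ model, prompt, stream: false }),
    });
    const data = await res.json();
    return data.response; // Ollama returns the generated text in `response`
  }
}
```

A llama.cpp or Together AI provider would implement the same interface against its own HTTP API, which is what makes swapping backends through configuration straightforward.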
Key features:

- Interactive file explorer
- Code search and navigation
- AI-powered code understanding and analysis
- Support for multiple AI models, both local (Ollama, llama.cpp) and cloud-based (via Together AI)
Prerequisites:

- Docker and Docker Compose
- Node.js (LTS version recommended)
- A Together AI API key (optional, for cloud-based models)
- Ollama or llama.cpp running locally (optional, for local models)
To run the full stack with Docker Compose:

- Clone the repository:

      git clone https://github.com/yourusername/file-context.git
      cd file-context
- Create a `.env` file in the `file-context-mcp` directory:

      TOGETHER_API_KEY=your_together_api_key               # Optional
      OLLAMA_BASE_URL=http://host.docker.internal:11434    # If using Ollama locally
      LLAMA_CPP_BASE_URL=http://host.docker.internal:8080  # If using llama.cpp locally
      MODEL_NAME=llama3.2                                   # Default model to use
- Start the application using Docker Compose:

      docker-compose up
- Access the application in your browser at http://localhost:5173
If you prefer to run the components separately:
Backend:

- Navigate to the backend directory:

      cd file-context-mcp
- Install dependencies:

      npm install
- Create a `.env` file with the configuration mentioned above.
- Build and start the server:

      npm run build
      npm start
The API server will start on port 3001 (or as configured in your .env file).
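Once the server is up, you can confirm it is listening with a quick script like the one below. This is a hypothetical smoke test, not part of the project; it only checks that something responds on the configured port and assumes Node 18+ for the global `fetch`.

```typescript
// check-server.ts — hypothetical reachability check for the backend.
const apiUrl = process.env.API_URL ?? "http://localhost:3001";

fetch(apiUrl)
  .then((res) => console.log(`Backend reachable, HTTP status ${res.status}`))
  .catch((err) => console.error(`Backend not reachable at ${apiUrl}:`, err));
```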
Client:

- Navigate to the client directory:

      cd client
- Install dependencies:

      npm install
- Start the development server:

      npm run dev
The client will be available at http://localhost:5173.
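Port 5173 is Vite's default dev-server port. If it is already in use, a `vite.config.ts` along these lines lets you change it; this is a sketch assuming a standard Vite + React setup, and the project's actual configuration may differ.

```typescript
// vite.config.ts — minimal sketch assuming a standard Vite + React project.
import { defineConfig } from "vite";
import react from "@vitejs/plugin-react";

export default defineConfig({
  plugins: [react()],
  server: {
    port: 5173, // Vite's default; change if the port is taken
  },
});
```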
Backend environment variables (set in `file-context-mcp/.env`):

- `PORT`: Port for the API server (default: 3001)
- `TOGETHER_API_KEY`: API key for Together AI
- `OLLAMA_BASE_URL`: Base URL for the Ollama API
- `LLAMA_CPP_BASE_URL`: Base URL for the llama.cpp API
- `MODEL_NAME`: Default AI model to use
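As a rough sketch of how the backend might consume these variables (illustrative only; the field names and defaults below are assumptions based on the values documented above, using the common `dotenv` package):

```typescript
// config.ts — illustrative sketch of loading backend settings with dotenv.
import "dotenv/config";

export const config = {
  port: Number(process.env.PORT ?? 3001),
  togetherApiKey: process.env.TOGETHER_API_KEY,      // optional, cloud models
  ollamaBaseUrl: process.env.OLLAMA_BASE_URL,        // optional, local Ollama
  llamaCppBaseUrl: process.env.LLAMA_CPP_BASE_URL,   // optional, local llama.cpp
  modelName: process.env.MODEL_NAME ?? "llama3.2",   // default model
};
```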
Client environment variables:

- `VITE_API_URL`: URL of the backend API (default in Docker: http://localhost:3001)
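In a Vite app, variables prefixed with `VITE_` are exposed to client code through `import.meta.env`. A usage sketch (the `fetchFromBackend` helper and any `/api/...` paths you pass it are hypothetical, not the project's actual client code):

```typescript
// Sketch of reading the backend URL on the client (assumes Vite's client types).
const apiUrl: string = import.meta.env.VITE_API_URL ?? "http://localhost:3001";

export async function fetchFromBackend(path: string): Promise<unknown> {
  const res = await fetch(`${apiUrl}${path}`);
  if (!res.ok) throw new Error(`Request failed with status ${res.status}`);
  return res.json();
}
```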