This repository provides a structured approach to learning about Large Language Models (LLMs) and Agentic AI systems through practical examples.
- Python 3.10+
- A Google API key (for Gemini models) or access to a local LLM via Ollama
- Basic understanding of Python programming
This project uses uv, an extremely fast Python package manager and project tool written in Rust.
- Install uv:
curl -LsSf https://astral.sh/uv/install.sh | sh
- Create a virtual environment and install dependencies:
uv venv
source .venv/bin/activate # On Windows: .venv\Scripts\activate
uv pip sync requirements.txt
- Install optional dependencies:
# For development tools
uv pip install -e ".[dev]"
# For notebook support
uv pip install -e ".[notebook]"
# For GPU acceleration
uv pip install -e ".[gpu]"
# For web-related examples
uv pip install -e ".[web]"
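These extras correspond to optional-dependency groups in the project's pyproject.toml, along these lines (the package names here are illustrative placeholders, not the project's actual pins):

```toml
[project.optional-dependencies]
dev = ["ruff", "pytest"]
notebook = ["jupyterlab", "ipywidgets"]
gpu = ["torch"]
web = ["requests", "beautifulsoup4"]
```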
If you prefer traditional pip:
pip install -r requirements.txt
For examples using local LLMs, you'll need to install Ollama separately on your system:
- Install Ollama:
curl -fsSL https://ollama.ai/install.sh | sh
- Start the Ollama service:
ollama serve
- Pull the models required for the examples:
ollama pull llama2
ollama pull mistral
ollama pull dwightfoster03/functionary-small-v3.1
This repository includes a devcontainer configuration for VS Code with CUDA support:
- Install Docker and VS Code
- Install the Remote Development extension pack
- Install Ollama on your host system (not in the container)
- Open this repository in VS Code and click "Reopen in Container" when prompted
- The container will set up everything automatically, including:
- Python 3.10 with uv package manager
- CUDA support for GPU acceleration
- All required dependencies
Note: Ollama needs to be running on your host system and accessible to the container. The default connection URL is http://localhost:11434.
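Rather than hard-coding that URL, examples can read it from the environment so the same code works on the host and inside the container. A minimal sketch (the `OLLAMA_HOST` variable name follows Ollama's own convention; the helper itself is illustrative):

```python
import os

def ollama_base_url() -> str:
    """Return the Ollama endpoint, preferring the OLLAMA_HOST env var."""
    # Inside a devcontainer, localhost may not reach the host; set
    # OLLAMA_HOST (e.g. http://host.docker.internal:11434) if needed.
    return os.environ.get("OLLAMA_HOST", "http://localhost:11434")
```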
Start with the simplest examples to understand how to interact with language models:
- Simple Text Generation: test_llama.py
- Getting Started with LangChain: agentic-framworks/langchain/get_started.py
To run using uv:
uv run test_llama.py
# or using the defined script shortcut
uv run --script tutorial
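Under the hood, simple text generation against a local model is just an HTTP call to Ollama's `/api/generate` endpoint. A minimal sketch using only the standard library (it assumes Ollama is running at `localhost:11434` with `llama2` pulled):

```python
import json
import urllib.request

def build_generate_payload(model: str, prompt: str) -> dict:
    # Ollama's /api/generate endpoint takes a JSON body with the
    # model name, the prompt, and a flag to disable streaming.
    return {"model": model, "prompt": prompt, "stream": False}

def generate(prompt: str, model: str = "llama2",
             url: str = "http://localhost:11434/api/generate") -> str:
    data = json.dumps(build_generate_payload(model, prompt)).encode()
    req = urllib.request.Request(
        url, data=data, headers={"Content-Type": "application/json"})
    with urllib.request.urlopen(req) as resp:
        # The non-streaming response carries the full text in "response".
        return json.loads(resp.read())["response"]
```

Call it as `generate("Why is the sky blue?")` once the Ollama service is up.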
Learn how to make LLMs call specific functions:
- Basic Function Calling: agents/function_calling/function_calling_with_llm.py
- Advanced Function Calling: agents/function_calling/function_calling_with_llm_v2.py
To run using uv:
uv run --script function-call
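Conceptually, function calling means the model returns a structured request (a function name plus JSON-encoded arguments) and your code looks the function up and executes it. A framework-free toy sketch of that dispatch step (the `get_weather` tool and its schema are invented for illustration):

```python
import json

# Schema advertised to the model (OpenAI/Ollama-style tool definition).
TOOLS = [{
    "type": "function",
    "function": {
        "name": "get_weather",
        "description": "Get the weather for a city",
        "parameters": {
            "type": "object",
            "properties": {"city": {"type": "string"}},
            "required": ["city"],
        },
    },
}]

def get_weather(city: str) -> str:
    # Stub implementation; a real tool would call a weather API.
    return f"Sunny in {city}"

REGISTRY = {"get_weather": get_weather}

def dispatch(tool_call: dict) -> str:
    """Execute the function the model asked for."""
    args = json.loads(tool_call["arguments"])
    return REGISTRY[tool_call["name"]](**args)

# Simulating a model response:
print(dispatch({"name": "get_weather", "arguments": '{"city": "Paris"}'}))
# Sunny in Paris
```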
Explore single-purpose agents:
- Movie Recommendation Agent: agents/movie-recommendation.py
- Reminder Agent: agents/reminder-agent.py
Dive into more complex agent systems:
- News Summary Agent: agents/news-summary-agent/news-summary-agent.py
- Web Crawler Agent: agents/crawler_agent/
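Agents like the news summarizer have to fit long articles into a model's context window, so a common first step is chunking the text before summarizing each piece. A minimal sketch (chunk size measured in characters, purely illustrative):

```python
def chunk_text(text: str, max_chars: int = 2000) -> list[str]:
    """Split text into roughly max_chars-sized chunks on paragraph boundaries."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            # Adding this paragraph would overflow; start a new chunk.
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be summarized separately and the partial summaries combined in a final pass.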
Learn how multiple agents can work together:
- Multi-Agent System: agents/multi-agent/multi-agent-strctured_out.py
- Planning Agent: agents/planning_agent/centralized_planning.py
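In a centralized-planning setup, one planner agent decomposes a goal into tasks and routes each task to a specialized worker. A framework-free toy sketch of that control flow (the planner and workers here are stubs, not the repository's agents):

```python
def planner(goal: str) -> list[dict]:
    # A real planner would ask an LLM to decompose the goal;
    # a fixed plan is returned here to show the control flow.
    return [
        {"agent": "researcher", "task": f"research: {goal}"},
        {"agent": "writer", "task": f"write summary of: {goal}"},
    ]

def researcher(task: str) -> str:
    return f"[notes on '{task}']"

def writer(task: str) -> str:
    return f"[draft for '{task}']"

WORKERS = {"researcher": researcher, "writer": writer}

def run(goal: str) -> list[str]:
    """Execute the plan step by step, collecting each worker's output."""
    return [WORKERS[step["agent"]](step["task"]) for step in planner(goal)]
```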
Explore more advanced concepts:
- Model Internals: Check the tokenizers/ and transformers/ directories
- Fine-tuning: See examples in fine-tuning/
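As a taste of what the tokenizers/ examples cover: a tokenizer maps text to integer ids via a vocabulary. A deliberately tiny sketch of that idea (real tokenizers use subword algorithms such as BPE, not whitespace splitting):

```python
def build_vocab(corpus: list[str]) -> dict[str, int]:
    """Assign an id to every whitespace token, reserving 0 for unknowns."""
    vocab = {"<unk>": 0}
    for text in corpus:
        for tok in text.split():
            vocab.setdefault(tok, len(vocab))
    return vocab

def encode(text: str, vocab: dict[str, int]) -> list[int]:
    # Unseen tokens fall back to the <unk> id.
    return [vocab.get(tok, 0) for tok in text.split()]

vocab = build_vocab(["the cat sat", "the dog sat"])
print(encode("the cat ran", vocab))  # [1, 2, 0]
```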
If you're completely new to LLMs and AI agents, follow these steps:
- Start with agentic-framworks/langchain/get_started.py to understand basic LLM interactions
- Move to agents/function_calling/function_calling_with_llm.py to learn about function calling
- Try out the simple agents in the agents/ directory
- Progress to more complex multi-agent systems