
LLM and Agentic AI Systems: A Beginner's Guide

This repository provides a structured approach to learning about Large Language Models (LLMs) and Agentic AI systems through practical examples.

Prerequisites

  • Python 3.10+
  • A Google API key (for Gemini models) or access to local LLM via Ollama
  • Basic understanding of Python programming

Installation

Using uv (Recommended)

This project uses uv, an extremely fast Python package and project manager written in Rust.

  1. Install uv:
curl -LsSf https://astral.sh/uv/install.sh | sh
  2. Create a virtual environment and install dependencies:
uv venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate
uv pip sync requirements.txt
  3. Install optional dependencies:
# For development tools
uv pip install -e ".[dev]"

# For notebook support
uv pip install -e ".[notebook]"

# For GPU acceleration
uv pip install -e ".[gpu]"

# For web-related examples
uv pip install -e ".[web]"

Using pip (Alternative)

If you prefer traditional pip:

pip install -r requirements.txt

Ollama Setup

For examples using local LLMs, you'll need to install Ollama separately on your system:

  1. Install Ollama:
curl -fsSL https://ollama.ai/install.sh | sh
  2. Start the Ollama service:
ollama serve
  3. Pull the models required for the examples (a quick sanity check follows this list):
ollama pull llama2
ollama pull mistral
ollama pull dwightfoster03/functionary-small-v3.1
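
Once the models are pulled, you can confirm the service is reachable and the models are available by querying Ollama's tag-listing endpoint. The short script below is a convenience sketch, not part of this repository:

# check_ollama.py - sanity-check that Ollama is running and models are pulled
import requests

OLLAMA_URL = "http://localhost:11434"  # Ollama's default listen address

# GET /api/tags lists every model available locally
resp = requests.get(f"{OLLAMA_URL}/api/tags", timeout=5)
resp.raise_for_status()

local_models = {m["name"] for m in resp.json()["models"]}
print("Models available:", sorted(local_models))

for required in ("llama2:latest", "mistral:latest"):
    status = "ok" if required in local_models else "MISSING - run `ollama pull`"
    print(f"{required}: {status}")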

Using the Development Container

This repository includes a devcontainer configuration for VS Code with CUDA support:

  1. Install Docker and VS Code
  2. Install the Remote Development extension pack
  3. Install Ollama on your host system (not in the container)
  4. Open this repository in VS Code and click "Reopen in Container" when prompted
  5. The container will set up everything automatically, including:
    • Python 3.10 with uv package manager
    • CUDA support for GPU acceleration
    • All required dependencies

Note: Ollama needs to be running on your host system and accessible to the container. The default connection URL is http://localhost:11434.
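Rather than hard-coding that URL, code can read it from the environment so the same script works on the host and in the container. The sketch below uses the conventional OLLAMA_HOST variable as an override — an assumption about how you might configure the examples, not something this repository mandates:

# ollama_url.py - resolve the Ollama endpoint, working both on the host and in a container
import os
import requests

# OLLAMA_HOST is the conventional override; the fallback matches the note above
base_url = os.environ.get("OLLAMA_HOST", "http://localhost:11434")

# /api/version is a lightweight liveness check
version = requests.get(f"{base_url}/api/version", timeout=5).json()["version"]
print(f"Ollama {version} reachable at {base_url}")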

Learning Path

1. Basic LLM Interactions

Start with the simplest examples to understand how to interact with language models:

To run using uv:

uv run test_llama.py
# or using the defined script shortcut
uv run tutorial
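
If you want to see the shape of a raw LLM call before opening the scripts, here is a minimal sketch against Ollama's generate endpoint — a standalone illustration, not the contents of test_llama.py:

# minimal_generate.py - one-shot completion against a local Ollama model
import requests

payload = {
    "model": "llama2",  # any model pulled earlier works here
    "prompt": "Explain what an LLM is in one sentence.",
    "stream": False,    # ask for a single JSON response instead of a stream
}
resp = requests.post("http://localhost:11434/api/generate", json=payload, timeout=120)
resp.raise_for_status()
print(resp.json()["response"])  # the generated text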

2. Function Calling

Learn how to make LLMs call specific functions:

To run using uv:

uv run function-call
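
Conceptually, function calling means the model emits a structured request (typically JSON) naming a function and its arguments, and your code executes it. Below is a minimal, framework-free sketch of that loop; the prompt format and dispatch table are illustrative assumptions, not the approach used in the repository's scripts:

# function_call_sketch.py - prompt the model for JSON, then dispatch to a Python function
import json
import requests

def get_weather(city: str) -> str:
    """Toy implementation standing in for a real API call."""
    return f"It is sunny in {city}."

TOOLS = {"get_weather": get_weather}  # name -> callable dispatch table

prompt = (
    "You can call get_weather(city). "
    'Reply ONLY with JSON like {"name": "get_weather", "arguments": {"city": "..."}}.\n'
    "User: What's the weather in Paris?"
)
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={"model": "mistral", "prompt": prompt, "stream": False, "format": "json"},
    timeout=120,
)
call = json.loads(resp.json()["response"])         # parse the model's JSON
result = TOOLS[call["name"]](**call["arguments"])  # execute the requested function
print(result)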

3. Simple Agents

Explore single-purpose agents:
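
As a mental model, a single-purpose agent is just an LLM in a loop: it observes, optionally calls a tool, and stops when it has an answer. The sketch below builds on the function-calling pattern above; the tool and prompt are hypothetical:

# simple_agent_sketch.py - the smallest observe/act loop around an LLM
import json
import requests

def ask_llm(prompt: str) -> dict:
    """One JSON-mode call to a local Ollama model."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "mistral", "prompt": prompt, "stream": False, "format": "json"},
        timeout=120,
    )
    return json.loads(resp.json()["response"])

def search_notes(query: str) -> str:
    """Toy tool: a real agent would hit a search index here."""
    return f"(no notes found for {query!r})"

history = 'Answer the user. Reply with JSON: {"action": "search_notes"|"finish", "input": "..."}\nUser: When is the team offsite?'
for _ in range(3):  # cap the loop so a confused model cannot spin forever
    step = ask_llm(history)
    if step["action"] == "finish":
        print(step["input"])  # the agent's final answer
        break
    observation = search_notes(step["input"])
    history += f"\nObservation: {observation}"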

4. Advanced Agents

Dive into more complex agent systems:

5. Multi-Agent Systems

Learn how multiple agents can work together:
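
The core idea is that one agent's output becomes another agent's input, each with its own role prompt. A two-agent writer/reviewer pipeline is the classic minimal example — again a hedged sketch, not this repository's implementation:

# multi_agent_sketch.py - two role-prompted agents passing work between them
import requests

def run_agent(role: str, task: str) -> str:
    """One completion with a role-specific preamble."""
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "mistral", "prompt": f"{role}\n\n{task}", "stream": False},
        timeout=120,
    )
    return resp.json()["response"]

draft = run_agent("You are a writer. Draft a short answer.",
                  "Explain retrieval-augmented generation in two sentences.")
review = run_agent("You are a strict reviewer. Point out one concrete improvement.",
                   draft)
final = run_agent("You are a writer. Revise your draft using the review.",
                  f"Draft:\n{draft}\n\nReview:\n{review}")
print(final)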

6. Advanced Topics

Explore more advanced concepts:

Quick Start Guide

If you're completely new to LLMs and AI agents, follow these steps:

  1. Start with agentic-framworks/langchain/get_started.py to understand basic LLM interactions
  2. Move to agents/function_calling/function_calling_with_llm.py to learn about function calling
  3. Try out the simple agents in the agents/ directory
  4. Progress to more complex multi-agent systems
