`ollama-haskell` is an unofficial Haskell client for Ollama, inspired by `ollama-python`. It lets you interact with locally running LLMs through the Ollama HTTP API, directly from Haskell.
- 💬 Chat with models
- ✍️ Text generation (with streaming)
- ✅ Chat with structured messages and tools
- 🧠 Embeddings
- 🧰 Model management (list, pull, push, show, delete)
- 🗃️ In-memory conversation history
- ⚙️ Configurable timeouts, retries, streaming handlers
Quick example:

```haskell
{-# LANGUAGE OverloadedStrings #-}

module Main where

import Data.Ollama.Generate
import qualified Data.Text.IO as T

main :: IO ()
main = do
  let ops =
        defaultGenerateOps
          { modelName = "gemma3"
          , prompt = "What is the meaning of life?"
          }
  eRes <- generate ops Nothing
  case eRes of
    Left err -> putStrLn $ "Something went wrong: " ++ show err
    Right r -> do
      putStr "LLM response: "
      T.putStrLn (genResponse r)
```
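The same call can also stream tokens as they arrive. Below is a minimal sketch, assuming `GenerateOps` exposes a `stream` field that takes a per-chunk handler (older releases used a pair of handlers, so check the Haddocks for the exact type on your version); the second argument to `generate` stays the optional client configuration, with `Nothing` meaning the defaults:

```haskell
{-# LANGUAGE OverloadedStrings #-}

module Main where

import Data.Ollama.Generate
import qualified Data.Text.IO as T
import System.IO (hFlush, stdout)

main :: IO ()
main = do
  let ops =
        defaultGenerateOps
          { modelName = "gemma3"
          , prompt = "Write a haiku about Haskell."
          , -- Assumed field: a handler invoked once per streamed chunk.
            stream = Just (\chunk -> T.putStr (genResponse chunk) >> hFlush stdout)
          }
  -- Nothing = default client configuration (timeouts, retries, host URL).
  eRes <- generate ops Nothing
  case eRes of
    Left err -> putStrLn $ "Streaming failed: " ++ show err
    Right _ -> putStrLn "" -- final newline after the streamed output
```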
Add to your `.cabal` file:
```cabal
build-depends:
    base >=4.7 && <5,
    ollama-haskell
```
Or use it with `stack` or `nix-shell`.
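If you build with `stack` and hpack, the equivalent `package.yaml` entry would look roughly like this (the bounds are illustrative; use whatever your resolver provides):

```yaml
dependencies:
  - base >= 4.7 && < 5
  - ollama-haskell
```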
See `examples/OllamaExamples.hs` (and the chat sketch after this list) for:
- Chat with conversation memory
- Structured JSON output
- Embeddings
- Tool/function calling
- Multimodal input
- Streaming and non-streaming variants
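As a taste of the chat API used in those examples, here is a minimal sketch. It assumes `Data.Ollama.Chat` exposes `chat`, `defaultChatOps`, and a `userMessage` helper, with `chatModelName` and a non-empty `messages` list as the key fields; names can differ between versions, so treat this as an outline and check the Haddocks for your release:

```haskell
{-# LANGUAGE OverloadedStrings #-}

module Main where

import Data.List.NonEmpty (NonEmpty (..))
import Data.Ollama.Chat

main :: IO ()
main = do
  let msgs = userMessage "Explain monads in one sentence." :| []
      ops =
        defaultChatOps
          { chatModelName = "gemma3"
          , messages = msgs
          }
  eRes <- chat ops Nothing
  case eRes of
    Left err -> putStrLn $ "Chat failed: " ++ show err
    -- The response carries the assistant message plus metadata; print it to inspect.
    Right r -> print r
```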
Make sure you have Ollama installed and running locally. Run `ollama pull llama3` to download a model.
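For example, with Ollama installed, starting the server and fetching a model from a shell looks like this (skip `ollama serve` if Ollama already runs as a background service):

```bash
ollama serve &     # start the local Ollama server in the background
ollama pull llama3 # download a model to use with the examples above
```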
Use Nix:

```bash
nix-shell
```

This will install `stack` and Ollama.
Created and maintained by @tusharad. Have ideas or improvements? Open an issue or submit a PR; feedback is always welcome!