A powerful CLI tool that simplifies the installation and management of Ollama and its models. Written in Rust for maximum performance and reliability.
```
  ____  _ _
 / __ \| | |
| |  | | | | __ _ _ __ ___   __ _
| |  | | | |/ _` | '_ ` _ \ / _` |
| |__| | | | (_| | | | | | | (_| |
 \____/|_|_|\__,_|_| |_| |_|\__,_|
```
## Features

- 🚀 One-Command Setup: Automatically installs and configures Ollama
- 🔄 Cross-Platform: Supports macOS and Linux with smart installation methods
- 📦 Smart Model Management: Download and run models with simple commands
- 🎯 Default Model: Comes pre-configured with `gemma3:1b`
- 🎨 Beautiful UI: Colorful terminal output with emoji indicators
- 🛡️ Robust Error Handling: Clear error messages and graceful failure recovery
## Prerequisites

- Rust (latest stable version)
- Cargo (comes with Rust)
- macOS or Linux operating system
## Installation

1. Clone the repository:

   ```sh
   git clone https://github.com/yourusername/setollama.git
   cd setollama
   ```

2. Build the release version:

   ```sh
   cargo build --release
   ```

3. (Optional) Install system-wide:

   ```sh
   sudo cp target/release/setollama /usr/local/bin/
   ```
## Usage

```sh
# Install Ollama and run the default model (gemma3:1b)
setollama

# Show help and available commands
setollama help

# Check if Ollama is installed and running
setollama check

# List all installed models
setollama list

# Run a specific model
setollama model llama2
```
## Commands

- Default Command (`setollama`):
  - Checks if Ollama is installed
  - Installs it if necessary
  - Downloads and runs the default model (gemma3:1b)
- Help (`setollama help`):
  - Displays comprehensive help information
  - Shows available commands and examples
- Check (`setollama check`):
  - Verifies the Ollama installation
  - Checks if the server is running
- List (`setollama list`):
  - Shows all installed models
  - Displays model details such as size and modification date
- Model (`setollama model <name>`):
  - Downloads the specified model if not present
  - Runs the model after a successful download
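The default and check commands boil down to two probes: is the `ollama` binary on `PATH`, and is the server accepting connections on its default port (11434)? A minimal sketch of those probes in Rust, using only the standard library (the helper names `binary_on_path` and `server_reachable` are illustrative, not setollama's actual API):

```rust
use std::net::{SocketAddr, TcpStream};
use std::process::Command;
use std::time::Duration;

/// True if `name` resolves to an executable on PATH (checked via `which`).
fn binary_on_path(name: &str) -> bool {
    Command::new("which")
        .arg(name)
        .output()
        .map(|out| out.status.success())
        .unwrap_or(false)
}

/// True if something accepts TCP connections at `addr`,
/// e.g. Ollama's default API endpoint "127.0.0.1:11434".
fn server_reachable(addr: &str) -> bool {
    addr.parse::<SocketAddr>()
        .ok()
        .and_then(|a| TcpStream::connect_timeout(&a, Duration::from_millis(500)).ok())
        .is_some()
}

fn main() {
    if !binary_on_path("ollama") {
        println!("Ollama not installed; an installer would run here");
    } else if !server_reachable("127.0.0.1:11434") {
        println!("Ollama installed, but the server is not running");
    } else {
        println!("Ollama is installed and the server is running");
    }
}
```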
## How Ollama Is Installed

### macOS

- Primary method: uses Homebrew if available:

  ```sh
  brew install ollama
  ```

- Fallback method: uses the official installation script:

  ```sh
  curl -fsSL https://ollama.ai/install.sh | sh
  ```

### Linux

- Uses the official installation script:

  ```sh
  curl -fsSL https://ollama.ai/install.sh | sh
  ```
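The platform logic above can be sketched as a small decision function. This is a hedged illustration of the selection rule, not setollama's actual implementation; `install_command` is a hypothetical helper:

```rust
/// Pick an install command for the current platform.
/// Illustrative only: setollama's real logic may differ.
fn install_command(os: &str, has_brew: bool) -> &'static str {
    match (os, has_brew) {
        // On macOS, prefer Homebrew when it is available.
        ("macos", true) => "brew install ollama",
        // Everywhere else (including Linux, and macOS without
        // Homebrew), fall back to the official script.
        _ => "curl -fsSL https://ollama.ai/install.sh | sh",
    }
}

fn main() {
    let os = if cfg!(target_os = "macos") { "macos" } else { "linux" };
    println!("would run: {}", install_command(os, false));
}
```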
## Examples

- First-time setup with the default model:

  ```sh
  setollama
  ```

- Running a specific model:

  ```sh
  setollama model llama2
  ```

- Checking installation status:

  ```sh
  setollama check
  ```
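Under the hood, `setollama model <name>` is a download-then-run flow, which maps naturally onto Ollama's own `pull` and `run` subcommands. A sketch of how such invocations could be built with `std::process::Command` (the builder functions are hypothetical, not setollama's actual internals):

```rust
use std::process::Command;

/// Build the `ollama pull <name>` invocation (download step).
fn pull_command(model: &str) -> Command {
    let mut cmd = Command::new("ollama");
    cmd.args(["pull", model]);
    cmd
}

/// Build the `ollama run <name>` invocation (run step).
fn run_command(model: &str) -> Command {
    let mut cmd = Command::new("ollama");
    cmd.args(["run", model]);
    cmd
}

fn main() {
    // `setollama model llama2` amounts to: pull the model, then run it.
    let model = "llama2";
    println!(
        "would execute {:?}, then {:?}",
        pull_command(model),
        run_command(model)
    );
}
```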
## Troubleshooting

- Ollama not starting:
  - Check the `ollama.log` file in the current directory
  - Ensure you have sufficient permissions
  - Try running `ollama serve` manually
- Model download issues:
  - Verify your internet connection
  - Check available disk space
  - Ensure the model name is correct
## Contributing

Contributions are welcome! Here's how you can help:

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/AmazingFeature`)
3. Commit your changes (`git commit -m 'Add some AmazingFeature'`)
4. Push to the branch (`git push origin feature/AmazingFeature`)
5. Open a Pull Request
## License

This project is licensed under the MIT License; see the LICENSE file for details.
## Acknowledgments

- Ollama for providing the amazing base technology
- The Rust community for the excellent tools and libraries
- All contributors who help improve this tool
## Support

If you encounter any issues or have questions:

- Check the Issues page
- Create a new issue if your problem isn't already reported
- Provide as much detail as possible when reporting issues
Made with ❤️ by [Your Name]