🚀 Quick Start: Your First AI Chat in 5 Minutes

Welcome to Libre WebUI! This guide will get you chatting with AI in just a few minutes. No technical expertise required!

Complete Setup Time

Total time: 5-10 minutes of setup, plus the model download (which depends on your internet speed)

📋 What You'll Need

  • A computer with at least 4GB RAM (8GB+ recommended)
  • Internet connection (for initial setup only)
  • 5-10 minutes of your time

🎯 Step 1: Install Ollama (The AI Engine)

Ollama is the engine that runs AI models on your computer. It's free and easy to install.

Install Steps:

  1. Visit ollama.ai
  2. Download the installer for your operating system
  3. Run the installer and follow the prompts
  4. Ollama will start automatically

Windows Users

The installer will automatically add Ollama to your system PATH, so you can use it from any terminal.

Verify Installation

Open a terminal and verify Ollama is installed:

ollama --version

Expected Output

You should see a version number like ollama version is 0.1.x. If not, restart your computer and try again.


🤖 Step 2: Download Your First AI Model

Think of AI models as different "brains" - each with unique capabilities. Let's start with a fast, friendly one:

Best for most users:

ollama pull gemma3:4b

Size: ~4GB | Speed: Fast | Quality: Excellent

  • Current best single-GPU model
  • Great balance of speed and intelligence
  • Perfect for daily use
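
If you want a quick sanity check before launching the web interface, you can chat with the model straight from the terminal (type /bye or press Ctrl+D to exit an interactive session):

# Optional: test the model from the command line
ollama run gemma3:4b "Introduce yourself in one sentence."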

⚡ Alternative Models

Ultra Fast (for slower computers)
ollama pull llama3.2:1b

Size: ~1GB | Speed: Ultra-fast | Quality: Good

  • Smallest, fastest model
  • Works on any computer
  • Great for quick questions

Powerhouse (for powerful hardware)
ollama pull phi4:14b

Size: ~14GB | Speed: Good | Quality: Excellent

  • Microsoft's compact powerhouse
  • Requires 16GB+ RAM
  • State-of-the-art performance

With Vision (for image analysis)
ollama pull qwen2.5vl:3b

Size: ~3GB | Speed: Fast | Quality: Good

  • Can understand images
  • Upload photos and ask questions
  • Perfect for visual tasks

Download Progress

This will download several gigabytes of data. While it downloads:

  • ☕ Grab a coffee
  • 📖 Read about what you can do with AI
  • 🎵 Listen to some music

The download typically takes 5-15 minutes depending on your internet speed.
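
Once the download finishes, you can confirm the model is available locally:

# Should list gemma3:4b (or whichever model you pulled)
ollama list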

🌐 Step 3: Start Libre WebUI

Now let's get the interface running. Since you already have Ollama installed from Step 1, we'll use Docker with the external Ollama configuration, which connects the interface to your existing Ollama installation:

# Use the external Ollama configuration
docker-compose -f docker-compose.external-ollama.yml up -d

What this does:

  • Runs Libre WebUI in Docker
  • Connects to your existing Ollama installation
  • Maps port 8080 for web access
  • Saves your data persistently
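
Once the command finishes, you can confirm the container is up and that the interface is answering on port 8080:

# Check that the Libre WebUI container is running
docker-compose -f docker-compose.external-ollama.yml ps

# Quick check that something is listening on port 8080
curl -I http://localhost:8080
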
Why External Ollama?

Since you installed Ollama in Step 1, this setup:

  • ✅ Uses your existing Ollama installation
  • ✅ Avoids running duplicate Ollama instances
  • ✅ Better resource management
  • ✅ Easier to manage models with ollama pull

First time setup (clone the repository before running the command above):

# Clone the repository first
git clone https://github.com/libre-webui/libre-webui.git
cd libre-webui

# Then run the external Ollama setup
docker-compose -f docker-compose.external-ollama.yml up -d
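
If the interface doesn't come up, the container logs are the first place to look:

# Follow the Libre WebUI container logs
docker-compose -f docker-compose.external-ollama.yml logs -f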

🔧 Verify Ollama Connection

Before starting Libre WebUI, make sure your Ollama is running:

# Check if Ollama is running
curl http://localhost:11434/api/version

# If not running, start it
ollama serve
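
You can also ask Ollama which models it will expose to Libre WebUI; the response is a JSON list of everything you've pulled so far:

# List locally available models via the Ollama API
curl http://localhost:11434/api/tags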

🎉 Step 4: Start Chatting!

  1. Open your browser and go to http://localhost:8080
  2. You should see the Libre WebUI interface!
  3. Click "New Chat" or just start typing in the message box
  4. Type your first message like "Hello! Can you introduce yourself?"
  5. Press Enter and watch the AI respond in real-time!

Troubleshooting

If you don't see any models available, make sure:

  • Ollama is running: ollama serve
  • You have a model downloaded: ollama pull llama3.2
  • Check the Docker External Ollama guide for detailed troubleshooting
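
After fixing Ollama, restarting the container usually picks the models up straight away:

# Restart Libre WebUI so it reconnects to Ollama
docker-compose -f docker-compose.external-ollama.yml restart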

🎊 Congratulations! You're Now Running Local AI!

Your setup is complete! Here's what just happened:

  • ✅ Ollama is running the AI model on your computer
  • ✅ Libre WebUI provides the beautiful chat interface
  • ✅ Everything is running locally - no data leaves your machine
  • ✅ You have unlimited, private AI conversations

🎮 What to Try Next

Basic Conversations

  • "Explain quantum physics in simple terms"
  • "Write a short story about a robot"
  • "Help me plan a healthy meal"

Practical Tasks

  • "Help me write a professional email"
  • "Proofread this text: [paste your text]"
  • "Brainstorm names for my new project"

Creative Projects

  • "Help me write a poem about friendship"
  • "Create a workout routine for beginners"
  • "Suggest improvements for my resume"

Learning & Research

  • "What are the pros and cons of solar energy?"
  • "Explain machine learning like I'm 12 years old"
  • "Compare different programming languages"

📊 Download More Models

Want to try different AI personalities? Download more models:

For General Use:

# Ultra-fast for simple tasks
ollama pull llama3.2:1b

# Current best single-GPU model
ollama pull gemma3:4b

# State-of-the-art performance (requires very powerful hardware)
ollama pull llama3.3:70b

For Specific Tasks:

# Advanced programming and coding agents
ollama pull devstral:24b

# Understanding images and documents
ollama pull qwen2.5vl:32b

# Complex reasoning and thinking
ollama pull deepseek-r1:32b

# Multimodal tasks with Meta's latest
ollama pull llama4:16x17b

Check Your Models:

ollama list
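
If you're running low on disk space, removing a model you no longer use is just as easy:

# Remove a model you no longer need (frees up several GB)
ollama rm llama3.2:1b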

🎨 Explore the Interface

🔧 Settings Menu

  • Click the gear icon (⚙️) to change models
  • Adjust response creativity and length
  • Customize your experience

⌨️ Keyboard Shortcuts

  • ⌘B (Ctrl+B): Toggle sidebar
  • ⌘, (Ctrl+,): Open settings
  • ?: Show all shortcuts
  • ⌘D (Ctrl+D): Toggle dark/light theme

📱 Mobile Friendly

Libre WebUI works great on phones and tablets too!

🔒 Privacy & Security

🎉 Your data is 100% private!

  • ✅ Everything runs on your computer
  • ✅ No internet required after setup
  • ✅ No data sent to external servers
  • ✅ Complete control over your conversations
  • ✅ No tracking or analytics

🆘 Having Trouble?

Can't create a new chat?

  1. Make sure Ollama is running: ollama list
  2. Check you have at least one model downloaded
  3. Restart Libre WebUI (restart the container, or the backend and frontend if running from source)
  4. See our Troubleshooting Guide

AI responses are slow?

  • Try a smaller model like llama3.2:1b
  • Close other applications to free up memory
  • Make sure you have enough RAM (4GB minimum)
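
You can also check whether the model is running on your GPU or has fallen back to the CPU, which is a common cause of slow replies:

# Shows loaded models and whether they're using the GPU or CPU
ollama ps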

Model download failed?

  • Check your internet connection
  • Make sure you have enough disk space
  • Try downloading a smaller model first
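
A quick disk-space check rules out the most common cause, and re-running the same pull command typically resumes an interrupted download rather than starting over:

# Check free disk space (macOS/Linux)
df -h

# Re-running the pull typically resumes where it left off
ollama pull llama3.2:1b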

🚀 Next Steps

🎯 Power User Features

🎭 Try Demo Mode

Want to show Libre WebUI to friends? Try Demo Mode for a no-setup demonstration.

🤝 Join the Community

  • 🐛 Found a bug? Report it on GitHub
  • 💡 Have an idea? Submit a feature request
  • ❤️ Love Libre WebUI? Star the repository and share with friends!

🎉 Welcome to the future of private AI!

You now have a powerful, private AI assistant running entirely on your computer. No subscriptions, no data sharing, no limits - just pure AI power at your fingertips.

Happy chatting! 🤖✨