🔧 Troubleshooting: Quick Fixes for Common Issues
Having trouble with Libre WebUI? Don't worry! Most issues have simple solutions. Let's get you back to chatting with AI quickly.
90% of issues are solved by checking these three things: Ollama running, models downloaded, and backend/frontend started.
🚨 Most Common Issue: "Can't Create New Chat"
This usually means one of three things is missing. Let's check them in order:
✅ Quick Fix: The One-Command Solution
If you have the start script, try this first:
cd /path/to/libre-webui   # wherever you cloned Libre WebUI
./start.sh
This should start everything automatically! If it works, you're done! 🎉
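Don't have start.sh? It just launches the same three pieces described below. A rough equivalent you can run yourself (a sketch, not the project's actual script) looks like this:
# Sketch only: start Ollama, the backend, and the frontend together
ollama serve &
(cd backend && npm install && npm run dev) &
(cd frontend && npm install && npm run dev) &
# When you're done, stop the background jobs with: kill %1 %2 %3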
🔍 Step-by-Step Diagnosis
If the quick fix didn't work, let's figure out what's wrong:
- 1️⃣ Ollama Running?
- 2️⃣ Models Downloaded?
- 3️⃣ Backend Running?
- 4️⃣ Frontend Running?
1️⃣ Ollama Running?
The Problem: Ollama is the AI engine. Without it, there's no AI to chat with.
Check if installed:
ollama --version
If you see "command not found":
- 📥 Install Ollama: Go to ollama.ai and download for your system
- 💻 Restart your terminal after installation
If you see a version number, start Ollama:
ollama serve
Keep this terminal open!
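Optional: confirm Ollama is actually listening by querying its local API from another terminal:
# Any JSON response (even an empty model list) means Ollama is up on its default port
curl http://localhost:11434/api/tags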
2️⃣ Models Downloaded?
The Problem: Ollama is running but has no AI "brains" to use.
Check available models:
ollama list
If the list is empty or shows an error:
Download a recommended model:
# Current best single-GPU model (recommended)
ollama pull gemma3:4b
# Or ultra-fast for slower computers
ollama pull llama3.2:1b
# For advanced users with good hardware
ollama pull llama3.3:70b
Wait for the download to finish (1-32GB depending on the model).
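Once the download finishes, you can sanity-check the model straight from the terminal before touching the browser:
# One-shot prompt; run without the quoted text for an interactive chat (type /bye to exit)
ollama run gemma3:4b "Say hello in one sentence."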
3️⃣ Backend Running?
The Problem: The backend connects your browser to Ollama.
Start the backend:
cd backend
npm install # Only needed the first time
npm run dev
You should see something like: Server running on port 3001
Keep this terminal open!
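To confirm the backend is reachable, hit the health endpoint used later in this guide:
# Should return JSON, not a connection error
curl http://localhost:3001/api/ollama/health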
4️⃣ Frontend Running?
The Problem: The frontend is the beautiful interface you see in your browser.
Start the frontend:
cd frontend
npm install # Only needed the first time
npm run dev
You should see a message with a local URL like http://localhost:5173
Keep this terminal open!
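If the page won't load, you can verify from a terminal that the dev server is actually serving (a quick sanity check; expect a 200 response):
# HEAD request to the frontend dev server
curl -I http://localhost:5173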
🎯 Visual Troubleshooting
In Your Browser (http://localhost:5173):
✅ Good Signs:
- You see the Libre WebUI interface
- There's a model name shown in the header or sidebar
- The "New Chat" button is clickable
- Settings menu shows available models
❌ Warning Signs:
- Yellow banner saying "No models available"
- "New Chat" button is grayed out
- Error messages about connection
- Blank page or loading forever
Quick Browser Fixes:
- Hard refresh: Hold Shift and click refresh
- Clear cache: Press F12 → Network tab → check "Disable cache"
- Check console: Press F12 → Console tab (look for red errors)
🛠️ Common Error Messages & Solutions
"Cannot connect to Ollama"
Solution: Start Ollama: ollama serve
"No models found"
Solution: Download a model: ollama pull gemma3:4b
"Failed to fetch" or "Network Error"
Solution: Start the backend: cd backend && npm run dev
"This site can't be reached"
Solution: Start the frontend: cd frontend && npm run dev
"Port already in use"
Solution: Something else is using the port. Find and stop it:
# Check what's using port 3001 (backend)
lsof -i :3001
# Check what's using port 5173 (frontend)
lsof -i :5173
# Kill the process (replace XXXX with the PID number)
kill -9 XXXX
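If you prefer a one-liner, lsof can print just the PID so you can free the port in a single step (adjust the port number as needed):
# Kill whatever is holding the backend port
kill -9 $(lsof -ti :3001)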
⚡ Performance Issues
AI Responses Are Very Slow
Solutions:
- Try a more efficient model: ollama pull phi4:14b (a compact powerhouse)
- Use ultra-fast models: ollama pull llama3.2:1b or ollama pull gemma3:1b
- Close other applications to free up memory
- Check your RAM: You need at least 4GB free for most models
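Not sure how much RAM is free? Check with your system tools (commands differ by OS; these assume Linux or macOS):
# Linux: free memory in human-readable units
free -h
# macOS: one-line memory summary
top -l 1 | grep PhysMem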
"Timeout of 30000ms exceeded" Errors
Problem: Large models on multiple GPUs need more time to load into memory.
Solutions:
- Quick Fix - Environment Variables:
# Backend (.env file or environment)
OLLAMA_TIMEOUT=300000                   # 5 minutes for regular operations
OLLAMA_LONG_OPERATION_TIMEOUT=900000    # 15 minutes for model loading
# Frontend (.env file or environment)
VITE_API_TIMEOUT=300000                 # 5 minutes for API calls
- For Large Models (like CodeLlama 70B, Llama 70B+):
# Increase to 30 minutes for very large models
OLLAMA_LONG_OPERATION_TIMEOUT=1800000
VITE_API_TIMEOUT=1800000
- Restart the services after changing environment variables
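If you'd rather not edit .env files, the same variables can be set inline for a single run (a sketch, assuming the standard npm dev commands):
# Backend with longer timeouts for this session only
cd backend
OLLAMA_TIMEOUT=300000 OLLAMA_LONG_OPERATION_TIMEOUT=900000 npm run dev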
Interface Is Laggy
Solutions:
- Hard refresh your browser (Shift + Refresh)
- Close other browser tabs
- Try a different browser (Chrome, Firefox, Safari)
Models Won't Download
Solutions:
- Check internet connection
- Free up disk space (models can be 1-32GB each)
- Try a smaller model first:
ollama pull llama3.2:1b
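To see whether disk space is the real problem, check free space and how much the already-downloaded models use (assuming Ollama's default storage location of ~/.ollama on Linux/macOS):
# Free space on your home drive
df -h ~
# Space used by downloaded models
du -sh ~/.ollama 2>/dev/null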
🚀 Advanced Troubleshooting
Multiple Terminal Management
You need 3 things running simultaneously:
Terminal 1 (Ollama):
ollama serve
# Keep this running
Terminal 2 (Backend):
cd backend
npm run dev
# Keep this running
Terminal 3 (Frontend):
cd frontend
npm run dev
# Keep this running
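Prefer a single terminal? A rough alternative is to background the first two services and keep the frontend in the foreground (a sketch; logs go to files, and you stop the background jobs yourself with kill %1 %2):
# Ollama and backend in the background, frontend in the foreground
ollama serve > ollama.log 2>&1 &
(cd backend && npm run dev > ../backend.log 2>&1) &
cd frontend && npm run dev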
Check Everything Is Working
Run these commands to verify each part:
# Check Ollama
curl http://localhost:11434/api/tags
# Check Backend
curl http://localhost:3001/api/ollama/health
# Check Frontend
# Open http://localhost:5173 in your browser
Each should return data, not errors.
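To probe all three in one go, a small shell loop works (a sketch assuming the default ports above):
# Report OK or FAIL for each service
for url in http://localhost:11434/api/tags \
           http://localhost:3001/api/ollama/health \
           http://localhost:5173; do
  if curl -sf "$url" > /dev/null; then
    echo "OK   $url"
  else
    echo "FAIL $url"
  fi
done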
🆘 Still Stuck?
Before Asking for Help:
- ✅ Try the quick fix at the top of this guide
- ✅ Check all three services are running (Ollama, backend, frontend)
- ✅ Download at least one model (ollama pull llama3.2:3b)
- ✅ Restart everything and try again
When Reporting Issues:
Please include:
- Operating system (Windows, Mac, Linux)
- Error messages (exact text)
- Browser console errors (press F12 → Console)
- Terminal output from backend/frontend
Get Help:
- 🐛 Report bugs: GitHub Issues
- 💬 Ask questions: GitHub Discussions
- 📚 Read more: Check other guides in the docs folder
🎯 Prevention Tips
For Smooth Operation:
- Keep terminals open while using Libre WebUI
- Don't close Ollama - it needs to stay running
- Download models when you have good internet
- Monitor disk space - AI models are large files
- Restart everything occasionally to clear memory
System Requirements Reminder:
- Minimum: 4GB RAM, 15GB free disk space (for compact models)
- Recommended: 8GB+ RAM, 50GB+ free disk space (for mid-size models)
- Power User: 16GB+ RAM, 100GB+ free disk space (for large models)
- Enthusiast: 32GB+ RAM, 200GB+ SSD storage (for state-of-the-art models)
🎉 Most issues are solved by ensuring all three services are running!
Remember: Ollama (AI engine) + Backend (API) + Frontend (interface) = Working Libre WebUI
Still having trouble? The Quick Start Guide has step-by-step setup instructions.
🔌 Plugin Issues
Can't Connect to External AI Services
The Problem: You have API keys but external services (OpenAI, Anthropic, etc.) aren't working.
Common Solutions:
- Check API Key Format:
# Set API keys in backend/.env
OPENAI_API_KEY=your_openai_key_here
ANTHROPIC_API_KEY=your_anthropic_key_here
GROQ_API_KEY=your_groq_key_here
GEMINI_API_KEY=your_gemini_key_here
MISTRAL_API_KEY=your_mistral_key_here
GITHUB_API_KEY=your_github_token_here
- Verify API Keys Are Valid:
# Test OpenAI
curl -H "Authorization: Bearer $OPENAI_API_KEY" \
  https://api.openai.com/v1/models
# Test Anthropic (the anthropic-version header is required)
curl -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  https://api.anthropic.com/v1/messages
- Update Plugin Models:
# Update all providers
./scripts/update-all-models.sh
# Or update specific providers
./scripts/update-openai-models.sh
./scripts/update-anthropic-models.sh
./scripts/update-groq-models.sh
./scripts/update-gemini-models.sh
./scripts/update-mistral-models.sh
./scripts/update-github-models.sh
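Note: the curl tests above read keys from your shell environment. If your keys live only in backend/.env, you can load them into the current shell first (a sketch, assuming simple KEY=value lines with no quoting or spaces):
# Export every variable defined in backend/.env into this shell session
set -a
source backend/.env
set +a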
Plugin Update Scripts Failing
The Problem: Model update scripts are reporting errors.
Common Solutions:
- Check API Keys:
# Verify environment variables are set
echo $OPENAI_API_KEY
echo $ANTHROPIC_API_KEY
echo $GROQ_API_KEY
echo $GEMINI_API_KEY
echo $MISTRAL_API_KEY
echo $GITHUB_API_KEY
- Check Script Permissions:
# Make scripts executable
chmod +x scripts/update-*.sh
- Run Individual Scripts with Debug:
# Run with verbose output
bash -x ./scripts/update-openai-models.sh
Models Not Showing in UI
The Problem: Plugin models aren't appearing in the model selector.
Solutions:
- Restart Backend:
# Stop backend (Ctrl+C) and restart
cd backend
npm run dev
- Check Plugin Status:
  - Go to Settings → Plugins
  - Verify plugins are enabled
  - Check for any error messages
- Manual Plugin Refresh:
# Update all plugins
./scripts/update-all-models.sh
# Restart backend to reload models
cd backend && npm run dev