🤖 Setting Up AI Features
Configure AI-powered email generation with Ollama
⏱️ 15 min read
AI-Powered Email Creation
DripEmails.org integrates with Ollama to provide AI-powered email content generation, helping you create engaging campaigns faster.
💡 Pro Tip: AI works best when you provide clear, specific prompts. The more context you give, the better the results.
Prerequisites
Before using AI features, you need:
- Ollama installed: Download from ollama.ai
- AI model downloaded: We recommend llama2 or mistral
- Ollama running: The Ollama service must be active
- Local API access: DripEmails.org connects to Ollama at localhost:11434
Step 1: Install Ollama
```bash
# Linux / macOS (-fsSL follows redirects and fails cleanly on errors)
curl -fsSL https://ollama.ai/install.sh | sh

# On Windows:
# Download the installer from ollama.ai
```
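Once the installer finishes, confirm the CLI is available:

```bash
# Prints the installed Ollama version if the CLI is on your PATH
ollama --version
```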
Step 2: Download an AI Model
Choose a model based on your needs:
```bash
# Balanced performance and quality
ollama pull llama2

# Fast and efficient
ollama pull mistral

# Higher quality (requires more resources)
ollama pull llama2:13b
```
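After a pull completes, you can confirm the model is available locally:

```bash
# Shows every model that has been downloaded to this machine
ollama list
```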
Step 3: Start Ollama Service
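On Linux, the installer normally registers Ollama as a background service; if it is not already running, start the server manually:

```bash
# Starts the Ollama server, which listens on localhost:11434 by default
ollama serve
```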
Leave this running in a terminal window while using DripEmails.org.
Step 4: Configure in DripEmails.org
- Go to Settings → AI Configuration
- Verify Ollama connection status (should show "Connected")
- Select your preferred model from the dropdown
- Test the connection with the "Test AI" button
- Adjust generation settings if needed
⚠️ Important: If connection fails, ensure Ollama is running on localhost:11434 and your firewall isn't blocking it.
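If the status does not show "Connected", you can check Ollama directly from a terminal:

```bash
# A running server answers on the default port with "Ollama is running"
curl http://localhost:11434

# Lists the models the server can see (the same data as "ollama list")
curl http://localhost:11434/api/tags
```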
AI Generation Settings
Customize how AI generates content (a sample request follows the list):
- Model: Which AI model to use
- Temperature: Creativity level (0.1 = conservative, 0.9 = creative)
- Max tokens: Maximum length of generated text
- Tone: Professional, casual, friendly, urgent
- Language: Output language preference
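How DripEmails.org passes these settings to Ollama is internal to the app, but they broadly correspond to options on Ollama's generate API: temperature maps to temperature, max tokens to num_predict, and tone and language are typically folded into the prompt itself. A minimal sketch of such a request:

```bash
# Sketch of a direct Ollama request using comparable settings;
# the prompt text and option values here are only examples.
curl http://localhost:11434/api/generate -d '{
  "model": "llama2",
  "prompt": "Write a professional welcome email for new subscribers.",
  "stream": false,
  "options": {
    "temperature": 0.7,
    "num_predict": 300
  }
}'
```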
Testing Your Setup
Verify everything works:
- Create a new campaign
- Click the "Generate with AI" button
- Enter a simple prompt: "Write a welcome email for new subscribers"
- Click "Generate"
- AI should produce content within 5-10 seconds on recommended hardware (the first request after a model loads can take longer); you can also test the model directly from the terminal, as shown below
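If generation fails or is slow inside DripEmails.org, running the same prompt against the model from the terminal helps isolate whether the issue is the model or the app:

```bash
# One-off generation straight from the Ollama CLI, bypassing DripEmails.org
ollama run llama2 "Write a welcome email for new subscribers"
```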
Troubleshooting
Connection Failed
Check that Ollama is running and reachable on localhost:11434; ollama list should show your downloaded models
Slow Generation
Try a smaller model (such as mistral) or reduce the max tokens setting
Poor Quality Output
Write more specific prompts; if the output feels generic, try raising the temperature slightly
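For deeper digging, the commands below address each issue in turn; the systemd service only exists on standard Linux installs, and the prompt is purely an example:

```bash
# Connection failed: on a standard Linux install, check the bundled systemd service
systemctl status ollama

# Slow generation: pull and switch to a lighter model
ollama pull mistral

# Poor quality output: spell out audience, tone, and length in the prompt
ollama run mistral "Write a friendly, 100-word welcome email for new subscribers to a weekly recipe newsletter"
```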
System Requirements
Minimum:
- 8GB RAM
- 4 CPU cores
- 10GB free disk space
Recommended:
- 16GB+ RAM
- 8+ CPU cores or GPU
- 20GB free disk space
Best Practices
- Keep Ollama running when actively using AI features
- Start with llama2 for the best balance of speed and quality
- Use specific prompts for better results
- Always review and edit AI-generated content
- Save successful prompts for reuse
- Update Ollama regularly for improvements