The 3 Tiers of n8n Setup: From Beginner to Scale
TL;DR:
You can run n8n on a $6 droplet or scale it to handle hundreds of thousands of executions—your call.
Start with cloud or SQLite to get comfy. Switch to PostgreSQL when things start creaking.
Need serious horsepower? Add Redis, enable queue mode, and hang on tight.
Also… back up your stuff. Seriously. Future-you will send past-you a thank-you note.
Bottom line
I've built and managed n8n setups across all tiers, and here's what I've learned: start with either cloud hosting ($24/month) or a simple Digital Ocean droplet with SQLite ($6-12/month) to get your feet wet. When you hit database constraints, move to PostgreSQL for better concurrency and reliability. When you need serious throughput, implement queue mode with Redis for processing high workflow volumes. Migrate from SQLite when hitting 5,000-10,000 daily executions or when your database grows to 4-5GB. Switch to queue mode when you need to handle more than 50 concurrent workflows or your system experiences request spikes.
BEGINNER TIER: Cloud or basic self-hosting
The beginner tier gives you two solid options to get started: n8n cloud or self-hosting with SQLite.
Option 1: Cloud hosting for simplicity
I recommend starting with n8n cloud if you want the fastest setup with zero infrastructure headaches:
Cost: $24/month (simplest option to get your feet wet)
Execution limits: 2,500 executions monthly, with unlimited steps per workflow
Active workflows: 5 maximum (plus unlimited test workflows)
Concurrent runs: Limited to 5
Technical skill needed: Minimal - perfect for non-technical teams
Advantages: Zero maintenance, automatic updates, no server management
This is perfect if you want to familiarize yourself with the platform without big commitments. Pricing is higher than self-hosting, but you're paying for convenience.
Option 2: Self-hosted on Digital Ocean
For the more technical folks, this unlocks most pro features at a fraction of the cost:
Cost: Under $8/month (Digital Ocean basic droplet)
Database: SQLite (built-in, no extra configuration)
Installation: Quick setup using Digital Ocean marketplace one-click install
Executions: Unlimited (only limited by your server resources)
Technical skill needed: Basic Docker and Linux comfort
I STRONGLY recommend setting up regular backups if going this route. SQLite is file-based and runs locally on your server, which makes it more susceptible to corruption than a client-server database. It's still solid, but because that fragility comes with the database type, backups are essential.
Pro Tip: AI assistants like Claude or ChatGPT can walk you through the setup process if you're new to self-hosting. Just ask them to guide you step-by-step through setting up your n8n instance on Digital Ocean!
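If you want to see the shape of this setup before you start, here's a minimal sketch of running n8n with its built-in SQLite database in Docker - the port, timezone, and volume name are example values to adjust for your own droplet:

```bash
# Minimal self-hosted n8n with the default SQLite database (a sketch - adjust to your server).
# The named volume persists /home/node/.n8n (workflows, credentials, the SQLite file) across restarts.
docker volume create n8n_data

docker run -d --name n8n \
  --restart unless-stopped \
  -p 5678:5678 \
  -e GENERIC_TIMEZONE="America/Chicago" \
  -e N8N_ENCRYPTION_KEY="use-a-long-random-string-and-back-it-up" \
  -v n8n_data:/home/node/.n8n \
  docker.n8n.io/n8nio/n8n
```

That n8n_data volume is the thing you'll be backing up later, so it's worth getting right on day one.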
When to upgrade from Beginner tier
I've found these clear warning signs that it's time to move up:
Database locks: You'll see "SQLITE_BUSY" errors in logs
Size issues: Performance tanks as database grows beyond 4-5GB
Workflow volume: Execution slows beyond 5,000-10,000 daily runs
Concurrency limits: SQLite struggles with 10-15 concurrent workflows
Team growth: Multiple users editing workflows simultaneously causes problems
Don't wait until these become critical - plan your migration before hitting these limits.
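Two quick checks make these signals concrete (assuming a Docker setup like the sketch above, with the container named n8n and the default database path):

```bash
# How big has the SQLite database grown?
docker exec n8n du -h /home/node/.n8n/database.sqlite

# How often are workflows fighting over the database? Count lock errors in the last day of logs.
docker logs --since 24h n8n 2>&1 | grep -c "SQLITE_BUSY"
```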
ADVANCED TIER: Full self-hosted with PostgreSQL
This is where I recommend most organizations land for production use.
Option 1: The self-hosted sweet spot
I recommend going full self-hosted for n8n at this level:
Cost: Still affordable at $30/month for a Digital Ocean droplet
Execution cost: My heaviest instances, with 76K executions weekly, still run on droplets under $30/month (for the math nerds, that's roughly $0.0001 per execution!)
Database: Upgrade to PostgreSQL for serious performance
Recommendation: We use Supabase as our PostgreSQL provider
We have Supabase projects just running our heavier n8n instances, and it works great. This setup could actually qualify as more advanced than what I'm outlining here, but we'll get to that next.
For this setup, AI assistants can be extremely helpful for configuring PostgreSQL connection strings and environment variables. They can provide detailed instructions for connecting your n8n instance to an external database.
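For reference, these are the environment variables n8n reads when you point it at PostgreSQL instead of SQLite - the host, database, and credentials below are placeholders for whatever your provider (Supabase or otherwise) hands you:

```bash
# Switch n8n from the local SQLite file to an external PostgreSQL database.
# Every value below is a placeholder - substitute your provider's connection details.
DB_TYPE=postgresdb
DB_POSTGRESDB_HOST=db.your-project.supabase.co
DB_POSTGRESDB_PORT=5432
DB_POSTGRESDB_DATABASE=postgres
DB_POSTGRESDB_USER=postgres
DB_POSTGRESDB_PASSWORD=your-database-password
DB_POSTGRESDB_SCHEMA=public
```

One gotcha worth flagging: keep the same N8N_ENCRYPTION_KEY when you migrate, otherwise your existing credentials won't decrypt against the new database.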
Option 2: n8n Pro plan for those who prefer managed solutions
If you need to scale but aren't comfortable with self-hosting, the n8n Pro plan offers a flexible managed alternative:
Starting cost: $60/month
Base execution limits: 10,000 executions monthly
Active workflows: Up to 25 initially
Concurrent executions: Maximum of 10
Customizable scaling: This is key - you can add more active workflows and executions to meet your needs simply by emailing their support team
Technical skill needed: Minimal - perfect for teams that need more capacity without infrastructure headaches
Advantages: Zero server maintenance, priority support, flexible scaling without infrastructure changes
What makes this option particularly valuable is the ability to customize your plan as you grow. Unlike the rigid tier structure of many SaaS products, n8n lets you add capacity incrementally by reaching out to their team. This gives you a managed solution that can scale alongside your automation needs without forcing you to jump to significantly higher price points or self-hosting before you're ready. Sign up for the Pro plan here.
What PostgreSQL gives you over SQLite
I've seen these major benefits in our own production environments:
Real concurrency: Multiple team members working on workflows simultaneously without locking issues
Rock-solid reliability: PostgreSQL's transaction handling prevents database corruption
Room to grow: Handles much larger datasets without performance hits
Scaling prerequisite: Required for queue mode (coming up next)
Team-friendly: Properly handles workflow editing from multiple users
The performance difference is substantial - expect 5-10x better concurrency handling.
SCALE TIER: Queue mode with Redis
This is for those who want it all but aren't paying for enterprise licenses yet.
The ultimate self-hosted setup
For high-volume workflow automation, this is my recommended architecture:
Server: Same $30/month Digital Ocean droplet
Key difference: Setting up queue mode with:
Redis for message brokering
PostgreSQL for data storage (we use Supabase)
Multiple workers for execution
Critical setup: Configure BOTH webhook and normal workers! Otherwise, your main instance still handles all executions for whichever worker type you don't configure
In my setup, we use this architecture for our heaviest automation needs. It's a hybrid approach - we maintain a mix of self-hosted SQLite instances, one queue mode server, and one n8n cloud server, giving us flexibility across different use cases.
The queue mode setup gets more complex, but the n8n docs provide a great guide, and AI assistants can help translate the technical documentation into step-by-step instructions customized for your environment. Ask them to help with your Docker Compose configuration and environment variables setup!
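To make the moving pieces concrete, here's a trimmed-down sketch of what a queue mode Compose file can look like. The service names are my own, the PostgreSQL variables are the same ones from the previous tier (user, password, and friends omitted for brevity), and the official docs remain the source of truth for your exact version:

```yaml
# Queue mode sketch: one main instance, Redis as the broker, plus separate worker
# and webhook services. Names and values are illustrative, not prescriptive.
services:
  redis:
    image: redis:7
    restart: unless-stopped

  n8n-main:
    image: docker.n8n.io/n8nio/n8n
    restart: unless-stopped
    ports: ["5678:5678"]
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=db.your-project.supabase.co   # your PostgreSQL host
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}         # must match across every service

  n8n-worker:
    image: docker.n8n.io/n8nio/n8n
    restart: unless-stopped
    command: worker                                      # picks executions off the Redis queue
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=db.your-project.supabase.co
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}

  n8n-webhook:
    image: docker.n8n.io/n8nio/n8n
    restart: unless-stopped
    command: webhook                                     # handles incoming webhook traffic
    environment:
      - EXECUTIONS_MODE=queue
      - QUEUE_BULL_REDIS_HOST=redis
      - DB_TYPE=postgresdb
      - DB_POSTGRESDB_HOST=db.your-project.supabase.co
      - N8N_ENCRYPTION_KEY=${N8N_ENCRYPTION_KEY}
```

In a real deployment you'd also route production webhook traffic to the webhook service at your reverse proxy - that routing, plus the full database credentials, is what this sketch leaves out.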
When to move to queue mode
From my experience managing n8n at scale, look for these signals:
Concurrency demands: You need to process 50+ concurrent workflows
Execution speed issues: You're approaching execution limits (around 220 workflow executions per second)
Traffic spikes: You experience bursts of hundreds/thousands of requests
Webhook volume: High volume of incoming webhooks requiring parallel processing
Uptime requirements: You need high availability and fault tolerance
The distributed architecture in queue mode delivers substantially higher throughput by spreading execution across worker nodes, keeping your main instance responsive.
What queue mode enables
This setup pattern unlocks advanced capabilities I've found valuable:
Worker specialization: Dedicated workers for specific workflow types
Easy scaling: Add more workers as demand grows
Fault tolerance: System keeps working even if individual executions fail
Better resource usage: Efficient distribution of processing work
Responsive UI: Main interface stays snappy regardless of execution load
My heaviest production instance handles about 400,000 executions monthly with this architecture.
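Adding capacity at that volume is mostly a matter of running more workers. With a Compose file shaped like the sketch above (worker service hypothetically named n8n-worker), scaling out is one command:

```bash
# Run four worker containers instead of one; Redis spreads executions across them.
docker compose up -d --scale n8n-worker=4
```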
Essential backup strategies
Don't skip this part! Here's what I've learned about properly backing up n8n.
For SQLite setups (beginner tier)
Focus on the database file - it's your lifeline:
Volume backup: Use Digital Ocean's built-in backup system for the n8n_data volume
Automation: Set up cron jobs for scheduled backups
Export option: Use n8n CLI to export workflows separately
The database file contains everything - your workflows, credentials, and execution history.
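A minimal version of that routine, assuming the Docker volume setup from the beginner tier (container n8n, volume n8n_data) and an off-server destination of your choosing, might look like this:

```bash
#!/usr/bin/env bash
# Nightly n8n backup sketch (SQLite tier). Paths and names are assumptions - adjust to taste.
set -euo pipefail
STAMP=$(date +%F)

# 1. Snapshot the whole data volume (includes database.sqlite and your encryption key).
docker run --rm -v n8n_data:/data -v /root/backups:/backup alpine \
  tar czf "/backup/n8n-data-$STAMP.tar.gz" -C /data .

# 2. Export workflows as JSON too - a second, human-readable safety net.
docker exec n8n n8n export:workflow --all --output="/home/node/.n8n/workflows-$STAMP.json"
```

Drop it in a script, point a cron entry at it (for example, 0 3 * * * for a nightly 3 a.m. run), and then copy the archives somewhere that isn't the same droplet.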
For PostgreSQL and Redis (advanced/scale tiers)
More components mean more comprehensive backup needs:
Regular dumps: Schedule PostgreSQL backups with pg_dump
Redis persistence: Enable AOF or RDB snapshots
Config backup: Save your environment variables and Docker Compose files
Security first: Keep your N8N_ENCRYPTION_KEY secure but accessible
Off-site storage: Don't keep backups only on the same server
I recommend daily database backups at minimum. Trust me - you'll thank yourself when you need them!
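For the PostgreSQL side, the daily dump really can be that simple - the connection string below is a placeholder for your provider's details, and the Redis line assumes you can reach redis-cli on the broker (via docker exec if it's containerized):

```bash
# Daily PostgreSQL dump, compressed and dated so old copies are easy to prune.
pg_dump "postgresql://postgres:YOUR_PASSWORD@db.your-project.supabase.co:5432/postgres" \
  | gzip > "/root/backups/n8n-pg-$(date +%F).sql.gz"

# Redis: turn on append-only persistence so queued jobs survive a restart
# (set appendonly yes in redis.conf to make this permanent across Redis restarts).
redis-cli CONFIG SET appendonly yes
```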
How to choose your tier
I've managed setups across all these tiers, and here's my practical advice on selecting yours:
Start small, but plan ahead
Begin with the simplest setup that meets your current needs. For most teams just exploring n8n, the beginner tier is perfect. You'll learn the platform without significant investment.
Watch for growth signals
Monitor these metrics to know when to upgrade:
Daily execution count approaching 5,000-10,000
Database size nearing 4-5GB
Concurrent workflow needs exceeding 10-15
Team size growing beyond 5 active users
Frequent "database locked" errors in logs
Consider your technical comfort
Your team's technical capability should influence your choice:
Non-technical teams: Start with cloud hosting
Basic technical skills: Self-hosted SQLite
DevOps capability: PostgreSQL with managed provider
Engineering team available: Queue mode with Redis
My approach
In my environment at Church Media Squad, we maintain a mix of all three tiers - each serving different use cases based on volume, criticality, and integration needs. This hybrid approach gives us flexibility while controlling costs.
The right setup evolves with your automation journey. Start simple, monitor your growth metrics, and upgrade when the limitations of your current tier become apparent.
Want more automation insights?
Follow me on LinkedIn for more practical tips on workflow automation, system integration, and building efficient processes using tools like n8n. I regularly share the lessons I've learned managing automation at scale.
And remember, modern AI assistants like Claude or ChatGPT can be invaluable tools when setting up your n8n infrastructure. They can help translate technical documentation into practical steps, debug configuration issues, and even generate Docker Compose files or environment variable configurations tailored to your specific needs. Don't hesitate to leverage these tools to make your n8n journey smoother!