
An AI chatbot on your website. Sounds simple enough. Until you look at the options. Then you're standing in a jungle of SaaS platforms, API providers, agency pitches, and open-source projects. Prices ranging from 0 to 10,000 euros per month. Features that look nearly identical at first glance.
At Exasync, we've built, tested in production, and discarded several chatbot approaches over the past few months. Custom GPTs, n8n workflow chatbots, direct API integrations. The result: 80 percent of companies don't need an expensive enterprise chatbot. An n8n workflow with the GPT-4 API covers most use cases. Setup in two hours. Costs under 50 euros per month.
This article shows which chatbot type fits which business. With real numbers, concrete architecture recommendations, and a decision matrix that saves you the research.
Before we talk pricing, we need to sort through the landscape. There are roughly four categories:
1. SaaS chatbot platforms like Intercom (Fin AI), Drift (Salesloft), Tidio, or Zendesk AI. Ready-made solutions, hosted, often with CRM integration. You pay per user or per resolved inquiry. Advantage: quick to deploy. Disadvantage: expensive, little control over the model, vendor lock-in.
2. API integrations with OpenAI (GPT-4o, GPT-4), Anthropic (Claude), or Google (Gemini). You build the chatbot yourself, using language models as the backend. Full control, lowest ongoing costs, but development effort required.
3. Workflow-based chatbots via platforms like n8n, Make.com, or Zapier. No code in the traditional sense — instead, visual workflows that connect API calls, database queries, and response logic. The sweet spot for most mid-market companies.
4. Custom GPTs and Assistants directly through OpenAI. Since late 2023, you can build custom GPTs with a knowledge base. They work well for internal use cases but are of limited use for customer-facing chatbots on your own website, since users remain within the OpenAI interface.
What does an AI chatbot actually cost per month? That's the decisive question, and the answer depends on how many inquiries your chatbot processes and how complex the responses need to be. Here's an honest breakdown:
Scenario: 1,000 customer inquiries per month, averaging 200 words input + 300 words output per conversation.
| Solution | Cost (1,000 inquiries/month) | Includes / hidden costs |
|---|---|---|
| OpenAI GPT-4o API | approx. 8-15 EUR (pay-per-token, no subscription) | frontend hosting (5-20 EUR), development time |
| n8n Cloud + GPT-4o | approx. 35-75 EUR (n8n Starter 24 EUR + API costs) | webhook hosting, Supabase for chat history (free up to 50k rows) |
| n8n Self-Hosted + GPT-4o | approx. 15-30 EUR (server 10-20 EUR + API costs) | maintenance, updates, SSL certificate |
| Intercom (Essential + Fin AI) | from 620 EUR (1 seat at 29 EUR + approx. 600 Fin resolutions at 0.99 EUR each) | Copilot upgrade (35 EUR/seat), Proactive Support (99 EUR) |
| Drift (Premium) | from 2,500 EUR (chatbot + meeting booking + lead routing) | enterprise features only on the most expensive plan |
| Tidio (Lyro AI) | approx. 40-80 EUR (chatbot + live chat + 50-200 conversations) | none listed |
| Voiceflow + GPT-4 | approx. 50-100 EUR (visual bot builder + API costs) | enterprise features from 625 EUR |
The table makes it clear: between the API solution and an enterprise platform like Drift, there's a factor of 100 to 200. So the question isn't "What's the best tool?" but rather: What problem are you actually solving?
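The API numbers above can be sanity-checked with a quick back-of-the-envelope calculation. This sketch assumes roughly 1.33 tokens per English word and GPT-4o's published list prices; the real bill lands higher because multi-turn context and the system prompt are re-sent with every call:

```python
# Rough monthly API cost for the 1,000-inquiry scenario above.
# Assumptions: ~1.33 tokens per word; GPT-4o list prices of
# $2.50 per million input tokens and $10.00 per million output tokens.
TOKENS_PER_WORD = 1.33
PRICE_IN_PER_M = 2.50    # USD per million input tokens
PRICE_OUT_PER_M = 10.00  # USD per million output tokens

def monthly_cost(conversations: int, words_in: int, words_out: int) -> float:
    """Return the raw API cost in USD, before context re-sending."""
    tokens_in = conversations * words_in * TOKENS_PER_WORD
    tokens_out = conversations * words_out * TOKENS_PER_WORD
    return (tokens_in / 1e6) * PRICE_IN_PER_M + (tokens_out / 1e6) * PRICE_OUT_PER_M

print(f"{monthly_cost(1000, 200, 300):.2f} USD")  # single-turn lower bound
```

The raw single-turn figure comes out to only a few dollars; re-sending conversation history and the system prompt on every turn is what pushes real-world bills toward the 8-15 EUR range quoted above.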
Our experience at Exasync: most companies overestimate what they need. The typical requirements we hear all work with an n8n workflow and the GPT-4 API. No Intercom needed. No Drift.
Enterprise chatbots only make sense at genuine scale. Here's the decision matrix by company size:
| Company size | Inquiries/month | Recommendation | Stack | Cost/month |
|---|---|---|---|---|
| Solo / Freelancer | under 100 | Custom GPT or Tidio Free | OpenAI Assistants API + simple widget | 5-20 EUR |
| Startup (2-10 employees) | 100-500 | Workflow chatbot | n8n Cloud + GPT-4o + Supabase | 30-60 EUR |
| SME (10-50 employees) | 500-2,000 | Workflow chatbot or Tidio/Voiceflow | n8n self-hosted + GPT-4o + Supabase | 30-150 EUR |
| Mid-market (50-250 employees) | 2,000-10,000 | Hybrid | workflow + Intercom Essential | 200-800 EUR |
| Enterprise (250+ employees) | over 10,000 | Enterprise platform | Intercom Advanced/Expert or Drift | 1,000-10,000+ EUR |
At Exasync, we use both n8n Cloud and a self-hosted instance. The standard stack for a customer chatbot:
The architecture comes down to seven nodes in n8n that you can build in about two hours. We use Supabase as the backend because its Realtime feature synchronizes conversations instantly.
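As a hedged sketch (the step decomposition and names are illustrative, not the exact node list), the same flow outside n8n looks roughly like this, with the LLM call injected so the model backend stays swappable:

```python
# Illustrative sketch of the chatbot workflow as plain Python.
# ChatStore stands in for the Supabase table holding chat history.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ChatStore:
    history: list = field(default_factory=list)

    def load(self, session_id: str) -> list:
        return [m for m in self.history if m["session"] == session_id]

    def save(self, session_id: str, role: str, text: str) -> None:
        self.history.append({"session": session_id, "role": role, "text": text})

SYSTEM_PROMPT = "You answer product questions. If unsure, say ESCALATE."

def handle_message(session_id: str, text: str, store: ChatStore,
                   llm: Callable[[list], str]) -> dict:
    # 1. Webhook receives the message
    store.save(session_id, "user", text)
    # 2. Load prior context for this session
    context = store.load(session_id)
    # 3. Build the prompt from system prompt + history
    messages = [{"role": "system", "content": SYSTEM_PROMPT}] + [
        {"role": "user" if m["role"] == "user" else "assistant",
         "content": m["text"]} for m in context]
    # 4. Call the model (injected, so the provider is swappable)
    answer = llm(messages)
    # 5. Escalate to a human if the model signals low confidence
    if "ESCALATE" in answer:
        return {"reply": None, "escalate": True}
    # 6. Persist the answer, 7. respond to the widget
    store.save(session_id, "bot", answer)
    return {"reply": answer, "escalate": False}
```

The `llm` parameter is the whole point: in tests it can be a lambda, in production a thin wrapper around whichever provider Layer 3 currently uses.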
Cost at 1,000 inquiries/month: n8n Cloud Starter: 24 EUR + OpenAI API (GPT-4o): approx. 8-12 EUR + Supabase Free Tier: 0 EUR. Total: approx. 32-36 EUR/month. For comparison: Intercom would cost at least 620 EUR for the same volume. A factor of 17.
Mistake 1: Starting with the most expensive tool. Many companies sign an Intercom or Drift contract before they know how many inquiries their bot receives. Tip: start with an API solution, measure for three months, then decide.
Mistake 2: Setting up the bot without a knowledge base. GPT-4 is smart, but without specific company knowledge, the model hallucinates. 20 well-crafted Q&A pairs matter more than any feature of an expensive platform.
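One lightweight way to wire that company knowledge in is to inject the best-matching Q&A pairs into the system prompt. The pairs below are made-up examples, and real retrieval would usually use embeddings rather than word overlap; this is only a sketch of the idea:

```python
# Hedged sketch: ground the model in curated Q&A pairs instead of
# relying on its general knowledge. Example pairs are illustrative.
QA_PAIRS = [
    {"q": "What does shipping cost?", "a": "Shipping is free from 50 EUR."},
    {"q": "How do I reset my password?", "a": "Use the 'Forgot password' link."},
]

def relevant_pairs(question: str, pairs=QA_PAIRS, top_k: int = 3) -> list:
    """Rank pairs by naive word overlap with the incoming question."""
    words = set(question.lower().split())
    ranked = sorted(pairs, key=lambda p: -len(words & set(p["q"].lower().split())))
    return ranked[:top_k]

def build_system_prompt(question: str) -> str:
    facts = "\n".join(f"Q: {p['q']}\nA: {p['a']}" for p in relevant_pairs(question))
    return ("Answer using ONLY the facts below. "
            "If they don't cover the question, say you will escalate.\n" + facts)
```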
Mistake 3: No fallback to human support. Every chatbot needs escalation logic. In n8n: an IF node that routes answers with confidence below 70 percent to ticket creation.
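The escalation branch itself is tiny. Here it is as Python rather than an n8n node, with the 70-percent threshold as a parameter:

```python
def route(confidence: float, threshold: float = 0.70) -> str:
    """Mirror of the n8n condition: low-confidence answers become tickets."""
    return "create_ticket" if confidence < threshold else "auto_reply"
```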
Mistake 4: Not saving chat history. Without saved conversations, you can't analyze what customers are asking. We evaluate the top 20 questions monthly.
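Once conversations are stored, the monthly top-20 evaluation is nearly a one-liner. This assumes each stored row carries some normalized question or intent field; the field name here is an illustrative assumption:

```python
from collections import Counter

def top_questions(rows: list, n: int = 20) -> list:
    """Most frequent questions in the stored chat history."""
    return Counter(row["question"] for row in rows).most_common(n)
```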
Mistake 5: Ignoring GDPR. Anyone using OpenAI processes data outside the EU. Sign a DPA with OpenAI. For sensitive industries: European provider or self-hosted LLM.
If you're building a chatbot today, think in modules:
Layer 1 — Frontend (interchangeable): Chat widget via REST API or WebSocket. No lock-in.
Layer 2 — Orchestration (n8n or Make): Business logic in the workflow tool. Routing, context, escalation, CRM integration.
Layer 3 — LLM Backend (interchangeable): OpenAI, Anthropic, Mistral, or self-hosted. Provider switch in 10 minutes.
Layer 4 — Data layer (Supabase): Conversations, feedback, analytics. PostgreSQL, Realtime, Row-Level Security.
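The interchangeability of Layer 3 can be made concrete with a thin adapter interface. This is a sketch: the adapter bodies would wrap the respective SDKs in production and are stubbed out here:

```python
# One call signature for every backend: messages in, answer text out.
from typing import Callable

LLMBackend = Callable[[list], str]

def openai_stub(messages: list) -> str:
    # In production: a wrapper around the openai SDK's chat completion call
    return "stub answer from gpt-4o"

def anthropic_stub(messages: list) -> str:
    # In production: a wrapper around the anthropic SDK's messages call
    return "stub answer from claude"

BACKENDS: dict = {"openai": openai_stub, "anthropic": anthropic_stub}

def answer(provider: str, messages: list) -> str:
    # Swapping providers means changing one config string, nothing else
    return BACKENDS[provider](messages)
```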
GPT-4o: The all-rounder. Strong at conversation, fast, affordable ($2.50/million input tokens). The first choice for most chatbot scenarios.
Claude 3.5 Sonnet: Excels at longer, more nuanced responses. Similar pricing. Great choice for complex explanations.
GPT-4o mini / Claude Haiku: For high-volume FAQ. 5-10x cheaper with slightly reduced quality.
Self-Hosted (Llama, Mistral): Maximum data control. GPU server required (16+ GB VRAM). Makes sense above 50,000+ inquiries/month.
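A simple routing heuristic captures the price/quality trade-off in the list above. The length cutoff is an assumption to tune against your own data, not a recommendation:

```python
def pick_model(question: str, cutoff_words: int = 12) -> str:
    """Route short FAQ-style questions to the cheap model, the rest to GPT-4o."""
    return "gpt-4o-mini" if len(question.split()) <= cutoff_words else "gpt-4o"
```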
Three concrete insights from our practice that you won't find in any comparison article:
Insight 1: The system prompt matters more than the model. We tested the same chatbot workflow with GPT-4o, Claude 3.5 Sonnet, and GPT-4o mini. The quality difference in responses was minimal — as long as the system prompt was precisely crafted. A mediocre model with an excellent prompt beats a top model with a generic prompt. Invest 80 percent of your time in the prompt, not in model selection.
Insight 2: The first 50 conversations are gold. After launch, we read every single chat conversation. Not analyzed automatically — read manually. We discovered three question patterns we hadn't anticipated. We incorporated those patterns into the system prompt. Response quality visibly improved afterward. Automated monitoring is important, but the first few weeks need human attention.
Insight 3: Fewer features, more reliability. Our first chatbot prototype could book appointments, summarize documents, and answer product questions. It was error-prone and response times varied. The current version can only answer product questions and escalate to human support. It's been running flawlessly for weeks. A chatbot that does one thing reliably is more valuable than one that does five things halfway.
Our tip for getting started: define exactly three questions your bot should answer. Not ten, not twenty — three. Build the best bot you can for those. When those three questions are answered correctly 95 percent of the time, expand to five. This incremental approach saves time, money, and frustration.
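The 95-percent gate in that approach is easy to make explicit. A sketch, where `results` would come from manually grading test conversations against the three target questions:

```python
def ready_to_expand(results: list, threshold: float = 0.95) -> bool:
    """True once the graded test conversations clear the accuracy bar."""
    return bool(results) and sum(results) / len(results) >= threshold
```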
Specifically: in most cases, the lean stack is enough. The most common mistake we see with clients is comparing enterprise platforms before they even know how many inquiries their chatbot will handle. Don't do that. Build the simplest working bot first, measure real usage data, and upgrade specifically where the data requires it. That doesn't just save money — it saves time above all and avoids vendor lock-in with providers whose features you may never need.
Further reading: Automate Business Processes | n8n vs. Zapier vs. Make | AI for Business: The Complete Guide
Want to set up an AI chatbot? Talk to us — free initial consultation. We'll look at your use case and recommend the right approach — no sales pressure and no platform lock-in.