
The enthusiasm for artificial intelligence is enormous. Every second company in the DACH region is planning AI projects, but according to a Bitkom study from 2024, 62 percent of these projects fail before they reach production. Not because of the technology. The models are powerful, the tools are mature, the costs have dropped. The projects fail due to three avoidable strategic mistakes that we recognize in almost every client project.
These three mistakes are not new. They have been happening for years, in ever-new variations. But anyone who knows them has an enormous advantage — because the solution in all three cases is simpler than most people think.
"We're building a completely autonomous supply chain!" This sentence is the beginning of the end for many AI projects. The ambition is understandable: the board saw at a conference what is theoretically possible, the CTO secured a budget, the consulting firm packed a vision into 80 slides. And then, for six months, nothing visible happens.
What specifically goes wrong: Large AI projects have too many dependencies. They need data integration across multiple systems, change management in several departments, stakeholder alignment on three levels, and clean data from sources that haven't been cleaned up in years. Any single dependency can block the project; with five of them, a standstill is practically guaranteed.
There is also a psychological factor: Large projects deliver results late. If no measurable benefit is visible after six months, the budget gets cut or questioned. If there is still no ROI after twelve months, the project is shut down. The investment is lost, and, worse still, the organization's trust in AI is damaged for years. The phrase "We already tried that, it didn't work" becomes the standard response to any future AI proposal.
What works instead: Start small. Automate a single process, measure, learn, scale. At Exasync, we started with documentation — the simplest and lowest-risk step. We had our own organizational structure and processes documented by AI agents. It cost nothing, risked nothing, and laid the foundation for everything that followed.
The product staircase we have recommended in every client project since then:
At Exasync, we have now reached stage 4. Our AFK system enables 50 AI agents to work autonomously — even when founder Bodo Buschick is offline. The B-Drone, a dedicated mini PC, executes tasks around the clock. But we didn't reach this point in month one — it took systematic development over months. Each stage prepared the next one.
Subscribing to ChatGPT is not an AI strategy, just as a hammer is not an architecture strategy. Nevertheless, this is exactly the second most common mistake: buying a tool and hoping it will solve problems on its own.
The reality: Companies subscribe to ChatGPT Enterprise for 30 euros per user per month and give all employees access. After three months, ten percent of employees use it regularly. The others tried it twice, once to generate a joke, once to rephrase an email, and then forgot about it. Management sees the monthly invoice of 1,500 euros, sees no measurable benefit, and concludes: "AI doesn't work for us."
The mistake isn't the tool. The mistake is that nobody defined which problem the tool should solve. An LLM like ChatGPT is a universal tool: it can do practically anything, and precisely because of that, it often ends up doing nothing. Employees don't know what to use it for and experiment aimlessly. What's missing are use cases, training, workflows, and clear metrics for what success means.
What works instead: First understand the process, then choose the right solution. Sometimes a simple Python script with three regular expressions is enough. Sometimes you need an autonomous agent with access to ten APIs. And sometimes the honest answer is: AI doesn't help here; you need a better form or a clearer process.
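To make the "simple script" end of the spectrum concrete, here is a minimal Python sketch. The field names and patterns are invented for illustration and are not taken from any client project:

```python
import re

# Hypothetical example: extracting order data from plain-text emails
# with three regular expressions -- no LLM involved.
ORDER_NO = re.compile(r"Order\s*#?\s*(\d{6})")
QUANTITY = re.compile(r"(\d+)\s*(?:pcs|pieces|units)")
DEADLINE = re.compile(r"by\s+(\d{2}\.\d{2}\.\d{4})")

def parse_order(text: str) -> dict:
    """Return whichever of the three fields the patterns can find."""
    fields = {"order_no": ORDER_NO, "quantity": QUANTITY, "deadline": DEADLINE}
    return {name: (m.group(1) if (m := rx.search(text)) else None)
            for name, rx in fields.items()}

email = "Please confirm Order #482910: 250 pcs by 15.03.2025."
print(parse_order(email))
# {'order_no': '482910', 'quantity': '250', 'deadline': '15.03.2025'}
```

If a pattern doesn't match, the field simply stays `None`: deterministic, testable, and free to run.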
At Exasync, we evaluate every process individually. Our tech stack — n8n as workflow engine, Supabase as database, Claude as LLM — is modular. Not every task needs an AI agent. Our order automation project for Welzhofer runs 70 percent on rule-based logic on a dedicated VM. Only the exception cases are handled by AI. That's not a lack of AI usage. That's efficient architecture.
A practical checklist for tool selection:
AI doesn't replace people — it frees them. Anyone who doesn't communicate this creates resistance instead of enthusiasm. And resistance is the most reliable project killer there is. No algorithm in the world survives the passive resistance of a workforce that feels threatened.
What specifically happens: The IT department implements an AI system for order entry. It works technically flawlessly. But the clerks who have been manually entering orders for 15 years see the system as a threat to their jobs. They find reasons why the AI isn't reliable (and at 95 percent accuracy, there are always edge cases you can highlight). They work around the system. They continue to enter orders manually and then enter them into the AI system — double work that makes the system slower instead of faster. After three months, the numbers show: no efficiency gains. The project is shut down.
This scenario is not an exaggeration. We have seen it in variations across three client projects. And in every case, the technology wasn't the problem — it was the lack of involvement of the people affected.
What works instead: Change management is not a nice-to-have and not a soft-skill topic you delegate to HR. It is the difference between an AI project that ends up in a drawer after three months and one that transforms the entire organization. Three concrete measures:
At Exasync, we face a special version of this problem: Our employees are AI agents. Nevertheless, change management is relevant — because our clients have human teams that need to work with our automations. That's why we never deliver a black box. Every automation has a transparent dashboard (via Supabase), a clear escalation rule, and a human review instance. Everything is visible in OrgSphere: which agent is currently working on which task, how far the progress is, where errors occur. Not because the AI needs this transparency, but because the people do.
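A clear escalation rule can be as small as a confidence threshold. The threshold and field names below are assumptions for illustration, not the actual OrgSphere logic:

```python
# Hedged sketch of an escalation rule: results below a confidence
# threshold go to a human review queue instead of being auto-applied.
# The 0.90 threshold and the dict fields are illustrative assumptions.

CONFIDENCE_THRESHOLD = 0.90

def route(result: dict, human_queue: list, auto_queue: list) -> str:
    """Append the result to the matching queue and report the decision."""
    if result["confidence"] >= CONFIDENCE_THRESHOLD:
        auto_queue.append(result)
        return "auto"
    human_queue.append(result)
    return "human_review"

human, auto = [], []
print(route({"task": "t1", "confidence": 0.97}, human, auto))  # auto
print(route({"task": "t2", "confidence": 0.62}, human, auto))  # human_review
```

The rule is trivially auditable, which is the point: the people working alongside the automation can see exactly when and why a task lands on their desk.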
Start small. Measure quickly. Communicate transparently.
These are not secrets. They are fundamental principles that are harder to implement consistently in practice than they sound. The temptation to think big is real, especially when budget is available and the board expects results. But anyone who knows these three mistakes and consciously avoids them stands a far better chance of staying out of the 62 percent who fail.
A realistic timeline for a first AI project in an SME:
| Phase | Duration | Result |
|---|---|---|
| Process selection | 1 week | A specific process with measurable time investment |
| Prototype | 2–3 weeks | Working prototype with 80+ percent coverage |
| Pilot operation | 2–4 weeks | Parallel operation old/new, employee feedback |
| Production | From week 6 | Old process replaced, monitoring active |
| ROI verification | Weeks 8–10 | Solid numbers for the next project |
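The ROI verification in the last phase is usually plain arithmetic. All numbers below are invented placeholders; substitute your own measurements:

```python
# Illustrative ROI calculation for a small automation project.
# Every figure here is made up for the example.

hours_saved_per_week = 10        # measured in the pilot phase
hourly_cost_eur = 45             # fully loaded labor cost
project_cost_eur = 8_000         # one-off build cost
running_cost_eur_per_month = 150 # hosting, API calls, maintenance

monthly_saving = hours_saved_per_week * 4.33 * hourly_cost_eur  # ~4.33 weeks/month
net_monthly = monthly_saving - running_cost_eur_per_month
payback_months = project_cost_eur / net_monthly

print(f"Monthly saving: {monthly_saving:.0f} EUR")
print(f"Payback after {payback_months:.1f} months")
```

If the payback period lands well inside the ten-week timeline's first year, the numbers carry the case for the next, larger project on their own.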
Ten weeks from idea to verified ROI. This is not wishful thinking; it's the reality we experience at Exasync with every client project. As a bootstrapped Estonian startup with a single founder, we reached 10,000 euros in revenue within three months, not despite this iterative approach, but because of it.
The most important sentence we tell every client: "You don't need a perfect plan. You need a working prototype." A prototype that covers 80 percent in three weeks beats a strategy that promises 100 percent in six months. Because the 100 percent never comes, while the 80 percent saves time and money from day one.
Schedule a free initial consultation — an honest assessment in 30 minutes, no sales pitch. Further reading: 5 Processes Every SME Can Automate Immediately | Industry Solutions.