AI & Automation

AI in Customer Support: What Works, What Doesn't, and What's Next

By Felix Maru · March 25, 2026 · 7 min read

Every support tool vendor is pitching AI right now. "Resolve 80% of tickets automatically." "Cut response times by 90%." "Replace your Tier 1 team." The promises are enormous. The reality is more nuanced — and if you deploy AI the wrong way, you'll lose customer trust faster than you'll save on headcount.

I've spent the past year integrating AI into real support workflows at xFusion. Not as a science project, but as a daily operational tool handling thousands of tickets across multiple client accounts. Here's what I've learned about what AI actually does well, where it falls flat, and where all of this is heading.

What AI Actually Does Well in Support Today

Let's start with the wins, because they're real. AI isn't magic, but when you point it at the right problems, the impact is immediate and measurable.

The pattern is clear: AI excels at tasks that are repetitive, pattern-based, and don't require emotional intelligence. It's a power tool, not a replacement worker.

What Doesn't Work (and Why Customers Hate It)

Now the uncomfortable part. Some of the most heavily marketed AI use cases in support are the ones that damage customer relationships the most.

Fully automated replies. This is the big one. Companies deploy chatbots or auto-responders that send AI-generated answers without human review, and customers can tell. The responses are technically accurate but miss context. They answer the question that was asked but not the question that was meant. A customer writes "I've been charged twice and I'm furious" and gets back a calm, robotic paragraph about refund policies. Technically correct. Emotionally tone-deaf.

Complex troubleshooting. AI can handle "How do I reset my password?" It cannot handle "My Shopify integration broke after I updated my theme, but only on product pages with variants, and only when the customer is logged in." These multi-layered, context-dependent issues require a human who can ask follow-up questions, form hypotheses, and test solutions iteratively.

Empathy. AI can mimic empathetic language — "I understand how frustrating this must be" — but customers increasingly recognize it as hollow. When someone has a genuinely bad experience, they want to feel heard by a person, not processed by a system. The uncanny valley of AI empathy is real, and crossing it erodes trust.

I've seen teams roll back fully automated AI responses within weeks of deploying them. CSAT scores drop, complaints about "talking to a robot" spike, and the time saved on automation gets eaten by damage control.

The "AI Drafts, Human Sends" Model

This is the approach I've built at xFusion, and it's the sweet spot I keep coming back to. The principle is simple: AI does the heavy lifting, but a human always has the final say before anything reaches the customer.

Here's how it works in practice:

  1. A ticket arrives and is automatically classified by AI — tagged with category, priority, sentiment, and suggested playbook
  2. AI drafts a response based on the ticket content, customer history, and relevant KB articles
  3. The agent reviews the draft. They might use it as-is (maybe 20% of the time), modify it (60%), or discard it and write from scratch (20%)
  4. The agent sends the response. Their name is on it. They own the interaction.
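The four steps above can be sketched in code. This is a minimal illustration, not the actual xFusion pipeline: the `classify()` and `draft()` functions below use keyword rules as stand-ins for real model calls, and only the routing around them — AI tags and drafts, a human decides — is the point.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    body: str
    category: str = ""
    priority: str = "normal"
    sentiment: str = "neutral"
    draft: str = ""

def classify(ticket: Ticket) -> Ticket:
    """Stand-in for the AI classifier: tag category, priority, sentiment."""
    text = ticket.body.lower()
    if "charged" in text or "refund" in text:
        ticket.category, ticket.priority = "billing", "high"
    elif "password" in text:
        ticket.category = "account"
    else:
        ticket.category = "general"
    if any(w in text for w in ("furious", "angry", "unacceptable")):
        ticket.sentiment = "negative"
    return ticket

def draft(ticket: Ticket) -> Ticket:
    """Stand-in for the AI drafter (ticket content + history + KB context)."""
    ticket.draft = f"[{ticket.category}/{ticket.priority}] Suggested reply..."
    return ticket

def agent_review(ticket: Ticket, action: str, edited: str = "") -> str:
    """The human always has the final say: use, modify, or rewrite."""
    if action == "use":
        return ticket.draft
    if action == "modify":
        return edited or ticket.draft
    return edited  # "rewrite": discard the draft entirely

t = draft(classify(Ticket("I've been charged twice and I'm furious")))
print(t.category, t.priority, t.sentiment)  # billing high negative
reply = agent_review(t, "modify", edited="Hi - so sorry about the double charge.")
```

Note that nothing in this flow sends anything: the pipeline ends at `agent_review`, and whatever it returns goes out under the agent's name.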

The key insight is that drafting is the slowest part of support, not decision-making. An experienced agent can evaluate a draft in 15 seconds but might take 3 minutes to write one from scratch. AI eliminates the blank-page problem and lets agents focus on what they're actually good at: judgment, nuance, and connection.

Think of AI as a very fast, very knowledgeable junior agent who writes first drafts. You wouldn't let a junior agent send replies without review. Don't let AI do it either.

The Tools I Use and How They Fit Together

I'm not loyal to any single AI tool. I use whatever works best for each specific task, and I orchestrate them with n8n to create workflows that are greater than the sum of their parts.

The total cost of this stack is remarkably low — well under $200/month for API usage across all client accounts. The ROI is measured in hours saved per day, not minutes.

Building Trust: Introducing AI Without Scaring Your Team

This is the part nobody talks about. You can build the most brilliant AI workflow in the world, but if your support team doesn't trust it, they won't use it. And if they think AI is there to replace them, they'll actively resist it.

Here's the approach that worked at xFusion:

The agents who feared AI the most became its biggest advocates once they realized it handled the boring parts and let them focus on the interesting, challenging tickets they actually enjoy.

What's Next: Agentic Workflows and Proactive Support

The current "AI drafts, human sends" model is a bridge. It's the right approach today because AI isn't reliable enough for full autonomy. But the technology is improving fast, and I'm already building toward the next phase.

Agentic workflows are the next evolution. Instead of AI handling one step (draft a response), it handles an entire ticket lifecycle: classify, research, draft, check for policy compliance, and — for straightforward cases — send automatically while flagging complex ones for human review. The AI doesn't just suggest; it acts, within defined boundaries.
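The "acts within defined boundaries" idea can be expressed as a small routing function. This is an illustrative sketch, not a production gate: the `AUTONOMOUS` category set and the `check_policy()` rules are assumptions I've made up for the example, and a real compliance check would be far more thorough.

```python
# Categories the AI may resolve end-to-end (illustrative; start narrow).
AUTONOMOUS = {"order_status"}

def check_policy(draft: str) -> bool:
    """Toy compliance gate: block drafts that promise refunds or touch legal topics."""
    banned = ("refund approved", "legal", "guarantee")
    return not any(phrase in draft.lower() for phrase in banned)

def handle(category: str, draft: str) -> str:
    """Auto-send only inside the autonomous boundary AND past the policy gate."""
    if category in AUTONOMOUS and check_policy(draft):
        return "auto_sent"
    return "human_review"  # everything else is prepared, not sent

print(handle("order_status", "Your order shipped yesterday."))  # auto_sent
print(handle("billing", "Refund approved, no worries."))        # human_review
```

The important property is that both conditions must hold: a ticket in an autonomous category that trips the policy check still escalates to a human.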

I'm prototyping this now with n8n and Claude. For a narrow category of tickets — order status inquiries where the answer is a simple lookup — the AI handles everything end-to-end. For everything else, it prepares and a human decides. The goal is to gradually expand the "autonomous" category as accuracy improves, while keeping humans firmly in control of anything that requires judgment.
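One way to "gradually expand the autonomous category as accuracy improves" is to let a category graduate to auto-send only after enough reviewed drafts went out unmodified. The sketch below is a hypothetical mechanism, not what runs at xFusion; the 95% accuracy and 200-sample thresholds are illustrative placeholders.

```python
from collections import defaultdict

class AutonomyTracker:
    """Promote a ticket category to autonomous handling once human reviewers
    have accepted enough of its drafts unmodified."""

    def __init__(self, min_accuracy: float = 0.95, min_samples: int = 200):
        # category -> [drafts sent unmodified, total drafts reviewed]
        self.stats = defaultdict(lambda: [0, 0])
        self.min_accuracy = min_accuracy
        self.min_samples = min_samples

    def record(self, category: str, sent_unmodified: bool) -> None:
        ok, total = self.stats[category]
        self.stats[category] = [ok + int(sent_unmodified), total + 1]

    def is_autonomous(self, category: str) -> bool:
        ok, total = self.stats[category]
        return total >= self.min_samples and ok / total >= self.min_accuracy
```

Because the agent review step already records whether each draft was used as-is, modified, or rewritten, this data comes for free from the human-in-the-loop workflow — the bridge model generates the evidence that justifies (or vetoes) the next step toward autonomy.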

Proactive support is the other frontier. Instead of waiting for customers to report problems, AI monitors for patterns that predict issues — a spike in checkout errors, a product page loading slowly, a shipping delay affecting a region — and triggers outreach before the customer even notices. The best support ticket is the one that never gets created.
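The pattern-monitoring piece can be as simple as comparing the current reading against a rolling baseline. A minimal sketch, assuming a metric like checkout error rate sampled at regular intervals; the window size and 3x threshold are illustrative, not tuned values.

```python
from collections import deque

class SpikeDetector:
    """Flag a reading as a spike when it exceeds a multiple of the rolling mean."""

    def __init__(self, window: int = 12, multiplier: float = 3.0):
        self.history = deque(maxlen=window)  # recent baseline readings
        self.multiplier = multiplier

    def observe(self, error_rate: float) -> bool:
        """Record one reading; return True if it spiked vs. the baseline."""
        spike = (
            len(self.history) == self.history.maxlen  # baseline is warmed up
            and error_rate > self.multiplier * (sum(self.history) / len(self.history))
        )
        self.history.append(error_rate)
        return spike

detector = SpikeDetector()
for _ in range(12):
    detector.observe(0.01)          # normal checkout error rate
print(detector.observe(0.05))       # True: 5x the baseline triggers outreach
```

In an orchestration tool like n8n, a detector like this would sit on a schedule trigger and, on `True`, kick off the outreach workflow before any customer files a ticket.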

But here's the number I keep coming back to: AI handles the 60% of support work that's repetitive, predictable, and pattern-based. Humans handle the 40% that requires judgment, creativity, and genuine empathy. That ratio will shift over time, but the human 40% isn't going away — it's becoming more valuable. The agents who thrive will be the ones who are excellent at the things AI can't do: building relationships, making judgment calls, and turning frustrated customers into loyal ones.

Ready to Bring AI Into Your Support Workflow?

If you're thinking about integrating AI into your support operations, start with the "AI drafts, human sends" model. It delivers immediate ROI without the risk of fully automated responses. Build trust with your team, measure the impact, and expand gradually. Need help designing the workflow or building it in n8n? Let's talk.


Comments

Amara Osei · March 25, 2026

The "AI drafts, human sends" framing is perfect. We tried full automation with our chatbot last year and CSAT dropped 12 points in the first month. Rolled it back and switched to draft suggestions — agents are happier, customers are happier, and we're still saving 30%+ on handle time. Wish more vendors were honest about the limitations instead of pushing full automation.

Jason Riemer · March 26, 2026

Really interested in your n8n orchestration setup. We're using Zendesk and have been trying to pipe tickets through Claude for classification but the webhook setup has been tricky. Would you be open to sharing the n8n workflow template? Also curious how you handle rate limiting when ticket volume spikes — do you queue them or batch process?