41% of Global Code Is Now AI-Generated. Your Helpdesk Is About to Feel It.
A stat crossed my desk recently that stopped me mid-scroll: roughly 41% of all code written globally is now generated or heavily assisted by AI. Not written by developers with AI suggestions along the way — generated, sometimes by people who have never written a line of code in their lives. The term that's attached itself to this practice is "vibe coding," and by most estimates it now accounts for a meaningful chunk of the software running inside your organization right now.
If you work in IT support — as I do — that number is not an abstract tech industry trend. It is a pipeline of future helpdesk tickets. And most IT teams I talk to are not prepared for what that pipeline looks like.
What Vibe Coding Actually Is
The term was coined by AI researcher Andrej Karpathy in early 2025. The idea is simple: you describe what you want to a language model, the model writes the code, and you deploy it. You are not debugging line by line. You are not reviewing logic flows or checking dependencies. You are trusting that the AI got it right and moving on.
That works — until it doesn't. And when it doesn't, the person who built the thing rarely has the context to diagnose why. So the ticket lands in your queue.
I use Claude and ChatGPT in my own workflows daily. I use them to prototype n8n workflows, draft automation logic, and troubleshoot API integrations. I know exactly what these tools get right and what they get wrong, and the gap between the two is where IT support lives now.
The New Ticket Categories I'm Seeing
There are four ticket types that have become noticeably more common since AI coding tools went mainstream. These are patterns I'm observing across support contexts — not a formal survey, but consistent enough to be worth naming.
1. "I changed nothing and now it's broken."
AI-generated code tends to be brittle in ways that hand-written code isn't. When a human developer writes a function, they usually understand the dependency chain well enough to write defensively. When an LLM writes it, the code often works perfectly under the exact conditions the model assumed — and fails the moment one of those assumptions shifts. A package updates. An API endpoint changes a response format. A date format differs by region. The original developer has no idea why, because they never read the internals. The ticket description is almost always a variant of "it worked last week."
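The brittleness usually lives in silent assumptions. Here's a deliberately simplified sketch of the pattern (function names and formats are my own illustration, not from any real ticket): the first version works perfectly for the input the model assumed, and throws the moment a regional date format shows up; the second is what defensive hand-written code tends to look like.

```python
from datetime import datetime

def parse_report_date_fragile(raw: str) -> datetime:
    # The kind of assumption AI-generated code bakes in silently:
    # works for "MM/DD/YYYY" input, crashes on anything else.
    return datetime.strptime(raw, "%m/%d/%Y")

def parse_report_date_defensive(raw: str) -> datetime:
    # What a human writing defensively tends to do instead:
    # try the formats you know about, fail with a diagnosable error.
    for fmt in ("%m/%d/%Y", "%d/%m/%Y", "%Y-%m-%d"):
        try:
            return datetime.strptime(raw, fmt)
        except ValueError:
            continue
    raise ValueError(f"Unrecognized date format: {raw!r}")
```

Neither version is sophisticated. The difference is that when the second one breaks, its error message tells you where to look; when the first one breaks, the ticket says "it worked last week."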
2. Security issues from code that was never reviewed.
A December 2025 analysis found that AI-co-authored code contains roughly 2.7 times more security issues than human-written code — elevated rates of hardcoded credentials, missing input validation, insecure direct object references. When a non-developer uses a tool like Cursor or Lovable to build an internal portal and deploys it directly to production, nobody reviewed that code. It shows up on your radar when something goes wrong: a login bypass, a data exposure, an API key committed to a public repo. The security tickets are often the worst ones because the builder doesn't know what they built well enough to help you reverse-engineer the problem.
3. Shadow IT at unprecedented scale.
Shadow IT is not new. But vibe coding has dramatically lowered the barrier to entry. Previously, an operations manager who wanted a custom reporting dashboard had to either get IT involved or learn enough to use a no-code platform like Airtable. Now they ask Claude to build one, deploy it on Vercel in twenty minutes, and tell nobody. The tool works. Then it breaks, or it scales poorly, or it handles customer data in a way that violates your security policy — and IT hears about it for the first time when something has already gone wrong.
4. Dependency hell with no author to ask.
Language models are trained on data with a cutoff date. They recommend libraries, packages, and API versions that were current as of that training cutoff — which may be six months or two years behind the present. An AI-generated Node.js script might pull in a package that's been deprecated, has an open CVE, or simply no longer installs cleanly in current environments. When the person who generated the code has no idea what a package.json is, diagnosing a broken dependency tree becomes entirely your problem.
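A useful first triage pass is purely mechanical: read the manifest and flag anything deprecated or loosely pinned before you dig into code. The sketch below assumes a Node-style `package.json` and a hand-maintained denylist; in practice you would feed the denylist from an advisory source, and the package names here are only examples (`request` genuinely is a deprecated npm package).

```python
import json

# Hypothetical denylist for illustration. In a real setup this would
# come from an advisory feed, not a hardcoded set.
DEPRECATED = {"request", "left-pad"}

def flag_suspect_deps(package_json: str) -> list[str]:
    # First-pass triage: surface dependencies that are deprecated, or
    # pinned so loosely that a transparent upgrade could be the real
    # cause of an "I changed nothing" ticket.
    manifest = json.loads(package_json)
    findings = []
    for name, spec in manifest.get("dependencies", {}).items():
        if name in DEPRECATED:
            findings.append(f"{name}: known-deprecated package")
        elif spec.startswith(("^", "~", "*")):
            findings.append(f"{name} {spec}: floating version, may have "
                            "updated underneath the tool")
    return findings
```

Thirty seconds with a script like this often tells you whether you're looking at a code problem or an ecosystem problem, which are very different investigations.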
The challenge with AI-generated code isn't that it fails — it's that when it fails, the person closest to it often has the least context to help you fix it. That asymmetry is new. IT support has never had to account for it at scale before.
How Triage Has to Change
The first-response questions I ask for any software-related ticket have shifted. Two years ago, my intake flow assumed that the person reporting the issue had some relationship with the code — they wrote it, they configured it, they at least read through it. That assumption no longer holds.
Now I add these to my triage script for anything involving custom-built tooling:
- Did AI generate or heavily assist with building this? You'd be surprised how often the answer is yes once people understand the question isn't an accusation.
- Is there documentation, a README, or a prompt log? Some AI coding tools keep session history. If the builder saved their prompts, that's your best chance at reconstructing what the code was intended to do.
- What platform is it running on, and who owns the deployment? Shadow IT tools often live on personal accounts — Vercel, Railway, Render — which means your standard access and rollback procedures don't apply.
- Has anything in the environment changed recently, even outside this tool? Brittle AI code breaks on environmental drift, so a seemingly unrelated update elsewhere can be the actual cause.
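If your ticketing system supports custom fields, the four questions above translate directly into a structured intake record. This is a sketch under my own naming conventions, not any ticketing platform's schema:

```python
from dataclasses import dataclass, field

@dataclass
class CustomToolTicket:
    # Intake record mirroring the four triage questions.
    ai_generated: bool                # did AI generate or heavily assist?
    prompt_log_available: bool        # README, docs, or saved prompts?
    platform: str                     # e.g. "Vercel", "Railway", "on-prem"
    deployment_owner: str             # org account or personal account?
    recent_env_changes: list[str] = field(default_factory=list)

    def escalation_flags(self) -> list[str]:
        # Conditions that predict a longer-than-usual investigation.
        flags = []
        if self.ai_generated and not self.prompt_log_available:
            flags.append("no record of intent: budget extra diagnosis time")
        if "personal" in self.deployment_owner.lower():
            flags.append("personal deployment account: standard access "
                         "and rollback procedures won't apply")
        return flags
```

The point isn't the code; it's that capturing these answers at intake, in a consistent shape, is what lets you spot the patterns across tickets later.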
None of this replaces a proper investigation. But it gets you to the right investigation significantly faster.
What IT Teams Actually Need to Do
The reactive side — better triage — is necessary but not sufficient. The problem compounds if you only deal with it ticket by ticket.
Three things are worth building proactively:
An AI-tool disclosure policy. Not a ban — a disclosure requirement. If someone in your organization builds or deploys internal tooling with AI assistance, they need to log it somewhere: what it does, what data it touches, where it runs. This does not have to be bureaucratic. A shared sheet, a Notion database, a simple form. The goal is to know what exists before it breaks.
A lightweight security review gate for internal deployments. Any tool that handles customer data, credentials, or internal systems should go through a thirty-minute review before production deployment — regardless of who built it or how. The review doesn't need to be exhaustive. Check for hardcoded credentials, validate input handling on any external-facing fields, confirm the deployment account is org-controlled. That's enough to catch the most common AI-code vulnerabilities before they become incidents.
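Part of that thirty-minute review can be automated with rough heuristics. The sketch below is a first pass only, with patterns I've chosen for illustration; a real gate should use a dedicated secret scanner rather than hand-rolled regexes.

```python
import re

# Rough heuristics only. A dedicated secret scanner will do far better;
# these two patterns just catch the most obvious cases.
CREDENTIAL_PATTERNS = [
    # key/secret/password/token assigned a quoted literal of 8+ chars
    re.compile(r"""(api[_-]?key|secret|password|token)\s*[:=]\s*['"][^'"]{8,}['"]""",
               re.IGNORECASE),
    # the characteristic shape of an AWS access key ID
    re.compile(r"AKIA[0-9A-Z]{16}"),
]

def scan_for_hardcoded_credentials(source: str) -> list[int]:
    # Return 1-based line numbers that match a credential heuristic.
    hits = []
    for lineno, line in enumerate(source.splitlines(), start=1):
        if any(p.search(line) for p in CREDENTIAL_PATTERNS):
            hits.append(lineno)
    return hits
```

Even a crude check like this catches the single most common finding in AI-generated code reviews: a working credential pasted straight into the source.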
Runbook templates for AI-generated tool failures. The single most time-consuming part of debugging vibe-coded software is the first thirty minutes, when you're trying to understand what the code is even supposed to do. A short standard template — "what does this tool do, what data does it touch, how is it deployed, what changed recently" — completed by the builder at deployment time saves you that thirty minutes every time something breaks. I've started requiring this at one company I work with, and the reduction in triage time has been noticeable.
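The template can be as small as a YAML file committed next to the tool. Every field name and example value below is my own illustration, not a standard schema; adapt it to whatever intake process you already have:

```yaml
# deployment-runbook.yml: completed by the builder at deployment time.
tool_name: invoice-status-portal        # example value
purpose: >
  One or two sentences on what the tool does and who uses it.
data_touched:
  - customer email addresses            # list every data class it reads or writes
ai_assisted: true                       # generated or heavily assisted by AI?
prompt_log: link-or-path-if-saved       # session history, if the tool kept one
deployed_on: Vercel                     # platform
deployment_owner: org-account           # org account or personal account?
last_known_change: "nothing since deploy"
```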
The Bigger Shift
Every significant wave of democratized software creation has eventually created more IT work, not less. The move from mainframes to personal computers did it. The SaaS explosion did it. The no-code wave did it. Vibe coding is the latest iteration of the same pattern: a tool that lowers the barrier to creation also lowers the barrier to creating problems that need IT to solve.
I'm not arguing against AI code generation. I use it myself, constantly. The productivity gains are real. The ability to prototype something functional in an afternoon rather than a sprint has changed what's possible for small teams. But productivity gains don't eliminate the support surface — they expand it in different directions.
The IT support teams that adapt to this fastest will be the ones that stop treating AI-generated code as a novelty edge case and start building it into their standard operating procedures. The tickets are already there. The question is whether your process is ready for them.
If you're thinking through how to adapt your support workflows or build internal governance for AI-built tooling, reach out — it's something I'm working through in real time and happy to compare notes on.