AI put us in sixth gear. March says drop to third.
March 2026 was quite a month for AI news. I'm not surprised!
- Anthropic leaked Claude Code's source code through a misconfigured debug file.
- Copilot injected ads into 1.5 million pull requests.
- OpenAI's Codex exposed a flaw that enabled GitHub user token theft.
- Axios (100M weekly downloads) was hijacked through one compromised maintainer account.
- LiteLLM (95M monthly downloads) got backdoored through its own security scanner.
Not one of these was caused by AI going rogue. Every single one was caused by a human under pressure, pushed by their role to move faster than their better judgment could keep up.
The gap nobody wants to talk about
Following up on my recent post about the Anthropic model leak, I keep coming back to the same conclusion: AI isn't creating the vulnerability. Instead, it's exposing one that was always there and making it wider every day.
The engineers building these systems are expected to ship at a pace that leaves no room for thorough testing and quality assurance. To keep up, the developers integrating AI into their workflows are being handed extraordinary power without a corresponding understanding of what the code they ship actually does. And the security practices that should be catching these failures are themselves becoming attack surfaces.
Here's the uncomfortable part. I feel it too.
The volume of information I'm processing daily through AI has outpaced what I can hold in my human head. Keeping a mental map of everything I'm building, testing and coordinating is becoming genuinely difficult. Nobody is demanding I match this speed. But the availability of that speed itself is the pressure. When everything around me accelerates like this, staying thorough feels like a risk.
Why is this really happening?
I believe this is the result of a giant gap between the hype and reality.
On one side: sky-high promises about what AI can do, workforce reductions sold as progress, and a narrative that speed is the only competitive advantage that matters.
On the other: the reality that planning, understanding, governing and correcting take time. Human time. Slow, unglamorous, essential time.
Developers who take that time, who review, own and govern their code in full detail, are starting to risk their jobs. Because there's always someone willing to ship faster, skip the checks and deliver the illusion of productivity. And too many decision-makers can't tell the difference between speed and recklessness until the breach is already in the headlines.
This is the cycle: companies pressure teams to move at AI speed. Teams cut corners to survive. Corners become attack surfaces. Attack surfaces become incidents. Incidents become the exact headlines those companies were hoping AI would help them avoid.
The uncomfortable truth
Nobody wants to slow down. I don't want to slow down. But March just showed us, repeatedly, what happens when the entire ecosystem decides that speed is more important than soundness.
In my own work, I'm fighting that pull by protecting the one thing AI can't replace: the time to think. If you're leading a team right now, the most radical thing you can do is give your people permission to be thorough. Speed without judgment isn't an advantage. It's simply a delayed liability.
About the author
Yas Dalkilic