AI-Assisted Development: What We Learned After Six Months
by Dries Vincent, Senior Developer
The Initial Excitement
When AI coding assistants first landed, our team had strong opinions. Some developers were convinced robots were coming for our jobs. Others thought it was overhyped autocomplete. I was cautiously curious.
Six months in, the reality is more nuanced than anyone predicted. AI didn't replace developers or fundamentally change how we work. But it did shift things in ways we didn't expect.
What AI Actually Helps With
Let me be specific about where AI shines for us:
Boilerplate code is basically free now. Setting up CRUD endpoints, writing test fixtures, creating component boilerplate—tasks that used to eat up 20 minutes are now 30 seconds. Is this groundbreaking? No. Does it add up? Absolutely.
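To make "boilerplate" concrete, here's the kind of test-fixture factory an assistant drafts in seconds. This is an illustrative sketch, not code from our repo; the `User` shape and its defaults are hypothetical:

```typescript
// Hypothetical User shape, used only for illustration.
interface User {
  id: number;
  name: string;
  email: string;
  active: boolean;
}

// Fixture factory: sensible defaults, overridable per test.
// Mechanical, repetitive code like this is exactly what AI drafts well.
function makeUser(overrides: Partial<User> = {}): User {
  return {
    id: 1,
    name: "Test User",
    email: "test@example.com",
    active: true,
    ...overrides,
  };
}
```

Each test then overrides only what it cares about, e.g. `makeUser({ active: false })`, instead of restating the whole object.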
Context switching hurts less. When jumping into an unfamiliar codebase, asking "what does this function do?" or "how is this pattern used elsewhere?" gets immediate, usually accurate answers. It's like having a colleague who's read every line of code.
Documentation actually gets written. The tedious part of writing docs isn't explaining complex logic—it's documenting the boring stuff. AI handles that. We're shipping better-documented code because the barrier dropped to zero.
Learning new frameworks is faster. When picking up a new library, having something explain conventions and suggest idiomatic patterns accelerates the learning curve. You still need to understand deeply, but the initial ramp-up is smoother.
What AI Doesn't Help With
Just as important—here's where AI falls flat:
Architecture decisions are still on us. AI can suggest patterns, but choosing between approaches requires judgment about tradeoffs, team capabilities, and future needs. No model makes these calls well.
Understanding requirements remains human work. AI can't sit in a client meeting and read between the lines. It can't catch that moment when what they're saying and what they need are different things.
Debugging complex issues rarely gets easier. AI can catch simple bugs, but when you're deep into a race condition or tracking down a subtle state-management issue, you need real understanding, not pattern matching.
Code review can't be delegated. AI might catch style issues, but assessing whether code is maintainable, whether the approach makes sense, whether it fits the system—that's craft, not automation.
The Unexpected Benefits
The surprises weren't technical—they were cultural.
Junior developers are ramping up faster. Not because AI teaches them—we still do that—but because they're spending less time stuck on syntax and more time learning concepts. They're asking better questions.
Code reviews got more interesting. We're spending less time on "you missed a semicolon" and more time on "is this the right approach?" The discussions improved because the tedious stuff got handled automatically.
Our TypeScript adoption accelerated. AI is noticeably better with typed code, so there's now peer pressure to add types. Not because management mandated it, but because developers prefer working with better tools.
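The reason typed code works better is simple: explicit types narrow the space of plausible completions. A minimal contrast, with a hypothetical `LineItem` shape invented for illustration:

```typescript
// With `any`, an assistant (or a reader) has to guess the item shape
// and nothing catches a wrong guess until runtime.
function totalUntyped(items: any): any {
  return items.reduce((sum: number, i: any) => sum + i.price, 0);
}

// With an explicit type, the intent is machine-checkable: the compiler
// (and the assistant) knows exactly which fields exist.
interface LineItem {
  price: number;
  quantity: number;
}

function total(items: LineItem[]): number {
  return items.reduce((sum, i) => sum + i.price * i.quantity, 0);
}
```

Note that the untyped version silently ignores `quantity`; with types, that kind of omission is far more likely to surface in review or in the editor.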
The Unexpected Challenges
But there are real downsides we didn't anticipate.
Over-reliance is real. We caught a developer committing AI-generated code they didn't fully understand. It worked, but it was unnecessarily complex. Now we have a team rule: if you can't explain it, you don't commit it.
Quality variation is frustrating. Sometimes AI suggestions are brilliant. Sometimes they're subtly wrong in ways that aren't immediately obvious. You need constant vigilance, which is mentally taxing in a different way.
Context limits matter. AI doesn't understand your whole system. It sees the file you're working on, maybe nearby files. It doesn't understand the architectural decisions made six months ago or the gotchas lurking in that other service.
How We Use It Now
Our team settled into some patterns:
We use AI heavily for:
- Initial code scaffolding
- Writing tests for straightforward functions
- Refactoring well-defined changes
- Explaining unfamiliar code
- Generating documentation
We don't use AI for:
- Designing system architecture
- Making library/framework choices
- Anything security-critical without thorough review
- Complex business logic without deep understanding
And critically: everything gets reviewed. AI-generated code isn't special—it's just another input that needs human judgment.
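"Tests for straightforward functions" means pure functions like the one below. The function and its cases are a hypothetical sketch of the pattern, not our actual code; the point is that a human still reviews every generated case before commit:

```typescript
// A straightforward pure function: the kind we happily let AI draft tests for.
function slugify(title: string): string {
  return title
    .toLowerCase()
    .trim()
    .replace(/[^a-z0-9]+/g, "-")  // collapse non-alphanumeric runs to one dash
    .replace(/^-+|-+$/g, "");     // strip leading/trailing dashes
}

// Representative AI-drafted cases, human-reviewed before commit.
const cases: Array<[string, string]> = [
  ["Hello World", "hello-world"],
  ["  Already--slugged  ", "already-slugged"],
  ["C++ & Rust!", "c-rust"],
];
for (const [input, expected] of cases) {
  if (slugify(input) !== expected) {
    throw new Error(`slugify(${JSON.stringify(input)}) !== ${expected}`);
  }
}
```

The review step matters even here: generated cases tend to over-cover the happy path and miss edge cases like empty strings, so we add those by hand.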
Impact on Productivity
Here's the honest assessment: we're probably 15-20% faster on most tasks. Not the 10x improvement some claimed, but meaningful enough that going back feels noticeably slower.
But "faster" isn't the full story. We're also making different tradeoffs. Spending less time on rote coding means more time on design, testing, and documentation. The work shifted toward higher-value activities.
What This Means for Developers
Are junior developers in trouble because AI can write simple code? We're not seeing that. What we need from juniors is the same as always: curiosity, ability to learn, communication skills, and problem-solving ability. AI doesn't change that.
Are senior developers less valuable? The opposite. Judgment matters more when you're evaluating AI suggestions. Experience matters more when decisions can be made faster. The role shifted slightly, but the value increased.
Looking Forward
AI coding tools are improving fast. Uncomfortably fast, honestly. What was impossible six months ago is routine now. But here's what I expect won't change:
Software development is fundamentally about understanding problems and making good decisions. The code is just the artifact. AI can help generate that artifact faster, but it can't replace understanding the problem.
The developers who'll struggle are those who were already just typing code without thinking. The developers who'll thrive are those who think deeply, communicate well, and use AI as leverage for their judgment rather than a replacement for it.
We're still figuring this out. Every few weeks the capabilities shift and we recalibrate how we work. But six months in, I'm cautiously optimistic. AI didn't replace developers. It gave us a powerful, occasionally frustrating, undeniably useful tool that makes some parts of our job better.
That's not the revolution some predicted, but it's the evolution that's actually useful.