
AI Coding Tools: The Productivity Multiplier With an Asterisk

7 min read

My AI coding tool multiplied my productivity by 10x! Unfortunately, it also multiplied my debugging time by 10x. Net result: 1x.

AI coding assistants have gone from novelty to necessity in what feels like no time at all. I've been using them daily for a while now, and I'll be honest: they've genuinely changed how I work. But the conversation around these tools has become so polarised that I think we're missing the nuance. They're neither the death of programming nor the silver bullet that some claim. The truth, as always, is somewhere in the middle—and that middle comes with an asterisk.

Let me be upfront: I'm bullish on AI coding tools. They've made me faster, they've helped me explore unfamiliar codebases, and they've reduced the friction of tedious boilerplate work. But I've also seen them introduce subtle bugs, generate plausible-looking code that's fundamentally wrong, and create a false sense of confidence that can be dangerous. The productivity multiplier is real—it just comes with conditions.

Where AI Coding Tools Genuinely Shine

There are areas where these tools are genuinely transformative, and I think it's important to acknowledge that before we get into the caveats.

Boilerplate and scaffolding: This is where AI tools earn their keep. Writing tests, generating CRUD operations, setting up project structures, writing configuration files—this is work that's necessary but not creative. AI handles it brilliantly, and it frees up mental energy for the problems that actually matter. I've gone from spending 30 minutes on test scaffolding to getting it done in a couple of minutes.
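
To make the scaffolding point concrete, here's the kind of table-driven test skeleton an AI tool will produce in seconds (a hypothetical `slugify` function stands in for whatever you actually wrote; the names are mine, not from any real project):

```python
import unittest

def slugify(text: str) -> str:
    # Hypothetical function under test, standing in for your real code.
    return "-".join(text.lower().split())

class TestSlugify(unittest.TestCase):
    # The table-driven skeleton is the part AI generates instantly;
    # choosing which cases belong in the table is still your job.
    CASES = [
        ("Hello World", "hello-world"),
        ("  spaced   out  ", "spaced-out"),
        ("already-slugged", "already-slugged"),
    ]

    def test_cases(self):
        for raw, expected in self.CASES:
            with self.subTest(raw=raw):
                self.assertEqual(slugify(raw), expected)
```

Run it with `python -m unittest`. The structure is mechanical; the judgement about coverage isn't.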

Exploring unfamiliar territory: When I need to work with a library or framework I haven't used before, AI tools are incredibly useful. They can generate working examples, explain patterns, and help me get up to speed faster than reading documentation alone. It's like having a knowledgeable colleague available at all hours—one who never gets annoyed at basic questions.

Code translation and refactoring: Moving code between languages or modernising legacy code is tedious work that AI handles surprisingly well. It understands patterns across languages and can translate idioms appropriately. I've used it to convert Python scripts to Go and JavaScript to TypeScript with minimal manual intervention.

Documentation and comments: Let's be honest—most developers don't enjoy writing documentation. AI tools can generate reasonable documentation from code, which at least gives you a starting point to refine. It's not perfect, but it's better than the nothing that most codebases have.

The Asterisk: Where Things Get Complicated

Here's where the nuance matters, and where I've seen teams get into trouble by trusting AI tools too much.

Subtle bugs and edge cases: AI-generated code often looks correct at first glance. It follows patterns, uses appropriate naming conventions, and even includes error handling. But it can miss edge cases that an experienced developer would catch—off-by-one errors, race conditions, incorrect assumptions about data formats. The code compiles, the tests pass (if the AI wrote those too, they might test the wrong things), and the bug only surfaces in production. I've been bitten by this more than once.
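
Here's a contrived but representative sketch of the failure mode: a chunking helper that reads as correct, passes a happy-path test, and silently drops data on uneven input.

```python
def chunk_naive(items, size):
    # Looks plausible: clean slicing, sensible names. But integer
    # division means a final partial chunk is silently dropped.
    return [items[i * size:(i + 1) * size] for i in range(len(items) // size)]

def chunk_fixed(items, size):
    # Stepping the range by `size` covers the trailing partial chunk too.
    return [items[i:i + size] for i in range(0, len(items), size)]

print(chunk_naive([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4]] -- the 5 is gone
print(chunk_fixed([1, 2, 3, 4, 5], 2))  # [[1, 2], [3, 4], [5]]
```

A test on an even-length list would pass both versions, which is exactly why the bug survives to production.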

Over-confidence in generated code: There's a psychological trap where AI-generated code feels more trustworthy because it looks clean and well-structured. But looking clean and being correct are different things. I've caught myself skimming AI-generated code less carefully than I would code written by a junior developer, which is exactly backwards. The AI is essentially a very fast junior developer who never pushes back or asks clarifying questions.

Architectural decisions: AI tools are great at the tactical level—writing functions, implementing algorithms, handling individual tasks. But they're not great at strategic decisions—choosing the right architecture, designing systems that scale, making trade-offs between competing priorities. If you let AI make architectural choices, you'll end up with a codebase that works but isn't designed for the long term. AI optimises for the immediate context, not for the bigger picture.

Security considerations: This one worries me. AI tools can generate code with security vulnerabilities—SQL injection, insecure defaults, improper input validation—that look perfectly fine to someone who isn't specifically looking for security issues. If you're building anything that handles user data or financial information, you need to review AI-generated code with extra scrutiny. The AI doesn't understand the security context of your application.
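
The SQL injection case is worth seeing side by side, because the vulnerable version genuinely looks fine. A minimal sketch using Python's built-in `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name):
    # String interpolation: input like "' OR '1'='1" rewrites the query
    # and matches every row.
    return conn.execute(f"SELECT * FROM users WHERE name = '{name}'").fetchall()

def find_user_safe(name):
    # Parameterised query: the driver treats the value as data, not SQL.
    return conn.execute("SELECT * FROM users WHERE name = ?", (name,)).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # leaks every row
print(find_user_safe("' OR '1'='1"))    # []
```

Both functions compile, both return results for normal input, and only one of them is a breach waiting to happen.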

Understanding vs velocity: This is the most subtle issue. When AI writes code for you, you might ship faster, but you might also understand less. If something breaks at 2am and you didn't write the code, debugging becomes harder. There's real value in understanding every line of your codebase, and AI tools can erode that understanding if you're not careful. I've started treating AI-generated code like I would a pull request from a contractor—I review it properly before it goes in.

How I Actually Use Them

After experimenting with various approaches, here's how I've settled into using AI coding tools effectively:

Use them for the first draft, not the final product: I let AI generate initial implementations, then review and refine. The AI gets me 70-80% of the way there quickly, and I spend my time on the remaining 20-30% that requires actual thinking. This is where the real productivity gain lives—not in eliminating the thinking, but in eliminating the typing.

Always review with fresh eyes: I never accept AI-generated code without reviewing it as carefully as I would a pull request. I read every line, question assumptions, and test edge cases. The few minutes spent reviewing saves hours of debugging later.

Keep context small and specific: AI tools work better with focused, well-defined tasks. Instead of asking "build me a user authentication system," I ask for specific components: "write a function that validates a JWT token and returns the user claims." Smaller tasks mean better output and easier review.
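
For scale, that JWT request is small enough to review line by line. A stdlib-only sketch of what I'd expect back (illustrative only: real code should use a vetted library such as PyJWT, and this handles HS256 and expiry but not audience or issuer checks):

```python
import base64
import hashlib
import hmac
import json
import time

def _b64url(data: bytes) -> str:
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()

def _b64url_decode(seg: str) -> bytes:
    # JWT segments drop base64 padding; restore it before decoding.
    return base64.urlsafe_b64decode(seg + "=" * (-len(seg) % 4))

def make_token(claims: dict, secret: bytes) -> str:
    # Demo helper: sign claims with HS256.
    header = _b64url(json.dumps({"alg": "HS256", "typ": "JWT"}).encode())
    payload = _b64url(json.dumps(claims).encode())
    sig = hmac.new(secret, f"{header}.{payload}".encode(), hashlib.sha256).digest()
    return f"{header}.{payload}.{_b64url(sig)}"

def validate_jwt(token: str, secret: bytes) -> dict:
    """Verify an HS256 token's signature and expiry, then return the claims."""
    header_b64, payload_b64, sig_b64 = token.split(".")
    expected = hmac.new(secret, f"{header_b64}.{payload_b64}".encode(),
                        hashlib.sha256).digest()
    # Constant-time comparison avoids timing side channels.
    if not hmac.compare_digest(expected, _b64url_decode(sig_b64)):
        raise ValueError("bad signature")
    claims = json.loads(_b64url_decode(payload_b64))
    if "exp" in claims and claims["exp"] < time.time():
        raise ValueError("token expired")
    return claims
```

A task this size is reviewable in a few minutes, which is the whole point of keeping the context small.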

Don't let it make architectural decisions: I design the architecture, choose the patterns, and make the strategic decisions. Then I let AI fill in the implementation details. It's the difference between using AI as a tool versus using it as a replacement for thinking.

Test everything independently: If the AI wrote the code AND the tests, the tests might just verify that the code does what the AI thought it should do—not what it actually needs to do. I write or at least review all tests independently of the generated code.
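
What "independently" means in practice: write the assertions from the spec, not from reading the implementation back. A hypothetical example, rounding prices half-up to two decimal places:

```python
from decimal import Decimal, ROUND_HALF_UP

def round_price(value: float) -> float:
    # Imagine this implementation was AI-generated.
    return float(Decimal(str(value)).quantize(Decimal("0.01"),
                                              rounding=ROUND_HALF_UP))

def test_round_price():
    # These assertions come from the requirement ("round half up to 2 dp"),
    # not from the code. A test derived from the implementation would
    # happily bless round(2.005, 2), which returns 2.0 due to float
    # representation.
    assert round_price(2.005) == 2.01
    assert round_price(2.004) == 2.0
    assert round_price(10.0) == 10.0
```

If the implementation and the tests share the same misunderstanding, the tests are just the bug agreeing with itself.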

The Productivity Reality

Here's the honest truth about AI coding tools and productivity: they make experienced developers faster, but they don't make inexperienced developers better. If you understand what good code looks like, AI tools help you produce it faster. If you don't, AI tools help you produce bad code faster.

The studies claiming "40% productivity improvement" or "2x faster development" are measuring speed, not quality. Lines of code per hour is a terrible metric on its own. What matters is working, maintainable, secure code that solves the right problem—and measuring that is much harder.

In my experience, the real productivity gain is more like 20-30% for experienced developers on well-understood tasks. For novel problems, complex debugging, or architectural work, the gain is minimal or sometimes negative, because you spend time correcting the AI's misunderstanding of the problem.

Key Takeaways

AI coding tools are genuinely useful, and I'd struggle to go back to working without them. But they're a multiplier on existing skill, not a replacement for it. The asterisk on the productivity claim is this: you need to know enough to evaluate what the AI gives you, or you're just generating bugs faster.

Use them for what they're good at—boilerplate, exploration, translation, first drafts. Be sceptical of what they produce—review everything, test independently, maintain your understanding of the codebase. And most importantly, don't confuse velocity with progress. Shipping fast only matters if you're shipping something that works.

The best developers I know treat AI tools like a powerful assistant, not an autopilot. They stay in control, make the important decisions themselves, and use AI to handle the parts of coding that don't require creativity or judgement. That's where the real productivity multiplier lives—no asterisk required.