Immersion in a codebase is a seriously underrated productivity modifier. Every line of code I write brings me closer to a complete understanding of each moving part and how they interact. Reviewing code does help, albeit to a much lesser extent.
This is why I don't see much, if any, value in using AI to write my code for me. It detracts from my overarching view of the whole system. If I use an LLM to write some code, I have to review it to make sure I understand what it's doing and to verify that there are no issues. Not only is this less enjoyable for me, it is overall less beneficial to the codebase, and as a consequence, to whoever owns the application.
The tech industry has a high turnover rate for developers. I can't cite any hard data on it, and I can't be bothered to do a deep dive, but I've seen it in person. It's no secret that it's far easier to make more money by hopping jobs than through positive performance reviews and raises. This is very short-sighted, because it can take years for a developer to gain a significant understanding of a codebase, its connected parts, and even the cross-team dynamics that help solve complex issues.
Anything that detracts from these deeper insights hurts the codebase. The shallower your understanding, the harder it is to implement complex new features. It takes longer, and it hurts code scalability. Making things scalable is difficult, and sometimes even futile because we just can't predict the future, but we should still try. A deeper understanding of the codebase gives us a better shot at getting it right. We should avoid small gains in other areas when they hinder this goal.
LLMs are an amazing technology, and I am very excited for the future. I am also very concerned by the trend of LLMs being pushed as tools to write code for us. Maybe some day they will be able to hold the full context of a complex application, and human understanding of the generated code will be irrelevant. When that day comes, the entire codebase can be regenerated whenever a bug needs fixing or a feature needs adding. But as far as I can tell, there is no actual developer bottleneck in writing code right now; writing it ourselves pays dividends in a deeper understanding of anything that comes up later. We don't really need AI code generation at this time, and it is not a productivity modifier.
I do see productivity modifiers in the non-coding use cases for LLMs. I love bouncing ideas off of ChatGPT. It kicks my brain into overdrive and allows me to make better decisions. I also love being able to use LLMs as a replacement for official documentation. Some docs are pretty good, but some are terribly written, and some are missing important information. LLMs can not only provide answers that may not be present in the docs, but also let me ask follow-up questions. It's not perfect, and it makes plenty of mistakes, but any half-decent developer knows better than to follow it blindly.
There are certainly compelling reasons to use AI to complement my workflow, but code generation is rarely one of them. Perhaps in cases where an obscene amount of boilerplate is being written, but as with all things, LLM code generation does not happen in a vacuum. It's a value proposition that must be weighed against the not-so-obvious downsides. The seconds it saves you probably add up to something arguably significant, but at what cost?