Agents Are Changing Everything. Learn to Code Anyway.

The Seductive Shortcut

I asked a CTO recently, someone running an engineering team right in the middle of all this, a simple question: how do you know if someone is actually good? He didn’t hesitate. “I don’t know,” he said.

That stopped me. Because it used to be straightforward. You give someone a take-home. They build something real. What came back told you everything. Not because the code was perfect, but because delivering on something like that required thought patterns you can’t fake. You had to understand the framework. You had to make trade-offs under constraints. You had to debug your own mess. The output was the signal.

Agents broke that signal. And everybody knows it, even if nobody’s saying it clearly enough.

Look, I’m not here to argue against agents. I use them. The productivity is real. I’ve watched features ship in forty minutes that would’ve taken a full day two years ago. CTOs are restructuring teams around this. The tooling is maturing fast. That debate is over.

The question that’s bothering me is quieter than that. It’s not “should we use agents?” It’s: what happens to the developers who never really learned to code before they handed it off?

That’s what this post is about.

Skill Atrophy Is Real. But It’s Not the Whole Story

Here’s what’s actually happening. When you delegate coding to an agent consistently, certain skills fade. Anthropic’s own research is direct about this. In a controlled trial with mostly junior software engineers learning a new library, developers using AI for code generation scored 17 percentage points lower on comprehension tests than those who coded manually, with the steepest decline showing up in debugging.¹ The muscle of writing code by hand, holding a mental model of every line you’re producing, weakens when you stop using it. That’s not a hot take. That’s just how skills work.

But that framing misses something.

Senior engineers have been making this exact trade for years. The staff engineer who stops writing most of the code isn’t getting worse. They’re getting better at something with more leverage. System design. Architectural judgment. Knowing which problems are worth solving and which ones are traps. They traded syntax fluency for something harder to learn and harder to replace. Nobody calls that atrophy. They call it growth.

What agents are doing is making that trade available earlier, and to more people. The developer who uses agents well isn’t just shipping faster. They’re being pushed up the abstraction stack. Forced to think about what to build and why, instead of burning cognitive energy on how. The research backs this up too. Developers who used AI for conceptual inquiry, asking it to explain, to reason, to explore, scored 65% or higher on comprehension.² The tool didn’t make them worse. How they used it did.

Of course agents were going to change the skill stack. The real question is whether the skills being built on top are real. Or whether there’s a foundation at all.

The Skill That Survives: Reading Code

Writing code atrophies. Fine. What doesn’t?

Reading code. Actually reading it. Following the logic, understanding why a decision was made three layers deep. That skill holds up. And it matters more than people think, because reading code is where most of the real work happens. Research shows developers spend around 58% of their time on program comprehension activities.³ Not writing. Reading, understanding, navigating. The writing is almost the easy part.

There’s a deeper reason this skill survives. Literacy researchers have spent decades studying the relationship between reading and writing, and the finding that keeps surfacing is that the two skills share way more cognitive infrastructure than most people assume. Depending on the level of analysis, somewhere between 50% and 85% of the variation in reading and writing ability is shared. Same underlying capacity for pattern recognition, structural reasoning, understanding how things connect. At the word level, the overlap is even higher.⁴ Reading doesn’t just complement writing. It actively sustains it.

Think about it this way. A novelist takes two years off from writing but reads constantly the whole time. When they sit back down to write, they’re not starting from zero. Their ear for language, their sense of structure, their instinct for what works. All of that was being maintained through a different channel.

Same thing with code. The developer who stops writing most of their own code but spends real time reading. Studying open source codebases. Reviewing pull requests with actual depth. Tracing through how systems behave in production. They’re not decaying as fast as the comprehension gap numbers suggest. They’re maintaining the underlying reasoning patterns, just through a different input.

Agents produce a lot of code. The developers who read that code carefully, who interrogate it, who build a mental model of what it’s actually doing. Those are the ones still building real skill. They’re not just reviewing. They’re training.

The Bike Riding Principle

If you learned to code, really learned it, built things, debugged things, developed the thought patterns that let you deliver on a real project. That knowledge doesn’t disappear when you stop writing code every day. It idles.

Procedural memory is the reason you can get on a bike after five years and not fall over. It’s the part of your brain that stores skills through repetition until they become automatic. Below the level of conscious thought, wired into long-term memory in a way that’s incredibly resistant to decay.⁵ Driving a car. Playing an instrument. Typing without looking at the keys. These don’t evaporate with disuse the way a phone number does when you stop dialing it. They wait.

Coding works the same way once it’s truly internalized. Not the syntax. That’s declarative memory, and yes, you’ll forget the exact API signature you used to know cold. But the deeper patterns: how to break a problem down, how to trace unexpected behavior back to its source, how to reason about what a system is actually doing versus what you intended it to do. Those are procedural. They were built through repetition. They live in a part of your brain that doesn’t clean house easily.

This is why senior engineers who’ve spent years in architecture and away from day-to-day coding can still sit down, orient quickly, and contribute meaningfully to a complex debugging session. The rust is real. The foundation isn’t gone.

Delegating to agents is not going to leave experienced developers stranded. The skill atrophy is real, the comprehension gap is documented, and there will be a reorientation period when you go back to writing code by hand. But you’ll get back on the bike.

The catch is that this only works if you ever learned to ride in the first place.

The Threshold

Everything I just argued assumes one thing: you crossed the line first.

The bike riding principle only holds for developers who built things, struggled through debugging sessions, developed the thought patterns before agents existed to do it for them. The reading-compensates argument only holds for developers who already have enough internalized understanding to make sense of what they’re reading. Both of those optimistic framings rest on a foundation that not every developer entering the industry today is going to have.

There’s a concept in language acquisition research called the threshold hypothesis.⁶ The idea is simple: a minimum level of proficiency has to be reached before skills can transfer and compound. Below that threshold, reading a second language doesn’t make you a better writer in it. You don’t have enough foundation for the input to attach to. Above it, reading accelerates everything. It’s not a finish line. It’s the point where learning starts to compound instead of stall.

The coding equivalent is showing up in the data. Anthropic’s research showed that junior developers using AI for code generation scored 50% on comprehension tests compared to 67% for those who coded manually. A 17-point gap, most pronounced in debugging.⁷ The pattern becomes sharper when you place this alongside Anthropic’s earlier observational research, which found AI can cut task completion time by 80%, but only when developers already have the relevant skills.⁸ The implication is hard to miss: experienced developers use AI to accelerate what they already know. Developers still building foundational skills try to use it as a substitute for learning. The outcomes diverge sharply.

The developer who reaches for agents before crossing that threshold doesn’t just miss out on some syntax practice. They miss the thing that makes recovery possible. There’s no procedural memory to idle. There’s no foundation for reading to build on. The bike riding principle doesn’t apply if you never learned to ride.

This is the part nobody’s talking about clearly enough. The debate has been about whether AI will replace developers. That’s the wrong question. The real question is what happens to the developers who let AI replace the learning process itself. And whether we’re building teams, curriculums, and hiring frameworks that take that seriously.

The Argument, Left Open

I’m not arguing against agents. Use them. The productivity is real, the leverage is real, and the developers who figure out how to work alongside them well are going to have an advantage that compounds over time.

But there’s a sequencing question here that nobody has answered yet. Not when in a developer’s career they should adopt agents; that’s up to them. But at what point in their growth the handoff should happen. When is enough foundation enough? When have the thought patterns been internalized deeply enough that idling them is a trade-off rather than a forfeit?

Agents can’t answer that. Hiring managers don’t have a framework for it yet. Bootcamps and CS programs aren’t asking it loudly enough.

What we do know is this: the developers who crossed the threshold before the agent era, who built things the hard way, who debugged without a copilot, who developed the instinct for how systems actually behave. They have something that isn’t showing up on any resume or take-home assignment anymore. It used to be the baseline. Now it might be the differentiator.

Learning to code is the competitive advantage again. Not because writing code by hand is what the job requires. But because the developers who did it built something in their thinking that agents can accelerate but cannot install.

— Iverson


References

  1. Shen, J. H., & Tamkin, A. (2026). How AI impacts skill formation. arXiv:2601.20245. https://doi.org/10.48550/arXiv.2601.20245
  2. Shen & Tamkin, 2026. See reference 1.
  3. Xia, X., Bao, L., Lo, D., Xing, Z., Hassan, A. E., & Li, S. (2018). Measuring program comprehension: A large-scale field study with professionals. IEEE Transactions on Software Engineering.
  4. Shanahan, T. (1984). Nature of the reading–writing relation: An exploratory multivariate analysis. Journal of Educational Psychology. See also: Fitzgerald, J., & Shanahan, T. (2000). Reading and writing relations and their development. Educational Psychologist; Berninger, V. W., Abbott, R. D., Abbott, S. P., Graham, S., & Richards, T. (2002). Writing and reading: Connections between language by hand and language by eye. Journal of Learning Disabilities; Shanahan, T. (2017). How should we combine reading and writing? Shanahan on Literacy. https://www.shanahanonliteracy.com/blog/how-should-we-combine-reading-and-writing
  5. Cohen, N. J., & Squire, L. R. (1980). Preserved learning and retention of pattern-analyzing skill in amnesia: Dissociation of knowing how and knowing that. Science. See also: Romano, J. C., Howard, J. H., & Howard, D. V. (2010). One-year retention of general and sequence-specific skills in a probabilistic, serial reaction time task. Memory; Squire, L. R. (1987). Memory and Brain. Oxford University Press.
  6. Cummins, J. (1976). The influence of bilingualism on cognitive growth: A synthesis of research findings and explanatory hypotheses. Working Papers on Bilingualism. See also: Cummins, J. (1979). Linguistic interdependence and the educational development of bilingual children. Review of Educational Research.
  7. Shen & Tamkin, 2026. See reference 1.
  8. Shen & Tamkin, 2026. See reference 1.