Leverage, Skills, and AI
When I was 13 or 14, I learned to code in assembly language for the 6502 processor that came with most of the personal computers of the time: the Ataris, Commodores, and Apples. Assembly language is the set of mnemonics used to program the CPU; an assembler converts them into machine language, the raw bytes that CPUs actually understand.
I learned tricks to save a few bytes and machine cycles here and there—computers were pretty limited at the time. I knew about byte order, most-significant and least-significant bits. You wrote pixels directly to screen memory. If your program crashed, you’d probably have to reboot the machine. I had some books that were precious to me at the time, with titles like Beneath Apple DOS and Mapping the Atari.
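For readers who never touched that world, here is a small sketch (mine, not from the era's books) of the two ideas above: each mnemonic maps to a raw opcode byte, and the 6502 stored 16-bit addresses least-significant byte first. The screen address is just an illustrative value.

```python
import struct

# The 6502 is little-endian: a 16-bit address is stored
# least-significant byte first.
address = 0x2000  # an illustrative screen-memory address
lo, hi = struct.pack("<H", address)
assert (lo, hi) == (0x00, 0x20)

# An assembler turns mnemonics into opcode bytes. Two real
# 6502 encodings:
#   LDA #$05   -> A9 05     (load accumulator, immediate)
#   STA $2000  -> 8D 00 20  (store accumulator, absolute;
#                            note the byte-swapped address)
program = bytes([0xA9, 0x05,         # LDA #$05
                 0x8D, 0x00, 0x20])  # STA $2000
print(program.hex(" "))  # a9 05 8d 00 20
```

Five bytes to put a value on screen — the kind of thing those precious books documented address by address.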
A few years later, however, computers had a lot more memory to run programs. Software was written using high-level languages like C or Pascal. A compiler converted your C code to a machine language executable.
Did the developers of those later days care about the machine code the compiler generated? Not particularly. Maybe during the transition from coding in assembler to coding in a higher-level language, but soon you didn't care. C enabled you to solve bigger and more interesting problems. Assembly code was reserved for edge cases where, for example, speed was crucial, or you needed some quirk that high-level languages could not reach. High-level languages also led to fewer and less obscure bugs.
Then came app generators, today’s no-code equivalent. They didn’t replace high-level languages, but they let you build applications in a very short time. Thanks to Windows and Visual Basic, you could create user interfaces with an on-screen editor instead of writing code directly. You could lay out buttons, text boxes, and other widgets on screen, connect them to user actions, and populate them with data queried from databases.
You didn’t learn assembly language any longer, unless you were particularly nerdy or into hardware devices.
Today, if I want to code an app, I’ll use a coding agent like Claude Code. I find I spend more time than before thinking about the app, its architecture, technical choices, and probable bottlenecks before writing a single line of code. I need to make all this explicit, because I have to explain it to the LLM. Part of that thinking takes place in a conversation with the LLM itself.
At some point, I ask it to summarize our conversation and technical decisions, and I feed the result into Claude Code. Claude Code then divides the project into phases, asks questions if it needs clarification, and then generates most of the code, testing, and documentation. I’ll ask Claude to do security audits and code reviews. I still look at the plumbing, but the most recent LLMs get the code right most of the time, if you know how to ask.
I also use LLMs to understand the code they generate. Besides documentation, I ask the coding agent for diagrams, explanations of information flow, and whatever else I need until I’m sure I understand what’s going on. For larger projects, however good current AI models are, you probably still need to get your hands dirty.
I’ve had to let go of the way I coded a couple of years ago. I sometimes look back with a certain nostalgia, but letting go is necessary. I had the same feeling when switching from assembly to C and Pascal. I realized then that I would soon forget tricks like self-modifying code, or the specifics of BIOS calls I knew by heart. But it was time to move on then, and it’s time to move on again now.
Do I miss typing code? I enjoy coding, but what I’m really good at and enjoy most is using technology to solve problems. LLMs allow me to tackle more complex projects that I wouldn’t have thought of doing by myself before.
Stewart Brand writes in his excellent book Maintenance of Everything, Part One:
Each new product of greater precision has to provide affordances for maintenance and repair, and every new user has to learn them—how to tune a carburetor, how to replace a typewriter ribbon, how to back up a computer—and then, in time, has to forget them as even more precise devices come along that require different skills.
Every time technology makes a significant advance, we lose skills and gain new leverage. What I care about preserving and improving are not specific skills tied to specific tools or technology, but the ability to solve problems itself.
If you are in software development, it’s clear you should be mastering coding agents and keeping up with this rapidly changing field. But you still need to judge what the LLM produces. You need criteria and taste to know whether what the coding agent writes is reasonable or just overcomplicated nonsense. You need to understand the trade-offs between different approaches to the same problem.
Also, you need to think about your growth path and how you are actually going to learn things, because nothing’s more dangerous than working with an LLM outside your circle of competence. LLMs allow you to build things without actually knowing what’s going on under the hood. We can mistake recklessness for competence until it’s too late. Stability, maintainability, and efficiency still matter.
If you were starting out in mechanical engineering today, you would need to learn the core concepts and principles before using advanced software like Mechanical Desktop or trying finite-element analysis. The same applies to software development: you need to understand the core concepts to use LLMs effectively.
The question is whether the industry will actually enforce that path, or whether the economics will push people to skip it. If you can ship a product with an LLM next week, the incentive to spend six months writing bad code and learning from it is weak. The market rewards output, not understanding.