Blog
Autopilot, Copilot, and Software Developers
2025-10-12
Large language models (LLMs) are rapidly becoming essential developer tools, but the metaphors we use to describe them matter. One I often hear is: "LLMs are the autopilot of coding." As both a developer and a pilot, I think that metaphor misses the mark on several levels. In this post, I'll explain why I believe "Copilot" is a far better analogy than "autopilot" when it comes to LLMs and vibe coding.
From Punch Cards to LLMs: A Rough Timeline
Punch Cards
Batch-based programming. Write punch cards, feed them into a mainframe, and wait—sometimes days—to see if anything worked. No feedback loop, no second chances.
Time-Sharing Systems
Terminals allowed faster interaction, but computing was still slow and expensive. Developers had to mentally simulate programs to avoid mistakes.
Personal Computers & IDEs
Code became interactive—write, run, debug, iterate. Feedback became instant. Programming shifted from theoretical to tactile.
Stack Overflow
A shared memory for developers. Instead of flipping through manuals, we learned from peers. It changed how we access help.
Large Language Models
Trained on massive corpora of code, documentation, and Q&A threads. These models remix what they've seen to suggest plausible code—fast.
AI Copilots
Like having a copilot who's read every manual, seen every edge case, and always has a suggestion. But not the same as having a machine fly the plane for you.
Autopilot vs. LLMs: A Misleading Metaphor
When people say "LLMs are the autopilot of coding," it sounds intuitive, but it doesn't hold up under scrutiny. Why? Let's compare what autopilot actually is with how it is popularly perceived.
Aviation Autopilot
In aviation, autopilot is a rule-based system designed to perform specific tasks under known, stable conditions: maintaining heading, holding altitude, or following a route. It doesn't handle novelty, ambiguity, or failure modes well, and pilots routinely hand-fly the plane when conditions deviate from the plan. It's deterministic, limited, and entirely under human supervision: pilot attention is not optional but required for the safe operation of the flight.
Popular Perception
In the popular zeitgeist, "autopilot" has come to connote something closer to magical end-to-end automation: push a button, and the system takes over everything. No human needed. Absolute delegation.
LLMs match neither definition.
They aren’t rule-based. They aren’t deterministic either. And they don’t “take over.” They’re statistical—generating output by remixing patterns from massive training data. That means they can hallucinate functions, misuse APIs, or reproduce outdated or insecure practices.
They're fast and creative, but not always right. You can, should you choose to, hand the entire flow of software development over to them, but your mileage will certainly vary. There is no doubt that the capabilities of LLMs are growing rapidly, yet in my opinion there is still a long way to go before they can autonomously build production systems end-to-end. The astonishing dexterity of LLMs in some (and growing) domains within software development should not be mistaken for AGI as it pertains to software development.
Thus developer attention, just like pilot attention in aviation, is not optional but required for the safe and effective use of LLMs in software development.
Why I Find "Copilot" a Better Analogy
A copilot is a human assistant. They help manage workload, monitor systems, spot inconsistencies, and suggest alternatives. But they don’t fly the plane alone—nor do they set the destination.
They help with boilerplate, autocomplete, refactoring, and exploration. They can unblock you. But they don’t build production systems end-to-end. You still make the decisions.
LLMs Are Like Confident Interns
They’re fast. They’re eager. They sound convincing—even when they’re wrong.
Sometimes the code they suggest won’t compile. Other times, the issues are subtler: performance problems, security flaws, bad architecture. These require technical judgment.
They're also non-deterministic: ask the same thing twice and you might get different answers. That's a feature, not a bug, but it complicates debugging and reproducibility.
And they reflect their training data. If outdated practices were common in what they learned from, or if RLHF loops rewarded overly subservient responses, they may repeat those mistakes. Not malicious, just statistical.
But all is not bad: they can be a huge productivity boost when used wisely. Like any junior developer, they need supervision, review, and mentoring. We just have to be mindful of their limitations and recalibrate our fears and expectations accordingly.
One core skill that we need to cultivate is judgment.
For instance, in a not-so-distant past in a not-so-far-away galaxy, writing a file in Java meant choosing between BufferedWriter, FileWriter, I/O streams, and the like. I often had to memorize the API surface and know which one fit, so my attention was divided between remembering the API and writing the logic.
Today, I can ask a model. It suggests an option, but I still have to verify it. I still need to exercise judgment in evaluating whether the API is apt for the use case; I just no longer need to recall the entire API surface. LLMs don't remove the need for engineering skill; they shift it. Memorization matters less. Judgment matters more, and in my opinion it will be an increasingly critical and distinguishing skill, separating developers who passively rely on LLMs from those who partner with them effectively.
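To make that judgment point concrete, here's a minimal sketch of the kind of choice involved (file name and strings are illustrative): the classic BufferedWriter approach versus the newer `Files.writeString` one-liner a model might suggest. Both are correct; knowing which fits your use case is the judgment a model won't exercise for you.

```java
import java.io.BufferedWriter;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;

public class WriteFileDemo {
    public static void main(String[] args) throws IOException {
        Path path = Path.of("notes.txt");

        // Classic approach: explicit BufferedWriter; try-with-resources
        // closes the stream even if the write throws. Good for streaming
        // many lines without holding them all in memory.
        try (BufferedWriter writer = Files.newBufferedWriter(path)) {
            writer.write("hello from BufferedWriter");
        }

        // Modern one-liner (Java 11+): convenient for small strings,
        // but it writes the whole string in one shot -- a trade-off an
        // LLM suggestion won't weigh for you.
        Files.writeString(path, "hello from Files.writeString");

        System.out.println(Files.readString(path));
    }
}
```

The second write replaces the first file's contents, so the program prints the `Files.writeString` message: a small reminder that even trivially "correct" suggestions carry semantics worth verifying.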
Final Thoughts
LLMs are here to stay. The barrier to entry is lower—but in the right hands, the returns are exponential.
The best developers won’t be the ones who write every line by hand. They’ll be the ones who know how to prompt, evaluate, and refine—just like they would with any teammate.
So the next time someone says, "LLMs are the autopilot of coding," ask them:
"Do you really want your code flying itself?"
Or would you rather have a smart partner in the right seat—spotting issues, offering ideas, and helping you stay sharp—while you remain pilot-in-command?
But the pilot is still
YOU.
What do you think? I'd love to hear your thoughts. Share your insights!