The Airbus A320 problem: why the pilot matters

The A320 revolutionized aviation by letting computers fly while keeping pilots in command. The same lesson applies to coding agents: the goal is not to remove the human but to make them more effective.


On June 26, 1988, a brand-new Airbus A320 flew a demonstration flight at an airshow in Habsheim, France. The plane was the most advanced commercial aircraft in the world, the first airliner to use full digital fly-by-wire controls, where computers mediated every input from the pilot to the control surfaces. The pilot planned a dramatic low pass over the runway. Flying lower and slower than intended, he applied full power too late: the engines could not spool up in time, and the flight-envelope protections, designed to prevent a stall, limited how sharply he could pitch the nose up. The aircraft settled into the trees at the edge of the airfield.

The incident launched decades of debate about the relationship between human authority and automated systems. It also produced one of the most important frameworks in automation design: the idea that the goal is not to remove the human from the loop, but to define clearly what the human controls and what the machine controls, and to ensure the human always has the information needed to intervene.

Software development is entering the same conversation.

What fly-by-wire actually changed

Before fly-by-wire, pilots had direct mechanical connections to the aircraft’s control surfaces. Pull the stick, and cables physically moved the ailerons. The pilot felt the aircraft through the controls. Every input was direct, immediate, and physical.

Fly-by-wire replaced that mechanical link with computers. The pilot’s inputs became requests that the flight computer interpreted, modified for safety, and then executed. The pilot still flew the plane, but the computer could refuse dangerous inputs, smooth turbulence, and optimize fuel consumption in ways no human could manage manually.

The result was transformative. Fly-by-wire aircraft are safer, more fuel-efficient, and easier to fly than their mechanical predecessors. But they introduced a new category of risk: the pilot who stops paying attention because the computer “has it handled.”

Automation complacency

Aviation researchers call it automation complacency: the tendency for humans to reduce their vigilance when they believe an automated system is performing well. Pilots who rely too heavily on autopilot lose situational awareness. They stop actively monitoring the aircraft’s state because the computer has been reliable for hours. When something unusual happens, the pilot who was not paying attention takes longer to diagnose and respond than the pilot who was actively flying.

This is not a weakness of individual pilots. It is a predictable consequence of how human attention works. We are wired to conserve cognitive effort. When a system performs a task reliably, we redirect attention elsewhere. This is adaptive in many contexts, but it is dangerous when the automated system encounters a situation it was not designed to handle.

The parallel to coding agents is uncomfortable but important. A developer who runs agents all day and merges the output without careful review is experiencing automation complacency. The agent has been producing good code for weeks, so the developer stops reading every line. When the agent makes a subtle architectural mistake, the developer misses it because their attention was elsewhere.

Why full autopilot fails

Aviation learned early that full autopilot, where the human is removed entirely, fails in precisely the situations where human judgment matters most: novel scenarios, conflicting signals, and edge cases that fall outside the system’s design envelope.

An autopilot system is excellent at maintaining altitude, following a flight plan, and managing routine operations. It is poor at handling a bird strike, a sudden instrument malfunction, or an unexpected weather pattern that requires creative decision-making. These situations demand the kind of contextual reasoning, experience-based intuition, and adaptive problem-solving that humans do well and automated systems do not.

Coding agents face the same boundary. They are excellent at well-defined tasks: writing tests for existing code, refactoring modules according to clear patterns, implementing features from detailed specifications. They struggle with ambiguous requirements, cross-team architectural decisions, and the kind of product judgment that requires understanding user intent, business context, and organizational politics simultaneously.

The lesson from aviation is not that automated systems are unreliable. It is that their reliability has boundaries, and those boundaries are exactly where human judgment becomes critical.

The cockpit model

Modern aviation resolved this tension with a model that neither removes the pilot nor ignores the capabilities of automation. The pilot is the captain. The computers are the crew. Each has defined responsibilities, and each has mechanisms to communicate with the other.

The cockpit instruments exist specifically to maintain pilot awareness. They show what the autopilot is doing, why it is making certain decisions, and when it is reaching the edge of its operating envelope. The pilot does not need to fly every second, but the pilot always needs to know what the plane is doing.

This is the model that software teams should adopt for coding agents. The developer is the captain. The agent is the crew. The agent handles the routine work, the tasks that are well-defined and where automated execution is more consistent than manual effort. The developer handles architecture, judgment calls, edge cases, and review.

But the model only works if the developer knows what the agent is doing. And that requires instruments.

Building the cockpit for coding agents

In aviation, the cockpit instruments are non-negotiable. No pilot would fly a plane with blacked-out instruments, regardless of how good the autopilot is. The instruments are not a nice-to-have; they are the mechanism through which the pilot maintains the awareness needed to intervene when it matters.

Most coding agents today ship without instruments. The developer launches an agent, the agent works in relative silence, and the developer sees the output only when it is complete. There is no real-time awareness of what the agent is doing, no structured reporting of decisions made, and no alerting when the agent encounters conditions that might warrant human intervention.
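The structured reporting described above can be sketched as a simple schema. Everything here is illustrative: the class names, fields, and alerting rule are invented for the example, not an actual agent framework or Dailybot API.

```python
from dataclasses import dataclass, field
from enum import Enum


class AgentStatus(Enum):
    ON_TRACK = "on_track"
    NEEDS_REVIEW = "needs_review"   # agent made a judgment call worth a human look
    BLOCKED = "blocked"             # agent cannot proceed without intervention


@dataclass
class ProgressReport:
    """One structured check-in from a coding agent: what it is doing,
    what it decided, and whether a human should step in."""
    agent_id: str
    task: str
    status: AgentStatus
    decisions: list[str] = field(default_factory=list)
    blockers: list[str] = field(default_factory=list)


def needs_human_attention(report: ProgressReport) -> bool:
    """The 'instrument' check: flag the report when the agent is at the
    edge of its operating envelope, instead of waiting for the final diff."""
    return report.status is not AgentStatus.ON_TRACK or bool(report.blockers)


report = ProgressReport(
    agent_id="refactor-bot",
    task="extract billing module",
    status=AgentStatus.NEEDS_REVIEW,
    decisions=["split BillingService into two classes"],
)
print(needs_human_attention(report))  # True: a human should review the decision
```

The point of the sketch is the shape, not the fields: an agent that emits check-ins like this at each step gives the team something to instrument, while an agent that reports only a finished diff does not.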

This is the gap Dailybot fills. By capturing agent progress reports and surfacing them in the team’s shared timeline, Dailybot provides the cockpit instruments for human-agent development teams. Managers and developers see what agents are doing, can assess whether the agent is on track, and can intervene when judgment is needed, all without hovering over every keystroke.

The pilot still matters

The Airbus A320 went on to become one of the most successful commercial aircraft in history, not because Airbus removed the pilot, but because it redefined the pilot’s role: less manual execution, more strategic oversight, better information, and clearer authority over the decisions that matter.

The agentic era in software development will follow the same arc. The developers who thrive will not be the ones who write every line of code manually. They will be the ones who know when to let the agent fly, when to take over the controls, and how to maintain the situational awareness that makes those decisions possible.

The pilot matters. The instruments that keep the pilot informed matter just as much.

FAQ

What is the Airbus A320 analogy for coding agents?
The Airbus A320 uses fly-by-wire technology where computers handle routine flying while the pilot retains authority for critical decisions. Similarly, coding agents handle routine development tasks while human developers guide architecture, review critical paths, and make judgment calls. The goal in both cases is augmenting the human, not replacing them.
What lessons from aviation automation apply to AI coding agents?
Three key lessons: First, automation complacency is real. When computers handle routine work, humans can lose situational awareness. Second, full autopilot fails in novel situations because automated systems struggle with edge cases they were not designed for. Third, the most effective model is shared authority where the human and the automated system each handle what they do best.
How does Dailybot help maintain 'pilot awareness' when using coding agents?
Dailybot acts like the cockpit instruments that keep pilots informed about what the autopilot is doing. By surfacing agent progress reports, blockers, and decisions in the team's shared timeline, Dailybot ensures that the humans overseeing agents maintain situational awareness without needing to manually monitor every action.