Asiana Crash: Cockpit Miscommunication More Likely Than Pilot Error?

At least in the media echo chamber, fault for the crash of Asiana Flight 214 in San Francisco last weekend is already pretty clear: some kind of screw-up by the pilot, who’d never flown a Boeing 777 into San Francisco before and had only 43 hours’ experience in that plane.

Sure, the official NTSB investigation will take months, every jot of electronic data from the black boxes will be carefully parsed, and there will be interviews and lawyers and hearings and reports. But all the instant experts have noted that the weather at the time of the crash was benign, there had been no distress calls from the plane, and, at least according to the airline, no mechanical problems. (The 777 has a very strong safety record.) In short, there is no sign that the crew faced anything out of the ordinary when their aircraft clipped the runway seawall while trying to land at a speed far, far too slow.

But even if the conventional wisdom about pilot error turns out to be right, that’s far from the end of the story. The bigger air safety question the Asiana crash may well re-ignite is one that’s been lurking for years in the background of some of the most troubling aviation incidents: how crew members on the flight deck interact and communicate with one another and with their super-sophisticated machines. In other words, not just how individual pilots fly, but how cockpits work as complex, dynamic teams and systems.

In aviation safety lingo, the term “cockpit error” is sometimes used interchangeably with “pilot error” – and for good reason: While the captain of the aircraft bears ultimate responsibility for the safety of the flight, everybody behind that steel-reinforced door up front plays a key role. Typically, that means a captain and a first officer, or pilot and co-pilot. On very long flights, including transpacific, there are three or four pilots who rotate duties to minimize fatigue. The guy — 95% of major airline pilots are still guys — who is not actually flying the plane used to be called the “pilot not flying.” Now he’s the “pilot monitoring,” to underscore his active role in the safe conduct of the flight.

During critical phases like takeoff and landing, his attention is fixed on the gauges showing speed and altitude, and on helping the flying pilot stay “situationally aware” as the plane descends to land. On takeoff, he calls out key speed thresholds to identify the precise moment when the plane should “rotate” and lift off the runway. When something looks or feels wrong, his job is to make that known, clearly and forcefully, and, as needed, to suggest a fix.

In other words, operating a big commercial jet, especially on takeoff and landing, isn’t a solo flying experience. Pilots will tell you that it’s more about managing a complex system of inputs from both human and non-human sources. The industry jargon for this dynamic is “cockpit resource management,” and it covers a lot of ground: how pilots interact with one another and with the airplane; how they communicate and make piloting decisions, prioritize problems, and delegate tasks; and how flying problems get solved — or not.

The upshot is that bad things can and do happen when pilots and co-pilots don’t work well together, or collectively lose sight of what their aircraft, with all its automated systems, is doing. (It turns out that today’s super-sophisticated cockpit automation doesn’t always save the day, and that, in fact, the opposite is sometimes true. Pilots say they can be tempted to rely too much on the machine and too little on what they’re seeing and feeling.) Sometimes cockpit dysfunction comes from too much deference to the captain – for instance, when a junior pilot hesitates to forcefully question actions by his superior that he believes are wrong. Cultural mores in deference-based societies, including in Asia, can amplify that hesitancy, experts say. In other cases, it’s just a lack of cockpit communication until it’s too late.

Unfortunately, these communication problems have led to disasters:

Approaching Portland in 1978, the crew of a United DC-8 became so preoccupied with a landing-gear indicator light that had failed to confirm the gear was down and locked that they circled the airport for an hour until the plane ran out of fuel and crashed.

Leaving Washington, D.C., in a 1982 snowstorm, an Air Florida Boeing 737 failed to reach adequate takeoff speed when ice-clogged sensors gave falsely high readings of engine thrust. Though the co-pilot tried to warn of the apparent problem, he ultimately deferred to the captain’s misjudgment.

In 1997, a Korean Air 747 crashed short of the runway in Guam after the plane’s senior captain misread a glide-slope indicator and essentially ignored the stated concerns of the other members of the cockpit crew.

We don’t yet know whether some failure of cockpit communication or coordination was behind last week’s Asiana crash. Maybe a failed sensor misstated speed or altitude. Maybe the pilot was fatigued, or one of the automated systems was misread. Or maybe something more basic happened, like somebody not saying something when they should have. Thankfully, there’s every reason to believe the ongoing NTSB investigation will find out what went on in Flight 214’s cockpit before the plane hit the San Francisco seawall.

But as our flying machines themselves become more sophisticated, automated, and “error-tolerant” (as Boeing puts it), a renewed safety focus on the way humans in the cockpit interact with one another, and with the machine, is in the cards.
