Flight AI 330 was not a commercial service. It was a certification test. The aircraft, registration F-WWKH, was simulating a takeoff with one engine intentionally inoperative. The pilots retracted the landing gear and applied full power to the remaining engine. The aircraft began to roll. It banked 12 degrees, then 14. The pilots could not correct it. The left wingtip struck the runway. The A330 cartwheeled and broke apart, erupting in fire 1,200 meters from the air traffic control tower. The seven occupants—three Airbus crew and four test engineers—died instantly.
The crash investigation pinpointed a software design problem, not pilot error. The aircraft's fly-by-wire system had two primary control modes: 'normal law' for regular flight and 'alternate law' for degraded operations. The test was conducted in alternate law, which lacked automatic bank angle protection. When the pilots applied asymmetric thrust, the rolling moment exceeded what they could counteract with sidestick input alone. The system did not intervene. Airbus had assumed pilots would recognize and manage the risk.
This assumption was wrong. The crash forced a fundamental reassessment of human interaction with automated flight systems. Airbus modified the A330’s and A340’s flight control software to retain some bank angle protection even in alternate law. The philosophy shifted: automation should not abandon pilots entirely, even in abnormal configurations.
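The difference between the two laws, and the change Airbus made, can be pictured as a clamp on the pilot's roll command. The Python sketch below is a deliberately simplified illustration, not Airbus's actual flight control code: the 67-degree figure is the published normal-law bank limit on the A330, while the alternate-law limit, roll-rate value, and function names are hypothetical placeholders.

```python
# Illustrative sketch only; simplified far beyond any real flight
# control system. Only the 67-degree normal-law limit is a published
# A330 value; the other figures are hypothetical.

NORMAL_LAW_BANK_LIMIT = 67.0      # degrees, published normal-law protection
ALTERNATE_LAW_BANK_LIMIT = 45.0   # degrees, hypothetical softened limit
MAX_ROLL_RATE = 15.0              # degrees/second, representative only

def commanded_roll_rate(stick: float, bank: float, law: str,
                        post_1994_mod: bool = True) -> float:
    """Convert sidestick input (-1.0 to +1.0) into a roll-rate demand.

    Before the post-crash modification, alternate law passed the
    demand through unclamped; afterward, a bank-angle limit applies
    in alternate law as well.
    """
    demand = stick * MAX_ROLL_RATE

    if law == "normal":
        limit = NORMAL_LAW_BANK_LIMIT
    elif post_1994_mod:
        limit = ALTERNATE_LAW_BANK_LIMIT
    else:
        return demand  # pre-modification alternate law: no protection

    # Zero out any demand that would push the bank past the limit.
    if (demand > 0 and bank >= limit) or (demand < 0 and bank <= -limit):
        return 0.0
    return demand
```

In this toy model, the pre-modification alternate law is the branch that returns the demand untouched: nothing stands between the pilot's command, or an uncorrected roll, and the bank limit.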
The impact reached every fly-by-wire aircraft designed since. The tragedy underscored that software certification is as critical as structural testing. It demonstrated that protecting pilots from themselves, even during simulated failures, is a necessary function of complex automation. The seven deaths in Toulouse led to a subtle but profound rewrite of the rulebook for how computers and humans share the sky.
