Re: Flight envelope protections

From: (Michael T. Palmer)
Organization: NASA Langley Research Center, Hampton, VA  USA
Date:         04 Dec 92 22:30:33 PST
References:   1 2
(Robert Dorsett) writes:

> (Michael T. Palmer) wrote:

>> This has some serious consequences.  For example, in the China Airlines
>> B-747 incident 300 nm northwest of San Francisco in 1985 (NTSB/AAR-86-03),
>> the crew was forced to overstress (and structurally damage)
>               ^^^^^^

>That might be overstating the case a bit. :-) The NTSB report suggests
>they didn't have a clue how to recover from the spiral, once they entered
>it, lacking military aerobatic training and being completely disoriented.  I
>don't believe the report distinguishes the tailplane's damage as being
>incidental or intentional.

Agreed.  I didn't mean to necessarily imply that they KNEW they needed to
overstress the airframe, and it is *possible* that this occurred during
control inputs that did not actually contribute to the recovery.  It's been
a while since I read that report, and I didn't have it handy to refer to.

>> crew recovered control with about 10,000 ft of altitude left (from an
>> original high-altitude cruise).  It is very likely that if the aircraft
>> had prevented the crew from initiating control commands that would lead
>> to aircraft damage, the aircraft (and passengers) would have been lost.

>Your point's well taken, and the risks are certainly worth considering.  But
>allow me to play devil's advocate, for a minute, without diluting your
>argument, and suggest that the EFCS would have prevented an A3[2-4]0 from getting
>into the unusual attitude to begin with.  The protections are both aerodynamic
>and input-filtering (and configuration-evaluating, and...).  In the China
>Air incident, the flip-over was caused by a "dumb" autopilot/autothrottle
>design configuration oversight, following an engine abnormality.  If a similar
>event had occurred on an A3[2-4]0, the EFCS would probably have limited both
>the authority of the FMS to put the airplane into the steep bank, *and* would
>have provided maximum corrective action, using opposing controls, to keep the
>airplane in the prescribed operating envelope.

Well... given the recent post here about the A310 in Moscow going 88 degrees
nose-up, I'm not sure that I agree that the Airbus EFCS would necessarily
prevent the aircraft from attaining "unusual" attitudes.  In fact, it was the
"smarts" of the A310 autopilot that actually contributed to that incident.
As that poster also mentioned, though, I would like VERY MUCH to see more
documentation and a fuller description of exactly what happened.
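To make the trade-off concrete, here is a minimal sketch of the kind of
envelope protection being debated above: commanded attitudes are clamped
to a prescribed envelope before they reach the control surfaces.  The
limit values and function names are purely illustrative (they are NOT
Airbus's actual control laws) -- the point is that a hard clamp like this
would also have prevented the China Airlines crew's (possibly necessary)
overstress inputs.

```python
# Illustrative envelope-protection sketch.  Limits are made-up round
# numbers, not any certified aircraft's actual protection values.
BANK_LIMIT_DEG = 67.0        # maximum commanded bank, either direction
PITCH_UP_LIMIT_DEG = 30.0    # maximum commanded nose-up pitch
PITCH_DOWN_LIMIT_DEG = -15.0 # maximum commanded nose-down pitch

def protect(commanded_bank_deg, commanded_pitch_deg):
    """Clamp pilot/autopilot attitude commands to the envelope."""
    bank = max(-BANK_LIMIT_DEG, min(BANK_LIMIT_DEG, commanded_bank_deg))
    pitch = max(PITCH_DOWN_LIMIT_DEG,
                min(PITCH_UP_LIMIT_DEG, commanded_pitch_deg))
    return bank, pitch
```

Note that the clamp is unconditional: the software has no way to know
whether the out-of-envelope command is an error or a deliberate recovery
input -- which is exactly the concern raised about the 747 incident.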

>This is from the Federal Register 54:17, January 27, 1989, pages 3989 and

>P. 3989, the oh-so-enlightening, explanatory commentary:

>    "The second commenter believes that the flightcrew must be aware of any
>    failure conditions which affect the structural capability of the
>    airplane, whether or not a compensating procedure exists.  The FAA does
>    not concur with this comment.  It is not necessary for the flight crew
>    to be aware of a failure in the active control system during the flight
>    on  which the failure occurs if there is no available corrective
>    action; however, the airplane should not be exposed to the failure
>    condition for an extended period of time.  The flightcrew must
>    therefore be alerted to the failure condition prior to the next flight."

Oh, I get it!  Just because a condition exists that may affect OTHER choices
I make about how to respond to OTHER occurrences during that flight, that
doesn't mean that I have the right to know what is going on with my aircraft.
Hmm, seems reasonable... NOT!

>This is from the FAA, the agency in charge of establishing airworthiness and
>certification practices in the United States!  In reality, the A320 likely
>*does* provide enough feedback: but the FAA, apparently unnecessarily, has
>certainly opened the door for the practice to be introduced in subsequent

I agree completely.  I work in the Human/Automation Integration Branch in
the Flight Management Division at NASA Langley.  We have worked for some
time examining the complicated interrelationships between events that lead
to accidents, and have even constructed software prototypes that try to
determine these relationships and make them more explicit.
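As a toy illustration of what "making these relationships explicit" can
mean (this is NOT our actual prototype -- just a sketch with made-up event
names), one can represent contributing events as a directed graph and
enumerate the causal chains that terminate in an outcome:

```python
# Illustrative only: trace chains of contributing events to an outcome.
def chains_to(graph, event, path=None):
    """Yield every causal chain in `graph` that ends at `event`.
    `graph` maps each event to the list of events it contributes to."""
    path = [event] + (path or [])
    causes = [e for e, effects in graph.items() if event in effects]
    if not causes:
        yield path          # reached a root cause; emit the full chain
    for cause in causes:
        yield from chains_to(graph, cause, path)

# Hypothetical event graph loosely modeled on the incident discussed above
contributing = {
    "engine abnormality":      ["autothrottle oversight"],
    "autothrottle oversight":  ["unusual attitude"],
    "crew disorientation":     ["unusual attitude"],
    "unusual attitude":        [],
}
```

Even a trivial traversal like this surfaces the point made below: the
graph only contains the interactions someone thought to put in it.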

What really scares us is the prevalent attitude of many in the industry
that they can anticipate ALL the "important" ways that things will interact,
and provide procedures for dealing with them.  And whenever you point to
an example of how they failed and how that led to an accident, they respond
"Oh, but we've already fixed that."  Sure.  But what about the NEXT one
that you haven't "fixed" yet!?!

By the way, the charter of our organization (as if you couldn't tell from
what I've said so far) is NOT to solve problems in the cockpit by increasing
the amount of automation.  Rather, we seek to propose better ways of using
the capabilities of both the automation and the flight crew, which may even
mean rethinking many of the traditional tasks that automation is used for
now.  And we do NOT see the "pilot as manager" scenario as being necessarily
ideal.  Humans tend to make lousy system monitors.  Ask the Nuclear people.
Human-machine systems work best when the humans are actively *involved*.

>> If nothing else, I hope I have brought up some topics that deserve
>> discussion among readers of this newsgroup.  After all, aren't we the
>> ones in positions to influence our industry (all in our own way, of
>> course)?

>Especially in software, of particular relevance to the net.  A lot (if not
>most) of the people writing this code--4M on the A320, 10M+ on the
>A330 and A340--are *not* aero engineers: just programmers, ostensibly with
>CS backgrounds (a more frightening thought I can't imagine! :-)), performing
>under strictly governed, structured, controlled environments: to specif-

>Airbus even mentioned the "CS" types it brought in from "outside" to
>buttress a comment on its quality-control practices, in an article, as if
>to make the point that mere engineers weren't writing this stuff: the
>"pros" are doing it. :-)  Yeah, we know what we're doing, SURE... :-)

Ummm... this point came up in a Newsweek article (now THERE'S an accurate
and unbiased source of information!) about digital flight control systems.
They were shocked that programmers, not pilots, were writing the software.
I feel at least somewhat qualified to address this issue, since my undergrad
is Aerospace Engineering, my master's is Computer Science, and I'm working
on the Ph.D. in Human-Machine Systems.

Pilots and engineers tend to be experts in specifying how things should
happen.  My experience with their programming ability is that they tend to
not be aware of most of the advances in Computer Science that have occurred
over the past 25 years.  The result is poorly designed and implemented code
that takes Herculean efforts to get working properly and maintain.  On the
other hand, programmers do not necessarily make good system designers... they
tend to think in terms of how things will be implemented (and the limitations
of that implementation) rather than in terms of what the system MUST be able
to do.  I have met only a few people who can combine both talents, to become
very good system designers AND software designers.

These people have the ability to hear what the pilots and engineers say, and
translate that into a total system design, including software design, that
meets the requirements and can be implemented.  At THIS point, the actual
programmers become involved.  If changes need to be made due to, say, hardware
limitations, then these can be incorporated by either a requirements OR an
implementation change.

So, I don't think you should be afraid that CS people are writing the code.
In fact, you should be glad that they are.  You just need to make sure that
they are filling in the pieces of a software design that was put together
by a competent person like I described above.

>Robert Dorsett

I hope I get to meet you at a conference sometime soon!  It's great to see
that other people are grappling with the same issues.

Michael T. Palmer, M/S 152, NASA Langley Research Center, Hampton, VA 23681
Voice: 804-864-2044,   FAX: 804-864-7793,   Email:
PGP 2.0 Public Key now available -- Consider it an envelope for your e-mail