Re: ETOPS

From:         lou@alumnae.caltech.edu (Louis K. Scheffer)
Organization: California Institute of Technology, Pasadena
Date:         05 Nov 96 04:13:57 

Malcolm Weir <malc@deltanet.com> writes:

>Louis K. Scheffer wrote:
  [...]
>>
>> I would suspect that a modern ETOPS plane, where the automated cockpit does
>> the 'right thing' upon an engine failure (where the right thing was thought
>> through by experts who were not under time pressure), is probably safer
>> than a 4 engine plane where an engine failure needs to be treated with
>> a long checklist and modified procedures.

Malcolm replied:

>[...]  I certainly agree that a B777 is almost unquestionably as
>good, if not better, than a 1969 vintage B747-100 in terms of
>reliability.  But is that the question?

>I've heard many folks in the industry claim that there is no level of
>"acceptable risk", and, as a member of the travelling public, I think
>this is A Good Thing!  But the whole ETOPS concept seems to stem from a
>"safe enough" philosophy, as opposed to a "safest possible" one.

This is absolutely true, and it is the basis of any sort of risk management.
A good book on this subject is "Technological Risk" by H. W. Lewis.  The
problem with 'safest possible' is that it takes no account of cost.  As an
example, a safest-possible strategy is "don't fly in bad weather".  But
this makes aviation unpredictable and sometimes unavailable, which forces
people to drive more, which results in more deaths and accidents overall.

Another classic example of 'safest possible' failing was the 'Delaney
Clause' administered by the Food and Drug Administration.  It said that no
substance could be added to food if there was any evidence that it caused
cancer in any animal at any dosage.  This sounds reasonable at first, but
it resulted in the artificial sweetener saccharin being banned, since
saccharin is carcinogenic to rats in high concentrations.  The worst-case
estimates were that it might increase your cancer risk by at most 10 parts
per million over your lifetime (assuming the rat data apply to humans,
and that the observed effects are linear over a range of 3000 or more
in dosage per kg of body weight).  On the other hand, something like a
third of all people in the US die of obesity-related conditions.  So the
attempt to make something as safe as possible on one front (cancer) makes
a minor improvement on that front while making a related (and more
serious) problem worse.
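To put the two risks on a common scale (using the rough figures above,
which are illustrative worst-case numbers from the text, not authoritative
data):

```python
# Compare the worst-case lifetime saccharin cancer risk with the rough
# fraction of US deaths tied to obesity.  Both numbers are the approximate
# figures quoted above, not precise epidemiological data.

saccharin_lifetime_risk = 10e-6    # ~10 extra cancers per million people, worst case
obesity_death_fraction  = 1.0 / 3  # roughly a third of US deaths, per the text

ratio = obesity_death_fraction / saccharin_lifetime_risk
print(f"Obesity-related risk is roughly {ratio:,.0f} times the "
      f"worst-case saccharin risk")
```

Even granting the worst-case assumptions, the banned risk is tens of
thousands of times smaller than the one the ban arguably aggravated.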

So almost every safety field has to decide what 'safe enough' means: when
it is time to stop improving that aspect and use your finite resources to
solve some other problem.  In the aviation world, safe enough is usually
one failure in 10^9 hours (this was set so that there would be no expected
failures in a popular type of airplane over the life of a fleet of them).
In the cancer world, it's one additional cancer in 1,000,000 people (on
top of the roughly 200,000 of them you would expect to die of cancer
anyway).  Setting a limit and using it to order your priorities is crucial
to reducing the overall risk of a complex system.  So in fact the concept
of 'safe enough' is preferable in practice to the concept of 'as safe as
possible', as odd as that may seem at first.
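A quick sketch of why the 10^9-hour figure amounts to "no expected
failures over a fleet's life" (the fleet size and hours-per-airframe here
are hypothetical round numbers, not from the original discussion):

```python
# Expected catastrophic failures across a whole fleet, given a failure
# rate of one in 10^9 flight hours.  Fleet size and airframe life are
# assumed round numbers for illustration only.

failure_rate_per_hour = 1e-9
fleet_size            = 1000    # assumed airframes of one popular type
hours_per_airframe    = 60_000  # assumed flight hours over each airframe's life

expected_failures = failure_rate_per_hour * fleet_size * hours_per_airframe
# comes out well under one failure expected fleet-wide
```

With those assumptions the fleet accumulates 6 x 10^7 total flight hours,
so the expected number of failures stays comfortably below one.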

   -Lou Scheffer