VR COCKPIT QUESTION 14/04

From:         EGEISELMAN@FALCON.AAMRL.WPAFB.AF.MIL
Date:         12 Feb 93 11:12:01 PST

This post is the first collection of responses to the following post.

>This exercise is intended to demonstrate that the collective creativity and
>expertise of net participants can be harnessed via a specific methodology.  I
>think the net should be exploited as a population of subject matter experts
>and a source of user input from which the extracted information may be applied
>to solving real design problems.  Through an iterative process of concept
>refinement, using the collective knowledge-base of the net, it may be possible
>to reveal otherwise undiscovered design questions, problems, concepts,
>capabilities, and so on.

>I am going to take a look at this idea by doing the following:  I will post a
>purposely vague design question to the participants of the net.  For those of
>you who choose to participate in the exercise, e-mail your individual
>responses to me.  Feel free to submit questions to the net for clarification
>and discussion but I will not extract information directly from the net.  If
>you need a definition or have a question, ask the net first.  Some weeks after
>the original post I will submit an edited compilation of the net responses. 
>This post will hopefully act to spark more ideas, make clarifications, and
>identify problem areas.  The refinement process will continue.  The net may
>then respond to the new description in order to patch holes, make corrections,
>and propose changes.  This iterative process will continue until responses die
>off and/or the concept is solidified. 

>I will document this process and report the results (I will post the report). 

--------------------------------------------------------------------------------
>Sample question:

>Let's say you are given a virtual reality system.  Your system consists of a
>high resolution, wide field-of-view, full color head mounted display device
>(helmet display), an extremely accurate head tracking system (transducer),
>and a 60Hz graphics generator (image generator).  Given this technology, how
>should it be applied to the flight deck of a commercial airliner? 

--------------------------------------------------------------------------------

>Note:  Please indicate, in your e-mail responses, if you would like your name
>and/or affiliation to be excluded from any publication which may result from
>this exercise.  Any information on personal background or experience you want
>to include may be of some use.  All credit and acknowledgements will be made
>as appropriate.  My thanks to all who participate.

e-mail to:  EGEISELMAN@FALCON.AAMRL.WPAFB.AF.MIL

The following is a compilation of the responses to the original December 16th
post (above).  The responses have been minimally edited to reduce redundancy
and to organize the information into logical categories.  To date, the
response categories include: flight segment (pre-flight, departure, enroute,
and approach), general, communications/air traffic control, hardware, and
criticisms/concerns/comments.  Except where needed for clarity, responses in
the criticisms/concerns/comments category were not edited. 

There were 16 respondents to the original post; the following descriptions
were derived from their responses.  Please feel free to respond to this post
in accordance with the original post.
--------------------------------------------------------------------------------


FLIGHT SEGMENT:

PRE-FLIGHT:

I propose the system be utilized for pre-flight and in-flight visual
inspection of internal aircraft structures and for systems monitoring.  The
pilot would be presented with a 3-D model of the airplane with all of its
systems and components laid out and accessible.  The pilot could visually
monitor vital systems via video pickups located in strategic places
(wheelwells, etc.), including monitoring the exterior of the airplane for
ice.  The inputs to the HMD would come from "rats" built into various
structures.  A structure would be selected with a data glove, which would
also control the action of the rat: halt, go backward, go forward.  With the
help of a head tracker, the movement of the rat's head (two video cameras
and a light) would mimic the movement of the operator's head. 
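
One way the rat interface might behave, sketched in rough Python (the
tracker, glove, and rat objects are hypothetical stand-ins, not real
hardware interfaces):

    import time

    PAN_LIMIT, TILT_LIMIT = 170.0, 60.0      # assumed servo travel, degrees

    def clamp(x, lo, hi):
        return max(lo, min(hi, x))

    def drive_rat(tracker, glove, rat):
        """Slave the rat's camera head to the pilot's head, and map data
        glove gestures onto the halt/forward/backward motion commands."""
        while rat.selected:
            yaw, pitch = tracker.read()       # pilot head angles, degrees
            rat.set_camera(clamp(yaw, -PAN_LIMIT, PAN_LIMIT),
                           clamp(pitch, -TILT_LIMIT, TILT_LIMIT))
            gesture = glove.read_gesture()    # e.g. 'fist', 'point', 'flat'
            if gesture == 'fist':
                rat.move(0.0)                 # halt
            elif gesture == 'point':
                rat.move(+1.0)                # go forward
            elif gesture == 'flat':
                rat.move(-1.0)                # go backward
            time.sleep(1.0 / 60.0)            # track the 60Hz update rate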
  
An alternative application would be to enable pilots to visit destination
airports and environments in order to gain pre-flight familiarization.  This
would be a simple simulation, with basic control requests from a data glove. 
The operator could select location, light conditions, and weather prior to
moving around the area. 


DEPARTURE:

No responses were specific to the departure flight segment.


ENROUTE:

This system could be used as a sort of super-HUD.  The system should have a
transparent image display.  An "overlay" on reality could be useful:  Outline
the active traffic center in a red box; draw an arrow to the destination
airport; display aircraft status, and so on.  A display failure would merely
force the pilot to go back to "regular" instruments, and since he or she would
already have "situational awareness," the disorientation would be minor. 

The system could superimpose airspace symbology onto the real world.  The
pilot could look outside and see the airway stretching out ahead.  An
intersection would be visible as it is approached.  The TCA would appear in
front of the aircraft (although this would probably be of less importance
for a commercial airliner, since airliners fly IFR all the time). 

For traffic avoidance, the system could display superimposed targets on the
flight deck and highlight them when the pilot turns his/her head (eyes) in
that direction.  Essentially this would be a virtual see-and-avoid system.  One
could further add to the system by selectively displaying the correct
altitude, heading, distance, speed, and closure rate of any particular target
within the field-of-view when prompted by the pilot.  The system could provide
some type of warning when a target is a potential threat to the "source"
aircraft. 
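
The "look at it to highlight it" test could reduce to checking each target
against a small cone around the measured head direction.  A minimal sketch,
assuming positions in a common local frame and an arbitrary five-degree
cone:

    import math

    def unit(v):
        m = math.sqrt(sum(c * c for c in v))
        return tuple(c / m for c in v)

    def highlighted_targets(own_pos, head_dir, targets, cone_deg=5.0):
        """Return targets whose line of sight from the pilot's aircraft
        lies within cone_deg of the head (gaze) direction.  own_pos and
        targets are (x, y, z) triples; head_dir is a unit vector from
        the head tracker."""
        cos_cone = math.cos(math.radians(cone_deg))
        hits = []
        for t in targets:
            los = unit(tuple(t[i] - own_pos[i] for i in range(3)))
            if sum(a * b for a, b in zip(los, head_dir)) >= cos_cone:
                hits.append(t)
        return hits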

The system could generate an image of the pilot's aircraft in 3-D or 2-D, in
any scale.  Position relative to other aircraft, airports, and holding
patterns could be monitored from any angle.  Also, the system could show the
aircraft path, with time markings, through the airspace (a highway or tunnel
in the sky). 

The system may enable the pilot to look to the right and have a list of
alternate destinations available, select one with the blink of an eye, and
have the computer automatically alter the flight plan. 


APPROACH:

Once a descent into the terminal environment has begun and workload
increases, the system would be useful.  The system could display all the
usual HUD info - airspeed, altitude, etc.  It could display superimposed
weather information and radar cross-sections, allowing the pilot to steer
around cells - he or she could see inside the cells. 

The system could simulate the outside environment, real-time, in 1-1 scale, as
well as the entire approach and landing.  This imagery could be generated in a
number of ways:  1) Stored maps of well-known airports.  The stored imagery
would be displayed based on the plane's current position derived from GPS,
INS, and radar altimeter.  2) Imagery created in real time by other remote
sensing instruments on the plane.  These could include FLIR, conventional
radar, MMW radar, etc.  The VR generator could superimpose the runway as it
would appear in perfect VFR daylight conditions (including VASI, etc.).  This
could reduce the amount of data needed for presentation/human processing and
could offer improved methods for presenting what is necessary.  The pilot
would have the option to display the ILS needles superimposed over the outside
scene.  With this system the aircraft could conceivably land in any visibility
condition. 
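
The per-frame choice between the two imagery methods might look something
like the following sketch (the 50 meter navigation-error threshold and the
source names are placeholders, not certified figures):

    def choose_scene_source(nav_error_m, flir_ok, radar_ok):
        """Pick the primary imagery source for this frame."""
        if nav_error_m < 50.0:
            return 'stored_map'       # method 1: database keyed to GPS/INS/radalt
        if flir_ok:
            return 'flir'             # method 2: live remote sensing
        if radar_ok:
            return 'radar'
        return 'raw_instruments'      # fall back to the conventional display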

Another interesting possibility on approach and departure would be to merge
doppler radar data into the virtual view to (hopefully) visualize wind shear.
The doppler data might best be transmitted from the ground. 

Initially, perhaps one pilot would wear the VR system (maybe the (younger)
copilot) all the way down to landing, while the captain watches the approach
flown the old-fashioned way.  Provided with precise enough information from
the nav equipment and data bases, the flight crew could taxi in fog. 


GENERAL:
 
If you have a virtual reality system in the cockpit of a commercial airliner,
there is one obvious application: Combat Simulator. Each airliner would have a
system which provides simulated combat capability for use against other
airliners.  Missiles, guns, bombs, all could be simulated in software.  A hit
could be indicated by the target bursting into flames and going down, all
virtually simulated, with no ecologically-unfriendly smoke trails or craters. 
The primary benefit of this is that it increases situational awareness.  If
you know that the other guy is liable to gun you, you'll keep careful watch
over all local traffic.  See and don't be seen. 

The pilot should be given the ability to select the relative intensity of the
various forms of information which are overlaid in the field-of-view.  For
instance, the pilot might opt for the primary image to be the live camera
feed, but to also have the computer generated imagery present at a lower
intensity.  The computer generated imagery would appear as a "ghost" image
behind the main feed.  This could be useful for daylight patchy fog or
low-ceiling operations. 
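
In rendering terms this is an ordinary weighted blend.  A minimal sketch,
assuming per-channel float pixels and treating the pilot-selected intensity
as the blend weight:

    def blend_pixel(camera_px, cgi_px, intensity=0.25):
        """Mix a live camera pixel with the corresponding computer
        generated pixel.  intensity is the pilot's setting for the CGI
        'ghost': 0.0 turns it off, 1.0 shows CGI only."""
        return tuple((1.0 - intensity) * c + intensity * g
                     for c, g in zip(camera_px, cgi_px))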

It would probably also be possible to combine all the sources of input into
an aggregate composite image.  The computer could then compare the various
forms of input and reject the ones that don't match.  Suppose the camera has
a clear view of a building ahead, but the on-board stored data place the
building at a position 300 meters away.  This could be assumed to be a
failure in the GPS/locating system, and the map database would be ignored or
given a very low weighting in the composite image. 
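
In its simplest one-dimensional form, the cross-check could be residual
gating against a weighted mean: sources that disagree badly get almost no
weight.  A sketch (the 100 meter gate and the down-weighting factor are
arbitrary illustrative numbers):

    def fuse_estimates(estimates, gate_m=100.0):
        """estimates: list of (value, weight) pairs giving each source's
        position estimate of the same feature (camera, map database,
        radar).  Outliers beyond gate_m of the weighted mean are nearly
        ignored in the final composite."""
        total = sum(w for _, w in estimates)
        mean = sum(v * w for v, w in estimates) / total
        gated = [(v, w * 0.01 if abs(v - mean) > gate_m else w)
                 for v, w in estimates]
        total = sum(w for _, w in gated)
        return sum(v * w for v, w in gated) / total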


COMMUNICATIONS/AIR TRAFFIC CONTROL:

At major airports there could be a great number of weather-sensing
instruments.  The data from the instruments could be processed by ground
computers into a real-time weather model; this information could then be
transmitted to the aircraft and onto the helmet display to give the pilot an
actual visual image of wind patterns. 

The computer could stay in communication with airspace systems, so the pilot
remains apprised of clearances, altitudes, airspeeds, and so forth, and
controllers remain apprised of the pilot's current situation and position. 
Also, the computer might be able to data-link auto-pireps. 


HARDWARE:

One application may be selective visual enhancement.  LCDs exist that can
turn from translucent to clear.  The computer would mask out the windows and
superimpose appropriate displays for glide-path insertion, runway enhancement,
annotated traffic, etc. Note that the cockpit instruments would **NOT** be
masked out: The pilot would actually see the gauges. 

The aircraft would require more than just a good in-the-cockpit VR system. 
It would also require GPS, LORAN, radar/pressure altimeters, TCAS, and a
host of other sensors and communications devices.  We must not forget that
some of that hardware might need to be modified to accommodate the VR pilot. 

To apply this technology to an actual flight deck, rather than just a
simulator, would require piping in a live visual feed, in addition to any
computer generated imagery.  A 360 degree bubble mounted either under the
cockpit, above the cockpit, in the nose, or possibly a combination of all three,
would contain a camera system that would track the pilot/operator's head
position.  This "live" video could be fed directly into the "helmet" or it
could be processed into a computer model and then adjusted to compensate for
the position of the pilot in the cockpit versus the position of the camera to
prevent pilot/operator disorientation.  This would provide the pilot with not
only a better panoramic view of the environment, but also the ability to look
back at parts of his plane that are not usually visible.
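
The compensation step is essentially a change of reference frame.  A minimal
sketch, assuming the video has already been processed into 3-D points as
described above (the frame names and rotation conventions are assumptions):

    import numpy as np

    def reproject(point_cam, cam_pos, cam_rot, eye_pos, eye_rot):
        """Move a 3-D point expressed in the camera's frame into the
        pilot's eye frame, so the scene can be re-rendered from where
        the pilot actually sits.  *_pos are 3-vectors in the airframe
        frame; *_rot are 3x3 rotations taking each local frame into
        the airframe frame."""
        world = cam_rot @ np.asarray(point_cam) + cam_pos   # camera -> airframe
        return eye_rot.T @ (world - np.asarray(eye_pos))    # airframe -> eye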

A camera per eye would be a distinct advantage: not only could expanded
stereo be computer generated from the map database, but with dual cameras,
one on each wingtip, the pilot could be given much wider/deeper depth
perception of the approach.  The cameras mounted in the aforementioned
bubble enclosures on the cockpit should be dual cameras mounted at normal
human-eye separation. 
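
The gain from a wingtip baseline is easy to quantify: the angular disparity
available as a depth cue scales with camera separation.  A small worked
sketch (the 60 meter wingtip spacing is purely illustrative):

    import math

    def disparity_deg(baseline_m, range_m):
        """Convergence angle between two viewpoints baseline_m apart,
        looking at a point range_m away."""
        return math.degrees(2.0 * math.atan((baseline_m / 2.0) / range_m))

    # Human eye separation vs. a wingtip pair, target 1000 m ahead:
    # disparity_deg(0.065, 1000.0) -> ~0.004 degrees
    # disparity_deg(60.0,  1000.0) -> ~3.4 degrees (far stronger depth cue)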

CRITICISMS/CONCERNS/COMMENTS (not edited):

[Given this technology, how should it be applied to the flight deck of a
commercial airliner?] Enroute, not a heck of a lot.  Terrain avoidance is not
usually a problem at FL370, and neither are VFR a/c w/o transponders.  And,
you're usually over most of the weather, or using the old eyeball, steering
around the mushrooming heads.  Navigation isn't too tricky, either, as the
autopilot is flying you towards the next VOR. 

My personal opinion is that VR technology (WFOV head mounted display, head
tracker, 60 hz graphics generator) should not be applied to an airliner
cockpit at all.  I do not feel that the technology is anywhere near mature
enough to be safe in that situation. Some similar work has been done for
military aircraft, but military aircraft have a much different mission and a
higher level of acceptable risk. 

Another thing that MUST be considered is what to do if the system fails, as
ANY system will eventually fail.  It seems to me that making the transition
from VR to RL (real life) could be disorienting to the pilot, possibly at the
worst possible time. 

--

I don't want to discourage any experiments or research along these lines, but
thinking about having it deployed in the general air carrier fleet makes me
nervous.  There's still controversy about the Airbus A320's fly-by-wire
system, for goodness sake; this would be an order of magnitude more complex. 

--

Well now, the fundamental use of a VR system that you are describing above is
that it allows you to see what you normally cannot with the usual eyeballs. 
This sounds stupid... 

--

Expect resistance from flight crews - if they don't like computers, they sure
won't like VR. 

--

Early experimentation probably would disclose some methods of information
delivery that prove unsatisfactory.  Hopefully all would be tried in
simulators before discovering which create problems ranging from the subtle to
the blatantly gross.  Early VR systems produced a high risk of vomiting even
without having a ride in turbulence, and I'd wonder if airborne VR could trip
the same response. 

--

On a less serious note, my personal taste in airborne virtual reality would
need more transducers -- I'd like to feel the wind in my feathers! 

--

....the traditional HUD information could also be displayed in the helmet. I
think the greatest danger in this system, would be creeping featurism or
information overload. The urge to put TOO MUCH information in the display. 

--

It shouldn't. The reliability cannot be shown to be adequate.

--

My concern is that (a) glass cockpit systems cannot be shown to be
adequately reliable - but at least they can be easily cross-checked against
conventional instruments. A VR system would (I imagine) inhibit reference to
conventional instruments.

(b) the difficulty of assurance and the probability of error rises with
complexity and with novelty (probably more than linearly). I expect a VR
system to be both novel and complex.

(c) - this is more contentious - I don't think that there are major problems
in civil aviation that need VR as a solution. Safety is already higher than
other forms of transport. The main aviation problem seems to be in ATC
(perhaps VR for ATCOs would be a better idea). I prefer to identify the
problem, then look for a technology to solve it, rather than take a technology
and look for application areas. 


ACKNOWLEDGMENTS:

My thanks to the following contributors:  Anonymous, Bernd-Burkhard Borys,
Andrew Boyd, Greg Cronau, Terrell D. Drinkard, Chuck Gallagher, Richard
Johnson, Berry Kercheval, David M. Palmer, Dan Pearl, Paul Raveling, Martyn
Thomas, Kendall L. White, and Steve Wolf. 

E.G.