Flight Deck Displays: Head-Up Displays (HUDs) and Attention Research

These research themes investigated:
1) Head-Up Display (HUD) designs that minimize potential problems associated with operator attention such as attentional tunneling or cognitive capture;
2) Human performance issues related to night vision devices, sensor imagery devices, and infrared technologies; and,
3) Design issues and display usage issues with head-down panel mounted displays.

Flight Deck Displays: Head-Up Displays and Operator Attention


[Image: Head-Up Display (HUD)]

The head-up display (HUD) is a collimated, transparent display medium upon which graphical information, or superimposed symbology, can be presented. Because it is transparent and mounted above the instrument panel, the HUD allows the pilot to view the out-the-window scene and the superimposed symbology simultaneously, without refocusing the eyes or making large eye-scan movements. HUDs have been shown to be a superior presentation method for flight path symbology over that of traditional flight director displays (Boucek, Pfaff & Smith, 1983).

Fixed-Location Symbology - The Display Location Effect

Fixed-location HUD symbology appears to lead to attentional tunneling, which reduces the pilot's ability to maintain awareness of instrument information and information in the far visual scene (Foyle, Sanford & McCann, 1991). With the conventional head-down configuration, attentional tunneling may be disrupted by the eye and head movements necessary to scan back and forth between the panel display and the far visual scene, so joint awareness is improved (Weintraub, Haines & Randle, 1984; Sanford, Foyle, McCann, & Jordan, 1993).

[Image: Screen capture of a HELO Head-Up Display]

In our laboratory facility, we have developed an experimental testbed to explore various possibilities for reducing or eliminating attentional tunneling. The testbed consists of a PC-generated graphic representation of the outside visual scene, on which HUD imagery is superimposed. A flight simulation task is used to evaluate the influence of fixed superimposed symbology location on information integration. Subject pilots are simultaneously required to follow a ground track and to maintain an altitude of 100 ft. Altitude information is available via the out-the-window (terrain) visual cues only, or by a superimposed digital altitude indicator (i.e., simulated HUD symbology). The HUD symbology can be presented at various distances from flight-relevant terrain path information. Root mean squared error (RMSE) altitude and RMSE heading are measured.
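The RMSE performance measure used in the testbed can be sketched in a few lines; the altitude samples below are hypothetical values for illustration, not data from the studies:

```python
import math

def rmse(samples, target):
    """Root mean squared error of tracked values against a constant target."""
    return math.sqrt(sum((s - target) ** 2 for s in samples) / len(samples))

# Hypothetical altitude trace (ft) flown against the 100 ft target altitude.
altitude_trace = [98.0, 103.0, 101.0, 95.0, 104.0]
print(round(rmse(altitude_trace, 100.0), 2))  # → 3.32
```

The same function applies to the heading measure by substituting heading samples and the commanded heading for the target.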

The results of these simulation studies indicate that subject pilots were not able to attend simultaneously to both the fixed-location HUD information and the outside-world terrain when the symbology was directly superimposed over the center of the flight path. Directly superimposed placement did not result in information integration and, in fact, encouraged attentional fixation. Simultaneous processing of both the HUD information and the outside-world information was best in conditions that encouraged attentional or visual scanning, specifically when the symbology was at least 8 degrees away from the center of the flight path.
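For intuition, an angular separation on a display corresponds to a linear offset that depends on viewing distance. A rough pinhole-geometry sketch (the 60 cm viewing distance is an illustrative assumption, not a parameter from the studies):

```python
import math

def eccentricity_deg(offset_cm, viewing_distance_cm):
    """Angular offset of a display element from the fixation point."""
    return math.degrees(math.atan2(offset_cm, viewing_distance_cm))

# Linear offset needed on a display viewed at 60 cm (illustrative value)
# to reach the ~8 deg separation associated with parallel processing.
offset = 60.0 * math.tan(math.radians(8.0))
print(round(offset, 1))                          # → 8.4 (cm)
print(round(eccentricity_deg(offset, 60.0), 1))  # → 8.0 (deg)
```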

"Scene-linked" Head-Up Display (HUD) symbology.

[Image: Screen capture demonstrating scene-linked Head-Up Display symbology]

"Scene-linked" symbols are drawn, and move, as virtual objects in the out-the-window scene. As the aircraft moves through the world, the scene-linked symbols undergo the same visual transformations as real objects. There are no differential motion cues to cause the visual system to interpret the virtual symbols as separate from the world. In the absence of these cues, attentional tunneling should be prevented, enhancing the ability to process scene-linked HUD symbology in parallel with real-world information.
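Scene-linking amounts to re-projecting a fixed world location into viewing angles every frame, so the symbol drifts across the HUD exactly as a real object would. A minimal sketch (the coordinate frame and values are assumptions for illustration, not the lab's rendering code):

```python
import math

def project(world_pt, eye):
    """Angular direction (azimuth, elevation in deg) of a world point from the eye."""
    dx, dy, dz = (w - e for w, e in zip(world_pt, eye))
    return (math.degrees(math.atan2(dx, dz)),
            math.degrees(math.atan2(dy, dz)))

# A scene-linked marker fixed at world location (x=10, y=0, z=100).
marker = (10.0, 0.0, 100.0)
for eye_z in (0.0, 50.0, 90.0):
    az, el = project(marker, (0.0, 0.0, eye_z))
    # Azimuth grows as the marker is approached, exactly as a real
    # object would drift outward in the pilot's forward view.
    print(round(az, 1))
```

Fixed-location symbology, by contrast, skips this projection step and is drawn at constant screen coordinates regardless of aircraft motion.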

Our research has found that when the HUD altitude indicator is converted into a "scene-linked" or virtual indicator, not only does attentional tunneling no longer occur, but out-the-window information processing actually improves. Two further studies extended the results and conditions of scene-linked HUD symbology. The first part-task simulation showed that virtual indicators do not require any special placement in the environment in order to facilitate efficient simultaneous processing of cockpit displays and the out-the-window environment. A second simulation optimized scene-linked route symbology on the HUD, an inherently limited field-of-view (FOV) display device. Results indicated that scene-linked symbology showing only the route edges is insufficient, and that additional route markers must be added that are offset from the route, effectively cueing the route edges that are not visible due to the limited HUD field of view.
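The limited-FOV issue above can be sketched as a simple visibility test: a route edge lying outside the HUD's field of view cannot be drawn, so an offset marker inside the FOV must cue it. The 30 x 24 deg FOV below is a hypothetical value, not a specification from the studies:

```python
def visible_on_hud(az_deg, el_deg, fov_h_deg=30.0, fov_v_deg=24.0):
    """True if an angular direction falls inside the HUD's limited field of view."""
    return abs(az_deg) <= fov_h_deg / 2 and abs(el_deg) <= fov_v_deg / 2

# A route edge 20 deg off-axis is clipped by the HUD; an offset marker
# drawn at 12 deg can stand in for it and cue the unseen edge.
print(visible_on_hud(20.0, 0.0))  # → False
print(visible_on_hud(12.0, 0.0))  # → True
```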

Examples of Scene-Linked Symbology:

- Scene-Enhancements (of real objects)
- Scene-Augmentations (virtual objects)
- Virtual Instruments (virtual instrument gauges projected into the world)

Go to: Flight Deck Displays: Head-Up Displays and Operator Attention Publications Available For Download

HCSL Research: Enhanced Vision Systems

[Image: Enhanced Vision System illustration]

Electro-optical imaging systems are being introduced into military and civilian operations, allowing pilots to fly at very low levels, to avoid obstacles in reduced visibility, and to maintain geographical orientation. These systems transduce energy that is normally not perceptible to the human visual system into a viewable image on a display. Such systems include infrared imagery (thermal energy), night vision goggles (intensified visible light), and passive millimeter-wave sensors. However, pilots are generally not able to achieve the same performance levels as with direct vision in daylight conditions. Despite the problems pilots are encountering with recently fielded systems, surprisingly little information is available about human capabilities and limitations with these devices.

Human Factors researchers have been conducting research in this area with NASA funding. The overall goals of the Night Vision Devices research are to: (1) Determine the critical human factors issues associated with pilots' military and civil use of night vision devices such as night vision goggles (NVGs) and infrared (IR) sensors, (2) Conduct field studies and simulation research to determine pilots' capabilities and problems related to both military and civil use of night vision devices, (3) Identify and evaluate potential technological solutions for identified problems, and (4) Create a knowledge base of relevant research to allow for informed decisions to be made in certifying night vision devices for military and civil use.

Another goal of this research is to contribute to improved system specifications and training requirements by identifying and studying the most significant human factors problems. Problems associated with image quality (e.g., contrast, resolution, field of view, signal-to-noise ratio) reflect hardware limitations which are likely to be improved, but not solved, in the foreseeable future. The offset sensor location and the changes in relative contrasts, shadowing, and shading inherent in all night vision devices create another class of problems. Research has been carried out to identify attentional problems and distortions in motion perception and range estimation caused by an offset eye point.

Laboratory and simulation research is being conducted to identify the perceptual and cognitive costs and benefits of monocular and binocular display formats. While the monocular display format used in infrared systems leaves the unaided eye "free" to view peripheral motion cues and cockpit instruments and to verify the identity of objects directly, differences in the information available to the two eyes may create binocular rivalry; pilots must either selectively focus their attention on one visual field or simultaneously process different visual images. Surprisingly, initial results suggest that people can use their eyes as separate information channels, selectively focusing their attention on information presented to either eye. However, they have difficulty dividing their attention between two (different) visual fields.

Simulation research has been performed to evaluate pilots' abilities to fly using a monocular helmet display while monitoring panel displays or a projected visual scene with the other eye, as well as with stereoscopic displays. A series of studies has been conducted to identify the characteristics of thermal images that contribute most to the problems pilots are encountering. Simulation research will compare performance with simulated out-the-window and infrared imagery to further investigate the relative importance of display parameters. A knowledge base of relevant research will be created to allow informed decisions to be made in certifying night vision devices for civil use.

Go to: Flight Deck Displays: Enhanced / Synthetic Vision Publications Available For Download

For other work on flight deck displays, go to: Flight Deck Displays Publications Available For Download
Curator: Phil So
NASA Official: Dave Foyle
Last Updated: August 15, 2019