Thursday, February 18, 2010

A-10 Full Mission Trainer (FMT) and JTAC simulator



Today I received the following e-mail from Richard M. Rybacki of MetaVR, Inc., San Antonio, Texas:

Hello J.J.,

Correlating visuals across the A-10 Full Mission Trainer and JTAC networked simulators is described in the article "A question of correlation" in Training and Simulation Journal's December 2009–January 2010 issue. Both simulator programs use MetaVR's databases and image generator (VRSG).

http://www.tsjonline.com/story.php?F=4367570
http://www.metavr.com/technology/papers/L3-MetaVR-SieverdingR2.pdf

Richard M. Rybacki


Thanks for the heads-up, Richard. And here's the news article:

A question of correlation


Correlating visuals across networked simulators is a tricky business. Engineers at the U.S. Air Force Research Laboratory have devised a method to increase correlation and improve training realism across an A-10 aircraft full-mission trainer and a joint terminal attack controller simulator. Amos Kent of AFRL, Rich Rybacki of MetaVR and Mike Sieverding of L-3 Link Simulation & Training describe their work.

By Amos Kent, Rich Rybacki and Mike Sieverding
December 01, 2009

A considerable challenge facing distributed virtual simulation is to minimize correlation differences between networked simulations so that humans in the loop perceive and respond to the same stimulus — as they would in the real world. There are many causes or domains of correlation differences, including appearance, behavior and time.

Considerable work has been done across the services to develop methods of reusing environmental/spatial datasets that not only reduce the schedule and cost of database generation, but also achieve greater correlation between differing simulations. However, even if all simulations were to share the same database geometry, textures, colors and rendering engine, how they would look through the simulations' different display systems could vary dramatically.

This briefing presents a novel algorithmic approach of modifying database colors and intensities. The principal variable within the algorithm is the difference in measured display system contrast ratios between two simulator systems. Contrast ratio test methods, tools and results are also presented to provide objective and repeatable measures. We also describe a method used to remap all pixel colors and intensities with the adjustment algorithm during run-time, using plug-in shader techniques.

The method described here offers the potential for application across any simulation network where the environment model is built from common, shared datasets, where different types of display systems with widely varying contrast ratios are employed and where correlated (or at least more similar) perceptions are required.

The U.S. Army, Navy, Air Force and Special Operations Command have established database standards programs of widely varying scope, but with a remarkably similar selection of in-process dataset formats. The Navy and Air Force have conducted studies showing that dataset investments shared at the in-process format level can yield cost and schedule savings of 60 percent to 95 percent, assuming somewhat consistent database content requirements. File exchange at the in-process dataset format level (OpenFlight, Shapefile and GeoTIFF) allows database investments to be shared across services and programs. This also improves correlation, since differing programs can start their database development from more similar value-added source data packages rather than from raw-source data packages of varying pedigree. It will not necessarily ensure sufficient correlation in networked simulation, but it can serve as an excellent initial step; other methods must be developed to further improve network correlation.

Network correlation is a hugely complex theme. Correlation differences between networked simulations can occur across the domains of appearance, behavior and time. Appearance relates to the location, size, color, contrast, material, orientation, etc., of objects within a spatial environment. Behavior relates to how those objects move, spawn, emit, absorb, change state, etc. Time relates to objects' duration, recency, sequence, frequency, latency and when they start/stop. Behavior causes changes in appearance over time. Correlation differences in any domain between networked simulations can be significant enough to limit the validity of network events.

A-10 CASE STUDY

The Air Force's A-10 Full Mission Trainer (FMT) program has an extensive, nearly global database, with many higher-resolution insets to support training close-air support (CAS) tasks and skills. The program includes an extensive DataBase Generation System (DBGS) that adds and modifies its database as requirements change and technologies evolve. The A-10 FMT out-the-window display system uses eight rear-projected facets or channels to provide a full 360-degree horizontal and nearly 120-degree vertical field of view. The program participates in the overarching Air Force Distributed Mission Operations (DMO) program, to include network simulation events.

The joint terminal attack controller (JTAC) simulator program is funded by Air Combat Command and managed by the U.S. Air Force Research Laboratory/Warfighter Readiness Research Division in Mesa, Ariz., to prototype, demonstrate and evaluate simulators to train JTAC operators in CAS tasks and skills. As a cost savings method, the JTAC program uses the same database built for the A-10 FMT program (and the same image generator) and includes no provisions for a DBGS. The JTAC program consists of two types of simulators with two developmental display systems. One is an internally projected dome with a full 360-degree horizontal and 120-degree vertical field of view, using 14 display channels; the other is an internally projected concatenated dome with an approximate 200-degree horizontal and 120-degree vertical field of view, using 13 display channels. Although developmental, the JTAC program has also participated in DMO network simulation events.

Although both programs use the same database and image generator vendor — MetaVR's Virtual Reality Scene Generator (VRSG) — and their display systems use somewhat similar DLP projectors from different vendors, the scenes viewed through their display systems are considerably different and can compromise or limit the types of scenarios used during CAS training in DMO. Differences in display resolution account for some of the correlation differences, but the majority comes from large differences in display system contrast ratios. For example, when at the same location in the same database and with the same viewing conditions (time, day, visibility, etc.), an A-10 FMT scene will appear to have good contrast and strong chroma differences, but the JTAC scene will appear washed out, with little ability to distinguish contrast and chroma between objects or within textures.

During networked operation, the ground-based JTAC may direct the A-10 FMT to a target using plain-language feature descriptions based on what the JTAC sees in his simulator, but the A-10 FMT pilot sees a different scene and the JTAC's description may make no sense to him. Because of large display scene differences, target scenarios must be carefully chosen and scripted to ensure that what the JTAC sees is similar to what the A-10 pilot sees. A method to compensate for display system contrast ratio differences and improve apparent scene correlation between the A-10 FMT and JTAC was desired.

Also, the JTAC operators would not only use unaided eyes to identify targets or objects in the dome-displayed scene, but would also use binoculars, night-vision goggles and laser range-finder designators while within the JTAC dome. These JTAC tools have their own display channels that are not affected by the dome's display system contrast ratio. Objects were often much more discernible through the tools than with unaided eyes — to a degree that was felt to be unrealistic. A method to improve scene correlation between the dome scene and the scene viewed through JTAC tools was also desired.

ALGORITHM DEVELOPMENT

First and foremost, an algorithm cannot change a display system's contrast ratio. It is what it is. However, if the algorithm were to artificially adjust colors and intensities before going through the display system, the resultant scene could appear more similar to scenes viewed on other display systems with differing contrast ratios.

Since a low contrast ratio takes away luminance contrast and depletes chroma purity, a method is desired to modify database colors and intensities to increase luminance contrast and chroma purity, where reds appear redder, blues bluer, etc. Also, it is desired that full black should remain full black and full white should remain full white.

Since differences in displayed scenes between the A-10 FMT and JTAC systems are principally caused by differences in display system contrast ratios, it is also desired that the algorithm be sensitive to contrast ratio differences.

Many different methods could be used to modify database colors and intensities for this purpose. Since equal units of the three color primaries, red, green and blue (RGB), do not result in equal perceived intensities, methods were considered that reflected those differences, as were methods to reflect nonuniform gamma correction for the primaries. A square root luminance adjust function was also considered.
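The unequal perceived intensities of the RGB primaries can be made concrete with the ITU-R BT.601 luma weights, one common weighting (the article does not say which weighting was actually considered). A minimal illustration:

```python
def relative_luminance(r, g, b):
    """Perceived luminance of an RGB triple (components in 0..1),
    using the ITU-R BT.601 luma weights, one common choice."""
    return 0.299 * r + 0.587 * g + 0.114 * b

# Equal units of the three primaries are far from equally bright:
print(relative_luminance(1, 0, 0))  # red:   0.299
print(relative_luminance(0, 1, 0))  # green: 0.587
print(relative_luminance(0, 0, 1))  # blue:  0.114
```

A square-root luminance adjust of the kind mentioned above would operate on this scalar luminance rather than on the individual channels.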

Various algorithmic solutions were analyzed for acceptability by applying them to Microsoft PowerPoint color swatches having a wide range of chroma and intensities. Following PowerPoint color swatch analysis, a method was selected as the most promising.

The required algorithm inputs are the original RGB from the producer program (in this case, A-10 FMT) and the display system contrast ratios of the producer (A-10) and user (JTAC) programs. The output is corrected RGB for the user (JTAC) program. Since the algorithm requires display system contrast ratio as an input, the A-10 and JTAC display systems' contrast ratios must first be measured before tests and experiments can be conducted. Neither program had previously captured that data.
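The article does not publish the algorithm itself, but its stated contract can be sketched. Everything inside `adjust_rgb` below — the log-ratio strength term and the gain-style S-curve — is a hypothetical placeholder, chosen only to satisfy the stated requirements: black and white stay fixed, and mid-tone contrast grows with the contrast-ratio gap.

```python
import math

def adjust_rgb(rgb, cr_producer, cr_user):
    """Sketch of the stated interface: original RGB from the producer
    (A-10 FMT) plus both display systems' measured contrast ratios in,
    corrected RGB for the user (JTAC) out. Components are in 0..1.

    The curve below is a placeholder, not the published algorithm.
    """
    # No correction when the user display is at least as capable.
    strength = max(math.log(cr_producer / cr_user), 0.0)
    g = 1.0 + 0.5 * strength  # g == 1 means identity

    def curve(c):
        if c <= 0.0:
            return 0.0
        if c >= 1.0:
            return 1.0
        # Gain-style S-curve with fixed points at 0, 0.5 and 1;
        # g > 1 darkens darks and brightens brights.
        return c**g / (c**g + (1.0 - c)**g)

    return tuple(curve(c) for c in rgb)

# A-10 FMT (33.65:1) producing for the JTAC 360 dome (2.05:1):
print(adjust_rgb((0.25, 0.5, 0.75), 33.65, 2.05))
```

With equal contrast ratios the function reduces to an identity, so the same code path could run on every networked simulator while only the low-contrast displays are actually corrected.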

CAPTURING CONTRAST RATIOS

Contrast ratio is commonly defined as the ratio of the luminance of the brightest color (white) to that of the darkest color (black) that the display system is capable of producing. The greater the ratio, the greater the dynamic range of the display system luminance and its ability to mimic the real world. This definition describes what the display system can attain, not what the projector can produce. Although projectors are frequently described by their vendors as having many-thousands-to-one contrast ratios, when the projectors are integrated into a display system with all projectors/channels operating, it is quite difficult to attain better than a 35:1 contrast ratio with display systems having very large fields of view.

There are numerous ways to measure contrast ratio. The method described here relies heavily upon ANSI Static Contrast Ratio methods and is based on the very similar Federal Aviation Administration test methods.

To measure contrast ratio, a test sphere 3-D model 10 meters in diameter was constructed using 800 emissive polygons in 9-degree-high rows and 9-degree-wide columns at the horizon, with the columns becoming narrower toward the zenith and nadir. The polygons were colored in a checkerboard pattern of alternating black and white. Half of the display system was composed of white polygons, the other half black polygons. The A-10 and JTAC computational/display eyepoints were then positioned at the sphere center. A spot photometer was then used to "shoot" the luminance (in foot-lamberts) at the center of the two rows of black and white polygons just above and below the horizon line (+9 to -9 degrees elevation), ranging from -108 to +108 degrees azimuth. Sampled polygons (a total of 24 white and 24 black) ranged across several display channels for all three tested systems. The average white polygon luminance value was then divided by the average black luminance value, yielding a contrast ratio for the display system being tested.
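The measurement itself reduces to a ratio of averaged luminance readings. A minimal sketch, using hypothetical foot-lambert samples (the individual per-polygon readings were not published):

```python
def contrast_ratio(white_samples_fl, black_samples_fl):
    """ANSI-style static contrast ratio: the average white luminance
    divided by the average black luminance (both in foot-lamberts)."""
    mean_white = sum(white_samples_fl) / len(white_samples_fl)
    mean_black = sum(black_samples_fl) / len(black_samples_fl)
    return mean_white / mean_black

# Hypothetical spot-photometer readings from four of the 24 sampled
# polygons of each color:
whites = [6.8, 7.1, 6.9, 7.0]
blacks = [0.20, 0.21, 0.21, 0.22]
print(round(contrast_ratio(whites, blacks), 2))  # roughly 33:1
```
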

Using the same test sphere, test equipment and test methods in all three display systems yielded contrast ratios of 33.65:1 for the A-10 FMT, 5.91:1 for the JTAC concatenated dome and 2.05:1 for the JTAC 360 dome. The results confirmed the anticipated low contrast ratios in the JTAC domes, especially in the 360 dome. The JTAC display system design was selected to satisfy higher-priority performance requirements, with the understanding that lower-priority performance characteristics would suffer.

Internally projected dome display systems have historically had very low contrast ratios, unless the dome surface is coated or treated with a material having increased specular reflectance, or high gain. The downside of high gain is a very small viewing volume without objectionable intensity falloff. Both JTAC displays require a very large viewing volume, with multiple observers able to roam within the domes using tactical equipment. For this reason, the JTAC dome surfaces are nearly Lambertian (diffuse) in their reflectance, having a gain of about 1. In the case of the JTAC dome design, contrast ratio was traded off in favor of a large viewing volume. Also, facility size restrictions prevented use of alternative display technologies for JTAC. Tradeoffs always occur during the selection of any display system. That's just the way it is.

The next step was to input the contrast ratio values into the algorithm and adjust color tables and palettes for all polygons and all textures. This step was known to be the most time-consuming, since considerable offline processing would be required.

During informal discussions, MetaVR personnel suggested that offline color adjustment might not be necessary and that the image generator itself could be programmed to make the adjustment at run-time. That was very appealing, since several algorithm iterations were anticipated before any type of optimal result could be attained.

One of the most significant advances in computer graphics in recent years is the development of the programmable vertex and pixel shader. The highly programmable nature of per-vertex and per-pixel operations has opened the door to stunning advances in realism. These advances have been realized in both the commercial gaming sector as well as military visual simulation. While the benefits of shaders for advanced lighting and shading are well understood, the utility of the programmable Graphics Processing Unit as a general image processing engine offers many other capabilities as well.

Since both the A-10 FMT and JTAC programs use a MetaVR VRSG image generator, MetaVR developed a plug-in interface that allows the contrast adjust algorithm to be inserted into the scene generation pipeline during run-time as a dynamic link library (DLL) call. The DLL implements a "user-draw" function which VRSG calls after it has rendered the 3-D scene. All pixels are redefined using the DLL in accordance with the contrast adjust algorithm described previously. The processing overhead caused by this additional function is estimated at approximately one millisecond.

The MetaVR VRSG implementation served as a specific example, but this algorithm and DLL method could be easily adapted to other image generators that offer a similar plug-in interface.
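The actual plug-in is a compiled DLL running inside VRSG, but the general shape of such a post-render pass can be sketched in Python: precompute the adjustment curve into a 256-entry lookup table, then remap every pixel of the already-rendered frame, analogous to the "user-draw" callback. The `gain` curve here is a placeholder, not the article's algorithm.

```python
def build_lut(curve, levels=256):
    """Precompute curve(c) for every 8-bit level so the per-pixel work
    in the post-render pass is a single table lookup."""
    top = levels - 1
    return [round(curve(i / top) * top) for i in range(levels)]

def post_render_pass(framebuffer, lut):
    """Remap every pixel of a rendered frame, analogous to the
    'user-draw' function VRSG calls after drawing the 3-D scene.
    framebuffer is a list of (r, g, b) 8-bit tuples."""
    return [(lut[r], lut[g], lut[b]) for (r, g, b) in framebuffer]

# Placeholder contrast-boost curve (a simple gain function), standing
# in for the contrast adjust algorithm described above:
def gain(c):
    return c * c / (c * c + (1.0 - c) * (1.0 - c))

lut = build_lut(gain)
frame = [(0, 0, 0), (64, 128, 192), (255, 255, 255)]
print(post_render_pass(frame, lut))
```

A lookup table is just one way to keep the per-pixel cost down; the real implementation runs the adjustment on the GPU as a pixel-shader pass.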

MEASURED CHROMA INCREASE

The spot photometer used to collect data for contrast ratio measurements (Konica Minolta CS-100A with a 1-degree aperture) also captured X,Y CIE color space. The A-10 and the JTAC 360 dome were initialized to the same location, elevation and attitude in the same database. Nine easy-to-identify-and-repeat objects in the database scene were selected, and X,Y measures were collected from them. Before application of the DLL algorithm, the average distance of the JTAC X,Y coordinates from the A-10 X,Y coordinates was 0.053 X,Y units. After application of the DLL algorithm, the average distance from the A-10 color coordinate location decreased to 0.048 X,Y units, or about 10 percent closer — a small but measurable amount.
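The reported "distance" is Euclidean distance in CIE x,y chromaticity space. A small check that the published averages imply the quoted improvement (the per-object coordinates themselves were not released):

```python
import math

def xy_distance(p, q):
    """Euclidean distance between two CIE x,y chromaticity points;
    this is how each per-object distance would be computed."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

# The reported before/after average distances from the A-10 coordinates:
before, after = 0.053, 0.048
improvement = (before - after) / before * 100
print(f"{improvement:.1f}% closer")  # prints "9.4% closer"
```

A reduction from 0.053 to 0.048 x,y units is a 9.4 percent improvement, consistent with the "about 10 percent" stated above.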

The 800-polygon test sphere was modified to include a row of 12 colored polygons, each with a different color but of generally mixed chroma. With the JTAC 360 dome initialized to the modified test sphere center, the same photometer was used to capture luminance values from each colored polygon and from a reference full-white polygon, both before and after application of the DLL algorithm. After application of the DLL algorithm, the average luminance ratio of the full-white polygon to the colored polygons increased from 1.44:1 to 1.63:1, an increase in contrast ratio of about 13 percent.

Using measured display system contrast ratios as input to the DLL algorithm should be considered only as a starting point. This algorithmic approach is not based on physics, color science or what is known of human visual perception. It is based upon desired algorithm characteristics. The final setting must consider subjective assessments of scene improvements. Tweak it until the scene looks best and scene correlation differences appear least objectionable.

The JTAC systems use the A-10 FMT database without change, to save cost and schedule during development and prototyping. The A-10 FMT database colors and intensities have been tailored and tweaked to satisfy the subjective opinions of A-10 pilots for several years.

Algorithmic options exist to improve scenes in display systems with low contrast ratios and to improve correlation between networked systems having large display system contrast ratio differences.

Use of modern graphics processors' programmable vertex and pixel shader functions can simplify and speed up algorithm development and testing.

It is recommended that additional tests be conducted by modifying all color palettes (not using the DLL plug-in) to ensure optimum system performance with minimal risk of artifacts.

It is also recommended that standard tests, methods and tools to measure display system contrast ratio be developed for networked simulation programs that include devices with a significant range of display system contrast ratios.

Amos Kent is a computer engineer supporting the U.S. Air Force Research Laboratory/Warfighter Readiness Research Division’s Rehearsal Enabling Simulation Technologies (REST) project in Mesa, Ariz. Rich Rybacki is co-founder and chief technology officer at MetaVR. His primary responsibilities include the development and support of VRSG, MetaVR’s image generator product. Mike Sieverding is a senior engineer with L-3 Link Simulation & Training. He supports the REST project. This briefing is an edited version of a paper the authors presented at the IMAGE 2009 conference in St. Louis in August (www.image-society.org).

Source

For more details, please see the full paper, "A Method to Compensate for Display System Contrast Ratio Differences in Distributed Simulation."
