Soldiers See Battlefield Clearly With Advanced Imaging Systems

The Daily Journalist.

On the military battlefield, atmospheric turbulence significantly degrades the performance of tactical and long-range imaging systems used for intelligence, surveillance and reconnaissance (ISR) operations. In addition, it creates difficulties for target acquisition, identification and recognition.

Traditionally, turbulence mitigation methods are applied to imagery post-detection to improve image quality. However, such approaches are unable to process live data and images in real time.

To advance capabilities for modern warfare and support decision-making, it is critical that commanders receive clear imagery and video data in real time.

A scenario of real-time long-range imaging and processing through the atmosphere for battlefield ISR and target tracking.
To meet this challenge, scientists and engineers at the U.S. Army Research Laboratory (ARL) have developed an intelligent adaptive optics (AO) imaging system that uses deformable mirrors (DMs) in conjunction with post-detection processing to remove turbulence-induced wavefront distortion while imagery is being collected.
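In concept, the system interleaves optical correction and digital enhancement on every frame. The sketch below is purely illustrative; the object and method names (camera, dm, fuser) are hypothetical stand-ins, not ARL's software:

```python
# Hypothetical per-frame loop for a hybrid imaging pipeline that pairs
# pre-detection adaptive optics with post-detection processing.
# All names here are illustrative assumptions, not ARL's code.

def process_one_frame(camera, dm, fuser):
    """One cycle: update the deformable mirror from a quality metric,
    grab the optically corrected frame, then enhance it digitally."""
    metric = camera.read_beacon_metric()  # image-quality signal (e.g., beacon sharpness)
    dm.apply_correction(metric)           # pre-detection wavefront correction
    frame = camera.grab_frame()           # frame captured through corrected optics
    return fuser.process(frame)           # post-detection enhancement (e.g., LRF)
```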

Over recent years, ARL researchers have developed technologies to fabricate DMs with different geometries, including large-aperture, multi-section (pocket) and obscuration-free designs.

These different geometries are necessary to satisfy specific imaging requirements such as range, field of view and resolution. They have been successfully applied to real-time imaging through atmospheric turbulence and to mitigating turbulence effects in terrestrial free-space communication systems.

ARL can currently fabricate DMs with actuators that have a response bandwidth of up to 25 kHz and a mechanical stroke of up to ±15 µm, equivalent to several tens of wavelengths of potential correction.

Block diagram of the LRF hardware acceleration and image-fusion sequence at the full frame rate.
These actuator parameters are two to three times faster in response speed and offer three to five times greater stroke range than previous devices and commercial products.

The improved response bandwidth enables a wavefront compensation rate more than 100 times faster than the atmospheric turbulence variation (~200 Hz), and the increased stroke range compensates for more wavefront distortion and optical aberration than previously possible. Further, for present applications, the DMs provide a resolution of three to five pixels/cm².
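A quick back-of-the-envelope check shows how those numbers line up; the visible wavelength used for the stroke comparison is an assumption, not a figure from the article:

```python
# Back-of-the-envelope check on the stated DM figures.
actuator_bandwidth_hz = 25_000        # stated DM actuator bandwidth (25 kHz)
turbulence_rate_hz = 200              # stated turbulence variation (~200 Hz)
print(actuator_bandwidth_hz / turbulence_rate_hz)  # 125.0 -> "over 100 times faster"

stroke_um = 15.0                      # +/-15 um mechanical stroke
wavelength_um = 0.5                   # assumed ~500-nm visible light
print(stroke_um / wavelength_um)      # 30.0 -> "several tens of wavelengths"
```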

For the AO system control software, ARL researchers developed a delayed stochastic parallel gradient descent (D-SPGD) control algorithm and tested it on an experimental testbed with a 2.3-km, nearly horizontal path.

Researchers used a far-field laser beacon as the metric signal for the SPGD control program.

The D-SPGD algorithm takes into account the travel time of light over that distance and runs two DMs asynchronously to compensate for the wavefronts of the received images.
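For illustration, a minimal sketch of the core SPGD update follows, assuming a NumPy actuator-command vector and a scalar image-quality metric; the delay handling and asynchronous dual-DM operation that distinguish ARL's D-SPGD variant are omitted:

```python
import numpy as np

def spgd_step(u, metric, gain=0.3, delta=0.05, rng=None):
    """One stochastic parallel gradient descent (SPGD) update of the
    DM actuator command vector u, driven by a scalar metric(u).

    Textbook SPGD core only: ARL's D-SPGD also compensates for the
    light travel time over the 2.3-km path, which is omitted here.
    """
    rng = rng or np.random.default_rng()
    # Perturb every actuator in parallel with a random +/- delta.
    perturb = delta * rng.choice([-1.0, 1.0], size=u.shape)
    j_plus = metric(u + perturb)    # metric with positive perturbation
    j_minus = metric(u - perturb)   # metric with negative perturbation
    # Step each actuator along the estimated gradient of the metric.
    return u + gain * (j_plus - j_minus) * perturb
```

Each iteration requires only two metric evaluations regardless of the number of actuators, which is what lets this family of methods keep pace with high-bandwidth DMs.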

To further enhance image quality, an advanced digital synthetic processing technique called lucky-region fusion (LRF) was used.

The LRF algorithm, developed previously at ARL, enhances image resolution over a large field of view by extracting only those regions of each incoming frame that exhibit high resolution and fusing those regions into a single image.
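A minimal sketch of the idea, assuming grayscale NumPy frames and using local variance as a crude stand-in for the quality metric (the actual LRF metric and its boundary blending are more sophisticated):

```python
import numpy as np

def lucky_region_fusion(frames, block=32):
    """Toy lucky-region fusion over a stack of grayscale frames:
    for each block-sized tile, keep the pixels from whichever frame
    scores highest on a local sharpness metric, then assemble the
    winning tiles into one fused image."""
    h, w = frames[0].shape
    fused = np.zeros((h, w), dtype=frames[0].dtype)
    for y in range(0, h, block):
        for x in range(0, w, block):
            tiles = [f[y:y+block, x:x+block] for f in frames]
            # Local variance as a crude stand-in for image sharpness.
            best = max(tiles, key=lambda t: float(np.var(t)))
            fused[y:y+block, x:x+block] = best
    return fused
```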

AO-compensated, LRF hardware-processed real-time video imagery captured from a distance of 2.3 km at an output rate of 100 frames per second.

As in other computational imaging systems that combine pre-detection compensation with post-detection processing to generate imagery with enhanced information, system performance improves when the LRF algorithm runs on a high-performance computer.

Conventional processors, and even graphics processing units (GPUs), are incapable of providing real-time extraction, processing and reconstruction of this information.

To accelerate processing speed, ARL researchers collaborated with the University of Delaware to exploit the parallel processing capability of field-programmable gate arrays (FPGAs).

Together, ARL and university researchers implemented the lucky-region extraction element of the LRF algorithm on a Virtex-7 FPGA, while image fusion was performed on a GPU.

With this hardware acceleration, ARL demonstrated real-time imaging and processing at 100 frames per second with a latency of less than 10 milliseconds, or only one frame, compared with a processing speed of only one frame per second a few years ago.
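The one-frame latency figure follows directly from the frame rate, as a quick calculation shows:

```python
# At 100 fps, each frame occupies 10 ms, so a latency under 10 ms
# means the output lags the live scene by at most one frame.
frame_rate_fps = 100
frame_period_ms = 1000 / frame_rate_fps
print(frame_period_ms)        # 10.0 ms per frame
print(frame_rate_fps / 1.0)   # 100x the one-frame/sec throughput of a few years ago
```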

Compared with conventional ISR systems, in which data and imagery are first collected and then processed offline in data or command centers, ARL’s real-time system significantly reduces the delay in delivering useful imagery to commanders.

It provides them with a new capability in real-time long-range atmospheric imaging for situational awareness, target identification and tracking, and allows them to capitalize on opportunities they would not previously have had.
