Volume 6, Number 1

An Overview of Automatic Target Recognition
Dan E. Dudgeon and Richard T. Lacoss

In this article we introduce the subject of automatic target recognition (ATR). Interest in ATR is increasing in the defense community as the need for precision strikes in limited warfare situations becomes an increasingly important part of our defense posture. We discuss reasons for the difficulty of the ATR problem, and we survey the variety of approaches that attempt to solve it. We conclude by introducing the remaining articles in this special issue of the Lincoln Laboratory Journal.

Performance of a High-Resolution Polarimetric SAR Automatic Target Recognition System
Leslie M. Novak, Gregory J. Owirka, and Christine M. Netishen

Lincoln Laboratory is investigating the detection, discrimination, and classification of ground targets in high-resolution, fully polarimetric, synthetic-aperture radar (SAR) imagery. This paper summarizes our work in SAR automatic target recognition by discussing the prescreening, discrimination, and classification algorithms we have developed; data from 5 km² of clutter and 339 targets were used to study the performance of these algorithms. The prescreener required a low threshold to detect most of the targets in the data, which resulted in a high density of false alarms. The discriminator and classifier stages then reduced this false-alarm density by a factor of 100. We improved target detection performance by using fully polarimetric imagery processed by the polarimetric whitening filter (PWF), rather than by using single-channel imagery. In addition, the PWF-processed imagery improved the probability of correct classification in a four-class (tank, armored personnel carrier, howitzer, or clutter) classifier.
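The polarimetric whitening filter referred to above combines the complex HH, HV, and VV channels into a single speckle-reduced intensity image. The sketch below shows one common per-pixel formulation (a quadratic form with an estimated clutter covariance); the function name, array layout, and the way the covariance is obtained are illustrative assumptions, not the implementation evaluated in the article.

```python
import numpy as np

def polarimetric_whitening_filter(hh, hv, vv, clutter_cov):
    """Minimal PWF sketch: whiten each pixel's polarimetric vector and
    return the resulting (real, speckle-reduced) intensity image."""
    # Per-pixel polarimetric vectors x = [HH, HV, VV], shape (rows, cols, 3)
    x = np.stack([hh, hv, vv], axis=-1)
    cov_inv = np.linalg.inv(clutter_cov)          # inverse 3x3 clutter covariance
    # PWF output is the quadratic form conj(x)^T * inv(Sigma) * x at every pixel
    y = np.einsum('...i,ij,...j->...', np.conj(x), cov_inv, x)
    return np.real(y)
```

In practice the 3-by-3 clutter covariance would be estimated from a target-free region of the same imagery.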

Discriminating Targets from Clutter
Daniel E. Kreithen, Shawn D. Halversen, and Gregory J. Owirka

The Lincoln Laboratory multistage target-detection algorithm for synthetic-aperture radar (SAR) imagery can be separated into three stages: the prescreener, the discriminator, and the classifier. In this article, we focus on the discrimination algorithm, which is a one-class, feature-based quadratic discriminator. An important element of the algorithm design is the choice of features. We examine fifteen features that are used in the discrimination algorithm—three features developed by Lincoln Laboratory, nine developed by the Environmental Research Institute of Michigan, two developed by Rockwell International Corporation, and one developed by Loral Defense Systems. The set of best features from this pool of fifteen was determined by a theoretical analysis, and was then verified by using real SAR data. Performance was evaluated for a number of different cases: for fully polarimetric and HH-polarization data, and for 1-ft-resolution and 1-m-resolution data. In all cases the theoretical performance analysis closely matched the real-data performance, which demonstrates a good understanding of the discrimination algorithm. In addition, we formulate a set of criteria for best feature choice that apply to quadratic discrimination algorithms in general.
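As a rough illustration of the one-class, feature-based quadratic discriminator described above, the sketch below scores a candidate detection by its quadratic (Mahalanobis-style) distance from statistics estimated on target-only training features and rejects it as clutter if the distance is large. The class name, feature layout, and threshold value are hypothetical; the article's fifteen specific features and decision rule are not reproduced here.

```python
import numpy as np

class OneClassQuadraticDiscriminator:
    """Sketch of a one-class quadratic discriminator over feature vectors."""

    def fit(self, target_features):
        # target_features: (num_examples, num_features) measured on known targets
        self.mean = target_features.mean(axis=0)
        self.cov_inv = np.linalg.inv(np.cov(target_features, rowvar=False))
        return self

    def distance(self, features):
        # Quadratic distance of one candidate's feature vector from the target class
        d = features - self.mean
        return float(d @ self.cov_inv @ d)

    def is_target(self, features, threshold=20.0):
        # Candidates far from the target statistics are declared clutter
        return self.distance(features) < threshold
```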

Improving a Template-Based Classifier in a SAR Automatic Target Recognition System by Using 3-D Target Information
Shawn M. Verbout, William W. Irving, and Amanda S. Hanes

In this article we propose an improved version of a conventional template-matching classifier that is currently used in an operational automatic target recognition system for synthetic-aperture radar (SAR) imagery. This classifier was originally designed to maintain, for each target type of interest, a library of 2-D reference images (or templates) formed at a variety of radar viewing directions. The classifier accepts an input image of a target of unknown type, correlates this image with a reference template selected (by matching radar viewing direction) from each target library, and then assigns the image to the target category with the highest correlation score. Although this algorithm seems reasonable, it produces surprisingly poor classification results for some target types because of differences in SAR geometry between the input image and the best-matching reference image. Each reference library is indexed solely by radar viewing direction, and is thus unable to account for radar motion direction, which is an equally important parameter in specifying SAR imaging geometry. We correct this deficiency by incorporating a model-based reference-generation procedure into the original classifier. The modification is implemented by (1) replacing each library of 2-D templates with a library of 3-D templates representing complete 3-D radar-reflectivity models for the target at each radar viewing direction, and (2) including a mathematical model of the SAR imaging process so that any 3-D template can be transformed into a 2-D image corresponding to the appropriate radar motion direction before the correlation operation is performed. We demonstrate experimentally that the proposed classifier is a promising alternative to the conventional classifier.
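The conventional classification step described above can be pictured as a normalized cross-correlation of the input chip against the best-matching reference template in each target library. The sketch below assumes libraries indexed by quantized viewing direction in degrees; the data structures and normalization are illustrative, not the operational system.

```python
import numpy as np

def classify_by_template_matching(chip, libraries, viewing_direction_deg):
    """Correlate an input target chip against each library's best-matching
    template and return the target type with the highest score."""
    def normalized_correlation(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))

    def angular_difference(a, b):
        return abs((a - b + 180.0) % 360.0 - 180.0)

    scores = {}
    for target_type, templates in libraries.items():
        # Select the reference template formed at the nearest viewing direction
        nearest = min(templates, key=lambda ang: angular_difference(ang, viewing_direction_deg))
        scores[target_type] = normalized_correlation(chip, templates[nearest])

    # Assign the chip to the target category with the highest correlation score
    return max(scores, key=scores.get), scores
```

The 3-D extension proposed in the article would, in effect, replace the stored 2-D template with a 2-D image rendered from a 3-D reflectivity model for the input image's radar motion direction before the correlation is computed.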

Neural Systems for Automatic Target Learning and Recognition
Allen M. Waxman, Michael Seibert, Ann Marie Bernardon, and David A. Fay

We have designed and implemented several computational neural systems for the automatic learning and recognition of targets in both passive visible and synthetic-aperture radar (SAR) imagery. Motivated by biological vision systems (in particular, that of the macaque monkey), our computational neural systems employ a variety of neural networks. Boundary Contour System (BCS) and Feature Contour System (FCS) networks are used for image conditioning. Shunting center-surround networks, Diffusion-Enhancement Bilayer (DEB) networks, log-polar transforms, and overlapping receptive fields are responsible for feature extraction and coding. Adaptive Resonance Theory (ART-2) networks perform aspect categorization and template learning of the targets. Finally, Aspect networks are used to accumulate evidence and confidence over sequences of imagery.

In this article, we present an overview of our research for the past several years, highlighting our earlier work on the unsupervised learning of three-dimensional (3-D) objects as applied to aircraft recognition in the passive visible domain, the recent modification of this system with application to the learning and recognition of tactical targets from SAR imagery, the further application of this system to reentry-vehicle recognition from inverse SAR, or ISAR, imagery, and the incorporation of this recognition system on a mobile robot called the Mobile Adaptive Visual Navigator (MAVIN) at Lincoln Laboratory.
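Of the feature-coding steps named above, the log-polar transform is perhaps the simplest to illustrate: it resamples an image about its center so that rotations and scale changes of the input become shifts along the output axes. The sampling grid and nearest-neighbor interpolation below are illustrative assumptions, not the networks' actual encoding.

```python
import numpy as np

def log_polar_transform(image, num_radii=64, num_angles=64):
    """Resample an image onto a (log-radius, angle) grid about its center."""
    rows, cols = image.shape
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
    max_radius = np.hypot(cy, cx)
    # Logarithmically spaced radii and uniformly spaced angles
    r = np.exp(np.linspace(0.0, np.log(max_radius), num_radii))
    theta = np.linspace(0.0, 2.0 * np.pi, num_angles, endpoint=False)
    # Nearest-neighbor sampling, clipped to the image bounds
    y = np.clip(np.round(cy + r[:, None] * np.sin(theta)[None, :]), 0, rows - 1).astype(int)
    x = np.clip(np.round(cx + r[:, None] * np.cos(theta)[None, :]), 0, cols - 1).astype(int)
    return image[y, x]          # shape (num_radii, num_angles)
```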

Multidimensional Automatic Target Recognition System Evaluation
Paul J. Kolodzy

We are developing an evaluation facility that includes an electronic terrain board (ETB) to provide an effective test environment for automatic target recognition (ATR) systems. The input to the ETB, which is a high-performance computer graphics workstation, is very high-resolution data (15 cm in 3-D) taken with pixel registration in the modalities of interest (laser radar, passive IR, and visible). The ETB contains sensor and target models so that measured imagery can be modified for sensitivity analyses. In addition, the evaluation facility contains a reconfigurable suite of ATR algorithms that can be interfaced to real and synthetic data for developing and testing ATR modules.

A first-generation hybrid-architecture (statistical, model-based, and neural network) ATR system is currently operating on multidimensional (laser radar range, intensity, and passive IR) sensor, synthetic, and hybrid databases to provide performance and validation results. A recent study determined the sensor requirements necessary for target classification and identification of eight vehicles under various view aspects, resolutions, and signal strengths.

This article presents a description of the infrared airborne radar used to gather sensor data, a discussion of sensor fusion and the hybrid ATR measurement system, and a review of the ATR evaluation facility. This article also discusses the computer manipulation and generation of laser-radar and passive-IR sensor imagery and the processing modules used for target detection and recognition. We give results of processing real and synthetic imagery with the ATR system, with an emphasis on interpreting results with respect to sensor design.

An Efficient MRF Image-Restoration Technique Using Deterministic Scale-Based Optimization
Murali M. Menon

A method for performing piecewise-smooth restorations on images corrupted with high levels of noise has been developed. Based on a Markov Random Field (MRF) model, the method uses a neural-network sigmoid nonlinearity between pixels in the image to produce a restoration with sharp boundaries while providing noise reduction. The model equations are solved with the Gradient Descent Gain Annealing (GDGA) method—an efficient deterministic search algorithm that typically requires fewer than 200 iterations for image restoration when implemented as a digital computer simulation. A novel feature of the GDGA method is that it automatically develops an annealing schedule by adaptively selecting the scale step size during iteration. The algorithm is able to restore images that have up to 71% of their pixels corrupted with non-Gaussian sensor noise. Results from simulations indicate that the MRF-based restoration remains useful at signal-to-noise ratios 5 to 6 dB lower than those required by the more commonly used median-filtering technique. These results are among the first such quantitative results in the literature.
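To make the approach concrete, the sketch below performs deterministic gradient descent on an MRF-style energy with a quadratic data term and a saturating (sigmoid-like) interaction between neighboring pixels, while gradually raising the gain of the nonlinearity. The fixed geometric gain schedule and parameter values are illustrative stand-ins for the adaptive gain annealing developed in the article.

```python
import numpy as np

def mrf_restore(noisy, num_iters=200, step=0.2, data_weight=1.0,
                gain_start=1.0, gain_growth=1.03):
    """Piecewise-smooth restoration sketch: gradient descent on an MRF energy
    with a tanh interaction between 4-connected neighboring pixels."""
    f = noisy.astype(float).copy()
    gain = gain_start
    for _ in range(num_iters):
        grad = data_weight * (f - noisy)            # pull toward the observed data
        # Saturating neighbor interaction: small differences are smoothed, while
        # large jumps (edges) contribute a bounded gradient, so boundaries stay sharp
        for axis in (0, 1):
            for shift in (1, -1):
                grad += np.tanh(gain * (f - np.roll(f, shift, axis=axis)))
        f -= step * grad
        gain *= gain_growth                         # anneal the sigmoid gain upward
    return f
```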

Machine Intelligent Automatic Recognition of Critical Mobile Targets in Laser Radar Imagery
Richard L. Delanoy, Jacques G. Verly, and Dan E. Dudgeon

A variety of machine intelligence (MI) techniques have been developed at Lincoln Laboratory to increase the performance reliability of automatic target recognition (ATR) systems. Useful for recognizing targets that are only marginally visible (due to sensor limitations or to the intentional concealment of the targets), these MI techniques have become integral parts of the Experimental Target Recognition System (XTRS)—a general-purpose system for model-based ATR. Using laser radar images collected by an airborne sensor, the prototype system recognized a variety of semi-trailer trucks with high reliability, even though the trucks were deployed in high-clutter environments.

Machine Intelligent Gust Front Detection
Richard L. Delanoy and Seth W. Troxel

Techniques of low-level machine intelligence, originally developed at Lincoln Laboratory to recognize military ground vehicles obscured by camouflage and foliage, are being used to detect gust fronts in Doppler weather radar imagery. This Machine Intelligent Gust Front Algorithm (MIGFA) is part of a suite of hazardous-weather-detection functions being developed under contract with the Federal Aviation Administration. Initially developed for use with the latest-generation Airport Surveillance Radar equipped with a wind shear processor (ASR-9 WSP), MIGFA was deployed for operational testing in Orlando, Florida, during the summer of 1992. MIGFA has demonstrated levels of detection performance that not only markedly exceed the capabilities of existing gust front algorithms but are also competitive with those of human interpreters.

Extracting Target Features from Angle-Angle and Range-Doppler Images
Su May Hsu

For diffuse targets, features such as shape, size, and motion can be determined from a time series of images from either angle-angle passive telescopes or range-Doppler radars. The extracted target features can then be used for automated target recognition and identification.

An algorithm that uses scene-analysis techniques has been developed to perform the feature extraction. The algorithm first processes the images to suppress noise, then applies a two-dimensional slope operation for edge detection to determine the target boundaries. Next, Hough transforms are used on the target edges to detect straight lines and curves, which are subsequently refined with line and curve fits. Groups of the fitted lines are then examined to form cylinders and cones representing typical target components. After these shapes have been identified, the target configuration, size, location, and attitude can be estimated. The target motion can then be inferred from a time series of attitudes that have been extracted from a sequence of images.
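The straight-line detection step can be illustrated with a standard Hough transform over the edge map: each edge pixel votes for all (rho, theta) line parameterizations passing through it, and peaks in the accumulator correspond to straight target edges. The binning choices below are illustrative assumptions; the article's curve detection and subsequent line and curve fitting are not shown.

```python
import numpy as np

def hough_lines(edge_image, num_thetas=180, num_rhos=200):
    """Accumulate straight-line votes (rho = x*cos(theta) + y*sin(theta))
    from a binary edge image."""
    rows, cols = edge_image.shape
    thetas = np.linspace(0.0, np.pi, num_thetas, endpoint=False)
    max_rho = np.hypot(rows, cols)
    rho_bins = np.linspace(-max_rho, max_rho, num_rhos)
    accumulator = np.zeros((num_rhos, num_thetas), dtype=int)

    ys, xs = np.nonzero(edge_image)                 # edge-pixel coordinates
    for y, x in zip(ys, xs):
        rhos = x * np.cos(thetas) + y * np.sin(thetas)
        rho_idx = np.digitize(rhos, rho_bins) - 1   # bin each vote by rho
        accumulator[rho_idx, np.arange(num_thetas)] += 1

    return accumulator, rho_bins, thetas
```

Peaks in the accumulator give candidate (rho, theta) pairs, which would then be refined with the line fits mentioned above.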
