Volume 14, Number 1

Spectral Imaging for Remote Sensing
Gary A. Shaw and Hsiao-hua K. Burke

Spectral imaging for remote sensing of terrestrial features and objects arose as an alternative to high-spatial-resolution, large-aperture satellite imaging systems. Early applications of spectral imaging were oriented toward ground-cover classification, mineral exploration, and agricultural assessment, employing a small number of carefully chosen spectral bands spread across the visible and infrared regions of the electromagnetic spectrum. Improved versions of these early multispectral imaging sensors continue in use today. A new class of sensor, the hyperspectral imager, has also emerged, employing hundreds of contiguous bands to detect and identify a variety of natural and man-made materials. This overview article introduces the fundamental elements of spectral imaging and discusses the historical evolution of both the sensors and the target detection and classification applications.

Compensation of Hyperspectral Data for Atmospheric Effects
Michael K. Griffin and Hsiao-hua K. Burke

Hyperspectral imaging sensors are used to detect and identify diverse surface materials, topographical features, and geological features. Because the intervening atmosphere poses an obstacle to the retrieval of surface reflectance data, algorithms exist to compensate the measured signal for the effects of the atmosphere. This article provides an overview and an evaluation of available atmospheric compensation algorithms for the visible-through-shortwave infrared spectral region, including comparison of operational characteristics, input requirements, algorithm limitations, and computational requirements. Statistical models based on empirical in-scene data are contrasted with physics-based radiative transfer algorithms. The statistical models rely on a priori scene information that is coupled with the sensor spectral observations in a regression algorithm. The physics-based models utilize physical characteristics of the atmosphere to derive water vapor, aerosol, and mixed gas contributions to the atmospheric signal. Treatment of aerosols in atmospheric compensation models varies considerably and is discussed in some detail. A three-band ratio approach is generally used for the retrieval of atmospheric water vapor. For the surfaces tested in this study, the retrieved surface reflectances from the two physics-based algorithms are similar under dry, clear conditions but differ under moist, hazy conditions. Sensitivity of surface-reflectance retrievals to variations in scene characteristics such as the solar zenith angle, atmospheric visibility, aerosol type, and the atmospheric temperature profile is presented in an effort to quantify the limitations of the models.
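
To make the three-band ratio idea concrete, the following Python sketch computes a continuum-interpolated band ratio around the 940 nm water-vapor absorption feature and maps it to column water vapor through a lookup table. The band positions and lookup values here are illustrative placeholders, not those used by the algorithms compared in the article.

    import numpy as np

    def cibr_ratio(radiance, wavelengths, absorb_nm=940.0, window_nm=(865.0, 1025.0)):
        """Continuum-interpolated band ratio for the 940 nm water-vapor feature.

        radiance    : 1-D array of at-sensor radiance for one pixel
        wavelengths : 1-D array of band-center wavelengths in nm
        Returns the ratio of the absorption-band radiance to the radiance
        linearly interpolated between the two window bands.
        """
        idx = lambda w: int(np.argmin(np.abs(wavelengths - w)))
        l_abs = radiance[idx(absorb_nm)]
        l_lo, l_hi = radiance[idx(window_nm[0])], radiance[idx(window_nm[1])]
        # Weight the window bands by their spectral distance from the absorption band.
        w_lo = (window_nm[1] - absorb_nm) / (window_nm[1] - window_nm[0])
        w_hi = 1.0 - w_lo
        continuum = w_lo * l_lo + w_hi * l_hi
        return l_abs / continuum

    # The ratio is then mapped to column water vapor through a lookup table
    # precomputed with a radiative-transfer code (values below are placeholders).
    ratio_lut = np.array([0.9, 0.7, 0.5, 0.35])   # hypothetical CIBR values
    cwv_lut   = np.array([0.5, 1.5, 3.0, 5.0])    # corresponding water vapor, g/cm^2
    estimate_cwv = lambda r: np.interp(r, ratio_lut[::-1], cwv_lut[::-1])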

A Survey of Spectral Unmixing Algorithms
Nirmal Keshava

Spatial pixel sizes for multispectral and hyperspectral sensors are often large enough that numerous disparate substances can contribute to the spectrum measured from a single mixed pixel. Consequently, the desire to extract from a spectrum the constituent materials in the mixture, as well as the proportions in which they appear, is important to numerous tactical scenarios where subpixel detail is valuable. With this goal in mind, spectral unmixing algorithms have proliferated in a variety of disciplines that exploit hyperspectral data, often duplicating and renaming previous techniques. This article distills these approaches into a unique set and surveys their characteristics through hierarchical taxonomies that reveal the commonalities and differences between algorithms. A set of criteria organizes algorithms according to the philosophical assumptions they impose on the unmixing problem. Examples demonstrate the performance of key techniques.
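
As a concrete illustration of one of the simplest approaches surveyed, the Python sketch below unmixes a single pixel under the linear mixing model using nonnegative least squares followed by sum-to-one renormalization. The endmember matrix and noise level are synthetic, and the article's taxonomy covers many alternatives to this scheme.

    import numpy as np
    from scipy.optimize import nnls

    def unmix_pixel(pixel, endmembers):
        """Estimate abundances for one mixed pixel under the linear mixing model
        x = E a + n, with nonnegativity enforced and abundances renormalized to sum to one.

        pixel      : (bands,) measured spectrum
        endmembers : (bands, materials) matrix of endmember spectra
        """
        abundances, _ = nnls(endmembers, pixel)   # nonnegative least squares
        total = abundances.sum()
        return abundances / total if total > 0 else abundances

    # Toy example with three hypothetical endmembers in a 50-band space.
    rng = np.random.default_rng(0)
    E = rng.uniform(0.05, 0.9, size=(50, 3))           # synthetic endmember spectra
    true_a = np.array([0.6, 0.3, 0.1])
    x = E @ true_a + rng.normal(0, 0.005, size=50)     # noisy mixed-pixel spectrum
    print(unmix_pixel(x, E))                           # approximately recovers true_a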

Hyperspectral Image Processing for Automatic Target Detection Applications
Dimitris Manolakis, David Marden, and Gary A. Shaw

This article presents an overview of the theoretical and practical issues associated with the development, analysis, and application of detection algorithms to exploit hyperspectral imaging data. We focus on techniques that exploit spectral information exclusively to make decisions regarding the type of each pixel—target or nontarget—on a pixel-by-pixel basis in an image. First we describe the fundamental structure of the hyperspectral data and explain how these data influence the signal models used for the development and theoretical analysis of detection algorithms. Next we discuss the approach used to derive detection algorithms, the performance metrics necessary for the evaluation of these algorithms, and a taxonomy that presents the various algorithms in a systematic manner. We derive the basic algorithms in each family, explain how they work, and provide results for their theoretical performance. We conclude with empirical results that use hyperspectral imaging data from the HYDICE and Hyperion sensors to illustrate the operation and performance of various detectors.
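
To illustrate the pixel-by-pixel style of detector the article analyzes, the following Python sketch applies a basic spectral matched filter with the background mean and covariance estimated from the scene. It is a minimal example rather than the specific detectors derived in the article, and the cube and target spectrum below are synthetic.

    import numpy as np

    def spectral_matched_filter(cube, target):
        """Apply a spectral matched filter to every pixel of a hyperspectral cube.

        cube   : (rows, cols, bands) array of spectra
        target : (bands,) reference target spectrum
        Returns a (rows, cols) detection-statistic image.
        """
        rows, cols, bands = cube.shape
        X = cube.reshape(-1, bands).astype(float)
        mu = X.mean(axis=0)                      # background mean
        Xc = X - mu
        cov = (Xc.T @ Xc) / (X.shape[0] - 1)     # background covariance
        cov += 1e-6 * np.trace(cov) / bands * np.eye(bands)   # regularize for inversion
        cov_inv = np.linalg.inv(cov)
        s = target - mu
        scores = (Xc @ cov_inv @ s) / (s @ cov_inv @ s)       # normalized MF statistic
        return scores.reshape(rows, cols)

    # Synthetic demonstration: implant a subpixel target in a flat background.
    rng = np.random.default_rng(1)
    cube = rng.normal(0.2, 0.02, size=(64, 64, 30))    # synthetic background cube
    t = np.full(30, 0.5)                                # hypothetical target spectrum
    cube[32, 32] = 0.8 * cube[32, 32] + 0.2 * t         # mixed pixel containing the target
    det = spectral_matched_filter(cube, t)              # statistic peaks near (32, 32)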

Hyperspectral Imaging System Modeling
John P. Kerekes and Jerrold E. Baum

To support hyperspectral sensor system design and parameter trade-off investigations, Lincoln Laboratory has developed an analytical end-to-end model that forecasts remote sensing system performance. The model uses statistical descriptions of scene class reflectances and transforms them to account for the effects of the atmosphere, the sensor, and any processing operations. System-performance metrics can then be calculated on the basis of these transformed statistics. The model divides a remote sensing system into three main components: the scene, the sensor, and the processing algorithms. Scene effects modeled include the solar illumination, atmospheric transmittance, shade effects, adjacency effects, and overcast clouds. Modeled sensor effects include radiometric noise sources, such as shot noise, thermal noise, detector readout noise, quantization noise, and relative calibration error. The processing component includes atmospheric compensation, various linear transformations, and a number of operators used to obtain detection probabilities. Models have been developed for several imaging spectrometers, including the airborne Hyperspectral Digital Imagery Collection Experiment (HYDICE) instrument, which covers the reflective solar spectral region from 0.4 to 2.5 µm. This article presents the theory and operation of the model, and provides example parameter trade studies to show the utility of the model for system design and sensor operation applications.
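
The core statistical idea, propagating class reflectance statistics through a linearized forward model and adding sensor noise variances, can be sketched as follows in Python. This toy version assumes a diagonal gain and purely additive noise terms and is far simpler than the Laboratory's end-to-end model.

    import numpy as np

    def propagate_class_stats(mean_refl, cov_refl, gain, offset, noise_var):
        """Propagate a scene class's reflectance statistics through a simple
        linearized sensor model: signal = gain * reflectance + offset + noise.

        mean_refl : (bands,) class mean reflectance
        cov_refl  : (bands, bands) class reflectance covariance
        gain      : (bands,) combined illumination/transmittance/sensor gain
        offset    : (bands,) path radiance and other additive terms
        noise_var : (bands,) total sensor noise variance per band
                    (e.g., shot + thermal + readout + quantization)
        Returns the mean and covariance of the modeled at-sensor signal.
        """
        G = np.diag(gain)
        mean_out = gain * mean_refl + offset
        cov_out = G @ cov_refl @ G.T + np.diag(noise_var)
        return mean_out, cov_out

    # Hypothetical four-band example with placeholder gains and noise levels.
    mean_r = np.array([0.20, 0.30, 0.25, 0.10])
    cov_r = 0.001 * np.eye(4)
    m, C = propagate_class_stats(mean_r, cov_r,
                                 gain=np.full(4, 50.0),     # radiance per unit reflectance
                                 offset=np.full(4, 5.0),    # path radiance
                                 noise_var=np.full(4, 0.5)) # total noise variance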

Active Spectral Imaging
Melissa L. Nischan, Rose M. Joseph, Justin C. Libby, and John P. Kerekes

With the ability to image a scene in tens to hundreds of spectral bands, multispectral and hyperspectral imaging sensors have become powerful tools for remote sensing. However, spectral imaging systems that operate at visible through near-infrared wavelengths typically rely on solar illumination. This reliance gives rise to a number of limitations, particularly with regard to military applications. Actively illuminating the scene of interest offers a way to address these limitations while providing additional advantages. We have been exploring the benefits of using active illumination with spectral imaging systems for a variety of applications. Our laboratory setup includes multispectral and hyperspectral sensors that are used in conjunction with several laser illumination sources, including a broadband white-light laser. We have applied active spectral imaging to the detection of various types of military targets, such as inert land mines and camouflage paints and fabrics, using a combination of spectral reflectance, fluorescence, and polarization measurements. The sensor systems have been operated under a variety of conditions, both in the laboratory and outdoors, during the day and at night. Laboratory and outdoor tests have shown that using an active illumination source can improve target-detection performance while reducing false-alarm rates for both multispectral and hyperspectral imagers.
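
One way to see how active illumination replaces the solar term is the standard reference-panel ratio, sketched below in Python: target reflectance is estimated by ratioing the scene return against the return from a panel of known reflectance measured under the same illumination and geometry. This is a generic illustration and is not taken from the article's sensor systems.

    import numpy as np

    def active_reflectance(scene_signal, panel_signal, panel_reflectance=0.99):
        """Estimate target reflectance from actively illuminated measurements by
        ratioing the scene return against the return from a reference panel of
        known reflectance, measured under the same illumination and geometry.
        """
        scene_signal = np.asarray(scene_signal, dtype=float)
        panel_signal = np.asarray(panel_signal, dtype=float)
        return panel_reflectance * scene_signal / np.maximum(panel_signal, 1e-12)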

Multisensor Fusion with Hyperspectral Imaging Data: Detection and Classification
Su May Hsu and Hsiao-hua K. Burke

We present two examples that show how fusing data from hyperspectral imaging (HSI) sensors with data from other sensors can enhance overall detection and classification performance. The first example involves fusing HSI data with foliage-penetration synthetic aperture radar (FOPEN SAR) data; the second example involves fusing HSI data with high-resolution imaging (HRI) data. The fusion of HSI and SAR data exploits the different phenomenologies sensed by the two systems, while the fusion of HSI and HRI data combines the superior spectral information of HSI with the superior spatial information of HRI. Fusion of HSI and SAR data is accomplished at the feature level: HSI data provide background characterization and material identification, and HSI-SAR fusion allows us to reduce false detections and confirm target detections in the SAR image. Fusion of HSI and HRI data is implemented at both the data and feature levels, resulting in a combined spatial-spectral analysis that enhances target identification.
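
As a schematic illustration of feature-level fusion, the Python sketch below declares a detection only where both a SAR detection statistic and an HSI detection statistic exceed their thresholds, so the HSI evidence prunes SAR false alarms. The simple AND rule and the co-registration assumption are stand-ins for the article's actual fusion processing.

    import numpy as np

    def fuse_detections(sar_score, hsi_score, sar_thresh, hsi_thresh):
        """Simple feature-level fusion: a pixel (or registered chip) is declared a
        target only when both the SAR detection statistic and the HSI detection
        statistic exceed their thresholds, so HSI evidence prunes SAR false alarms.

        All inputs are assumed to be co-registered to the same spatial grid.
        Returns a boolean detection map.
        """
        sar_score = np.asarray(sar_score, dtype=float)
        hsi_score = np.asarray(hsi_score, dtype=float)
        return (sar_score > sar_thresh) & (hsi_score > hsi_thresh)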
