Exploitation of Multi-Temporal SAR and EO Satellite Imagery for Geospatial Intelligence

Jeff Secker
Defence R&D Canada – Ottawa
3701 Carling Avenue, Ottawa, ON K1A 0Z4, Canada
[email protected]

Paris W. Vachon
Defence R&D Canada – Ottawa
3701 Carling Avenue, Ottawa, ON K1A 0Z4, Canada
[email protected]

Abstract – The simultaneous exploitation of multi-sensor imagery for geospatial intelligence applications is a challenging problem, and Image Analysts would benefit from tools that introduce automation and fusion into the exploitation process in a suitable manner. These tools should take advantage of the human cognitive ability to fuse and assimilate multiple sources and types of information; image fusion tools should be judged successful if they trigger new insight for the Image Analyst. This paper describes some relatively simple methods for the exploitation of synthetic aperture radar (SAR) and electro-optical (EO) commercial satellite imagery, with a focus on their integration into the geospatial intelligence workflow. Two examples of multi-temporal satellite imagery products produced by a DRDC Ottawa test-bed system, Image Analyst Pro, are presented. It is demonstrated that a high-resolution panchromatic EO image colourized by a SAR-derived degree of change is an effective intelligence product.

Keywords: satellite imagery, synthetic aperture radar, electro-optical, geospatial intelligence, image fusion, coherent change detection, RADARSAT-1, QuickBird-2.

1 Introduction

The National Geospatial-Intelligence Agency (NGA) defines geospatial intelligence (GEOINT) as "The exploitation and analysis of imagery and geospatial information to describe, assess, and visually depict physical features and geographically referenced activities on the earth." GEOINT consists of imagery, imagery intelligence (IMINT) and geospatial information. Time spent on GEOINT is seldom wasted, as prior knowledge of an enemy's intentions and capability, coupled with a detailed understanding of the operational terrain, is invaluable. One example is the intelligence preparation of the battlefield (IPB), defined as "The systematic, continuous process of analyzing the threat and environment in a specific geographical area to support military operations" [1]. Other examples of intelligence, surveillance and reconnaissance (ISR) applications for which GEOINT would be applicable include facility monitoring, national security, Arctic surveillance, maritime surveillance, search and rescue, disaster management and damage assessment.

While intelligence can in some cases be gathered from a single image, methods that rely on data from a single image generally fall short of producing useful GEOINT products [2]. Increasing the probability that a given GEOINT tasking will be successful requires multiple sensors, a variety of collection strategies, and multiple and varied exploitation techniques. As illustrated in Figure 1, Image Analysts (IAs) working with raster and vector data will be more successful at generating and distributing GEOINT products if they have tools for exploitation of multi-temporal and multi-sensor imagery integrated into the operational workflow.

Figure 1. Successful geospatial intelligence requires raster and vector data, experienced image analysts, and a workflow that incorporates automation and exploitation tools for multi-sensor imagery.

2 Foundations for GEOINT

2.1 Commercial Satellite Imagery

Data from synthetic aperture radar (SAR) and electro-optical (EO) sensors are complementary. SAR sensors are active (and essentially weather independent), emitting pulses of polarized electromagnetic (EM) radiation at microwave frequencies and recording the characteristics of the backscattered radiation as a function of antenna polarization. The resultant SAR images have characteristic slant-range effects, such as layover and foreshortening. EO sensors, on the other hand, are passive (and subject to weather and daylight constraints), recording the characteristics of reflected solar EM radiation at visible near-infrared through short-wave infrared (VNIR-SWIR) wavelengths. The resultant EO images can resemble aerial photographs, providing the IA with a more intuitive understanding of the scene content.

In addition to these complementary factors, the acquisition modes and geometries differ. For acquisition modes, a given SAR sensor may support both strip-map and spotlight modes, while a given EO sensor may have both panchromatic (PAN) and multi-spectral imagery (MSI) modes. For acquisition geometries, SAR sensors are side-looking, while EO sensors are often nadir-looking and operate at a relatively small off-nadir angle. As a result, SAR and EO images have significantly different spectral properties, resolutions and pixel spacings.

By way of example, Figure 2 illustrates SAR and EO images acquired over the Bay of Fundy (Canada). The upper panel shows a C-band (5.3 GHz) SAR image acquired (08-Aug-2005) by RADARSAT-1 using the ScanSAR Narrow A (SCN A; 50-m resolution) mode. The lower panel shows an EO image acquired (08-Aug-2005) by the Moderate Resolution Imaging Spectroradiometer (MODIS; 36 spectral bands) instrument aboard NASA's Aqua satellite. This is a true-colour image formed from bands 1, 3 and 4, and it has 250-m resolution.

In Figure 2, both images have been adjusted to emphasize ocean features: an algae bloom (turquoise) is visible in the MODIS image, while natural surfactants, wind-speed variations and internal waves are visible in the RADARSAT-1 image [3]. Considering the differences in the sensor and image data characteristics, it is not surprising that the simultaneous exploitation of SAR and EO imagery is a challenging problem. Two aspects are worth emphasizing. First, it is difficult – even under ideal and controlled conditions – to acquire such a multi-sensor dataset simultaneously, due to scheduling conflicts and the difficulty of weather prediction. This needs to be understood when methods are proposed to exploit multi-sensor data. Second, the manual co-registration of SAR and EO imagery is a complex and time-consuming task, due to the differences in target and clutter signatures between SAR and EO images, and as a result it can be difficult to perform this co-registration correctly. Because of this complexity, accurate and robust automatic algorithms for the co-registration of SAR and EO images are not widely available.

2.2 The Image Analyst

An Image Analyst (IA) works with multi-sensor and multi-temporal images acquired over multiple sites of interest, and is expected to produce detailed information on the significance of the events recorded in the imagery. The IA's ability to produce GEOINT products can be affected by the "data-deluge" environment in which they work: the information content of a single image (e.g., a 2 GB digital file) can be very large, and the volume of imagery per unit time can be overwhelming. As a result, the IA is often over-tasked, and some of the available imagery may go unanalyzed. Observations of IAs in action reveal the following: IAs are well trained, experienced and good at their work; IAs prefer to work with EO imagery, and some are reluctant to use SAR imagery; some IAs find co-registration too time consuming, instead preferring a "nudge layer" tool to locally align two layers (a quick and dirty solution); and some IAs are wary of Automatic (or Assisted) Target Detection (ATD) systems, given previous claims of their certain success.

Figure 2. RADARSAT-1 image (upper) and MODIS true-colour image (lower) acquired over the Bay of Fundy (Canada), with the vector shoreline shown in red. Both images have been enhanced to emphasize ocean features of interest.

3 Enhancements to the GEOINT Workflow

This section describes the requirements leading to the design of a test-bed system for exploitation of multi-sensor imagery. The focus is on the integration of relatively simple algorithms and tools into the GEOINT workflow.

3.1 Requirements

Given the potential for "data-deluge", IAs would benefit from tools that: introduce automation into the workflow to reduce repetitive tasks; introduce algorithms (including fusion) that permit exploitation of multi-sensor data; assist with the interpretation of SAR images; and assist with the exploitation of advanced SAR products. Implementation of these new tools can be done in an R&D environment. Specific capabilities include:

a. Automation: Implement simple and effective automatic batch processing for computation of reduced-resolution images (i.e., for rapid manipulation of large images) and for orthorectification (i.e., correcting for perspective and terrain distortion to obtain a common perspective for cross-platform sensors).

b. Pre-screening: Use ATD and change detection algorithms that pre-screen the imagery data, and an interface that cues the IA to the image(s) that contain targets or other features of interest; these algorithms need to be trained using data from the same sensor and acquired over the same area of interest (AOI). Provide an interface that the IA can use to interactively and rapidly assess the ATD and change detection results, and in doing so reduce the false-alarm rate.

c. Multi-sensor exploitation: Implement tools to view and compare multi-temporal and multi-sensor imagery, and image fusion tools for feature enhancement and SAR interpretation.

d. Assisted GEOINT product creation and reporting: Provide tools that assist the IA in creating GEOINT products for a specific target and/or AOI, that take advantage of multi-sensor imagery when available, and that conform to standards (refer to (e) below).

e. Interoperability: Use standard formats for the input and output of raster data (e.g., National Imagery Transmission Format – NITF – 2.0), vector data (e.g., ESRI shapefile format), ATD results (e.g., Over The Horizon – OTH – Gold format) and reports (e.g., XML and HTML).

3.2 Design of a Test-Bed System

The following principles were used in the design of the DRDC Ottawa test-bed system. First, automation should be used to assist (not replace) the IA. Second, all final decisions should rest with the IA; for example, the interactive validation of ATD results and the assessment of the usefulness of fusion products are both judgment calls by the IA as to whether or not the results should be accepted. Third, a well-designed User Interface (UI) is essential; interface, key strokes and layout are as important as algorithms, and algorithms and tools should be implemented for speed and ease of use. Fourth, multi-sensor exploitation tools should function with orthorectified imagery, especially for the case of SAR and EO images, as a requirement for co-registered imagery is too constraining; these tools should also permit the manual alignment of images for a user-specified AOI. Fifth, the system should provide the resultant image products (e.g., fused SAR and EO) to the IA for exploitation, as certain features may be better defined in the combined product. Sixth, the tools within the system should take advantage of the human cognitive ability to fuse and assimilate multiple sources and types of information.
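The batch reduced-resolution processing named under the Automation capability amounts to building coarser pyramid levels of a large image. A minimal illustrative sketch of that idea – block-averaging with NumPy, not the actual IA Pro implementation, which operates on large NITF files:

```python
import numpy as np

def reduced_resolution(image, factor):
    """Block-average a 2-D image by an integer factor, producing a
    reduced-resolution product for rapid display of large images.
    Illustrative sketch only; not the IA Pro implementation."""
    h, w = image.shape
    h2, w2 = h - h % factor, w - w % factor  # crop to a multiple of factor
    blocks = image[:h2, :w2].reshape(h2 // factor, factor, w2 // factor, factor)
    return blocks.mean(axis=(1, 3))
```

In practice such pyramid levels would be computed once in a batch step and cached, so that panning and zooming over a multi-gigabyte image never touches the full-resolution data.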

3.3 Implementation of a Test-Bed System: Image Analyst Pro

Image Analyst Pro (IA Pro) is DRDC Ottawa's test-bed system for validation and demonstration of new algorithms for GEOINT. The focus of IA Pro development is on tools that introduce automation, feature extraction and image combination/fusion to the exploitation of SAR and EO imagery. IA Pro provides support for single- and multi-channel SAR and EO imagery, and it permits the IA to incorporate geospatial information (i.e., thematic vector layers and digital maps) for contextual awareness and increased accuracy. IA Pro was built using C++, Python, OpenEV and the Geospatial Data Abstraction Library (GDAL). Python is an open-source, object-oriented, interpreted programming language; OpenEV is an open-source library of raster and vector classes and functions; and GDAL is a translator library for raster geospatial data formats.

IA Pro provides a multitude of tools for the manipulation and exploitation of multi-sensor and multi-temporal imagery for GEOINT. In the balance of this paper, the IA Pro image fusion tools are described. The IA Pro Image Fusion tool permits the utilization of information extracted from SAR and EO imagery via combination with other raster and vector data. Furthermore, it permits the fusion of SAR and EO imagery into a combined imagery product. The key design feature of the Image Fusion tool is not the complexity of its algorithms, but its flexibility, its ease of use, and its acceptance of orthorectified images (i.e., co-registered images are not required); prior manual alignment by the IA is also accepted. The Image Fusion tool provides the IA with the following capabilities:

a. Pixel-level fusion of imagery using the standard hue-saturation-value (HSV) transform, for the fusion of single-channel SAR imagery with multi-spectral EO imagery: In this implementation, the low-resolution image is resampled to match the pixel grid of the higher-resolution image for the current AOI (i.e., the area in the main IA Pro view window). A red-green-blue (RGB) to HSV transform is applied to the multi-spectral EO imagery, the average intensity of the single-channel SAR image is scaled to match the average intensity of the Value band, the Value band intensities are replaced with the intensities from the single-channel SAR image, and an HSV to RGB transform is performed.

b. Multi-resolution image fusion using wavelet analysis, for the fusion of a single-channel SAR image with a panchromatic EO image: This method applies to two single-channel images that have very different resolutions (a factor of two or more) and spectral properties; the algorithm is described in Du et al. [4]. Briefly, this method aims to preserve the information content of the two images during the fusion process, to preserve spatial information, and to minimize image artefacts by minimizing the degree of resampling.

c. Construction of a false-colour RGB composite from the combination of arbitrary images and image products: In this implementation, information extracted from a SAR or EO image or a fused image product is saved as a raster layer. The IA is then provided with tools to colourize this raster layer (a choice of uniform colour or symbols), which in turn can be overlaid on and/or blended with any other raster layer.
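The HSV substitution of capability (a) can be sketched compactly. Since the HSV Value band equals max(R, G, B), scaling each RGB pixel by new_V / old_V replaces the Value band while preserving hue and saturation, so no explicit forward and inverse transform is needed. An illustrative sketch (not IA Pro's code), assuming co-aligned float images in [0, 1]:

```python
import numpy as np

def hsv_fuse(eo_rgb, sar, eps=1e-6):
    """Replace the HSV Value band of a multi-spectral EO image (H x W x 3)
    with a co-aligned single-channel SAR image (H x W), after scaling the
    SAR intensities to match the average Value-band intensity.
    Illustrative sketch only; not the IA Pro implementation."""
    v = eo_rgb.max(axis=-1)                                   # HSV Value = max(R, G, B)
    sar_scaled = sar * (v.mean() / max(sar.mean(), eps))      # match average intensity
    ratio = sar_scaled / np.maximum(v, eps)                   # new_V / old_V per pixel
    return np.clip(eo_rgb * ratio[..., None], 0.0, 1.0)
```

The multiplicative trick is a design shortcut: a literal RGB-to-HSV / HSV-to-RGB round trip would produce the same result, at the cost of two colour-space transforms.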

A standard discussion of image fusion will state something like "image fusion refers to the combination of two or more images to form a new image product containing more complete or more accurate information" [4, 5]. This would suggest that a quantitative metric is required to assess the information content of the images and the fusion product. In the approach adopted for the IA Pro test-bed system, the fused image product is considered a success if it helps the IA, for example, by triggering a more complete understanding of the AOI. Thus the quality of the fused product is a judgment call by the IA, and it will be specific to the application, the AOI and/or the sensors.

4 Example Workflows and GEOINT Products

This section provides two examples of multi-sensor and multi-temporal imagery products produced using IA Pro, with a description of how the activity and tools fit into the GEOINT workflow.

4.1 Comparison against a Reference Image

It is quite common in IMINT analyses for an IA to compare one image against another, typically by blinking or blending the two images; both blink and blend tools are implemented in IA Pro. Example applications include change detection, in which a new image is compared against a baseline image; image interpretation, in which an existing EO image is used to provide context for a new SAR image; and feature enhancement, in which both SAR and EO images are required for measurement or for a full understanding of a specific feature. In all of these cases, comparison of one image against another can be useful even for non-concurrent acquisitions.

The upper panel of Figure 3 shows a RADARSAT-1 Fine beam mode image chip of a harbour region (27-May-2000), the middle panel shows an IKONOS-2 MSI image chip of the same harbour region (27-May-2000), and the lower panel shows a SAR-EO blended product produced by IA Pro. Looking at features in the images (e.g., the buildings and the fishing vessels), it can be seen that the blended product combines the bright backscatter from the SAR image with the aerial-photograph-like nature of the EO image. Note that the SAR and EO images in this example were orthorectified and manually aligned, but were not co-registered. Recognizing that the blend tool is used routinely, it was implemented in IA Pro as an "always active" tool; there is no need for the IA to select a menu item or toolbar button to access this common function.
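The blend and blink operations described above are simple per-pixel mixes and frame alternations. A minimal sketch with hypothetical function names (illustrative only; IA Pro's tools are interactive and GUI-driven), assuming two co-aligned float images:

```python
import numpy as np

def blend(img_a, img_b, alpha):
    """Linear blend of two co-aligned images; alpha = 1 shows only img_a.
    Illustrative sketch, not the IA Pro blend tool."""
    return np.clip(alpha * img_a + (1.0 - alpha) * img_b, 0.0, 1.0)

def blink(img_a, img_b, n_frames):
    """Alternate the two images frame by frame, as a blink tool would."""
    return [img_a if i % 2 == 0 else img_b for i in range(n_frames)]
```

In an interactive tool, alpha would be bound to a slider or mouse drag so the IA can sweep continuously between the SAR and EO layers.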

Figure 4 shows the main IA Pro window with the Insert New Target tool active for manual target detection. In this example, two raster data sets are loaded: a RADARSAT-1 Fine beam mode image (8-m resolution) and an IKONOS-2 multi-spectral image (4-m resolution). A vector layer (road network) is also loaded and displayed. The IA is performing target detection and inspection in the SAR image, and has selected a very bright target. The image chip windows (right-hand side) show zoomed views of the SAR and EO images at that location, while the graph region (bottom right) displays the spectral data for the target and clutter regions. The text display region (centre bottom) displays the point target statistics, including RCS, peak-to-clutter ratio (PCR), and the 3 dB width and length. The IA can view the image chips, the spectra and the statistics, as well as the target within the wider-area context, and decide whether this is a target of interest.
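One of the listed point target statistics, the peak-to-clutter ratio, can be estimated directly from an image chip. A sketch under simple assumptions (peak pixel over the mean of the surrounding clutter, with a guard window around the peak; not necessarily the definition IA Pro uses):

```python
import numpy as np

def peak_to_clutter_ratio(chip, guard=2):
    """Peak-to-clutter ratio (dB) for an intensity image chip centred on a
    point target: peak intensity over the mean intensity of the clutter
    outside a guard window around the peak.
    Illustrative sketch, not the IA Pro statistic."""
    r0, c0 = np.unravel_index(np.argmax(chip), chip.shape)
    mask = np.ones(chip.shape, dtype=bool)
    mask[max(r0 - guard, 0):r0 + guard + 1,
         max(c0 - guard, 0):c0 + guard + 1] = False   # exclude target + guard
    clutter = chip[mask].mean()
    return 10.0 * np.log10(chip[r0, c0] / clutter)
```

The guard window keeps sidelobes of the point target out of the clutter estimate; its size would normally be tied to the impulse response width of the sensor.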

Figure 3. RADARSAT-1 image (upper) and IKONOS-2 image (middle) acquired over a harbour. A blended SAR-EO image product is shown in the lower panel.

4.2 Visual Assessment of Targets

IAs routinely examine targets in imagery – length, shape, spectrum, proximity to a road and radar cross section (RCS) are examples of such characteristics – for GEOINT analyses such as facility monitoring, broad area search and change detection analysis. Target inspection tools have been implemented in IA Pro, and are available to the IA during three different workflows: manual target inspection using the Insert New Target tool; systematic scanning of an image using the Autopan tool; and interactive validation of ATD results using the Interactive Target Validation tool. The key feature of the IA Pro implementation of these target inspection tools is that the images, image chips, spectral data, vector information and calculation results are presented in such a way that the IA can easily decide whether or not the current object is a target of interest. As well, mensuration (measurement) is a crucial task during all three workflows, so the mensuration mode is active by default in the image chip windows.

4.3 Assessment of SAR Coherent Change Detection Products

Interferometric SAR (InSAR) measures the phase difference between two images acquired from slightly different positions or at different times. Repeat-pass InSAR, the technique used with RADARSAT-1, acquires images on two different satellite passes from the same relative orbit (i.e., with nearly identical geometry). The interferometric phase coherence is estimated over a spatial kernel, and can be used to detect changes on the Earth's surface down to the scale of the radar wavelength. If the coherence is high, then the interferometric phase may be used to measure topography, land subsidence and glacial flow. Note that coherence is reduced by vegetation, surface moisture, snow drifts and sand drifts; thus high coherence can be achieved for environments that are both vegetation-free and dry, such as desert and the Arctic. Coherent Change Detection (CCD) is the application of InSAR phase coherence to change detection. One example would be detection of the tracks of a vehicle that travelled over a hard-packed, dry surface sometime between the two image acquisitions, where the tracks are not visible in either of the individual SAR intensity images. CCD is complementary to EO Change Detection (EOCD) and SAR Non-Coherent Change Detection (NCCD) in so far as CCD can reveal what has occurred between the times of image acquisition, while EOCD and NCCD can reveal changes occurring at the times of acquisition. To acquire SAR imagery for CCD applications, one uses the highest-resolution mode available (to minimize change at the pixel level) and minimizes the time between the images.

Figure 4. Screen capture showing the main window of Image Analyst Pro system in manual target inspection mode.

To evaluate tools for InSAR processing and CCD generation, DRDC Ottawa has acquired Fine beam mode RADARSAT-1 imagery over a test site near Kandahar, Afghanistan. The collection of two InSAR-compatible time series (one ascending and the other descending) is ongoing, with 12 and 14 Fine beam mode images respectively having been collected over the last 18 months. Figure 5 illustrates the application of phase coherence to change detection. The upper panel shows an image chip taken from the first RADARSAT-1 image (13-Aug-2005, master); the AOI is approximately 2.2 km in the north-south direction. The middle panel shows an image chip taken from the second image (06-Sep-2005, slave) acquired one orbit cycle later. The coherence image is shown in the lower panel of Figure 5, in which the dark areas indicate low coherence and therefore areas of change. In this image, the broad regions of low coherence correspond primarily to vegetated areas and areas of radar shadow, but in some cases could also be due to drifting sand. Linear features in the coherence image may be of more interest, as these often correspond to roads and paths that were used for travel between the times of the two image acquisitions.

Figure 5. RADARSAT-1 InSAR-compatible image pair and the resulting coherence image. Upper panel – master image; middle panel – slave image; lower panel – coherence image.

In some cases the coherence image can be difficult to interpret, and in these cases two changes would be helpful. The first is to provide an integrated reference to a high-resolution EO image. The second is to present the IA with (1 - coherence) instead of the coherence, and to use a colour other than white as the modulation colour. In this new representation, bright green will correspond to areas of change.
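This (1 - coherence) green-modulation product amounts to blending a grey-scale EO base with a green change layer. An illustrative sketch (not the IA Pro implementation), assuming co-aligned float arrays in [0, 1]:

```python
import numpy as np

def colourize_change(eo_pan, coh, alpha=0.5):
    """Colourize a panchromatic EO image (H x W, [0, 1]) by the SAR-derived
    degree of change (1 - coherence): areas of change appear bright green
    in the blended RGB product.
    Illustrative sketch, not the IA Pro implementation."""
    change = np.clip(1.0 - coh, 0.0, 1.0)
    rgb = np.stack([eo_pan, eo_pan, eo_pan], axis=-1)   # grey-scale EO base
    green = np.zeros_like(rgb)
    green[..., 1] = change                               # green modulation layer
    return np.clip((1.0 - alpha) * rgb + alpha * green, 0.0, 1.0)
```

Because only the green channel is modulated, unchanged areas retain the familiar aerial-photograph appearance of the EO base, while change stands out in colour rather than as dark patches.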

To illustrate this, the upper panel of Figure 6 shows a panchromatic image chip taken from a QuickBird-2 image (0.62-m resolution) acquired 07-Sep-2005. The middle panel of Figure 6 shows the RADARSAT-1 derived (1 - coherence) product. The lower panel of Figure 6 shows the composite image product, which provides a high-resolution EO image colourized by a SAR-derived degree of change. This type of GEOINT product presents the change detection results in a readily interpreted manner. A second view of this type of GEOINT product is provided in Figure 7, which illustrates the results over a wider area.

Figure 6. Generation of an EO image colourized by the SAR-derived degree of change for an AOI in the northwest part of Kandahar. Upper panel – QuickBird-2 panchromatic image chip; middle panel – RADARSAT-1 derived (1 - coherence) image; lower panel – blend of the upper and middle panels.

Figure 7. Blend of the SAR-derived degree of change with the panchromatic EO image over a wider area of the Kandahar region. The red box shows the region presented in Figure 6.

5 Conclusions

Image Analysts would benefit from tools that introduce automation and fusion into the operational workflow. DRDC Ottawa has developed a test-bed system, Image Analyst Pro, which demonstrates several new tools for exploitation of SAR and EO imagery. These tools facilitate the human cognitive ability to fuse and assimilate multiple sources and types of information, and feature image fusion tools that are implemented for ease of use (orthorectified, not co-registered, imagery is required), resulting in fused products that may trigger new insight for the Image Analyst. As examples, this paper presents three types of tools and products relevant to exploitation of SAR and EO imagery for GEOINT: the first for comparison of one image against a reference baseline image for context and for change detection; the second for manual target inspection; and the third for coherent change detection analysis. The products shown in this paper (with the exception of the InSAR-derived grey-scale coherence image) were generated using the Image Analyst Pro system.

Acknowledgements

The authors would like to thank: the current IA Pro development team of S. Gong, W. Hughes, A. Meek and M. Robson; D. Wilson for his work with the InSAR processing; and D. Schlingmeier for helpful comments on the initial version of this paper. The RADARSAT-1 InSAR data acquired over Kandahar were processed using EV-InSAR and the Coherent Target Monitoring (CTM) module. RADARSAT-1 imagery is copyright of the Canadian Space Agency; MODIS imagery is copyright of NASA; IKONOS-2 imagery is copyright of Space Imaging; QuickBird-2 imagery is copyright of DigitalGlobe.

References

[1] J.M. Irvine, M.A. O'Brien and P. Hofmann, User performance evaluation of the eCognition system, Second International eCognition Users Conference, Munich, Germany, 04–05 March 2004.

[2] M.A. O'Brien and J.M. Irvine, Information fusion for feature extraction and the development of geospatial information, Proceedings of the 7th International Conference on Information Fusion, Stockholm, Sweden, pp. 976–982, 28 June – 01 July 2004.

[3] P.W. Vachon, B.G. Whitehouse, W.M. Renaud, R. De Abreu and D. Billard, Polar Epsilon MODIS and fused MODIS MetOc products for national defence and domestic security, DRDC Ottawa TM 2006-067, Defence R&D Canada – Ottawa, 2006.

[4] Y. Du, P.W. Vachon and J. van der Sanden, Satellite image fusion with multiscale wavelet analysis for marine applications: preserving spatial information and minimizing artifacts (PSIMA), Can. J. Remote Sensing, Vol. 29, No. 1, pp. 14–23, 2003.

[5] R.S. Blum and Z. Liu (eds.), Multi-Sensor Image Fusion and Its Applications, CRC Press, Taylor & Francis Group, Boca Raton, FL, 2006.