Fundamental SAR Principles

Popular opinion has it that SAR is a difficult specialty, perhaps designed with devious intent to baffle newcomers. One could easily conclude that there are too many aspects to consider, including a few tricks known only to those versed in the art. On the contrary, SARs are well behaved. Synthetic aperture radars are elegant systems whose fundamentals may be summarized rather compactly. Six basic principles are sufficient to derive most important aspects of SAR. Stripped of its more familiar appearance, the essential features of a SAR system may be compactly modelled as one "black box", which in effect is a microwave transducer that converts an extensive complex input field into a suitable output mapping. This transducer implies two sequential operations: data collection, the task of the radar; and image formation, the task of the processor. The observation space (scene) is probed by a microwave signal, to produce the reflections that are collected by the radar. The output space (image) is a distribution of reflected power. Imagery usually has two dimensions, range and azimuth. The Magellan imagery of Venus is a most dramatic example from this class of radar [Johnson, 1991; Pettengill, 1993].

To provide a workable basis for the rather formal treatment of this section, it is convenient to make certain realistic assumptions. These are: 1) the system uses only one transmit and receive polarization; 2) input signals from the scene combine linearly; 3) signal manipulations on the output are linear in power; and 4) the system uses a magnitude squared method of detection. The single polarimetric channel admits a scalar model, which simplifies both the mathematics and the attendant discussion. The scalar results extend readily to full polarimetric systems. The assumption of input linearity for a SAR is true outside of the radar, and almost always true inside [Harger, 1970]. Signal combination prior to detection is coherent. The dominant and essential nonlinearity is signal detection. Detection is the process through which certain radiometric information carried by the electromagnetic (em) field, such as the power of the radar echo, may be extracted. For SARs, this step takes place in the processor after the complex image is focused. Magnitude squared detection, one of several possible methods, is convenient for mathematical reasons, and finds justification in nature as well. Following detection, additional averaging may be performed, known as incoherent integration since it occurs in the power domain.

The principles introduced in this article establish that radar systems have an underlying logical order, which is appealing in its own right to applications specialists as well as to systems theoreticians. At a more practical level, these principles provide a relatively simple basis for understanding SAR system performance, which may be helpful when more detailed questions are encountered. The discussion that follows each of the principles is meant not only to provide a first order explanation, but also to serve as an enticement for the reader to probe more deeply into these and related concepts.

Principle 1. Coherent scene illumination. Input to a remote sensing radar from the scene is a linear summation of voltages from reflecting elements that are illuminated by a coherent em wave field.

This principle, while obvious at one level, immediately sets radar remote sensing apart from optical remote sensing. Most optical remote sensing systems use natural radiation that is polychromatic, and emitted from an extended source. The radiation has random phase structure between any two arbitrary points in the illuminating field. Such em fields are incoherent. This means that the underlying phases are random, which implies that the combinations of reflections from more than one scene element add in power.

In distinct contrast, for most radars the illumination from each transmitted pulse is essentially monochromatic, emitted from a point source. This implies that the radiating field has structured phase fronts, which may be well represented for most purposes by spherical surfaces, centered at the radar. Thus, the em illumination incident on the scene is coherent. This means that the reflections from more than one scene element combine through vector addition.
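The difference between coherent and incoherent combination can be illustrated numerically. The following sketch is illustrative only (the scatterer count and unit amplitudes are assumptions, not properties of any particular radar): it sums the echoes from many random-phase scatterers in one resolution cell both ways.

```python
import numpy as np

rng = np.random.default_rng(0)

# Echo voltages from N scatterers in one resolution cell: unit amplitude,
# with random phase set by each scatterer's sub-wavelength range offset.
N = 1000
phases = rng.uniform(0.0, 2.0 * np.pi, N)
voltages = np.exp(1j * phases)

# Coherent illumination: voltages add as vectors, THEN power is taken.
coherent_power = np.abs(voltages.sum()) ** 2

# Incoherent illumination: each scatterer's power adds directly.
incoherent_power = np.sum(np.abs(voltages) ** 2)  # always exactly N here

print(incoherent_power, coherent_power)
```

The incoherent sum is fixed at N, while the coherent sum fluctuates about N from one realization to the next; that fluctuation is the origin of speckle, discussed under the later principles.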

Imagery is generated by processing over a signal ensemble comprised of reflections from many scatterers and from many pulses. If the radar is controlled so that the pulse sequence is mutually coherent, then the phase structure within the reflections from each scatterer can be exploited during processing. This is the case with a SAR.

If any element of the scene has a reflectivity that changes during the radar observation interval, then the relative phase of the reflected field will change. A systematic phase change in the scene may be observable by a coherent radar, which is the basis for moving target detection and radar interferometry, for example. Random phase changes in the scene, in the radar, or in the propagation path may lead to loss of image resolution.

Principle 2. Image power mapping. Image domain output from a remote sensing radar is a mapping of power, derived from scene reflectivity, and presented as a two-dimensional array of numbers.

It is customary to think of a radar image as a visually observable entity, such as a photographic print, whose brightness (whiteness in a positive rendition) at each location is proportional to the strength of the corresponding radar echo. As was the case with the first principle, the statement seems obvious, but there are significant implications. No matter what kind of radar is being used, the microwave transducer must estimate the power from the sum of echo voltages for each location in the image. For coherent fields, the power is given by the magnitude squared of the vector voltage sum, not by the sum of the powers of individual echoes. Constructive and destructive interference takes place between the members of the signal ensemble. This leads to statistical variation of the image power estimates.
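This statistical variation can be demonstrated with a small simulation. The sketch below assumes unit-amplitude scatterers with uniformly random phases (the standard idealization for fully developed speckle) and shows that single-look detected power has a standard deviation comparable to its mean.

```python
import numpy as np

rng = np.random.default_rng(1)

# Many resolution cells, each the coherent sum of many unit scatterers.
cells, scatterers = 100_000, 64
echo = np.exp(1j * rng.uniform(0, 2 * np.pi, (cells, scatterers))).sum(axis=1)

# Magnitude squared detection: power per cell.
power = np.abs(echo) ** 2

# Single-look speckle: power is nearly exponentially distributed,
# so the contrast ratio (std / mean) is close to unity.
contrast = power.std() / power.mean()
print(f"contrast = {contrast:.2f}")  # near 1.0 for single-look speckle
```

A scene of constant mean reflectivity thus produces an image whose pixel powers fluctuate by roughly 100 percent, which motivates the incoherent averaging discussed under the remaining principles.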

In the simplest scenario, the scene is composed of a set of stationary scatterers, and no phase errors are introduced through wave propagation between the scene and the radar. In this case, coherence within a group of signals is determined primarily by the coherence of the radar. SAR systems maintain phase coherence between multiple pulses, whereas for real aperture radars the pulses each have random reference phase, and so they are incoherent pulse to pulse. SAR processors usually are designed to introduce a controlled amount of incoherence, in order to improve image quality. The image power mapping from a SAR is described quite well by a partially coherent system model. The statistical properties of a SAR image are functions both of the scene and of the system.

Most SAR systems use digital processing, and hence their images may be presented as a discrete numerical array, in which the digital number at each datum point (pixel) is representative of image brightness. The image domain digital numbers may be converted to estimates of the scene's reflectivity coefficients through the applicable calibration rule. Note that pixel separation is bounded above by the Nyquist sampling limit. In a real image, at least two pixels are required to support the stipulated resolution distance, although often this constraint is relaxed because the image spectra are weighted.

Principle 3. Well behaved transformation. The transformation from scene data to image data by a remote sensing radar may be characterized if and only if both the response to an isolated point scatterer, and the response to a uniform Gaussian distributed scattering field, are determined.

This important principle applies to all partially coherent systems that may be represented with the mathematics of quadratic filters, a wide class that includes optical and radar sensors [Raney, 1968]. The definitive feature of such systems is that they operate on an input signal field in voltage to produce an output data field in power, using linear filters and square law detection, combined according to the degree of coherence in use by the system, and by the prevailing coherence of the scene.

It is well known that a linear filter may be characterized by its response to an arbitrarily short test signal, leading to the so-called impulse response function [Woodward, 1957]. Application of the Fourier transform to the impulse response function produces a frequency domain description of the linear filter, known as its transfer function.

It is also true that the frequency distribution of a linear filter may be determined by observation of its frequency response to white Gaussian noise [Papoulis, 1968]. The filter's impulse response may be found by inverse Fourier transformation of the frequency function. Thus, for linear systems, the impulse response and the system frequency transfer function are logical equivalents of each other. This simple relationship is not sufficient for radar systems.

For a SAR, as with all quadratic filters, measurement (or specification) is needed of both the impulse response and the frequency function [Raney, 1985]. The purpose of the impulse response test is to estimate the system imaging function, analogous to the point spread function encountered in optics. A SAR normally is operated in a partially coherent mode, as noted under principle 2. The purpose of the Gaussian response test is to estimate the extent of system partial coherence. It turns out that the Gaussian test also determines the system frequency distribution, which is useful in certain image analysis situations.

The essential lesson in this principle is that a SAR is well behaved in the best sense of system theory. Coherent input field summation in combination with the partially coherent operations of the radar and processor results in imagery that has features that are robust and predictable, even if their properties are less familiar than those of incoherent optical devices that behave linearly in power.

The following principles highlight two radiometric rules for SAR images that follow from the well behaved principle, and from the benefits of large time-bandwidth product.

Principle 4. Conservation of energy. For any neighborhood, if the scene is a Gaussian field, and all available data are used in each case, then the tone of the corresponding area in the image is a constant, independent of system focus, system coherence, or scene coherence.

A Gaussian field (mathematically speaking) is an important idealized scene for imaging studies [Goodman, 1976]. A wheat field (from the point of view of a farmer) is the classic practical example, having spatial features that are far too small to be resolved by the radar, and having a uniform average reflectivity over an area much larger than the resolution cell [Brown, 1967]. The radar is to estimate the power reflected from the wheat field by observing the corresponding average brightness value, or tone, in the image. The Gaussian property in this case derives from the fact that for each element in the image, there are many individual scatterers, such as the heads of the wheat shafts, contributing to the total scattered signal.

One might think that the tone would depend on the details of the field, including the number, size, and location of each scattering element. Mean reflectivity could be changed by random motion in the scattering field, due perhaps to a passing breeze. Further, knowledge of all radar and processor phase errors could be a reasonable requirement. Even more tempting, one might think that an error in system focus, or a change in the degree of partial coherence used in the processor, would lead to a change in the image tone. Wrong on all counts! Within very broad constraints, image tone is robust even when certain parameters change, which is the point of the compact wording of this principle.

The principle of conservation of energy is very powerful. Among other things, it establishes a solid basis for radiometric calibration of a SAR whereby image brightness values can be transformed to scene reflectivity estimates. For a given SAR, conservation of energy provides assurance that image average brightness (tone) is preserved independent of the number of degrees of freedom (looks) in the image data. Furthermore, given the same viewing geometry, polarization, wavelength and system bandwidth, two different (calibrated) radars should provide equivalent mean reflectivity estimates for the same scene. This fact allows measurements from scatterometers and other radars that are not "SAR systems" to provide reflectivity data that are directly applicable to SAR observations.

There is a higher order effect. One may show that the image energy actually is weakly dependent on SAR processor focus. For a focus error δ, the image energy is proportional to 1/(1 + δ/TBP), where TBP is the system's time-bandwidth product. To put this into perspective, impulse response broadening begins to be observable for δ ~ 1. Since SAR systems typically have large TBP, it follows that energy conservation is robust even in the face of small focus errors.
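The size of this higher order effect is easy to check with a few lines; in the sketch below, TBP = 1000 is an assumed, representative value rather than a property of any particular system.

```python
# Image energy factor 1/(1 + delta/TBP) for focus error delta and
# time-bandwidth product TBP; illustrative values assumed.
def energy_factor(delta: float, tbp: float) -> float:
    return 1.0 / (1.0 + delta / tbp)

# A focus error just large enough to broaden the impulse response
# (delta ~ 1) changes image energy negligibly when TBP is large.
loss = 1.0 - energy_factor(1.0, 1000.0)
print(f"fractional energy change = {loss:.6f}")  # on the order of 1e-3
```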

Principle 5. Conservation of confusion. For any neighborhood, if the scene is a Gaussian field, and all available data are used in each case, then the uncertainty associated with each corresponding area in the image is a constant, independent of system focus, system coherence, or scene coherence.

At each neighborhood in the SAR image of a uniformly reflecting random field, there is a statistical distribution of the brightness, even though the input scene has constant average reflectivity. This component of image texture is known as speckle [Goodman, 1976], which is one consequence of the coherent illumination of the radar, as noted in principle 1. To first order, there are three measures of speckle: its statistical variation at a given pixel; its correlation with adjacent pixels in range; and its correlation with adjacent pixels in azimuth. The expected change in radiometric response at each pixel usually is measured by its covariance. (The covariance often is called the variance in SAR literature, a source of potential confusion.) The covariance is defined as the second moment with respect to the mean value, that is, the expected squared deviation from the mean. The normalized covariance, which is the covariance divided by the local mean squared, is a measure of the amount of averaging in the data. The statistical number of degrees of freedom (or "looks" in SAR parlance) is given by the mean squared divided by the covariance.
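The looks estimate can be verified on simulated data. The sketch below assumes unit-mean exponential single-look intensities (the standard model for fully developed speckle) and an assumed look count of L = 4.

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulate L-look intensity data over a uniform Gaussian scene: the average
# of L independent exponential (single-look) speckle images.
L, cells = 4, 200_000
intensity = rng.exponential(1.0, (L, cells)).mean(axis=0)

# Equivalent number of looks: mean squared divided by the (co)variance.
enl = intensity.mean() ** 2 / intensity.var()
print(f"estimated looks = {enl:.1f}")  # recovers approximately L = 4
```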

Speckle radiometric variation at any image pixel is correlated with that at neighboring positions. The width of the covariance function is defined as speckle correlation length. The speckle correlation length is equal to the inverse of the coherent bandwidth of the system, which is true in both the range and the azimuth dimensions.
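This relationship between correlation length and bandwidth can be explored with a one-dimensional simulation. The sketch below assumes a rectangular spectrum of fractional bandwidth B, an illustrative model only; any spectral shape of the same bandwidth yields a similar scaling.

```python
import numpy as np

rng = np.random.default_rng(4)

# 1-D complex speckle with a rectangular spectrum of fractional bandwidth B:
# band-limit white circular Gaussian noise in the frequency domain.
n, B = 4096, 0.25
spectrum = np.zeros(n, complex)
band = int(n * B)
spectrum[:band] = rng.normal(size=band) + 1j * rng.normal(size=band)
field = np.fft.ifft(spectrum)

# Autocorrelation of the detected-image fluctuations via the power spectrum.
power = np.abs(field) ** 2
f = power - power.mean()
acf = np.fft.ifft(np.abs(np.fft.fft(f)) ** 2).real
acf /= acf[0]

# Correlation length: lag where the ACF first falls below one half.
half_width = int(np.argmax(acf[: n // 2] < 0.5))
print(half_width)  # a few samples, scaling as 1/B
```

Halving B and rerunning roughly doubles the measured correlation length, which is the inverse-bandwidth behavior stated above.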

The uncertainty at each point is measured by the volume under the normalized two-dimensional image covariance function. The principle of conservation of confusion states that the uncertainty volume is a constant of the system, under a wide variety of circumstances.

The importance of this may be illustrated by two special cases. First, the width of the covariance function (or equivalently the width of the peak of the auto-correlation function) is equal to the inverse system bandwidth in each of the two image dimensions. As is true for any imaging system, the width of the impulse response is reduced as the focus is adjusted towards the ideal setting. At perfect focus, the impulse response width is minimized. This minimal width is defined as the system (ideal) resolution, which is equal to the width of the speckle covariance function. For a SAR, the covariance function is essentially invariant under a change in system focus. This means that the covariance function may be used to estimate the system bandwidth. This establishes the potential system resolution, which serves as an independent check on the results of an impulse response test. Also, this approach provides a practical means to estimate the system response function using real data from an arbitrary scene.

Second, a corollary of this principle states that there is a trade-off between resolution and the amount of averaging that may be used to reduce speckle variance [Porcello et al., 1976; Moore, 1979]. In brief, the number of (statistically independent) looks divided by the product of the corresponding resolutions is a constant of the system, which is proportional to its (two-dimensional) Shannon channel capacity. System bandwidth may be used either to maintain fine resolution (the fully coherent case), or to generate statistically independent images to be averaged together to form the final image (the partially coherent case). For the latter, the reduction in image variance is equal to the proportionate increase in resolution cell size. A decrease in radiometric uncertainty implies an offsetting increase in spatial uncertainty. Uncertainty, or confusion, is conserved. This consequence is an example of the uncertainty principle for energy conservative systems [Papoulis, 1968].
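The trade-off can be seen directly in simulation. The sketch below uses unit-mean exponential single-look intensities, with independent samples standing in for statistically independent looks; the look counts are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Single-look speckle intensities for one uniform scene (unit mean).
cells = 240_000
single = rng.exponential(1.0, cells)

for looks in (1, 2, 4, 8):
    # Average 'looks' independent samples: the resolution cell grows by
    # 'looks' while the normalized variance falls by the same factor,
    # so confusion is conserved.
    multi = single[: cells - cells % looks].reshape(-1, looks).mean(axis=1)
    norm_var = multi.var() / multi.mean() ** 2
    print(looks, round(norm_var * looks, 2))  # product stays near 1
```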

This principle states that multi-look processing is information conservative when confronting diffuse scattering from area-extensive objects. This should not be surprising: information is bandwidth, and the statement of the problem does not admit that the two-dimensional system bandwidth should change. The trade-off is between radiometric resolution and spatial resolution. For distributed diffuse scattering, the trade-off is a toss-up. For point targets, however, for which resolution has merit in its own right, and there is less statistical variation to suppress, the trade-off is always in favor of resolution (and fewer looks), up to the limit dictated by either the linearity of the system or the coherence of the scene and the propagation path [Brown and Palermo, 1965; Raney, 1968].

There is a higher order dependence on processor focus, similar to that found above. One may show that the normalized covariance is proportional to 1/(1 + δ/TBP). Since SAR systems typically have large TBP, it follows that the image covariance is robust in the face of small focus errors.

The principle of conservation of confusion is a natural counterpart to the principle of conservation of energy. Together, these two principles describe the fundamental SAR performance of the first order image statistic, tone, and the second order image statistic, texture, in response to a uniformly reflecting distributed scene. These radiometric principles are complemented by one that undergirds a SAR's spatial performance.

Principle 6. Conservation of coordinates. For a range/Doppler radar, the output mapping has constant range and azimuth coordinate locations on the imaged surface, independent of angular rotations of the sensor.

This principle follows from the first and third principles, and the illumination geometry customarily used with SAR systems. It implies that there is a fundamental spatial mapping accuracy available from a range/Doppler radar such as a SAR that is inherently less sensitive to sensor pointing precision than for most other imaging systems. For example, optical instruments derive mapping information from offsets relative to the angular orientation of the sensor itself. Mapping accuracy for those sensors is limited by the angular pointing stability of the sensor, and degrades in proportion to the distance of the scene from the instrument.

For range/Doppler radars, the situation is very different. In range, the coordinate is proportional to time delay, which is not affected by sensor rotations. In azimuth, the coordinates are established by the Doppler frequency gradient across the scene, which in turn is determined by the relative vehicle-target velocity, and the radar's instantaneous along-track position. In either dimension, an error in sensor pointing does not impact spatial location of the coordinate system with respect to the sensor.
