Aperture    Visible [λ = 0.5 μm]    IR [λ = 3 μm]    Passive Microwave [f ≈ 10 GHz; λ ≈ 3 cm]

From an orbiting spacecraft at h = 900 km
  1 m       1.1 m                   6.59 m           65.9 km = 40.9 miles
  3 m       0.366 m = 14.4 in.      2.2 m            22 km = 13.6 miles

From a synchronous spacecraft (h = 35,800 km)
  1 m       43.7 m                  262 m            2,620 km = 1,630 miles
  3 m       14.6 m                  87.4 m           874 km = 543 miles

From SR-71 at h = 20 km (70,000 ft)
  0.3 m     0.081 m = 3.19 in.      0.488 m          4.88 km
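The resolutions tabulated above follow from the diffraction limit of a circular aperture of diameter D: the beam width is about θ = 2.44λ/D (the angular diameter of the Airy disk), which projects from altitude h to a ground resolution X = 2.44λh/D. A quick sketch that reproduces two of the entries (the function name is illustrative):

```python
def diffraction_limited_resolution(wavelength_m, altitude_m, aperture_m):
    """Ground resolution of a diffraction-limited circular aperture.

    Uses X = 2.44 * lambda * h / D, i.e., the Airy-disk angular
    diameter 2.44*lambda/D projected from altitude h to the ground.
    """
    return 2.44 * wavelength_m * altitude_m / aperture_m

# IR (3 um) from 900 km with a 1 m aperture -> ~6.59 m
x_ir = diffraction_limited_resolution(3e-6, 900e3, 1.0)
# Passive microwave (3 cm) from the same orbit and aperture -> ~65.9 km
x_mw = diffraction_limited_resolution(0.03, 900e3, 1.0)
```

The same one-line formula generates every entry in the table; only the wavelength, altitude, and aperture change between columns and blocks.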

9.4 Observation Payload Design

The electromagnetic radiation that forms the basis of remote sensing arises as a by-product of energy being transferred from one form to another. In general, transformation processes that are more random produce wider-bandwidth signatures, while a more organized process produces a more coherent return [Elachi, 1987]. For example, heat generated by a diesel motor is radiated over a wide bandwidth in the infrared spectrum, while a laser (a more organized energy transformation) generates narrow-bandwidth radiation. In spacecraft remote sensing we are concerned with processing measurements from four primary spectral types.

Visible systems operate from the ultraviolet (~0.3 μm) to the red end of the visual spectrum (~0.75 μm). They offer the potential for high spatial resolution because of their short wavelengths, but they can operate only in daylight because they depend on reflected sunlight.

Infrared systems operate in various bands throughout the infrared spectrum (~1-100 μm), subject to atmospheric transmission windows. Infrared sensors can operate both day and night since the detected signal is a function of the emissivity of the scene (although the signatures will differ between day and night).

Microwave radiometers operate in the radio frequency range, chiefly at millimeter wavelengths (20-200 GHz). Their resolution is three to five orders of magnitude worse than visible-wavelength sensors with the same aperture size, but they are capable of collecting unique information over large areas. Typically, microwave sensors require extensive ground-truth calibration data to interpret the measurements.

Radar systems are active instruments which provide their own illumination of the scene in the centimeter to millimeter bands. The reflected signals can be processed to identify physical features in the scene. Radar systems can be designed to penetrate most atmospheric disturbances, such as clouds, because only larger features can reflect signals at radar wavelengths. Cantafio [1989] provides an extended discussion of space-based radar.

There are a number of different approaches for linking the fundamental physics of the Planck function to the practical design of remote sensing systems. Hovanessian [1988] treats emitted radiation as a signal to be detected and considers remote sensing essentially as a special case of antenna and propagation theory (even in the visible spectrum). Elachi [1987] begins with Maxwell's equations and focuses on the features of electromagnetic radiation, such as quantum properties, polarization, coherency, group and phase velocity, and Doppler shift, to derive strategies for exploiting these features in different parts of the frequency spectrum. McCluney [1994] draws on the parallels between the science of radiometry and remote sensing in general and the science of the human eye as expressed in the literature of photometry. These references provide detailed, application-specific derivations beginning with Planck's Law. Our focus for the remainder of this chapter will be on engineering applications and rules-of-thumb to define and design remote sensing payloads.

Observation geometry, effective aperture, integration time, detector sensitivity, spectral bandwidth, transmission through the atmosphere, and ground pixel size determine the radiometric performance of an optical instrument. Depending on the spectral range, we define three basic categories of Earth observation. In the first case, the optical instrument receives reflected radiation from the surface of the Earth when it is illuminated by the Sun. The thermal emitted radiation of the Earth's surface is negligible in this case. The spectral range covered by this case includes the visible wavelengths (0.4-0.78 μm), the near infrared (0.78-1 μm), and the short wavelength infrared (1-3 μm).

The second case involves optical instruments receiving emitted radiation from the surface of the Earth when the reflected radiation of the Sun is negligible. This condition holds for the long wavelength infrared region (8-14 μm). The third case applies to the mid-wavelength infrared spectral region (3-5 μm), where we must consider contributions from both direct and reflected sources. Figure 9-13 shows the radiance available from direct and reflected radiation sources. Consistent with Planck's law, the thermal emitted radiance of the Earth (modeled at 290 K) increases with wavelength over the spectral region shown, while the reflected radiance from the Earth's surface decreases with wavelength.

Fig. 9-13. Radiance from Direct and Reflected Sources. Radiance contribution in W per square meter per meter (wavelength) per unit solid angle of reflected sunlight from the Earth and emitted radiation from the Earth as a function of wavelength. The sum is shown as a dashed line. The Sun is modeled as a blackbody with a temperature of 6,000 K; the reflection coefficient of the Earth's surface and the transmission of the atmosphere are modeled as constants for clarity.

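The three cases above can be checked numerically from Planck's law. The sketch below evaluates blackbody spectral radiance for the 6,000 K Sun and the 290 K Earth of Fig. 9-13; the solid-angle dilution (~6.8e-5 sr, the Sun seen from Earth) and the 0.3 surface reflectance are illustrative assumptions, not the values used in the figure:

```python
import math

H = 6.626e-34   # Planck constant [J*s]
C = 2.998e8     # speed of light [m/s]
K = 1.381e-23   # Boltzmann constant [J/K]

def planck_radiance(wavelength_m, temp_k):
    """Spectral radiance B(lambda, T) in W per m^2 per m (wavelength) per sr."""
    a = 2.0 * H * C ** 2 / wavelength_m ** 5
    b = math.exp(H * C / (wavelength_m * K * temp_k)) - 1.0
    return a / b

# Reflected sunlight at the sensor: a 6,000 K blackbody diluted by the
# solid angle the Sun subtends as seen from Earth (~6.8e-5 sr) and
# scaled by an assumed constant reflectance of 0.3 (illustrative).
def reflected(wavelength_m):
    return planck_radiance(wavelength_m, 6000.0) * 6.8e-5 / math.pi * 0.3

# Thermal emission from the Earth's surface, modeled at 290 K.
def emitted(wavelength_m):
    return planck_radiance(wavelength_m, 290.0)

r_vis, e_vis = reflected(0.5e-6), emitted(0.5e-6)    # visible band
r_mwir, e_mwir = reflected(4e-6), emitted(4e-6)      # mid-wavelength IR
r_lwir, e_lwir = reflected(10e-6), emitted(10e-6)    # long wavelength IR
```

Reflected sunlight dominates in the visible, the two contributions are of comparable magnitude in the 3-5 μm band, and thermal emission dominates in the LWIR, matching the three observation categories described above.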

In the visual and near-IR (0.7 to 1.0 μm) bands, we resolve images produced by energy (chiefly from the Sun) reflected from the target scene rather than energy from the very limited self-emissions that occur in the visible band. But in the infrared, we see things almost entirely by their self-emission, with very little energy being reflected, particularly at night. We may use the same optical train elements—lenses, prisms, mirrors, and filters—to collect infrared energy as for visible and UV, but we must apply them differently. For example, ordinary glass is opaque to IR beyond 3 μm, whereas germanium, which is opaque in the visible band, is transparent in the 1.8 to 25-μm region. Further, we must consider atmospheric scattering caused by aerosols and particles in the air. The amount of scattered radiation is a function of the inverse fourth power of the wavelength. Thus, IR penetrates haze and dust much better than visible radiation because the IR wavelengths are four or more times those in the visible spectrum. The same phenomenon explains the reddish color of the sky near dawn and sunset. At these times, the shorter green, blue, indigo, and violet wavelengths are greatly attenuated as they travel farther through the atmosphere than when the Sun is overhead.
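The inverse-fourth-power dependence quoted above can be made concrete: under Rayleigh's 1/λ⁴ law, scattered intensity relative to a reference wavelength scales as (λ_ref/λ)⁴. A sketch (the wavelength choices are illustrative):

```python
def rayleigh_relative_scattering(wavelength_um, reference_um=0.45):
    """Scattered intensity relative to a reference wavelength,
    assuming Rayleigh's inverse-fourth-power law."""
    return (reference_um / wavelength_um) ** 4

# Blue (0.45 um) is scattered ~4.4x more than red (0.65 um) -- why the
# clear sky is blue and the low Sun at dawn/dusk looks reddish.
blue_vs_red = (rayleigh_relative_scattering(0.45)
               / rayleigh_relative_scattering(0.65))

# 3 um IR is scattered ~2,000x less than 0.45 um blue light, which is
# why IR penetrates haze and dust far better than visible radiation.
ir_suppression = 1.0 / rayleigh_relative_scattering(3.0)
```
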

9.4.1 Candidate Sensors and Payloads

Electro-optical imaging instruments use mechanical or electrical means to scan the scene on the ground. Spacecraft in geostationary orbits perceive very little relative motion between the scene and the spacecraft, so an optical instrument needs to scan in two dimensions to form an image. A common approach for geostationary imaging spacecraft, such as ESA's meteorological spacecraft, METEOSAT, involves placing a large scan mirror in front of the instrument's optics to perform the north-south scan. Rotation of the spacecraft around a north-south axis performs the east-west scan. Three-axis stabilized spacecraft in geostationary orbits frequently use a two-axis scan mirror in front of the optics to scan the scene in two dimensions. Alternatively, we can use a two-dimensional matrix imager, which maps each picture element (pixel) in the imager to a corresponding area on the ground. Scanning the scene then becomes a process of sampling the two-dimensional arrangement of pixels in the imager.

Spacecraft in low-Earth orbits move with respect to the scene. The sub-spacecraft point moves along the surface of the Earth at approximately 7,000 m/s (see Chap. 5). This motion can replace one of the scan dimensions, so the scanning system of the optical instrument needs to perform only a one-dimensional scan in the cross-track direction. Whiskbroom sensors scan a single detector element, corresponding to a single pixel on the ground, in the cross-track direction. Fig. 9-14A illustrates this technique. Whiskbroom scanners can also use several detectors to reduce the bandwidth and time-response requirements on each detector compared to a single-detector design. Each detector element corresponds to a pixel on-ground (see Fig. 9-14B), and the dwell time per pixel is multiplied by the number of detector elements used.
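The timing constraint for a whiskbroom scanner follows directly from this geometry: the full swath must be scanned while the sub-spacecraft point advances N ground-pixel lengths (N being the number of detector elements), and that line time is shared among all pixels across the swath. A sketch with illustrative numbers (700 km swath, 30 m pixels, 7,000 m/s ground speed):

```python
def whiskbroom_dwell_time(swath_width_m, pixel_size_m, ground_speed_ms,
                          n_detectors=1):
    """Dwell time per ground pixel for a whiskbroom scanner.

    The line time is the interval in which the sub-spacecraft point
    moves down n_detectors pixel lengths; it is shared among the
    (swath / pixel) pixels across the swath.
    """
    line_time = n_detectors * pixel_size_m / ground_speed_ms
    pixels_across = swath_width_m / pixel_size_m
    return line_time / pixels_across

single = whiskbroom_dwell_time(700e3, 30.0, 7000.0)                 # ~0.18 us
multi = whiskbroom_dwell_time(700e3, 30.0, 7000.0, n_detectors=16)  # 16x longer
```

The sub-microsecond dwell time of the single-element case is what drives the high detector bandwidth requirement noted in Table 9-10.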

Push broom scanners use a linear arrangement of detector elements called a line imager covering the full swath width. The name "push broom" comes from the read-out process, which delivers one line after another, like a push broom moving along the ground track. Each detector element corresponds to a pixel on-ground. Fig. 9-14C illustrates this technique. The ground pixel size and the velocity of the sub-spacecraft point define the integration time.
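The integration time defined above is simply the ground-pixel length divided by the velocity of the sub-spacecraft point. A sketch, using the ~7,000 m/s ground-track speed quoted earlier and an illustrative 30 m pixel:

```python
def pushbroom_integration_time(pixel_size_m, ground_speed_ms):
    """Integration time for a push broom line imager: the time the
    sub-spacecraft point takes to move down one ground-pixel length."""
    return pixel_size_m / ground_speed_ms

t_int = pushbroom_integration_time(30.0, 7000.0)   # ~4.3 ms per line
```

Compared with a single-element whiskbroom covering the same swath, the push broom dwell time is longer by the number of pixels across the swath, since every pixel in the line integrates simultaneously.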

Step-and-stare scanners use a matrix arrangement of detector elements (matrix imager) covering part or all of the image. Each detector element corresponds to a pixel on-ground. Fig. 9-14D illustrates this technique. Step-and-stare systems can operate in two basic modes. In the first mode, the integration time is chosen as for the push broom sensor: the ground pixel size and the velocity of the sub-satellite point determine the integration time. Thus, no advantage with respect to the integration time is achieved, but a well-known geometry within the image is guaranteed. We need a shutter or an equivalent technique, such as a storage zone on the imager, to avoid image smear during read-out. The second mode allows a longer integration time if the image motion is compensated to very low speeds relative to the ground. We can do this by shifting the imaging matrix in the focal plane or by moving the line of sight of the instrument by other means to compensate for the movement of the sub-spacecraft point. Step-and-stare sensors require relatively complex optics if they must cover the full image size. An additional complexity is that the fixed pattern noise has to be removed from the image, since each pixel has a somewhat different responsiveness and dark signal.

Fig. 9-14. Scanning Techniques for Electro-Optical Instruments. (A) Shows a whiskbroom scanner with a single detector element which scans one line after another across the swath. The swath width must be scanned in the time interval the sub-spacecraft point moves down one ground-pixel length. (B) Shows a whiskbroom scanner with multiple detector elements which scan multiple lines across the swath at a time. The swath width must be scanned in the time interval the sub-spacecraft point moves down the multiple ground-pixel length. (C) Shows a push broom scanner with multiple linearly arranged detector elements which scan one line across the swath per integration time. The integration time is usually set to the time interval the sub-spacecraft point moves down one ground-pixel length. (D) Shows a step-and-stare scanner with detector elements arranged in a matrix which scans the full image per integration time. The integration time is usually set to the time the sub-spacecraft point needs to move down one ground-pixel length.

Table 9-10 summarizes the distinguishing features of optical scanning methods.
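The fixed-pattern removal mentioned above is commonly done with a per-pixel two-point calibration against a dark frame and a uniformly illuminated flat-field frame; a minimal sketch, assuming both calibration maps have been measured beforehand:

```python
def correct_fixed_pattern(raw, dark, flat):
    """Two-point nonuniformity correction for a matrix imager.

    raw  -- measured frame (list of rows)
    dark -- per-pixel dark signal (same shape)
    flat -- per-pixel response to a uniform reference scene
    Each pixel is corrected as (raw - dark) / (flat - dark), which
    removes both dark-signal and responsiveness differences.
    """
    return [
        [(r - d) / (f - d) for r, d, f in zip(raw_row, dark_row, flat_row)]
        for raw_row, dark_row, flat_row in zip(raw, dark, flat)
    ]

# A uniform scene seen through pixels with differing gain and offset
# comes out flat (all 1.0) after correction:
raw = [[110.0, 220.0], [160.0, 330.0]]
dark = [[10.0, 20.0], [10.0, 30.0]]
flat = [[110.0, 220.0], [160.0, 330.0]]
corrected = correct_fixed_pattern(raw, dark, flat)
```
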

An alternate approach for capturing the scene using matrix imagers involves positioning the scene with respect to the instrument. To image the entire scene, the instrument shifts, or "steps," to the next part of the scene (along-track and/or across-track) after the integration period. This approach is referred to as a step-and-stare imager. If it covers only a part of the required scene, then moderately complex (and also moderately sized) optics are required. We can use highly agile and accurate pointing mirrors in front of the instrument's optics to adjust the line of sight. For example, the Early Bird satellite avoids the complexity of large matrix imagers or sophisticated

TABLE 9-10. Comparison of Optical Sensor Scanning Methods. We list relative advantages and disadvantages of different scanning mechanisms.

Whiskbroom Scanner, Single Detector Element
  Advantages: High uniformity of the response function over the scene; relatively simple optics.
  Disadvantages: Short dwell time per pixel; high bandwidth requirement and time response of the detector.

Whiskbroom Scanner, Multiple Detector Elements
  Advantages: Uniformity of the response function over the swath; relatively simple optics.
  Disadvantages: Relatively high bandwidth and time response of the detector.

Push Broom Sensor
  Advantages: Uniform response function in the along-track direction; relatively long dwell time (equal to integration time); no mechanical scanner required.
  Disadvantages: High number of pixels per line imager required; relatively complex optics.

Step-and-Stare Imager with Detector Matrix
  Advantages: Well-defined geometry within the image; long integration time (if motion compensation is performed).
  Disadvantages: High number of pixels per matrix imager required; complex optics required to cover the full image size; calibration of fixed pattern noise for each pixel; highly complex scanner required if motion compensation is performed.

butting techniques of several smaller matrix imagers in favor of pointing the mirror with high dynamics and fine pointing performance.

Optical instruments for space missions usually rely on existing detector and imager designs. Custom tailoring of these detectors and imagers is common, however, to optimize the design with respect to performance and cost of the instrument. We make the distinction between detectors, which consist of one or a few radiation-sensitive elements without read-out electronics, and imagers, which usually consist of a considerable number of discrete radiation-sensitive elements combined with read-out electronics.

We must select the materials used for detector elements depending on the spectral range of the instrument being designed. The ability of detector elements to absorb photons relates directly to the energy of the incident photons (and consequently to the wavelength of the radiation) and the effective band gap of the material. All matter, including the detector material, generates thermal photons. Therefore, we must lower the temperature of the detector elements so that the self-generated photons do not degrade the signal-to-noise ratio of the instrument. This requirement becomes more stringent as the wavelength of the radiation being detected increases. With few exceptions, detectors and imagers have to be cooled for wavelengths in the short wave infrared (SWIR) band and longer.
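The link between spectral range and detector material is the cutoff condition: a photon can only excite a carrier across the band gap if its energy hc/λ exceeds E_g, so the longest detectable wavelength is λ_cutoff = hc/E_g. A sketch (the band-gap values are approximate textbook numbers, not design data):

```python
H = 6.626e-34    # Planck constant [J*s]
C = 2.998e8      # speed of light [m/s]
EV = 1.602e-19   # joules per electron-volt

def cutoff_wavelength_um(band_gap_ev):
    """Longest wavelength (um) a detector material can absorb,
    from lambda_cutoff = h*c / E_gap."""
    return H * C / (band_gap_ev * EV) * 1e6

si = cutoff_wavelength_um(1.12)    # silicon: ~1.1 um (visible/NIR)
insb = cutoff_wavelength_um(0.23)  # InSb at 77 K: ~5.4 um (MWIR, cooled)
```

The ~1.1 μm silicon cutoff is why silicon imagers serve the 400-1,100 nm range discussed next, while longer-wavelength bands force narrower-gap, cooled materials.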

For the spectral range between 400 nm and 1,100 nm, silicon detectors and imagers are used most frequently. Silicon is attractive because it is possible to combine the detector elements and read-out electronics in a single monolithic chip. We can produce line imagers with a large number of elements through this process.

Incident photons on a line imager are converted to an electrical output signal in the imager. For charge-coupled device (CCD) line imagers with read-out electronics, the process begins when incident photons are converted by each pixel (detector element)

into electrons according to a conversion efficiency dictated by the characteristic spectral response. The electrons are then collected for the entire integration time for each pixel. The read-out of the charge generated—up to one million electrons per pixel—is performed via an analog shift register into a serial output port. Figure 9-15 shows a typical spectral response function for a silicon imager.
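The signal chain just described reduces to a simple photon budget: incident photons per pixel times conversion (quantum) efficiency gives collected electrons, which must stay below the pixel's full-well capacity, the "up to one million electrons" figure quoted above. All numbers below are illustrative:

```python
def collected_electrons(photon_rate_per_s, integration_time_s,
                        quantum_efficiency):
    """Electrons accumulated in one pixel during the integration time."""
    return photon_rate_per_s * integration_time_s * quantum_efficiency

FULL_WELL = 1.0e6  # electrons per pixel (upper bound quoted in the text)

# Illustrative: 1e8 photons/s on a pixel, a 4.3 ms line time, QE of 0.6
signal = collected_electrons(1.0e8, 4.3e-3, 0.6)
saturated = signal > FULL_WELL
```

If the incident flux or integration time grows until the product exceeds the full-well capacity, the pixel saturates, which is one reason integration time cannot be increased arbitrarily.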
