
Fig. 3-1. Results of FireSat Altitude Trade. See Table 3-4 and Table 7-6 in Sec. 7.4 for a list of trade issues. Political constraints and survivability were not of concern for the FireSat altitude trade.

3.2.4 Performance Assessments

Quantifying performance demands an appropriate level of detail. Too much detail drains resources away from other issues; too little keeps us from determining the important issues or causes us to assess the actual performance incorrectly.

To compute system performance, we use three main techniques:

• System algorithms

• Analogy with existing systems

• Simulation

System algorithms are the basic physical or geometric formulas associated with a particular system or process, such as those for determining resolution in diffraction-limited optics, finding the beam size of an antenna, analyzing a link budget, or assessing geometric coverage. Table 3-5 lists system algorithms typically used for space mission analysis. System algorithms provide the best method for computing performance. They provide clear traceability and establish the relationship between design parameters and performance characteristics. Thus, for FireSat, we are interested in the resolution of an on-orbit fire detector. Using the formula for diffraction-limited optics in Chap. 9, we can compute the achievable angular resolution from the instrument objective's diameter. We can then apply the geometric formulas in Chap. 5 to translate this angular resolution to resolution on the ground. This result gives us a direct relationship between the altitude of the FireSat spacecraft, the size of the payload, the angles at which it works, and the resolution with which it can distinguish features on the ground.
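To make this chain of algorithms concrete, the sketch below strings together the diffraction-limited resolution formula of Chap. 9 (theta = 2.44 lambda/D) with simple nadir-viewing geometry from Chap. 5. The wavelength, aperture, and altitude values are placeholders for illustration, not FireSat requirements.

```python
def angular_resolution(wavelength_m, aperture_m):
    """Diffraction-limited angular resolution (Chap. 9): theta = 2.44 * lambda / D, in rad."""
    return 2.44 * wavelength_m / aperture_m

def ground_resolution_nadir(altitude_m, theta_rad):
    """Ground resolution looking straight down (Chap. 5 geometry): x = h * theta."""
    return altitude_m * theta_rad

# Placeholder values: 4.3-um mid-IR band, 0.26-m aperture, 700-km altitude.
theta = angular_resolution(4.3e-6, 0.26)
print(f"Angular resolution: {theta * 1e6:.1f} microrad")
print(f"Nadir ground resolution at 700 km: {ground_resolution_nadir(700e3, theta):.0f} m")
```

Working at larger nadir angles degrades this figure, which is part of what makes altitude a key trade parameter.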

TABLE 3-5. Common System Algorithms Used for Quantifying Basic Levels of Performance. These analyses use physical or geometrical formulas to determine how system performance varies with key parameters.

Algorithm | Used For | Where Discussed
Link Budget | Communications and data rate analysis | Sec. 13.3.6
Diffraction-limited Optics | Aperture sizing for optics or antennas; determining resolution | Sec. 9.3
Payload Sensitivity | Payload sizing and performance estimates | Secs. 9.4, 9.5
Radar Equation | Radar sizing and performance estimates | [Cantafio, 1989]
Earth Coverage, Area Search Rates | Coverage assessment; system sizing; performance estimates | Secs. 5.2, 7.2
Mapping and Pointing Budget | Geolocation; instrument and antenna pointing; image sensing | Sec. 5.4

System algorithms are powerful in that they show us directly how performance varies with key parameters. However, they are inherently limited because they presume the rest of the system is designed with fundamental physics or geometry as the limiting characteristic. For FireSat, resolution could also be limited by the optical quality of the lens, by the detector technology, by the spacecraft's pointing stability, or even by the data rates at which the instrument can provide results or that the satellite can transmit to the ground. In using system algorithms, we assume that we have correctly identified what limits system performance. But we must understand that these assumptions may break down as each parameter changes. Finding the limits of these system algorithms helps us analyze the problem and determine its key components. Thus, we may find that a low-cost FireSat system is limited principally by achieving spacecraft stability at low cost. Therefore, our attention would be focused on the attitude control system and on the level of resolution that can be achieved as a function of system cost.

The second method for quantifying performance is by comparing our design with existing systems. In this type of analysis we use the established characteristics of existing sensors, systems, or components and adjust the expected performance according to basic physics or the continuing evolution of technology. The list of payload instruments in Chap. 9 is an excellent starting point for comparing performance with existing systems. We could, for example, use the field of view, resolution, and integration time for an existing sensor and apply them to FireSat. We then modify the basic sensor parameters, such as the aperture, focal length, or pixel size, to satisfy our mission's unique requirements. To do this, we must work with someone who knows the technology, the allowable range of modifications, and their cost. For example, we may be able to improve the resolution by doubling the diameter of the objective, but doing so may cost too much. Thus, to estimate performance based on existing systems, we need information from those who understand the main cost and performance drivers of that technology.
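As a hedged illustration of scaling by analogy, the sketch below adjusts a reference sensor's ground resolution for a new aperture and altitude, assuming the instrument remains diffraction limited so that resolution varies as altitude divided by aperture diameter. The reference numbers are invented for illustration and do not describe any real sensor.

```python
def scaled_resolution(ref_res_m, ref_aperture_m, ref_alt_km,
                      new_aperture_m, new_alt_km):
    """Scale a reference sensor's resolution by analogy.

    Assumes the instrument stays diffraction limited, so resolution
    varies as (altitude / aperture). Reference values are illustrative.
    """
    return ref_res_m * (ref_aperture_m / new_aperture_m) * (new_alt_km / ref_alt_km)

# Doubling the aperture at the same altitude halves the ground resolution.
print(scaled_resolution(30.0, 0.25, 700.0, 0.50, 700.0))  # -> 15.0 m
```

As the text notes, whether that doubled aperture is affordable is a question for technology specialists, not for the scaling law itself.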

The third way to quantify system performance is simulation, described in more detail in Sec. 3.3.2. Because it is time-consuming, we typically use simulation only for key performance parameters. However, simulations allow much more complex modeling and can incorporate limits on performance from multiple factors (e.g., resolution, stability, and data rate). Because they provide much less insight, however, we must review the results carefully to see if they apply to given situations. Still, in complex circumstances, simulation may be the only acceptable way to quantify system performance. A much less expensive method of simulation is the use of commercial mission analysis tools as discussed in Sec. 3.3.3.

3.3 Step 8: Mission Utility

Mission utility analysis quantifies mission performance as a function of design, cost, risk, and schedule. It is used to (1) provide quantitative information for decision making, and (2) provide feedback on the system design. Ultimately, an individual or group will decide whether to build a space system and which system to build based on overall performance, cost, and risk relative to other activities. As discussed in Sec. 3.4, this does not mean the decision is or should be fundamentally technical in nature. However, even though basic decisions may be political, economic, or sociological, the best possible quantitative information from the mission utility analysis process should be available to support them.

Mission utility analysis also provides feedback for the system design by assessing how well alternative configurations meet the mission objectives. FireSat shows how this process might work in practice. Mission analysis quantifies how well alternative systems can detect and monitor forest fires, thereby helping us to decide whether to proceed with a more detailed design of several satellites in low-Earth orbit or a single larger satellite in a higher orbit. As we continue these trades, mission analysis establishes the probability of being able to detect a given forest fire within a given time, with and without FireSat, and with varying numbers of spacecraft. For FireSat, the decision makers are those responsible for protecting the forests of the United States. We want to provide them with the technical information they need to determine whether they should spend their limited resources on FireSat or on some alternative. If they select FireSat, we will provide the technical information needed to allow them to select how many satellites and what level of redundancy to include.

3.3.1 Performance Parameters and Measures of Effectiveness

The purpose of mission analysis is to quantify the system's performance and its ability to meet the ultimate mission objectives. Typically this requires two distinct types of quantities—performance parameters and measures of effectiveness. Performance parameters, such as those shown in Table 3-6 for FireSat, quantify how well the system works, without explicitly measuring how well it meets mission objectives. Performance parameters may include coverage statistics, power efficiency, or the resolution of a particular instrument as a function of nadir angle. In contrast, measures of effectiveness (MoEs) or figures of merit (FoMs) quantify directly how well the system meets the mission objectives. For FireSat, the principal MoE will be a numerical estimate of how well the system can detect forest fires or the consequences of doing so. This could, for example, be the probability of detecting a given forest fire within 6 hours, or the estimated dollar value of savings resulting from early fire detection. Table 3-7 shows other examples.

TABLE 3-6. Representative Performance Parameters for FireSat. By using various performance parameters, we get a better overall picture of our FireSat design.

Performance Parameter | How Determined
Instantaneous maximum area coverage rate | Analysis
Orbit average area coverage rate (takes into account forest coverage, duty cycle) | Simulation
Mean time between observations | Analysis
Ground position knowledge | Analysis
System response time (see Sec. 7.2.3 for definition) | Simulation

TABLE 3-7. Representative Measures of Effectiveness (MoEs) for FireSat. These measures of effectiveness help us determine how well various designs meet our mission objectives.

Goal | MoE | How Estimated
Detection | Probability of detection vs. time (milestones at 4, 8, 24 hours) | Simulation
Prompt Knowledge | Time late = time from observation to availability at monitoring office | Analysis
Monitoring | Probability of containment | Simulation
Save Property and Reduce Cost | Value of property saved plus savings in firefighting costs | Simulation + Analysis

We can usually determine performance parameters unambiguously. For example, either by analysis or simulation we can assess the level of coverage for any point on the Earth's surface. A probability of detecting and containing forest fires better measures our end objective, but is also much more difficult to quantify. It may depend on how we construct scenarios and simulations, what we assume about ground resources, and how we use the FireSat data to fight fires.

Good measures of effectiveness are critical to successful mission analysis and design. If we cannot quantify the degree to which we have met the mission objectives, there is little hope that we can meet them in a cost-effective fashion. The rest of this section defines and characterizes good measures of effectiveness, and Secs. 3.3.2 and 3.3.3 show how we evaluate them.

Good measures of effectiveness must be

• Clearly related to mission objectives

• Understandable by decision makers

• Quantifiable

• Sensitive to system design (if used as a design selection criterion)

MoEs are useless if decision makers cannot understand them. "Acceleration in the marginal rate of forest-fire detection within the latitudinal coverage regime of the end-of-life satellite constellation" will likely need substantial explanation to be effective. On the other hand, clear MoEs which are insensitive to the details of the system design, such as the largest coverage gap over one year, cannot distinguish the quality of one system from another. Ordinarily, no single measure of effectiveness can be used to quantify how the overall system meets mission objectives. Thus, we prefer to provide a few measures of effectiveness summarizing the system's capacity to achieve its broad objectives.

Measures of effectiveness generally fall into one of three broad categories associated with (1) discrete events, (2) coverage of a continuous activity, or (3) timeliness of the information or other indicators of quality. Discrete events include forest fires, nuclear explosions, ships crossing a barrier, or cosmic ray events. In this case, the best measures of effectiveness are the rate that can be sustained (identify up to 20 forest fires per hour), or the probability of successful identification (90% probability that a forest fire will be detected within 6 hours after ignition). The probability of detecting discrete events is the most common measure of effectiveness. It is useful both in providing good insight to the user community and in allowing the user to create additional measures of effectiveness, such as the probability of extinguishing a forest fire in a given time.
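A minimal sketch of how such a probability-of-detection MoE could be tallied from repeated simulation runs; the detection times below are stand-in data, not FireSat predictions.

```python
def probability_of_detection(detection_times_hr, threshold_hr=6.0):
    """Fraction of simulated fires detected within the threshold.

    detection_times_hr has one entry per simulated fire; None means
    the fire was never detected during the run.
    """
    hits = sum(1 for t in detection_times_hr if t is not None and t <= threshold_hr)
    return hits / len(detection_times_hr)

# Stand-in results: hours from ignition to first detection for 8 fires.
runs = [1.5, 5.2, None, 3.8, 7.1, 0.9, 4.4, None]
print(f"P(detect within 6 hr) = {probability_of_detection(runs):.2f}")  # 0.62
```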

Some mission objectives are not directly quantifiable in probabilistic terms. For example, we may want continuous coverage of a particular event or activity, such as continuous surveillance of the Crab Nebula for extraneous X-ray bursts or continuous monitoring of Yosemite for temperature variations. Here the typical measure of effectiveness is some type of coverage or gap statistic, such as the mean observation gap or maximum gap under a particular condition. Unfortunately, Gaussian (normal probability) statistics do not ordinarily apply to satellite coverage; therefore, the usual measure of average values can be very misleading. Additional details and a way to resolve this problem are part of the discussion of coverage measures of effectiveness in Sec. 7.2.
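The sketch below shows one way gap statistics might be extracted from a list of observation windows. Because gaps are not Gaussian, it reports the maximum gap and a duration-weighted (time-average) gap alongside the simple mean, in the spirit of the coverage figures of merit developed in Sec. 7.2; the window times are illustrative.

```python
def gap_statistics(windows):
    """Coverage-gap statistics from time-ordered (start, end) windows, in hours.

    Returns (max gap, simple mean gap, time-average gap). The time-average
    gap weights each gap by its own duration -- the expected gap seen at a
    randomly chosen instant -- and is usually more telling than the mean.
    """
    gaps = [nxt[0] - cur[1] for cur, nxt in zip(windows, windows[1:])]
    return max(gaps), sum(gaps) / len(gaps), sum(g * g for g in gaps) / sum(gaps)

# Illustrative passes over one target: (start, end) hours from epoch.
windows = [(0.0, 0.2), (1.6, 1.8), (3.2, 3.4), (11.9, 12.1)]
print(gap_statistics(windows))  # -> (8.5, ~3.77, ~6.74)
```

Note how the long 8.5-hour gap dominates the time-average statistic even though the simple mean looks benign, which is exactly why averages mislead here.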

A third type of measure of effectiveness assesses the quality of a result rather than whether or when it occurs. It may include, for example, the system's ability to resolve the temperature of forest fires. Another common measure of quality is the timeliness of the data, usually expressed as time late, or, in more positive terms for the user, as the time margin from when the data arrives until it is needed. Timeliness MoEs might include the average time from ignition of the forest fire to its initial detection or, viewed from the perspective of a potential application, the average warning time before a fire strikes a population center. This type of information, illustrated in Fig. 3-2, allows the decision maker to assess the value of FireSat in meeting community needs.
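A compact sketch of such a timeliness MoE: for each simulated fire we difference the hour the fire reaches a population center against the hour the warning is delivered. The event times are invented for illustration.

```python
def mean_warning_time(events):
    """Average warning margin in hours across simulated fires.

    Each event is (warning_delivered_hr, fire_hits_center_hr), both measured
    from ignition; a negative margin means the warning arrived too late.
    """
    return sum(hit - warned for warned, hit in events) / len(events)

# Invented events: (hour warning delivered, hour fire hits the center).
events = [(2.0, 14.0), (5.5, 9.0), (1.0, 30.0)]
print(f"Mean warning time: {mean_warning_time(events):.1f} hr")  # 14.8 hr
```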

3.3.2 Mission Utility Simulation

In analyzing mission utility, we try to evaluate the measures of effectiveness numerically as a function of cost and risk, but this is hard to do. Instead, we typically use principal system parameters, such as the number of satellites, total on-orbit weight, or payload size, as stand-ins for cost. Thus, we might calculate measures of effectiveness as a function of constellation size, assuming that more satellites cost more money. If we can establish numerical values for meaningful measures of effectiveness as a function of the system drivers and understand the underlying reasons for the results, we will have taken a major step toward quantifying the space mission analysis and design process.

Recall that mission utility analysis has two distinct but equally important goals—to aid design and provide information for decision making. It helps us design the mission by examining the relative benefits of alternatives. For key parameters such as payload type or overall system power, we can show how utility depends on design choices, and therefore, intelligently select among design options.

Fig. 3-2. Forest Fire Warning Time for Inhabited Areas. A hypothetical measure of effectiveness for FireSat. The figure shows the timeline from State of Alert through Preparations Begin and the Evacuation Period until the Fire Hits; the measure of effectiveness is the warning time in hours.

Mission utility analysis also provides information that is readily usable to decision makers. Generally those who determine funding levels or whether to build a particular space system do not have either the time or inclination to assess detailed technical studies. For large space programs, decisions ultimately depend on a relatively small amount of information being assessed by individuals at a high level in industry or government. A strong utility analysis allows these high-level judgments to be more informed and more nearly based on sound technical assessments. By providing summary performance data in a form the decision-making audience can understand, the mission utility analysis can make a major contribution to the technical decision-making process.

Typically, the only effective way to evaluate mission utility is to use a mission utility simulation designed specifically for this purpose. (Commercial simulators are discussed in Sec. 3.3.3.) This is not the same as a payload simulator, which evaluates performance parameters for various payloads. For FireSat, a payload simulator might compute the level of observable temperature changes or the number of acres that can be searched per orbit pass. In contrast, the mission simulator assumes a level of performance for the payload and assesses its ability to meet mission objectives. The FireSat mission simulator would determine how soon forest fires can be detected or the amount of acreage that can be saved per year.

In principle, mission simulators are straightforward. In practice, they are expensive and time-consuming to create and are rarely as successful as we would like. Attempts to achieve excessive fidelity tend to dramatically increase the cost and reduce the effectiveness of most mission simulators. The goal of mission simulation is to estimate measures of effectiveness as a function of key system parameters. We must restrict the simulator as much as possible to achieving this goal. Overly detailed simulations require more time and money to create and are much less useful, because computer time and other costs keep us from running them enough for effective trade studies. The simulator must be simple enough to allow making multiple runs, so we can collect statistical data and explore various scenarios and design options.

The mission simulation should include parameters that directly affect utility, such as the orbit geometry, motion or changes in the targets or background, system scheduling, and other key issues, as shown in Fig. 3-3. The problem of excessive detail is best solved by providing numerical models obtained from more detailed simulations of the payload or other system components. For example, we may compute FireSat's capacity to detect a forest fire by modeling the detector sensitivity, atmospheric characteristics, range to the fire, and the background conditions in the observed area. A detailed payload simulation should include these parameters. After running the payload simulator many times, we can, for example, tabulate the probability of detecting a fire based on observation geometry and time of day. The mission simulator uses this table to assess various scenarios and scheduling algorithms. Thus, the mission simulator might compute the mission geometry and time of day and use the lookup table to determine the payload effectiveness. With this method, we can dramatically reduce repetitive computations in each mission simulator run, do more simulations, and explore more mission options than with a more detailed simulation. The mission simulator should be a collection of the results of more detailed simulations along with unique mission parameters such as the relative geometry between the satellites in a constellation, variations in ground targets or background, and the system scheduling or downlink communications. Creating sub-models also makes it easier to generate utility simulations. We start with simple models for the individual components and develop more realistic tables as we create and run more detailed payload or component simulations.
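A sketch of the lookup-table technique just described: a detailed payload simulator is run off-line to fill a table of detection probability indexed by elevation-angle band and day/night lighting, and the fast mission simulator reads from that table at each time step instead of re-running the payload model. The bin edges and probabilities below are hypothetical.

```python
import bisect

# Hypothetical table filled by many off-line payload-simulator runs:
# P(detect) by elevation-angle band (deg) and day/night lighting.
ELEV_EDGES = [10, 30, 60, 90]      # upper edge of each elevation band
P_DETECT = {                       # band edge -> (day, night) probabilities
    10: (0.00, 0.00),              # at or below 10 deg: no detection
    30: (0.55, 0.70),
    60: (0.75, 0.88),
    90: (0.85, 0.95),
}

def detection_probability(elevation_deg, is_night):
    """Cheap table lookup used inside each mission-simulator time step."""
    i = bisect.bisect_left(ELEV_EDGES, elevation_deg)
    band = ELEV_EDGES[min(i, len(ELEV_EDGES) - 1)]
    day_p, night_p = P_DETECT[band]
    return night_p if is_night else day_p

print(detection_probability(45.0, is_night=True))  # -> 0.88
```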

Fig. 3-3. Elements of a Typical Mission Utility Simulation (FireSat example). Principal inputs: scenarios, system parameters, constellation parameters. Main models: energy, time utilization, system performance, scheduling, background characteristics, search logic, data utilization. These feed the simulator and output processors. Observation types for the FireSat example: search mode, map mode, fire boundary mode, temperature sensing. Principal outputs: animation sequence, observation data, system parameters, energy used, time used, probability of detection/containment, cloud cover, fire detection MoEs.

Table 3-8 shows the typical sequence for simulating mission utility, including a distinct division into data generation and output. This division allows us to do various statistical analyses on a single data set or combine the outputs from many runs in different ways. In a constellation of satellites, scheduling is often a key issue in mission utility. The constellation's utility depends largely on the system's capacity to schedule resource use appropriately among the satellites. At the end of a single simulation run, the system should collect and compute the statistics for that scenario, generate appropriate output plots or data, and compute individual measures of effectiveness, such as the percent of forest fires detected in that particular run.
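As a sketch of the data-generation/output division just described, the loop below time-steps a single scenario, logs raw detection events during a data-generation phase, and only afterward reduces them to per-run statistics, so one data set can be post-processed several ways. The detection model is a deliberately crude placeholder.

```python
import random

def run_scenario(n_fires=50, dt_hr=0.5, duration_hr=24.0,
                 p_detect_per_step=0.08, seed=0):
    """Phase 1, data generation: time-step one scenario and log raw events."""
    rng = random.Random(seed)
    events = []                                    # (fire_id, detection_hr)
    undetected = set(range(n_fires))
    t = 0.0
    while t < duration_hr:
        for fire in sorted(undetected):
            if rng.random() < p_detect_per_step:   # placeholder detection model
                events.append((fire, t))
                undetected.discard(fire)
        t += dt_hr
    return events, n_fires

def run_statistics(events, n_fires):
    """Phase 2, output: reduce the logged events to per-run measures."""
    times = [t for _, t in events]
    return {"percent_detected": 100.0 * len(times) / n_fires,
            "mean_detection_hr": sum(times) / len(times) if times else None}

events, n = run_scenario()
print(run_statistics(events, n))
```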

TABLE 3-8. Typical Sequence Flow of a Time-Stepped Mission Utility Simulation. Following this sequence for many runs, we can create statistical measures of effectiveness that help us evaluate our design.

Phase 1 - Data Generation
