[Figure: data-flow blocks, including "Data on Map" and "Data End User." Orbit and attitude data are also part of the "housekeeping" system.]

Fig. 2-2. FireSat Data-Flow Diagram. The purpose of the data flow is to view the space mission from a data-oriented perspective. We want to know where the data comes from, what processing must be done, and where the results are used. Our principal mission objective is to provide the necessary data to the end user at minimum cost and risk.


To put the image on a map, we need to determine the spacecraft's orbit and attitude. The attitude will almost certainly be determined on board. The orbit may be determined either on board or by observations from the ground. In either case, the orbit and attitude information are combined to determine where on the ground the sensor is looking. We then select the map corresponding to the area we are looking at so we can correlate the sensor data with some physical location the fire fighters recognize.
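As a concrete illustration of this step, the sketch below combines an orbit-derived spacecraft position with an attitude-derived look direction to find the point on the ground the sensor is viewing. It is a minimal, hypothetical example: it assumes a spherical Earth, an Earth-fixed frame for both vectors, and kilometer units, none of which are specified in this section.

```python
# Minimal geolocation sketch (hypothetical, not flight code): combine an
# orbit-derived spacecraft position with an attitude-derived look direction
# to find where the sensor line of sight intersects a spherical Earth.
import math

R_EARTH = 6378.137  # km; spherical Earth assumed for this sketch

def ground_point(sc_pos_km, look_dir):
    """Intersect a look ray with a spherical Earth.

    sc_pos_km -- spacecraft position (x, y, z) in an Earth-fixed frame, km
    look_dir  -- unit vector of the sensor boresight in the same frame
    Returns (lat_deg, lon_deg) of the viewed point, or None if the ray
    misses the Earth.
    """
    px, py, pz = sc_pos_km
    dx, dy, dz = look_dir
    # Solve |p + t*d| = R_EARTH for the smallest positive t (nearest intersection).
    b = px * dx + py * dy + pz * dz
    c = px * px + py * py + pz * pz - R_EARTH ** 2
    disc = b * b - c
    if disc < 0:
        return None                      # line of sight misses the Earth
    t = -b - math.sqrt(disc)
    if t < 0:
        return None
    gx, gy, gz = px + t * dx, py + t * dy, pz + t * dz
    sin_lat = max(-1.0, min(1.0, gz / R_EARTH))
    return math.degrees(math.asin(sin_lat)), math.degrees(math.atan2(gy, gx))

# Example: a 700-km-altitude spacecraft looking straight down (nadir).
print(ground_point((R_EARTH + 700.0, 0.0, 0.0), (-1.0, 0.0, 0.0)))  # (0.0, 0.0)
```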

Even though we are not certain yet how the data will be used, we can be fairly sure that our end data from FireSat will have several applications other than immediate use by the fire fighters. We will want to archive it in some central location for recordkeeping and improving our capacity to analyze and interpret future data. Finally, we will accumulate a large amount of ancillary data, such as statistics, reports, and forest-management information, and use it over an extended period. The need for this data does not have the real-time demand of the fire data itself.

The importance of the data-flow diagram is that it lets us see what has to happen in order to make our mission work. For FireSat, we need to combine the mission sensor data with orbit and attitude information in order to make our system work in real time. However, the most difficult step is probably the one labeled "Image Interpretation and Analysis." Can an automated system quickly detect forest fires and send information directly to the user, or do we need extensive interpretation and analysis by trained people in mission operations? What type of experiments or information must we have to determine which of these is possible? Even after we have selected an approach, we should revisit it regularly to see that it still makes sense. If we decide that FireSat's real-time requirements demand data processing in a computer on board the spacecraft, we may dramatically drive up the cost because onboard processing is expensive. Our mission analysis may result in an automated FireSat which costs several times the annual budget of the Forest Service. If so, we need to reconsider whether it would be more economical to have an analyst on the ground interpret the data and then simply phone the results to an appropriate fire station. The data-flow diagram is valuable in helping to identify and track these central issues.

We will now look at two of the three principal trades associated with data delivery: space vs. ground processing and central vs. distributed processing. Section 2.1.2 and Chap. 23.3 discuss the level of autonomy.

Space vs. ground processing trades. In most earlier space missions, ground stations processed nearly all of the data because spaceborne processors could not do much. Chapter 16 describes several reasons onboard processing lags ground processing. But many onboard processors are now available with dramatically increased capacity. Consequently, a major trade for future missions is how much to process data on board the spacecraft vs. on the ground, either at a mission-operations facility or with the end user.

Section 3.2 describes how we undertake these and other system trades and compare the results. The main issues in the space vs. ground trade are as follows:

1. Autonomy—how independent do we want the system to be of analysis and control by a mission operator? If evaluation by people is critical, we must do much of the data processing on the ground. If autonomous processing is appropriate, it can be done on board the spacecraft, at a central ground facility, or among the end users. The level of autonomy is both a key trade in its own right and an element of the space vs. ground trade.

2. Data latency—how late can the data get to the end user? If we are allowed only fractions of a second, we must go to automated processes, probably on board the spacecraft. For FireSat, although we need the data in "near real time," the delays associated with sending the data to the ground for processing are not critical.

3. Communications bandwidth—how much data needs to be transmitted? If we have large amounts of data from a sensor, we should process and compress it as near the source as possible. Bringing down all of the FireSat imaging data and then deciding what to process further on the ground will cause an enormous communications problem and will probably drive up the FireSat mission's cost needlessly. (A rough sizing sketch follows this list.)

4. Single vs. multiple users—if there are a large number of end users, as would be the case for FireSat, we may be able to save considerable money by doing a high level of processing on board the spacecraft and sending the results directly down to the individual users.

5. Location of end user—is the "end user" for any particular data element on the ground or in space? In a space-to-space relay or a system for providing automatic orbit maintenance, the end application is in space itself. In this case, sending data to the ground for processing and then returning the results to the space system can be very complex and costly. On the ground, the complexity of the system is strongly affected by whether there is one end user at the mission operations center or multiple, scattered users, as in the case of FireSat.
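The rough sizing sketch referred to in item 3 is given below. All of the numbers are hypothetical placeholders, chosen only to show how quickly the raw sensor rate can outrun a modest downlink and force compression or onboard processing.

```python
# Illustrative sizing of the downlink problem raised in item 3.
# Every number below is an assumed placeholder, not a FireSat value.
swath_pixels      = 4000          # pixels across the swath (assumed)
lines_per_second  = 500           # scan lines generated per second (assumed)
bits_per_pixel    = 8             # raw quantization (assumed)
raw_rate_bps      = swath_pixels * lines_per_second * bits_per_pixel
print(f"raw sensor rate: {raw_rate_bps / 1e6:.1f} Mbps")        # 16.0 Mbps

downlink_bps      = 2e6           # assumed available downlink capacity, 2 Mbps
needed_ratio      = raw_rate_bps / downlink_bps
print(f"compression or onboard thinning needed: {needed_ratio:.0f}:1")
```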

Even if we choose to process data mostly in space, the basic system design should allow us to obtain or recreate selected raw data for analysis on the ground. A fully automated FireSat should have some means to record or broadcast the raw imaging data, so mission planners and analysts can evaluate how well the system is working, fix problems, and plan alternative and better techniques for later missions.

Traditionally, space software has been much more expensive than ground software. This suggests that processing on the ground is generally lower cost than processing on board the spacecraft. We believe that this will change in the future and, therefore, software cost should not be a major trade element in the space vs. ground processing trade. The cost of software is a function of what is done and how reliable we need to make it, rather than where it is done. We can choose to make highly reliable software as nearly error-free as possible for our ground systems, and this software will have the high cost inherent in most previous onboard software systems. On the other hand, simple software with many reusable components can be developed economically and used on the spacecraft as well as on the ground.

The space vs. ground processing trade will be a key issue and probably a significant stumbling block for most missions in the near future. For short-lived, nontime-critical missions, it will probably be more economical to work on the ground with little automation. For long-lived missions, or time-critical applications, we will have to automate the processing and then do space vs. ground trades to minimize the operation and end-user costs. In any case, we wish to use the data flow analysis to evaluate where the data is coming from and where it will be used. If possible, we would like to minimize the communication requirements and associate data (e.g., attach time or position tags) as early as possible after the relevant data has been created.
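The sketch below illustrates the idea of associating data early: each observation carries its time and position tags from the moment it is created, so later processing, on board or on the ground, never has to reconstruct that association. The record layout and field names are purely illustrative, not a FireSat interface definition.

```python
# Hypothetical sketch of tagging data at creation time.
from dataclasses import dataclass
import time

@dataclass
class TaggedObservation:
    utc_seconds: float        # time tag attached when the sample is taken
    sc_position_km: tuple     # orbit-derived position at that moment
    pixel_row: int
    pixel_col: int
    value: int                # raw sensor reading

def make_observation(sc_position_km, row, col, value):
    # Tags are applied immediately, before any buffering, compression, or downlink.
    return TaggedObservation(time.time(), sc_position_km, row, col, value)

obs = make_observation((7078.1, 0.0, 0.0), row=120, col=512, value=6)
print(obs)
```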

For FireSat the payload sensor generates an enormous amount of data, most of which will not be useful. One way to deal effectively with large amounts of raw data on board the spacecraft is to compress the data (i.e., reduce the amount of data to be stored or transmitted) prior to transmitting it to the ground. The data is then recreated on the ground using decompression algorithms. There are a variety of methods for compressing data, both lossless and lossy. Lossless data compression implies that no information is lost due to compression, while lossy compression has some "acceptable" level of loss. Lossless compression can achieve about a 5 to 1 ratio, whereas lossy compression can achieve up to an 80 to 1 reduction in data. Many of the methods of data compression store data only when the value changes. Other approaches are based on quantization, where a range of values is compressed using mathematical algorithms or fractal mathematics. By using these methods, we can compress the data to a single algorithm that is transmitted to the ground, and the image is recreated based on the algorithm expansion. With the use of fractals, we can even interpolate a higher-resolution solution than we started with by running the fractal for an extended period of time [Lu, 1997]. We select a method for data compression based on its strengths and weaknesses, the critical nature of the data, and the need to recreate it exactly [Sayood, 1996].
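One of the lossless techniques mentioned above—storing a value only when it changes—can be illustrated with the short run-length-encoding sketch below. The code is a generic illustration, not a flight algorithm; the achievable ratio depends entirely on how repetitive the data stream is.

```python
# Run-length encoding sketch: store each value once, with the length of its run.
def rle_encode(values):
    runs = []                                 # list of (value, run_length) pairs
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1] = (v, runs[-1][1] + 1)   # extend the current run
        else:
            runs.append((v, 1))               # start a new run
    return runs

def rle_decode(runs):
    out = []
    for v, n in runs:
        out.extend([v] * n)
    return out

row = [0, 0, 0, 0, 5, 5, 0, 0, 0, 7, 0, 0]
assert rle_decode(rle_encode(row)) == row     # lossless: exact reconstruction
print(rle_encode(row))                        # [(0, 4), (5, 2), (0, 3), (7, 1), (0, 2)]
```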

When we transmit housekeeping data, we would generally use lossless compression for several reasons. First, raw housekeeping data is not typically voluminous. Second, it is important that none of the data is lost due to compression. However, when we transmit an image we might easily use lossy compression. We could either preview the image using lossy compression, or we could decide that the recovered image is "good enough." Alternatively, a high-resolution picture may have so much information that the human eye cannot assimilate it at the level it was generated. Again, in this case a lossy compression technique may be appropriate.

In the FireSat example, we might use a sensor on board the spacecraft that takes a digital image of the heat generated at various positions on the Earth. The digital image will be represented by a matrix of numbers, where each pixel contains a value corresponding to the heat at that point on the Earth's surface. (Of course, we will need some method, such as GPS, for correlating the pixel in the image to the location on the Earth.) If we assume that the temperature at each location or pixel is represented by 3 bits, we can distinguish eight thermal levels. However, if we set a threshold such that a "baseline" temperature is represented with a 0, we might find that over many portions of the Earth, without fire, the image might be up to 70% nominal, or 0. This still allows for several levels of distinction for fires or other "hot spots" on the Earth. Rather than transmit a 0 data value for each cold pixel, we can compress the data and send only those pixel locations and values which are not 0. As long as the decompression software understands this ground rule, the image can be exactly recreated on the ground. In this case, we can reduce our raw data volume to the number of hot spots that occur in any given area.
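The sketch below is one possible, simplified implementation of this scheme. It assumes the image has already been quantized so that the baseline temperature is 0, and it transmits only the (row, column, value) triples of non-zero pixels, relying on the shared ground rule that any unlisted pixel is 0.

```python
# Sparse "hot spot" compression sketch for a thresholded, 3-bit thermal image.
def compress_hot_spots(image):
    """image: 2-D list of 3-bit values, with 0 meaning baseline temperature."""
    return [(r, c, v)
            for r, row in enumerate(image)
            for c, v in enumerate(row) if v != 0]

def decompress_hot_spots(hot_spots, rows, cols):
    # Ground rule shared by both ends: every unlisted pixel is 0.
    image = [[0] * cols for _ in range(rows)]
    for r, c, v in hot_spots:
        image[r][c] = v
    return image

scene = [[0, 0, 0, 0],
         [0, 3, 6, 0],       # a small fire: two hot pixels
         [0, 0, 0, 0]]
packets = compress_hot_spots(scene)
assert decompress_hot_spots(packets, 3, 4) == scene   # exact (lossless) recovery
print(packets)                                        # [(1, 1, 3), (1, 2, 6)]
```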

Central vs. distributed processing. This is a relatively new issue, because most prior spacecraft did not have sufficient processing capability to make this a meaningful trade. However, as discussed above, the situation has changed. The common question now is, "How many computers should the spacecraft have?" Typically, weight- and parts-count-conscious engineers want to avoid distributed processing. However, centralized processing can make integration and test extremely difficult. Because integration and test of both software and hardware may drive cost and schedule, we must seriously consider them as part of the processing trade.

Our principal recommendations in evaluating central vs. distributed processing are:

• Group like functions together

• Group functions where timing is critical in a single computer

• Look for potentially incompatible functions before assigning multiple functions to one computer

• Maintain the interface between groups and areas of responsibility outside of the computer

• Give serious consideration to integration and test before grouping multiple functions in a single computer

Grouping like functions has substantial advantages. For example, attitude determination and attitude control may well reside in the same computer. They use much of the same data, share common algorithms, and may have time-critical elements. Similarly, orbit determination and control could reasonably reside in a single navigation computer, together with attitude determination and control. These hardware and software elements are likely to be the responsibility of a single group and will tend to undergo common integration and testing.

In contrast, adding payload processing to the computer doing the orbit and attitude activities could create major problems. We can't fully integrate software and hardware until after we have integrated the payload and spacecraft bus. In addition, two different groups usually handle the payload and spacecraft bus activities. The design and manufacture of hardware and software may well occur in different areas following different approaches. Putting these functions together in a single computer greatly increases cost and risk during the integration and test process, at a time when schedule delays are extremely expensive.

Another problem which can arise from time to time is incompatible functions, that is, activities which do not work well together. One example would be sporadic, computationally intensive functions which demand resources at the same time. Another example occurs when the initial processing of either spacecraft-bus or payload sensors is an interrupt-driven activity in which the computer spends most of its time servicing interrupts to bring in observational data. This could make it difficult for the same computer to handle the computationally intensive processing associated with higher-level activities. We can accommodate this either by handling the functions in separate computers or by using a separate I/O processor to queue data from the process with a large number of interrupts.
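The last option—queuing data between an interrupt-driven front end and a computation-heavy back end—is sketched below. For simplicity the two "processors" are represented as threads sharing a queue; in a real design they might be a dedicated I/O processor and a main computer, and all rates and workloads shown are illustrative.

```python
# Sketch of decoupling an interrupt-driven producer from a compute-heavy consumer.
import queue
import threading
import time

samples = queue.Queue()

def io_handler(n_samples):
    # Stand-in for the interrupt-driven activity: capture a sample and enqueue it.
    for i in range(n_samples):
        samples.put(i)
        time.sleep(0.001)            # high-rate, low-work activity

def processor():
    # Stand-in for the higher-level, computation-intensive function.
    total = 0
    while True:
        s = samples.get()
        if s is None:                # sentinel: producer has finished
            break
        total += s * s               # placeholder for real processing
    print("processed sum of squares:", total)

t_io = threading.Thread(target=io_handler, args=(100,))
t_proc = threading.Thread(target=processor)
t_proc.start()
t_io.start()
t_io.join()
samples.put(None)
t_proc.join()
```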

Finally, we must consider the groups who oversee different activities. Integration and test of any computer and its associated software will be much more difficult if two distinct groups develop software for the same computer. In this case, significant delays and risks can occur. This does not necessarily mean, however, that elements controlled by different groups cannot be accommodated in the same computer. One approach might be to have two engineering groups be responsible for development of specifications and ultimately for testing. The detailed specifications are then handed over to a single programming group which then implements them in a single computer. This allows a single group to be responsible for control of computer resources. Thus, for example, the orbit control and attitude control functions may be specified and tested by different analysis groups. However, it may be reasonable to implement both functions in a single computer by a single group of programmers.

2.1.2 Tasking, Scheduling, and Control

Tasking, scheduling, and control is the other end of the data-delivery problem. If the purpose of our mission is to provide data or information, how do we decide what information to supply, whom to send it to, and which resources to obtain it from? Many of the issues are the same as in data delivery but with several key differences. Usually, tasking and control involve very low data rates and substantial decision making. Thus, we should emphasize how planning and control decisions are made rather than data management.

Tasking and scheduling typically occur in two distinct time frames. Short-term tasking addresses what the spacecraft should be doing at this moment. Should FireSat be recharging its batteries, sending data to a ground station, turning to look at a fire over Yosemite, or simply looking at the world below? In contrast, long-term planning establishes general tasks the system should do. For example, in some way the FireSat system must decide to concentrate its resources on northwestern Pacific forests for several weeks and then begin looking systematically at forests in Brazil. During concept exploration, we don't need to know precisely how these decisions are made. We simply wish to identify them and know broadly how they will take place.
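A trivial sketch of what a short-term tasking rule might look like is shown below. The priorities and thresholds are invented purely to illustrate the kind of moment-to-moment decision involved; they are not a proposed FireSat design.

```python
# Hypothetical short-term tasking rule: pick the current activity by fixed priority.
def next_task(battery_fraction, ground_station_in_view, fire_alert):
    if battery_fraction < 0.3:                 # protect the power system first
        return "recharge batteries"
    if fire_alert:                             # mission data takes priority next
        return "slew to and image the reported fire"
    if ground_station_in_view:
        return "downlink stored data"
    return "routine scan of the scene below"

print(next_task(battery_fraction=0.8, ground_station_in_view=False, fire_alert=True))
```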

On the data distribution side, direct downlink of data works well. We can process data on board, send it simultaneously to various users on the ground, and provide a low-cost, effective system. On the other hand, direct distributed control raises serious problems of tasking, resource allocation, and responsibility. The military community particularly wants distributed control so a battlefield commander can control resources to meet mission objectives. For FireSat, this would translate into the local rangers deciding how much resource to apply to fires in a particular area, including the surveillance resources from FireSat. The two problems here are the limited availability of resources in space and broad geographic coverage. For example, FireSat may have limited power or data rates. In either case, if one regional office controls the system for a time, it may use most or all of that resource. Thus, other users would have nothing left. Also, FireSat could be in a position to see fires in Yosemite Park and Alaska at the same time, so distributed control could create conflicts.

For most space systems, some level of centralized control is probably necessary to determine how to allocate space resources among various tasks. Within this broad resource allocation, however, we may have room for distributed decisions on what data to collect and make available, as well as how to process it. For example, the remote fire station may be interested in information from a particular spectral band which could provide clues on the characteristics of a particular fire. If this is an appropriate option, the system must determine how to feed that request back to the satellite. We could use a direct command, or, more likely, send a request for specific data to mission operations, which carries out the request.

Spacecraft Autonomy. Usually, high levels of autonomy and independent operations occur in the cheapest and most expensive systems. The less costly systems have minimal tasking and control simply because they cannot afford the operations cost for deciding what needs to be done. Most often, they continuously carry on one of a few activities, such as recovering and relaying radio messages or continuously transmitting an image of what is directly under the spacecraft. What is done is determined automatically on board to save money. In contrast, the most expensive systems have autonomy for technical reasons, such as the need for a very rapid response (missile detection systems) or a problem of very long command delays (interplanetary missions). Typically, autonomy of this type is extremely expensive because the system must make complex, reliable decisions and respond to change.

Autonomy can also be a critical issue for long missions and for constellations, in which cost and reliability are key considerations. For example, long-duration orbit maneuvers may use electric propulsion which is highly efficient, but slow. (See Chap. 17 for details.) Thruster firings are ordinarily controlled and monitored from the ground, but electric propulsion maneuvers may take several months. Because monitoring and controlling long thruster burns would cost too much, electric propulsion requires some autonomy.

As shown in Fig. 2-3, autonomy can add to mission reliability simply by reducing the complexity of mission operations. We may need to automate large constellations for higher reliability and lower mission-operations costs. Maintaining the relative positions between the satellites in a constellation is routine but requires many computations. Thus, onboard automation—with monitoring and operator override if necessary—will give us the best results.

With the increased level of onboard processing available, it is clearly possible to create fully autonomous satellites. The question is, should we do so or should we continue to control satellites predominantly from the ground?

Three main functions are associated with spacecraft control: controlling the payload, controlling the attitude of the spacecraft and its appendages, and controlling the spacecraft orbit. Most space payloads and bus systems do not require real-time control except for changing mode or handling anomalies. Thus, the FireSat payload will probably fly rather autonomously until a command changes a mode or an anomaly forces the payload to make a change or raise a warning. Autonomous, or at least semi-autonomous, payloads are reasonable for many satellites. There are, of course, exceptions such as Space Telescope, which is an ongoing series of experiments being run by different principal investigators from around the world. In this case, operators control

[Figure: Traditional Approach—operations intensive; look point determined after the fact.]