The Stratospheric Observatory for Infrared Astronomy (SOFIA) will use CORBA in several different environments--the airborne Mission Control Subsystem (MCS), the ground-based Data Cycle System (DCS), and a Facility Science Instrument (FLITECAM). A review of CORBA development experiences on the MCS reflects the challenges and choices made, while comparison with other SOFIA implementations shows the variety of CORBA applications and benefits.
The Stratospheric Observatory for Infrared Astronomy (SOFIA) is a major infrared and submillimeter observatory scheduled to begin operations within two years. A joint collaboration of NASA and DLR (the German Aerospace Center), SOFIA will carry a 2.5-meter telescope aboard a Boeing 747-SP aircraft flying at or above 12.5 km, where the telescope will collect radiation primarily in the wavelength range from 0.3 micrometers to 1.6 millimeters. With a 20-year operational lifetime, SOFIA is designed to maximize astronomical value per unit cost compared to other observing platforms such as balloons or satellites. To meet this goal, it must be a particularly flexible, long-lasting, and economical platform for conducting research. The SOFIA data systems will be key to achieving these objectives.
SOFIA has five principal software development environments: the Science Instruments (SIs), the Mission Control Subsystem (MCS), the Telescope Assembly (TA), the Data Cycle System (DCS), and the custom dedicated subsystems.
In the following subsections we describe each of SOFIA's software components, so that their use in SOFIA--and CORBA's use in each component--will be clearer. Figure 1 provides graphical context for these descriptions.
SOFIA's numerous Science Instruments, or SIs, are developed by science teams according to specifications provided by Universities Space Research Association (USRA), the prime U.S. contractor for SOFIA. The observatory will change SIs as often as every week, thereby supporting a wide variety of scientific investigations over its 20-year life cycle, while using the latest science instrumentation.
There are two classes of science instruments, Facility Science Instruments (FSIs) and Principal Investigator Science Instruments. FSIs will be operated by the observatory, and as such must undergo more thorough review and meet more challenging development criteria. The First Light Infrared Test Experiment Camera (FLITECAM) is an FSI designed to perform critical checkout and integration phase activities for SOFIA.
On SOFIA, science instruments control the observatory, but only partially. Scientists using SOFIA will command science operations through an SI's user interface, and the SI in turn commands some aspects of the observatory through the MCS. But other observatory controls must be performed by observatory staff using their own interfaces. (The Data Cycle System, described below, will eventually provide a consistent interface for many of the science instruments, and perhaps even for the observatory itself.)
The Mission Control Subsystem (MCS) is the command and control heart of SOFIA. It communicates with the observatory subsystems (including the TA), coordinates their actions, and provides an interface for science instruments, scientists, observatory staff, and other users to interact with the observatory. To make this interface as accessible as possible, the MCS implements an ASCII-based command language, available over a standard TCP/IP socket interface.
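The flavor of such an ASCII command interface can be sketched as follows. This is not MCS code, and the command names and reply formats are hypothetical; the actual SOFIA Command Language defines its own vocabulary and grammar.

```python
# Sketch of an ASCII command interface in the spirit of the MCS's
# SOFIA Command Language. Command verbs and replies are hypothetical.

def parse_command(line: str):
    """Split one ASCII command line into a verb and its arguments."""
    parts = line.strip().split()
    if not parts:
        raise ValueError("empty command")
    return parts[0].upper(), parts[1:]

# A trivial dispatch table standing in for real observatory handlers.
HANDLERS = {
    "GET": lambda args: f"VALUE {args[0]} 42",   # placeholder reply
    "SET": lambda args: f"ACK {args[0]}",
}

def handle(line: str) -> str:
    verb, args = parse_command(line)
    handler = HANDLERS.get(verb)
    if handler is None:
        return f"ERROR unknown command {verb}"
    return handler(args)

print(handle("get telescope.elevation"))  # -> VALUE telescope.elevation 42
print(handle("set telescope.elevation 40"))
```

Because the interface is plain text over a standard TCP/IP socket, any client that can open a socket and format strings--a science instrument, a test script, or an interactive terminal session--can command the observatory.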
The MCS is a highly distributed system with a high-speed network of nine workstations, most running the Solaris operating system. The MCS must achieve reliable, high throughput for both SOFIA mission housekeeping data and science commands. (Science data does not go through the MCS but is maintained internally by all SIs and also by the DCS for the FSIs.) The MCS must provide enough configurability and flexibility to serve as the SOFIA baseline data system for 20 years of science operations (Papke et al. 2000).
The Telescope Assembly (TA) performs control and pointing for SOFIA's telescope and related components. It is being developed by a consortium of companies under the direction of the Deutsches Zentrum für Luft- und Raumfahrt e.V. (DLR, the German Aerospace Center). Although the TA's subsystems communicate with each other to some degree, the MCS is responsible for coordinating their work to perform science effectively.
The Data Cycle System (DCS) provides an observatory-level, science-oriented interface to SOFIA. On the ground it facilitates science interactions for all parts of the SOFIA data life cycle. During flights the DCS standardizes science functions available through the SOFIA science instruments. For both of these functions, the DCS must be capable of rapid reconfiguration to address a wide variety of science functions and interfaces.
The first DCS implementation will provide basic functions for the SOFIA Facility Science Instruments. The DCS will eventually support all science instruments that take advantage of it.
SOFIA's custom dedicated subsystems include the water vapor monitor, the Cavity Door Control Subsystem, the Environment Control Subsystem, and the Mission Audio Distribution System. These are dedicated systems performing specialized functions in support of the SOFIA mission. Most software is implemented within dedicated embedded systems, and if a subsystem communicates with the MCS, it usually uses a subset of the SOFIA Command Language.
Briefly, for those unfamiliar with CORBA (Common Object Request Broker Architecture): it is a standard that describes a class of ``middleware" products. These products provide an infrastructure on top of which application features may be developed. CORBA specifies a set of features useful in an object-oriented, distributed, multi-processor environment, including:
One feature not built into CORBA is low-latency and real-time distribution of data. Systems with real-time constraints would typically be designed to avoid the need for data marshaling services that help convert data objects to different language and operating system formats in a distributed system. Such systems typically use consistent languages and operating systems.
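The core CORBA idea--a client calls a local stub, the request is marshaled into a wire format, and a remote skeleton demarshals it and invokes the real object--can be sketched in miniature. The sketch below uses JSON in place of CORBA's IIOP wire format, and all class and method names are illustrative; a real ORB generates this plumbing automatically from IDL interface definitions.

```python
# Toy illustration of the CORBA stub/skeleton pattern. JSON stands in
# for the real marshaling format; names are illustrative, not CORBA APIs.
import json

class Telescope:                     # the remote servant object
    def point(self, az, el):
        return f"pointing to az={az}, el={el}"

class Skeleton:
    """Server side: demarshals requests and invokes the servant."""
    def __init__(self, servant):
        self.servant = servant
    def dispatch(self, request: str) -> str:
        msg = json.loads(request)                          # demarshal
        result = getattr(self.servant, msg["op"])(*msg["args"])
        return json.dumps({"result": result})              # marshal reply

class Stub:
    """Client side: looks like a local object, forwards calls remotely."""
    def __init__(self, skeleton):
        self.skeleton = skeleton     # stands in for the network transport
    def point(self, az, el):
        request = json.dumps({"op": "point", "args": [az, el]})
        return json.loads(self.skeleton.dispatch(request))["result"]

telescope = Stub(Skeleton(Telescope()))
print(telescope.point(180.0, 40.0))  # -> pointing to az=180.0, el=40.0
```

The value of the pattern is location transparency: the client code is identical whether the servant runs in the same process, on another workstation, or under a different language and operating system.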
Some of the SOFIA software architectures have compelling rationales for not using CORBA. The Telescope Assembly and custom dedicated subsystems do not incorporate CORBA at all. Those subsystems' software is analogous to firmware--largely stable code with specific functionality, typically implemented on single embedded CPUs with limited interface complexity. Given stable system requirements, pre-established stand-alone development environments and no need for on-the-fly component associations, CORBA would add little value.
Many of the science instruments have environments similar to the ``embedded subsystems" just described. They are also fairly stable systems, with unchanging connectivity needs, and often use only one or a few processors and a single programming language and operating system. While the functional complexity of many of these science instruments (and other non-CORBA systems) is quite high, functional complexity alone does not imply the need for CORBA. For most science instruments, it is more appropriate to build the software for the expected needs, designing only the specific flexibility that has been defined, and then make changes in the developed software on an as-needed basis.
By reviewing the three SOFIA software products that do use CORBA--DCS,
MCS, and the FLITECAM
FSI--we identified a set of attributes that contributed to the decision
to use CORBA.
Not surprisingly, these attributes (see Table 1)
are well matched to CORBA's intended environment.
Another attribute common to the three SOFIA software products using CORBA is that at least one member of each development team had experience with CORBA or a similar standard. In some cases the experience was limited, and most of the subsequent training was on-the-job, but having some sense of the technology was a consistent starting point.
As seen from Table 1, the DCS, MCS, and FLITECAM each chose a different CORBA product (ILU, TAO, and VisiBroker), with characteristic strengths and weaknesses discussed below. CORBA fit the three systems to different degrees and for different reasons, but each chosen CORBA implementation appeared well suited to its intended use.
The Data Cycle System will support science operations over the entire 20-year operational lifetime of SOFIA, operating more or less continuously throughout that time. (Although it has ground-based and airborne components, we focus here on the ground-based components.) It will provide an interface for scientists to gain access to SOFIA (e.g., via proposal preparation) and its archived data.
With these requirements in mind, the DCS was designed to assemble software on the fly, at user request. For a given data pipeline, algorithms from sites such as Ames, Goddard, and Australia might be combined to reduce archived raw data; while for a proposal, components that submit data, automatically review it, and provide scheduling metrics could be combined. More impressively, new software components can be developed, tested, and integrated while existing versions of those components remain fully available to users.
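The essence of this on-the-fly assembly is late binding of pipeline steps: components register themselves in a lookup structure, and a pipeline is resolved against that registry only when a user requests it. The sketch below is a minimal illustration of that idea, not DCS code; the step names, registry, and placeholder arithmetic are all hypothetical.

```python
# Sketch of on-the-fly pipeline assembly in the spirit of the DCS.
# Step names and the registry mechanism are hypothetical illustrations.

REGISTRY = {}  # component name -> callable, populated at run time

def register(name):
    def deco(fn):
        REGISTRY[name] = fn
        return fn
    return deco

@register("dark_subtract")
def dark_subtract(frames):
    return [f - 1 for f in frames]   # placeholder arithmetic

@register("flat_field")
def flat_field(frames):
    return [f / 2 for f in frames]   # placeholder arithmetic

def build_pipeline(step_names):
    """Resolve step names against the registry at request time."""
    steps = [REGISTRY[name] for name in step_names]
    def run(data):
        for step in steps:
            data = step(data)
        return data
    return run

pipeline = build_pipeline(["dark_subtract", "flat_field"])
print(pipeline([3, 5]))  # -> [1.0, 2.0]
```

Because resolution happens at request time, a new version of a step can be registered under a new name and tested while the existing version remains available to users--the property highlighted above.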
The DCS will be a widely distributed system, not only spanning local high-speed networks but also taking advantage of possibly distant computer resources. A flexible and robust communications layer was therefore a crucial element of the DCS, and this is a CORBA strength.
The job of the MCS can be summarized as routing and displaying data, accepting, routing and responding to commands, performing calculations on data (for example on coordinate systems) and coordinating the observatory and its subsystems. This involves a lot of communication across multiple machines, the state of which may change at any time for reasons both planned and unforeseen (e.g., at 41,000 feet cosmic rays noticeably affect standard workstation memory, with unfortunate consequences for operating system reliability). The MCS is therefore built on the Jini model, with self-registering and self-discovering software components that make connections to each other as needed to perform their allocated functions.
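The self-registration and self-discovery pattern can be sketched as follows. This is an illustration of the Jini-style model, not MCS code: the registry class and component names are hypothetical, and a real Jini or CORBA naming service adds leasing, remote transport, and fault handling that this sketch omits.

```python
# Sketch of self-registering, self-discovering components (Jini-style),
# as used conceptually in the MCS. All names are illustrative.

class Registry:
    """Stand-in for a lookup service such as a Jini registrar."""
    def __init__(self):
        self._services = {}

    def register(self, kind, service):
        self._services.setdefault(kind, []).append(service)

    def discover(self, kind):
        return list(self._services.get(kind, []))

class TelemetryDisplay:
    def __init__(self, registry):
        # On startup a component both registers itself and discovers
        # the peers it needs, rather than relying on fixed wiring.
        registry.register("display", self)
        self.sources = registry.discover("telemetry")

registry = Registry()
registry.register("telemetry", "housekeeping-feed")
display = TelemetryDisplay(registry)
print(display.sources)  # -> ['housekeeping-feed']
```

The payoff is resilience: if a machine drops out (a cosmic-ray reboot, say), its components simply re-register on restart and their peers re-discover them, instead of the whole system depending on a static configuration.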
The MCS hardware architecture has evolved over its first two years of development. Whereas multiple processor types (Sun SPARC, Motorola PowerPC) and operating systems (Unix, VxWorks) were originally expected to co-exist, it appears Sun processors with Unix (and its real-time features) may be sufficient to meet SOFIA's needs.
The team purchased the Visigenic VisiBroker product in order to make use of its excellent documentation (the planned CORBA tool had no documentation available during its beta phase), and the first three months of MCS implementation used VisiBroker. Once The ACE ORB (TAO) was formally released, the team replaced the VisiBroker software with TAO, encountering only minor issues. Partly reflecting this experience, the team encapsulated MCS CORBA use as much as possible, so that another product, or custom code, can replace it if necessary. As a result of this encapsulation and the limited CORBA use, the MCS team needs only one or two experienced CORBA developers, whereas three or four would be more typical for a twelve-person team.
Another potential issue is the criticality of the CORBA Name Server in the MCS system. A failure of this component would force the entire MCS to be restarted, and a suitable backup mechanism has not yet been designed.
As a specific example of the alternatives, consider the MCS data handling mechanism. Initially, the MCS design rejected CORBA's data marshaling/demarshaling capabilities: the team didn't want to access data across machines. The lack of a pass-by-value feature meant up to six handshakes were required for each data item, and the packing/unpacking process seemed likely to be too slow for the system's requirements. Since the MCS needs to know and manage data structures anyway in order to input and output data, CORBA wasn't expected to add much value.
However, in the current MCS implementation the data structure is unpacked and repacked on every machine in the communication path, and the original format of the data is ignored. On top of that, pass-by-value features are becoming available in CORBA. As a result the current design assumes the overhead burden of CORBA but gains little from its data distribution and management capabilities. This design will be revisited as MCS development goes forward, to take into account lessons learned and improvement in the tools.
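The cost being described--unpacking and repacking the same record at every machine in the path--can be made concrete with a small sketch. This is an illustration, not MCS code; the record layout is hypothetical and Python's struct module stands in for CORBA's marshaling machinery.

```python
# Illustration (not MCS code) of why per-hop marshaling is costly:
# the same record is demarshaled and remarshaled at every node.
import struct

FMT = "<d i"  # hypothetical housekeeping record: timestamp, status flag

def hop(packet: bytes) -> bytes:
    # Each intermediate machine demarshals to native types...
    timestamp, status = struct.unpack(FMT, packet)
    # ...(perhaps inspects or routes the data)...
    # ...then remarshals before forwarding, paying the cost again.
    return struct.pack(FMT, timestamp, status)

packet = struct.pack(FMT, 1234.5, 1)
for _ in range(3):          # three machines in the communication path
    packet = hop(packet)
print(struct.unpack(FMT, packet))  # -> (1234.5, 1)
```

With pass-by-value (or with data kept in its original packed form end to end), the intermediate hops could forward the bytes untouched, which is the kind of redesign the team expects to consider as the tools mature.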
The First Light Infrared Test Experiment Camera (FLITECAM) has a relatively complicated architecture with many components that are selectable during FLITECAM operation. This architecture is designed to maximize flexibility, component reusability and distributability to an unusual degree, making it more like the highly distributed MCS or DCS than a classic monolithic system for a science instrument.
Does it make sense to use a distributed object services product such as CORBA? Most projects will fall clearly on one side or the other of the CORBA benefit-versus-risk equation. As this paper describes, run-time application coordination in an object-oriented, distributed environment will benefit from CORBA middleware, a conclusion tempered to some degree by whether the project has hard real-time requirements. If a data system runs strictly on a single computer, is likely to remain fundamentally stable for several years, or already must support well-defined interfaces between all of its subsystems, CORBA's usefulness will be limited at best.
If the decision is not so clear, the authors believe that a bias in favor of trying CORBA is appropriate. This conclusion is based on four points:
These lessons may be helpful when beginning a CORBA installation.
Graybeal, J., Brock, D., & Papke, B. 2000, Proc. SPIE Vol. 4009-17, "The Use of Open Source Software for SOFIA's Airborne Data System"
Papke, B., Graybeal, J., & Brock, D. 2000, Proc. SPIE Vol. 4014-35, "An extensible and flexible architecture for the SOFIA Mission Controls and Communications System"