

MAXI Software System: Photon Event Database

Hitoshi Negoro
Nihon University, 1-8-14 Kanda-Surugadai, Chiyoda, Tokyo 101-8308 Japan

M. Kohama, T. Mihara
RIKEN, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan

N. Kuramata, H. Tomida, S. Ueno, H. Katayama, M. Matsuoka
JAXA, 2-1-1 Sengen, Tsukuba, Ibaraki 305-8505, Japan

Y. Serino, N. Kawai
Tokyo Institute of Technology, 2-12-1 Ookayama, Meguro-ku, Tokyo, 152-8551, Japan

T. Arakuni, A. Yoshida
Aoyama Gakuin University, 5-10-1 Fuchinobe, Sagamihara, Kanagawa 229-8558, Japan

Abstract:

MAXI is an X-ray all-sky monitor that will be mounted on the Japanese Experiment Module of the International Space Station (ISS) in 2008. MAXI will monitor more than $10^3$ X-ray sources and will provide quasi-real-time data on, for instance, AGN variability and X-ray novae over the Internet. Each X-ray event is stored in the MAXI databases on the ground as a single data record. As a result, the databases will contain more than 100 giga records, amounting to $\sim 0.2 - 1$ TB, over the two-year mission life. We have just built this first huge `photon event' database for the low-speed Mil-1553b interface data.

1. MAXI and Database

MAXI is an X-ray all-sky monitor that will be mounted on the Japanese Experiment Module (JEM, ``Kibo'') of the International Space Station in 2008 (Matsuoka et al. 1997). MAXI scans the whole sky every $\sim90$ minutes with a sensitivity as high as 7 mCrab ($5\sigma$), reaching a detection limit of 1 mCrab in one week. Thus, MAXI will continuously monitor the X-ray time variability of a large number of AGNs over a period of two years (Kawai et al. 2003).

MAXI has two kinds of X-ray detectors: the Gas Slit Camera (GSC; see Mihara et al. 2002 for details) and the Solid-state Slit Camera (SSC; see Miyata et al. 2002). The GSC consists of twelve identical position-sensitive proportional counters with a total area of $\sim$ 5,000 cm$^2$, and covers an energy range of 2-30 keV. The source direction is determined from the long rectangular field of view (1.5$\times$80 degrees) together with a charge-division method in the detectors. A set of three cameras covers 1.5$\times$160 degrees in an arc. The SSC, on the other hand, consists of 32 X-ray CCD chips (each 1 inch square). It covers a soft energy band (0.5-10 keV) with high energy resolution despite a relatively small total effective area ($\sim$ 200 cm$^2$), and works complementarily with the GSC.

The relatively coarse spatial resolution ($\sim 1.5$ deg) of the detectors prevents us from instantaneously determining the source direction of each X-ray photon. Furthermore, the pointing directions of the detectors change from moment to moment as the ISS orbits. These facts, together with the large data volume, prevent us from building a conventional database for pointed observations or for monitoring observations (for instance, the database for the RXTE ASM). Instead, we have set out to build the first `photon-event' astronomical database for this mission. Such a database will make it easy to produce any kind of light curve, image, or energy spectrum for given periods and/or directions, which is well suited to highly variable sources. Without it, doing so for each source from the raw telemetry data would take a long time (of order $\sim1$ TB / 100 MB s$^{-1}$ $\sim 10^4$ s). Here, we briefly introduce the MAXI software system and its present status.

2. Data and Download

X-ray event data, housekeeping (HK) data, and health and status (H&S) data are processed in parallel by the on-board data processor (DP), built around four MIPS R3081 CPUs on a VME bus (developed by NEC TOSHIBA Space Systems, Ltd., NTSpace). Using two interfaces on the ISS (Table 1), the data are downloaded to the NASA and JAXA ground stations through the two relay satellites, TDRS and DRTS, respectively (Fig. 1). All the data are first stored in a database at the Operations Control System (OCS) at JAXA in Tsukuba, and are then transferred to our 1553b and Ethernet databases (kept as separate databases for security reasons).

The low-speed Mil-1553b interface is available most of the time and is believed to be the most robust, though its bandwidth is limited. Our system is therefore designed so that we can achieve the minimum scientific goals even if the medium-speed Ethernet interface is unavailable. For instance, 64-bit GSC and SSC event data are degraded to 16-bit data by the DP and downloaded through the 1553b interface.
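As an illustration of this degradation step, the following Java sketch packs coarse event fields into a single 16-bit word. The field widths used here (time, camera ID, energy bin, position bin) are invented for illustration only; this paper does not specify the actual bit assignment of the GSC/SSC event words.

public class EventPacker {
    /**
     * Pack coarse event fields into one 16-bit word.
     * Bit layout (hypothetical): 3-bit time | 4-bit camera | 3-bit energy | 6-bit position.
     */
    public static short pack16(int coarseTime, int camera, int energyBin, int posBin) {
        return (short) (((coarseTime & 0x7) << 13)   // 3 bits of coarse time
                      | ((camera & 0xF) << 9)        // camera unit, 0-11 fits in 4 bits
                      | ((energyBin & 0x7) << 6)     // 3-bit energy bin
                      | (posBin & 0x3F));            // 6-bit position bin
    }

    public static void main(String[] args) {
        short w = pack16(5, 3, 2, 42);
        System.out.printf("packed word: 0x%04x%n", w & 0xFFFF);
    }
}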


Table 1: Summary of the two ISS interfaces. (Event sizes: 8 bytes/event for the GSC and 12 bytes/event for the SSC.)

Figure 1: MAXI data and work flows, and the software road-map.

3. Database System for 1553b Data

3.1 Sub-Tera-Order Records

We have just built the 1553b database. Currently, the database contains 1807 items in 81 tables. Most items are accumulated every second, and 100-200 X-ray events/sec are expected in orbit. As a result, the total number of records over the two-year mission life will be $\sim 2000 \times 86{,}400$ (s/d) $\times 365$ (d/y) $\times 2$ (y) $\sim 1.2\times 10^{11}$, and the total size of the database roughly $1.2\times 10^{11} \times (2-8)$ byte $\sim 0.2 - 1$ TB.
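For concreteness, this back-of-envelope sizing can be reproduced in a few lines of Java, using the record rate and the 2-8 byte per-record sizes quoted above:

public class DbSizing {
    public static void main(String[] args) {
        double recordsPerSec = 2000.0;             // ~1807 per-second items plus 100-200 X-ray events/s
        double lifetime = 86400.0 * 365.0 * 2.0;   // two-year mission, in seconds
        double records = recordsPerSec * lifetime;
        System.out.printf("total records: %.2e%n", records);  // ~1.26e11
        for (int bytesPerRecord : new int[]{2, 8}) {          // 2-8 bytes per record
            System.out.printf("%d B/rec -> %.2f TB%n",
                              bytesPerRecord, records * bytesPerRecord / 1e12);
        }
    }
}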

The database system itself was developed by Systems Engineering Consultants Co., Ltd. (SEC) and is written in Java 1.4.x using ORBD and JDBC, in order to make the system as independent of particular hardware and software as possible.
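As a sketch of what inserting an event record through JDBC might look like in such a system, consider the following. The table and column names (gsc_event, obs_time, and so on) and the connection parameters are hypothetical, since the real schema of 1807 items in 81 tables is generated automatically (Sect. 3.2).

import java.sql.*;

public class EventInsert {
    public static void main(String[] args) throws Exception {
        // Load the PostgreSQL JDBC driver (required explicitly with Java 1.4-era JDBC).
        Class.forName("org.postgresql.Driver");
        Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/maxi", "maxi", "secret");
        PreparedStatement ps = con.prepareStatement(
                "INSERT INTO gsc_event (obs_time, camera_id, pha, pos) VALUES (?, ?, ?, ?)");
        ps.setLong(1, 1076412345L);  // event time (hypothetical units)
        ps.setInt(2, 3);             // proportional-counter unit, 0-11
        ps.setInt(3, 512);           // pulse height (energy channel)
        ps.setInt(4, 1023);          // charge-division position channel
        ps.executeUpdate();
        ps.close();
        con.close();
    }
}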

3.2 Flexible and Scalable

The system should be flexible and scalable, because neither the data items nor the hardware has been finalized. (We do not know what kinds of OSs will survive in the future, nor how much hardware will have progressed.) We describe all the data information (item name, C type, DB information, address in the telemetry data, and so on) in a spreadsheet file. All the data-dependent parts of the system are automatically generated from this file, even when modifications become necessary (Fig. 2); a sketch of this generation step is given below.
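The following is a minimal sketch of such a generation step: it reads item definitions from a CSV export of the spreadsheet and emits CREATE TABLE DDL. The four-column layout (name, C type, DB type, telemetry offset), the file name items.csv, and the table name hk_frame are our assumptions; the actual spreadsheet carries more attributes and drives more than just the DDL.

import java.io.*;
import java.util.*;

public class DdlGen {
    public static void main(String[] args) throws IOException {
        BufferedReader in = new BufferedReader(new FileReader("items.csv"));
        List<String> cols = new ArrayList<String>();
        String line;
        while ((line = in.readLine()) != null) {
            String[] f = line.split(",");  // name,cType,dbType,offset
            cols.add("  " + f[0] + " " + f[2]);  // column name and its DB type
        }
        in.close();
        StringBuilder ddl = new StringBuilder("CREATE TABLE hk_frame (\n");
        for (int i = 0; i < cols.size(); i++) {
            ddl.append(cols.get(i)).append(i < cols.size() - 1 ? ",\n" : "\n");
        }
        ddl.append(");");
        System.out.println(ddl);
    }
}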

Figure 2: The spreadsheet file and the products generated from it.

3.3 Current Status

We are now testing the database at RIKEN in various hardware environments (different CPUs and hard disks) using PostgreSQL 7.3.x. We also plan to test Sybase ASE 12.5 and Oracle 9i to find the environment with the best (cost) performance.

The DB system is so large that data access time will be a serious problem; this is our main concern in the current tests. How to divide the data into (small) tables will be key to solving the problem; one possible scheme is sketched below. Detailed results will be reported elsewhere.
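Purely as an illustration of such a division, one could split the event stream into per-day tables and route each query by time, keeping individual tables small. The naming convention gsc_event_YYYYMMDD below is ours, not a scheme adopted by the MAXI team.

import java.text.SimpleDateFormat;
import java.util.*;

public class TableRouter {
    private static final SimpleDateFormat FMT = new SimpleDateFormat("yyyyMMdd");
    static { FMT.setTimeZone(TimeZone.getTimeZone("UTC")); }

    /** Map an event time (ms since the epoch) to its daily table name. */
    public static String tableFor(long millis) {
        return "gsc_event_" + FMT.format(new Date(millis));
    }

    public static void main(String[] args) {
        // A query for a given period touches only the tables for those days.
        System.out.println(tableFor(System.currentTimeMillis()));
    }
}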

References

Kawai, N., Negoro, H., Yoshida, A., & Mihara, T., eds. 2003, MAXI Workshop on AGN Variability (Tokyo: Seiyo Press)

Matsuoka, M. et al., 1997, SPIE, 3114, 414

Mihara, T. et al., 2002, SPIE, 4497, 173

Miyata, E. et al., 2002, SPIE, 4497, 11

