ACSIS is an auto-correlator intended for use with multiple beam receivers on the James Clerk Maxwell Telescope. The instrument is being designed and built at the DRAO, Penticton, in collaboration with the UKATC, Edinburgh, and the Joint Astronomy Centre, Hawaii.
At the heart of the correlator are 8 correlator modules, each containing 4 correlator boards. A correlator board can be configured to accept a signal with 1 GHz or 250 MHz bandwidth and can measure its auto-correlation function (ACF) with up to 4096 lags. Each correlator module contains a microcomputer which reads and coadds ACFs from the correlator chips, eliminating inconsistent lag data as it does so. The shortest time between data dumps from the correlator is 50 ms.
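As a rough sketch, the per-module coadd step might look like the following; the names `AcfDump` and `coadd`, and the flagging convention, are illustrative assumptions, not the actual ACSIS interface:

```cpp
#include <cstddef>
#include <vector>

// Hypothetical sketch of the per-module coadd step: ACFs read from the
// correlator chips are summed lag by lag, skipping lags flagged as
// inconsistent, so that bad hardware readings never enter the average.
struct AcfDump {
    std::vector<double> lags;   // up to 4096 lag values from one dump
    std::vector<bool>   bad;    // true where the lag data are inconsistent
};

void coadd(const AcfDump& dump, std::vector<double>& sum,
           std::vector<long>& counts) {
    for (std::size_t i = 0; i < dump.lags.size(); ++i) {
        if (dump.bad[i]) continue;  // eliminate inconsistent lag data
        sum[i]    += dump.lags[i];
        counts[i] += 1;             // per-lag count for later normalization
    }
}
```

Keeping a per-lag count allows lags rejected in some dumps to be normalized correctly when the coadded ACF is finally used.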
The reduction system delivered with ACSIS will be able to calibrate and grid the measured data into a data cube as they are taken. The reduction will be good enough that the result can be used to assess the quality of the observation. Ideally it will be so good as to require no further off-line reduction!
The current plan is for a 16 mixer B-band (330-370 GHz) receiver, HARP, to be built as the front-end for ACSIS.
The coordination of telescope, receiver and ACSIS during an observation will be managed by a Java tool called the TODD (Telescope Observation Designer and Driver) and a small computer called the Real-Time Sequencer (RTS). The TODD executes an observing recipe, sending out commands to the various subsystems to perform the non real-time coordination required. The RTS, programmed by the TODD, generates the hard real-time signals used to coordinate tightly coupled subsystems during a sequence of correlator integrations. Each integration result and subsystem state is tagged with a unique `sequence' number that will be used by the reduction system to knit together a data record containing both the data and a description of the system state at the time.
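The knitting step can be pictured as follows; the types, field names and `knit' function here are purely illustrative, not the real RTS or reduction-system interface:

```cpp
#include <map>
#include <string>

// Illustrative sketch: an integration result and a subsystem-state
// snapshot that share the same sequence number are joined into one
// self-describing data record for the reduction system.
struct Integration { double data; };
struct SystemState { std::string telescopePosition; };

struct DataRecord {
    Integration integ;
    SystemState state;
};

DataRecord knit(long seq,
                const std::map<long, Integration>& integrations,
                const std::map<long, SystemState>& states) {
    // Both maps are keyed by the unique sequence number issued by the RTS.
    return { integrations.at(seq), states.at(seq) };
}
```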
By developing new TODD scripts it is relatively easy to implement new observing modes.
Apart from some ancillary information, such as load temperatures from the front-end receiver, all stages in the reduction process before gridding can be performed using just the data passing through each correlator module. The gridding, however, requires data from all modules to be combined together, the necessary data exchange taking place over the Beowulf network and link switch.
In some observing modes the calibration of data must await a subsequent calibration measurement, and the system will have to buffer incoming spectrum data until that occurs. The longest waiting time is 5 minutes, requiring 0.4 GB of storage. In addition, space is needed to store the gridded datacube result of an observation. The largest cube will be 1024 x 1024 (spatial dimensions) x 512 (frequency dimension) which, spread over 8 machines, requires 0.3 GB per machine. To hold 2 spectrum buffers and one cube buffer, each machine will need 1.1 GB of memory.
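The cube figure can be checked with a little arithmetic, assuming 4-byte spectrum values (the stored word size is an assumption here, not stated above):

```cpp
#include <cstdint>

// Back-of-envelope check of the cube buffer figure quoted above.
// 1024 x 1024 spatial pixels x 512 channels at an assumed 4 bytes each,
// spread over the 8 machines of the cluster.
constexpr std::int64_t kNx = 1024, kNy = 1024, kNchan = 512;
constexpr std::int64_t kBytesPerValue = 4;
constexpr std::int64_t kCubeBytes = kNx * kNy * kNchan * kBytesPerValue;
constexpr double kPerMachineGiB =
    kCubeBytes / 8.0 / (1024.0 * 1024.0 * 1024.0);
```

This gives 2 GiB for the whole cube and 0.25 GiB per machine, consistent with the ~0.3 GB per machine quoted in the text.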
Reduction efficiency requires that the reduction processes be compiled code rather than interpreted scripts. Our tasks are written in C++ and make heavy use of code available from the AIPS++ class library and the Parkes MultiBeam Project. Efficiency also demands that the reduction be performed without storing partly reduced data to intermediate files; data must instead flow directly from one reduction process to the next.
The reduction recipe for a given mode is held in an ASCII file and is written in a simple C-like language. As in C you can write comments, define macros, ``include'' files and vary the recipe using conditional statements, functionality that is provided by the GNU C pre-processor. ``Include'' files will allow recipes to be developed from modules; for example, one module defining the cluster machines to be used, a second defining standard objects to calculate the system temperature, etc.
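A recipe module might look something like the following; the syntax shown is a sketch in the spirit of the C-like language described above, and the object and file names are hypothetical:

```
/* Hypothetical recipe fragment -- the actual ACSIS syntax may differ. */
#include "cluster_machines.inc"   /* module naming the Beowulf nodes     */
#include "tsys_objects.inc"       /* standard system-temperature objects */

#define NCHAN 4096

#ifdef POSITION_SWITCHED
Reducer  calib = TsysCalibrator(NCHAN);  /* only for this observing mode */
#endif
Archiver cube  = CubeArchiver("output.cube");
```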
The reduction system is constructed from the following objects:
Once configured at the start of an observation, the reduction system structure is fixed for that observation. The reduction process itself is data driven, triggered by the arrival of data records at a Sync task. Thereafter the self-describing data record plus the system configuration determine which Reducer and Archiver objects are called and how the data are reduced and stored.
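A minimal sketch of such data-driven dispatch, with all names hypothetical rather than taken from the ACSIS code, might be:

```cpp
#include <functional>
#include <map>
#include <string>
#include <vector>

// Illustrative sketch: each self-describing record names its kind, and
// the configuration fixed at the start of the observation maps that kind
// to the handlers (Reducer/Archiver stand-ins) that should process it.
struct DataRecord {
    std::string kind;            // e.g. "spectrum", "calibration", "last"
    std::vector<double> values;
};

class ReducerProcess {
public:
    using Handler = std::function<void(const DataRecord&)>;

    // Called once at configuration time; the wiring is then fixed.
    void configure(const std::string& kind, Handler h) {
        handlers_[kind].push_back(std::move(h));
    }

    // Returns false once the record marking itself as the last is seen,
    // signalling normal shutdown.
    bool dispatch(const DataRecord& rec) {
        for (const auto& h : handlers_[rec.kind]) h(rec);
        return rec.kind != "last";
    }

private:
    std::map<std::string, std::vector<Handler>> handlers_;
};
```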
Normal termination of an observation will occur when a data record describing itself as the last arrives from the Sync task. As this record passes through the Reducer and Archiver objects they will close their data files and shut down.
If an error occurs in any object, it will be reported to the TODD controlling the observation, which will then issue commands to terminate data taking.
Recipes will be ``compiled'' by a program developed using the GNU tools Bison and Flex. The compiler will check that the recipe is self-consistent, with no undefined or unused objects or circular definitions. It will then generate two files: the first, a Glish ``wiring'' procedure to initialize and link together the processes at the start of the observation; the second, a simplified representation of the various ReducerProcess types and the names and functions of their constituent Reducer and Archiver objects, to be used by the constructors of the ReducerProcesses.
The RT Display will be implemented using Glish/Tk for GUI construction, the free version of Qt for line graphics and probably the AIPS++ ``viewer'' tool for image display. Qt was found to be significantly more efficient than the Glish/PGPLOT package available with standard AIPS++ and was thus preferred in view of our need for high performance.