The Joint Astronomy Centre (JAC) operates two facilities, the sub-millimetre James Clerk Maxwell Telescope (JCMT) and the United Kingdom Infrared Telescope (UKIRT), both of which are now fully flexibly scheduled to make the best use of the prevailing weather conditions. In order to operate efficiently and effectively in the absence of the PI on whose behalf data are being taken, we have put in place a large number of integrated software systems providing a ``cradle-to-grave'' approach to the operation of the observatory (Economou et al. 2002; Economou, Jenness & Rees 2002).
The JAC flexible scheduling model (see e.g. Robson 2002) has two major points of departure from that commonly practiced at other facilities. First, we do not impose a deadline for submitting observations; PIs are free to modify their programmes throughout the semester in the light of incoming data. This model requires a seamless flow of observation definitions, data and information between the observatory and the PI.
The second difference from other queue-scheduled operational models is that we do not have the resources to employ dedicated staff observers, as our funding was not increased after the switch from classical scheduling to flexible observing. We therefore run a hybrid model in which some PIs do come out to observe, but are required to execute projects ``off the queue'' as soon as the weather departs from the parameters allocated to their own project. This means that we may have observers who are quite inexperienced with the instrument in use, and we therefore need a system which allows observations to be carried out correctly in such cases.
Necessary for any kind of queue-scheduled system is an Observation Preparation tool (also known as a ``Phase II'' tool) (Wright et al. 2001; Folger et al. 2002). This allows the PI to specify their observation completely, so as to avoid any subsequent interpretation error that could affect the quality of the data, since, as mentioned above, the actual observer may not be experienced with the instrument. It is also important for this information to be provided in a way that focuses on scientific descriptions of the observation (e.g. use wavelength ranges rather than filter names), so that a PI who may not be intimately familiar with the facility can specify their observations effectively.
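The idea of accepting a science-level description and resolving it to a facility-specific configuration can be sketched as follows. The filter names and bandpasses below are purely illustrative, not real UKIRT values, and the function is a hypothetical helper, not part of any JAC tool:

```python
# Hypothetical sketch: resolving a science-level wavelength request to an
# instrument filter, so the PI need not know facility-specific filter names.
# Bandpasses are illustrative (min, max) wavelengths in microns, NOT real values.
FILTERS = {
    "J": (1.17, 1.33),
    "H": (1.49, 1.78),
    "K": (2.03, 2.37),
}

def filter_for_wavelength(wavelength_um):
    """Return the name of the filter whose bandpass covers the wavelength."""
    for name, (lo, hi) in FILTERS.items():
        if lo <= wavelength_um <= hi:
            return name
    raise ValueError(f"No filter covers {wavelength_um} microns")
```

A PI asking for 2.2 microns would thus be mapped to the ``K'' filter without ever naming it.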
In order to respond rapidly to the changes seen in mid-infrared and sub-millimetre conditions, we practice dynamic scheduling; that is, we do not pre-generate night plans, but ask at the end of each observing block what the best thing to do next is, given the known weather and project parameters. The success of this approach also rests, of course, on the ability of our facilities to transition seamlessly from one instrument to another with little or no overhead. In software terms, when the PI submits a project with the Phase II tool, our server splits it into individual observation blocks that are stored in a database; a tool is provided to the observer that enables them to query the database for what to do next.
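The ``what to do next'' query can be sketched in miniature. The data model below (priorities, an opacity limit, an elevation limit) is an assumption for illustration, not the actual JAC schema:

```python
# Minimal sketch of a dynamic-scheduling query: given the current conditions,
# return the best pending observation block instead of following a
# pre-generated night plan. Fields and limits are assumed, not the JAC schema.
from dataclasses import dataclass

@dataclass
class ObsBlock:
    project: str
    priority: int          # lower value = higher science priority
    max_tau: float         # worst sky opacity the block tolerates
    min_elevation: float   # degrees above horizon
    done: bool = False

def next_block(blocks, current_tau, elevation_of):
    """Pick the highest-priority incomplete block observable right now."""
    candidates = [
        b for b in blocks
        if not b.done
        and current_tau <= b.max_tau
        and elevation_of(b.project) >= b.min_elevation
    ]
    return min(candidates, key=lambda b: b.priority, default=None)
```

When the weather degrades, blocks with tight opacity requirements simply drop out of the candidate list and the next query returns work suited to the poorer conditions.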
Critical to the success of flexible scheduling are robust, automated data reduction pipelines that produce high-quality reduced data at the telescope to provide instant feedback to the observer and for immediate dissemination to the absent PI. At the JAC facilities we use ORAC-DR (Cavanagh et al. 2003; Currie 2004), which in its various instances for each of our instruments provides advanced data products, such as source catalogues (imaging) and fully extracted flux-calibrated spectra (spectroscopy).
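The recipe-driven design of such pipelines can be illustrated with a toy sketch: a recipe is an ordered list of reduction steps applied to each incoming frame. The primitives below are illustrative stand-ins, not real ORAC-DR code:

```python
# Toy sketch of a recipe-driven pipeline: each incoming frame is passed
# through an ordered list of reduction primitives. Primitives are stand-ins.
def subtract_dark(frame):
    frame["dark_subtracted"] = True
    return frame

def flat_field(frame):
    frame["flat_fielded"] = True
    return frame

RECIPE = [subtract_dark, flat_field]

def reduce(frame, recipe=RECIPE):
    """Run each primitive in turn, as the pipeline does per incoming frame."""
    for primitive in recipe:
        frame = primitive(frame)
    return frame
```

Because the recipe is data rather than code, an instrument-specific instance of the pipeline amounts to a different list of primitives rather than a rewrite.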
A whole suite of software systems is in use to manage the flow of information about a project, allow data eavesdropping and access, and automatically generate highly detailed project status reports for both PIs and staff. These are described in greater detail in Delorey et al. (2004), where a simple diagram of the general software architecture can also be found. Such systems are critical to keeping the large number of people involved in the execution of projects informed, and also generate valuable meta-data for any archive.
The presence of all the systems described above makes possible a number of automated time-critical operational modes, such as the publication of discoveries of transient events (e.g. flares of variable sources, minor planetary bodies) as well as rapid follow-up of transient events discovered elsewhere (e.g. supernovae, gamma-ray bursters). Because our architecture is based entirely upon machine-readable information and well-defined interfaces, we can integrate into alert systems initially developed for robotic telescopes, such as eSTAR (Allan et al. 2004). An intelligent agent that is monitoring a detection channel of, say, a gamma-ray burster, can submit an observing block to our system for sub-millimetre follow-up using an override priority code; the next time the observer at the telescope queries the system for an appropriate block they will find this observation and execute it; ORAC-DR then reduces the data and provides the information to another intelligent agent which takes appropriate action (such as informing the PI or submitting additional follow-up observations, such as spectroscopy, as appropriate).
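The override path described above can be sketched as follows. The alert fields, `OVERRIDE_PRIORITY` value and queue representation are assumptions for illustration, not the eSTAR or JAC interfaces:

```python
# Illustrative sketch of a transient-alert handler: an agent receives a
# machine-readable alert and submits an observation block with an override
# priority, so the next "what to do next" query surfaces it first.
# Field names and the priority scheme are assumptions, not a real API.
OVERRIDE_PRIORITY = 0  # sorts ahead of all normal queue priorities

def handle_alert(alert, queue):
    """Turn an external transient alert into an override queue entry."""
    block = {
        "target": alert["position"],
        "project": alert["followup_project"],
        "priority": OVERRIDE_PRIORITY,
    }
    queue.append(block)
    # The queue is consulted priority-first, so the override runs next.
    queue.sort(key=lambda b: b["priority"])
    return block
```

The key point is that no human intervenes between the alert and the observation appearing at the head of the queue; the observer simply finds it there on their next query.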
There are many benefits to complex integrated software systems such as those described above. Astronomers who have been awarded telescope time benefit from the maximization of their chances of getting their project completed that comes with flexible scheduling. They have to give up very little for this advantage, since our operational mode retains many of the advantages of classical scheduling, such as the ability to constantly modify one's observing strategy, as well as eavesdropping access to data as it is being taken. Staff members make use of software systems that manage the bureaucracy of flexible scheduling (such as time accounting and removal of exhausted projects), thus freeing them to provide science-level support to projects. Observers at the telescope can carry out observations for others confident that the PIs are getting what they actually asked for. Astronomers interested in transient events can obtain data without the need to be in constant communication with various facilities. Archive users end up being the beneficiaries of significant quantities of meta-data that provide an audit trail for the provenance and quality of their data. And last, but not least, the facility gets to maximize its scientific output without increasing its operational cost.
While the wide variety of automated and autonomous software systems is geared toward maximizing observing efficiency at present facilities, there is a concern as to their long-term effects on the community. These are largely speculative at this point; however, we believe we have recently seen signs of both of the problems described below. With the move toward automated and staffed observing, not to mention data mining, there are fewer opportunities for younger astronomers to cut their teeth on the many challenges of observational astronomy by gaining practical experience at a data-taking facility.
The first reason for concern relates to the pool of astronomers from which we draw our instrument builders. These are in every case highly experienced users of our facilities who have moved on to instrumentation labs. Their innovative designs are often based on a detailed understanding of current instrumentation and on first-hand experience of data analysis at the wavelengths of their choice.
The second reason for concern addresses software development at modestly funded facilities such as our own. Our ability to provide comprehensive software suites to our users hinges on our ability to hire staff experienced in both scientific data analysis and software engineering. These individuals have a top-to-bottom understanding of the products they develop, often fulfilling the critical ``hero programmer'' role (Lupton et al. 2001). In the absence of such people, much larger teams containing both astronomers and industry programmers under formal project management need to be formed. This is a model that is out of reach for many world-class facilities that are subject to stringent funding constraints.
We suspect that the trend of serving the astronomical community highly processed data products from centralized archive facilities will only grow in the future. In order to retain a pool of technical expertise in the astronomical community, both in instrumentation and software, we would like to see the placement of post-graduate students at observatories working in the more practical areas of observational astronomy.
Allan, A., Naylor, T., Steele, I., Carter, D., Jenness, T., Economou, F. & Adamson, A. J. 2004, this volume, 597
Cavanagh, B., Hirst, P., Jenness, T., Economou, F., Currie, M. J., Todd, S. & Ryder, S. D. 2003, in ASP Conf. Ser., Vol. 295, Astronomical Data Analysis Software and Systems XII, ed. H. E. Payne, R. I. Jedrzejewski, & R. N. Hook (San Francisco: ASP), 237
Currie, M. J. 2004, this volume, 460
Delorey, K., Jenness, T., Cavanagh, B. & Economou, F. 2004, this volume, 728
Economou, F., Jenness, T. & Rees, N. P. 2002, Proc. SPIE, 4844, 321
Economou, F., Jenness, T., Tilanus, R. P. J., Hirst, P., Adamson, A. J., Rippa, M., Delorey, K. K. & Isaak, K. G. 2002, in ASP Conf. Ser., Vol. 281, Astronomical Data Analysis Software and Systems XI, ed. D. A. Bohlender, D. Durand, & T. H. Handley (San Francisco: ASP), 488
Folger, M., Bridger, A., Dent, B., Kelly, D., Adamson, A., Economou, F., Hirst, P. & Jenness, T. 2002, in ASP Conf. Ser., Vol. 281, Astronomical Data Analysis Software and Systems XI, ed. D. A. Bohlender, D. Durand, & T. H. Handley (San Francisco: ASP), 453
Lupton, R. H., Gunn, J. E., Ivezic, Z., Knapp, G. R., Kent, S. & Yasuda, N. 2001, in ASP Conf. Ser., Vol. 238, Astronomical Data Analysis Software and Systems X, ed. F. R. Harnden, Jr., F. A. Primini, & H. E. Payne (San Francisco: ASP), 269
Robson, I. 2002, Proc. SPIE, 4844, 86
Wright, G. S. et al. 2001, in ASP Conf. Ser., Vol. 238, Astronomical Data Analysis Software and Systems X, ed. F. R. Harnden, Jr., F. A. Primini, & H. E. Payne (San Francisco: ASP), 137