
Ferguson, H. C. 2000, in ASP Conf. Ser., Vol. 216, Astronomical Data Analysis Software and Systems IX, eds. N. Manset, C. Veillet, D. Crabtree (San Francisco: ASP), 395

The Hubble Deep Fields

H. C. Ferguson
Space Telescope Science Institute, Baltimore, MD

Abstract:

The Hubble Deep Field (HDF) observations in 1995 and 1998 represent a watershed, not only in the depth and quality of information on distant galaxies, but also in the manner in which astronomical facilities are brought to bear on important questions in astrophysics. This review presents some of the scientific highlights of the HDF observations and discusses some of the technical and sociological aspects of the campaigns.


1. Introduction

In September 1840, the United States Exploring Expedition, led by Lieutenant Charles Wilkes, arrived in Hawaii. On board one of the ships was a young geologist named James Dwight Dana. Earlier in the expedition, Dana had already undertaken extensive studies of volcanos and coral atolls, but it was in Hawaii that a synthesis of these observations began to emerge. Dana was able to recognize different shapes and eruptive styles of volcanos, and to recognize these forms even when they had later been degraded by erosion or collapse. For example, he noted that Mauna Kea and Mauna Loa were both enormous shield volcanos, but that parts of Mauna Kea were gullied with huge ravines: a clear sign of the action of wind and water over time. By estimating the degree of modification, he was able to estimate the relative ages of volcanos. Dana noted that many of the Pacific island chains, Hawaii included, run parallel to each other, and show a systematic progression in age, with the oldest volcanos to the northwest and the youngest toward the southeast. This evidence for systematic age progressions was crucial to the development of the theory of plate tectonics, with the age sequence of islands indicating the direction and rate of motion of the Pacific plate over a series of hot spots arising from deeper in the earth.

In much the same way, astronomers studying the distant universe seek to identify and quantify evolutionary features and age progressions among the myriads of galaxies distributed on the sky. Among the most powerful tools we have for this task is the Hubble Space Telescope, which currently offers the highest resolution and the greatest point-source sensitivity of any facility working at optical wavelengths. As it is also one of the most expensive scientific instruments ever built, time on HST is precious and it is important to consider novel ways to maximize its scientific productivity.

The Hubble Deep Field (HDF) observations, one of a northern field in 1995, and the other of a southern field in 1998, represent a significant advance in observational data on distant field galaxies (and faint stars in our own galaxy), and have become among the most intensely studied areas of sky. The first half of this review presents highlights of the scientific results, while the second half focuses on scheduling and policy issues of possible relevance to future HDF-like campaigns.

2. The Observations

The Hubble Deep Fields were carefully selected to be free of bright stars, radio sources, nearby galaxies, etc., and to have low Galactic extinction. The HDF-S selection criteria included finding a QSO that would be suitable for studies of absorption lines along the line of sight. Field selection was limited to the ``continuous viewing zone'' around $\delta = \pm 62^\circ$, because these declinations allow HST to observe, at suitable orbit phases, without interference by earth occultations. Apart from these criteria, the HDFs are typical high-galactic-latitude fields; the statistics of field galaxies or faint Galactic stars should be free from a priori biases. The HDF-N observations were taken in December 1995 and the HDF-S in October 1998. Both were reduced and released for study within six weeks of the observations. Many groups followed suit and made data from follow-up observations publicly available through the world wide web.

Details of the HST observations are set out in Williams et al. (1996) for HDF-N, and in a series of papers for the southern field (e.g. Williams et al. 2000). The HDF-N observations primarily used the WFPC2 camera, while the southern observations also took parallel observations with the new instruments installed in 1997: the Near Infrared Camera and Multi-Object Spectrograph (NICMOS) and the Space Telescope Imaging Spectrograph (STIS). The area of sky covered by the observations is small: $5.3$ arcmin$^2$ in the case of WFPC2 and $0.7$ arcmin$^{2}$ in the case of STIS and NICMOS for HDF-S. The WFPC2 field subtends about 4.6 Mpc at $z \sim 3$ (comoving, for $\Omega_M, \Omega_\Lambda, \Omega_{\rm tot} = 0.3, 0.7, 1.0$). This angular size is small relative to scales relevant for large-scale structure. In particular the correlation length $r_0 \sim 5 h^{-1} \,\rm Mpc$ typical of normal galaxies at $z = 0$ (Tucker et al. 1997; Ratcliffe et al. 1998) is larger than the comoving width of the field.
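As a concrete check on these numbers, the short Python sketch below recomputes the comoving extent of the WFPC2 field at $z \sim 3$ using the astropy cosmology utilities. The assumed field width of 2.5 arcmin and the choice $H_0 = 70\,\rm km\,s^{-1}\,Mpc^{-1}$ are illustrative assumptions for the irregular WFPC2 footprint, not values quoted in the papers above.

\begin{verbatim}
# Minimal sketch: comoving size of the WFPC2 field at z ~ 3 for
# (Omega_M, Omega_Lambda) = (0.3, 0.7).  The 2.5 arcmin field width and
# H0 = 70 km/s/Mpc are assumptions for illustration only.
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

z = 3.0
theta = (2.5 * u.arcmin).to(u.rad).value        # assumed field width [rad]

# Comoving transverse distance (equals the comoving distance when
# Omega_tot = 1), converted to a comoving field width.
width = theta * cosmo.comoving_transverse_distance(z)
print(f"comoving field width at z = {z}: {width:.1f}")        # ~4.7 Mpc

# Compare with the z = 0 correlation length r0 ~ 5/h Mpc quoted in the text.
r0 = 5.0 / cosmo.h * u.Mpc
print(f"r0 ~ {r0:.1f}, larger than the field width")
\end{verbatim}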

During the observations, the telescope pointing direction was shifted frequently in small dithering motions, so that the images fell on different detector pixels. The final images were thus nearly completely free of detector blemishes, and were sampled at significantly higher resolution than the original pixel sizes of the detectors. The technique of variable-pixel linear reconstruction (drizzling; Hook & Fruchter 1997) was developed for the HDF and is now a popular tool for reconstructing images for a variety of applications (Hook, this conference).
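The actual drizzle algorithm distributes the flux of each (shrunken) input pixel onto the output grid in proportion to the overlap area and carries weight maps; the toy sketch below illustrates only the underlying idea, namely that known sub-pixel dithers allow the frames to be accumulated on a finer output grid. It is not the Hook & Fruchter implementation.

\begin{verbatim}
# Toy illustration of combining dithered frames on a finer grid.  Real
# drizzle (Hook & Fruchter 1997) distributes flux by pixel-overlap areas
# ("drops") and propagates weight maps; this sketch simply accumulates each
# input pixel at its dithered position on a finer grid.
import numpy as np

def combine_dithered(frames, dithers, scale=2):
    """frames: list of 2-D arrays; dithers: (dy, dx) of each frame in input
    pixels; scale: output sampling relative to the input pixel size."""
    ny, nx = frames[0].shape
    out = np.zeros((ny * scale, nx * scale))
    wht = np.zeros_like(out)
    yy, xx = np.mgrid[0:ny, 0:nx]                      # input pixel centres
    for frame, (dy, dx) in zip(frames, dithers):
        oy = np.rint((yy + dy) * scale).astype(int)    # output row of each pixel
        ox = np.rint((xx + dx) * scale).astype(int)
        ok = (oy >= 0) & (oy < out.shape[0]) & (ox >= 0) & (ox < out.shape[1])
        np.add.at(out, (oy[ok], ox[ok]), frame[ok])    # accumulate flux
        np.add.at(wht, (oy[ok], ox[ok]), 1.0)          # and coverage weight
    return np.where(wht > 0, out / wht, 0.0), wht
\end{verbatim}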

3. Scientific Highlights

3.1. Stars

Within months of the release of the HDF-N observations, several groups had counted the point sources and compared their color-magnitude distribution to the expectation from Galactic star-count models (e.g. Flynn et al. 1996; Elson et al. 1996; Mendez et al. 1996). The HDF-N contains fewer than 10 point sources with $R < 26$ and colors consistent with low-mass main sequence stars. This drives the general conclusion (supported by other HST fields) that hydrogen-burning stars with masses less than 0.3 $M_\odot$ account for less than 1% of the total mass of the Galactic halo.

In addition to the handful of red dwarfs in the HDF-N, the field contains, to a limiting magnitude of $I = 28$, about 50 unresolved objects with relatively blue colors. Recent work by Hansen (1998, 1999) has shown that at low metallicity, molecular hydrogen forms in white dwarf atmospheres at temperatures below about 3500 K, increasing the opacity in the red and causing the oldest, lowest-luminosity white dwarfs to become blue as they cool. Two groups have presented tentative evidence for the existence of such a population of blue white dwarfs in the HDFs. Ibata et al. (1999) analyzed proper motions of point-like sources in two epochs of HDF-N data, separated by two years. They identify five sources with proper motions more than $3 \sigma$ above the measurement uncertainty. All have colors plausibly consistent with the Hansen models. Mendez & Minniti (1999; hereafter MM) compare the number of point-like sources in the HDF-N and the HDF-S and find more in the southern field, even in the blue part of the CMD. More stars are expected in the HDF-S because the line of sight passes through more of the Galactic halo, so the asymmetry is tentative evidence that the blue sources might be stars.

While these developments are exciting, the evidence is not conclusive. The sources identified by MM are brighter than those found to have proper motions by Ibata et al. (1999). The two studies thus appear to be inconsistent, in that the closer WDs ought to have larger proper motions. Also, the enhancement in faint blue point-like objects in HDF-S relative to HDF-N is sensitive to the magnitude limit chosen. If MM had included objects down to $I = 29$ in their sample, they would have found more objects in HDF-N than HDF-S.

3.2. Galaxies

In this short review it is difficult to capture even the highlights of the diverse studies of distant galaxies that have made use of the HDF. The images have been used for traditional comparison of model-predictions to number-counts and color distributions (e.g. Metcalfe et al. 1996; Ferguson & Babul 1998), for quantitative estimates of faint-galaxy morphology (Abraham et al. 1996; Marleau & Simard 1998), for studies of galaxy size evolution (Roche et al. 1998; Simard et al. 1999; de Jong & Lacey 1999), for clustering studies (Colley et al. 1996; Villumsen 1997; Connolly et al. 1999; Arnouts et al. 1999), and for identification and study of galaxies at $z > 2.5$ (Lanzetta et al. 1996; Steidel et al. 1996; Lowenthal et al. 1997).

Perhaps the greatest source of discussion in extragalactic circles has been the attempt to combine the HDF observations with other data to determine the cosmic star-formation rate vs. redshift. While steps in this direction had been made prior to the HDF (Cowie et al. 1988; Lilly et al. 1996; Fall et al. 1996), the HDF analysis of Madau et al. (1996) was the first to incorporate emission from galaxies at $z > 1$. This initial analysis suggested that the global star-formation rate summed over all galaxies at $z \sim 3$ was about a factor of three higher than the rate at present. The star-formation rate increases with redshift out to a peak near $z \sim 1.5$; from $z=1$ to the present it has declined by about a factor of 10. The plot of the metal-formation rate vs. redshift has provided a good foil for discussion of the HDF and for comparisons to theoretical models. There was remarkable agreement between the metal-enrichment rate derived from the galaxy luminosities, the results of Pei & Fall (1995), and the predictions of hierarchical models (e.g. White & Frenk 1991; Cole et al. 1994; Baugh et al. 1998), all of which show a peak in the metal production rate at $z \sim 1-2$. Subsequent work in this area has focused on (1) galaxy selection; (2) effects of dust; and (3) the connection of the Madau diagram to general issues of galaxy evolution and cosmic chemical evolution.

Figure: Star-formation rate density vs. redshift derived from UV luminosity density. The $z > 2$ points are from Lyman-break objects in the HDF-N (open triangles), in the HDF-S (filled triangles) and in the Steidel et al. (1999) ground-based survey (X's). The luminosity density has been determined by integrating over the luminosity function and correcting for extinction following the prescription of Steidel et al. (1999). Possible contributions from far-IR and sub-mm sources are not included. References for the points at $z < 2$ are given by Madau et al. (1996).
\begin{figure}
\epsscale{0.7}
\centerline{\plotone{O1-01a.ps}}
\end{figure}

Figure 1 shows an updated version based on the latest high-redshift data from HDF-N, HDF-S, and the Steidel et al. (1999) spectroscopic survey. Galaxy selection is based on a refined set of color criteria from Dickinson (1998), and a correction for dust extinction has been applied to all the high-redshift data points following the prescription of Steidel et al. (1999). Many of the discussions of the Madau diagram have centered on selection effects and whether the data actually support a decrease in star-formation rate for $z > 2$. Star formation occurring in dusty or low-surface-brightness galaxies (or parts of galaxies) may be unaccounted for in the HDF source counts. Estimates of the typical extinction correction in $z > 2$ galaxies range from a factor of 3 to a factor of 15 (Sawicki & Yee 1998; Pettini et al. 1998; Meurer et al. 1999).
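To make the role of the dust correction explicit, the sketch below shows the bookkeeping behind a single point in a diagram like Figure 1: a UV luminosity density is converted to a star-formation-rate density and bracketed by the factor-of-3 and factor-of-15 corrections quoted above. The conversion constant is the commonly used Salpeter-IMF calibration and the input luminosity density is hypothetical; neither value is taken from the papers cited here.

\begin{verbatim}
# Hedged sketch of one "Madau diagram" point.  KAPPA_UV is the commonly used
# Salpeter-IMF UV calibration and RHO_UV is a hypothetical, dust-uncorrected
# UV luminosity density; both are assumptions for illustration, not values
# from the HDF papers.
KAPPA_UV = 1.4e-28      # Msun/yr per (erg/s/Hz)
RHO_UV = 1.0e26         # erg/s/Hz/Mpc^3, hypothetical observed value

for dust_factor in (1.0, 3.0, 15.0):
    sfr_density = KAPPA_UV * RHO_UV * dust_factor    # Msun/yr/Mpc^3
    print(f"extinction correction x{dust_factor:>4.0f}: "
          f"rho_SFR = {sfr_density:.2e} Msun/yr/Mpc^3")
\end{verbatim}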

The recent NICMOS HDF-N observations provide some additional insight into possible dust corrections. Ferguson (1999) compared spectral-energy distributions of HDF galaxies with spectroscopic redshifts $2.5 < z < 3.5$ and found that the typical best-fit spectral-energy distributions had constant star formation, ages $10^{7-9}$ yr, and reddening $0.1 < E(B-V) < 0.4$. Nevertheless, observations of the HDF at 850$\mu$m with the SCUBA bolometer array on the JCMT, and at 6.5 and 15$\mu$m with ISO, reveal a few sources which would dominate the overall luminosity density and star-formation rate if they turn out to be at high redshift (Hughes et al. 1998; Aussel et al. 1999).

An important use of the metal-enrichment rate derived from the HDF and other surveys is the attempt to close the loop: to show that the emission history of the universe produces the metal abundances and stellar population colors we see at $z \sim 0$. Madau et al. (1996) made a first attempt at this, concluding that the metals whose formation we trace via the stellar UV radiation that escapes from galaxies constitute a substantial fraction of the entire metal content of galaxies. Fall et al. (1996), Calzetti & Heckman (1999) and Pei et al. (1999) have attempted to incorporate dust and chemical evolution in a self-consistent way. The models include a substantial amount of obscured star-formation; more than 50% of the UV radiation is reprocessed by dust. This comes about naturally as a result of the model assumptions about correlations of dust, gas, and metals. The obscuration corrections increase the value of the metal-enrichment rate derived from samples already detected in optical surveys, but do not introduce whole classes of completely dust-obscured objects. While there are significant differences in the inputs and assumptions of the models, in each case a model with a peak in metal-enrichment rate at $z \sim 1-1.5$ is found consistent with a wide variety of observations. In particular the models can simultaneously fit the COBE DIRBE and FIRAS measurements of the cosmic infrared background and the integrated light from galaxy counts.

Overall, the success of these consistency checks is quite remarkable. Various imagined populations of galaxies (dwarfs, LSB galaxies, highly dust obscured objects, etc.) now seem unlikely to be cosmologically important. The fact that the UV emission, gas metallicities, and IR backgrounds all appear capable of producing a universe like that we see today leaves very little room for huge repositories of gas and stars missing from either our census at $z = 0$ or our census at high redshift.

Nonetheless, there is room for caution in this conclusion. X-ray observations show that, at least in clusters of galaxies, the mass of metals ejected from galaxies exceeds that locked inside stars by a factor of 2-5 (Mushotzky & Loewenstein 1997). If the same factor applies to galaxies outside clusters (Renzini 1997), then the local mass density of metals greatly exceeds the integral of the metal-enrichment rate, implying that most star-formation is hidden from the UV census. Various lines of evidence also indicate that elliptical galaxies (both inside and outside of clusters) and the bulges of luminous early-type spirals are very old (Renzini 1999; Goudfrooij et al. 1999). The requirement for the early formation of metals in these systems appears to be at odds with the inferences from the models described above. Renzini (1999) estimates that 30% of the present density of metals must have formed by $z \sim 3$, while the best-fit models to the evolution of the luminosity density have only 10% formed by then. It also remains a major challenge to ascertain how the metals got from where they are at $z \sim 1$ to where they are today. The bulk of the metals locked up in stars at $z = 0$ are in luminous, normal, elliptical and spiral galaxies. These galaxies are observed to undergo only mild luminosity and density evolution out to $z \sim 1$; an increasing fraction of irregular and compact galaxies is responsible for a large portion of the UV luminosity density at $z=1$. How do the metals produced in these galaxies find their way into normal giant galaxies at the present?
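The tension between these two numbers is straightforward to reproduce: given any assumed star-formation history $\dot\rho_*(z)$, the fraction of stars (and hence, to first order, metals) formed above a given redshift follows from an integral over cosmic time. The sketch below uses an illustrative double-power-law history peaking near $z \sim 1.5$-$2$; it is not the best-fit model discussed above, and simply shows the kind of calculation involved.

\begin{verbatim}
# Sketch of the metal-budget bookkeeping: integrate an assumed star-formation
# history over cosmic time and report the fraction formed at z > 3.  The
# double-power-law shape below is illustrative only, not the best-fit model
# referred to in the text.
import numpy as np
from astropy.cosmology import FlatLambdaCDM
import astropy.units as u

cosmo = FlatLambdaCDM(H0=70, Om0=0.3)

def sfr_density(z):
    # Rises to a peak near z ~ 1.5-2, then declines (arbitrary normalisation).
    return (1 + z)**2.7 / (1 + ((1 + z) / 2.9)**5.6)

z = np.linspace(0.0, 10.0, 2001)
dz = z[1] - z[0]
dtdz = (1.0 / ((1 + z) * cosmo.H(z))).to(u.Gyr).value   # |dt/dz| in Gyr

integrand = sfr_density(z) * dtdz       # stars formed per unit redshift
total = np.sum(integrand) * dz
early = np.sum(integrand[z > 3.0]) * dz
print(f"fraction formed at z > 3: {early / total:.0%}")
\end{verbatim}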

4. Scheduling

A key feature of the HDFs is their location in the HST continuous viewing zone. While this in principle can increase the observing efficiency by up to a factor of two, there are complications that make it far from trivial to do so. In particular, scattered light from the bright earth can be a limiting factor for some of the observations, and observations with different filters or optical elements need to coincide with specific phases in the HST orbit to make the best use of the time.

Scheduling of the observations made use of a numerical model (SEAM) of the HST background. Starting with the requirement that the WFPC-2 observations should have nearly equal times in all four photometric bands, a semi-automated scheduling system was developed to lay down the exposures efficiently in places where the scattered background would have the least impact. For the HDF-N observations, the system operated in several passes, first filling in blocks of bright and dark time that were unaffected by SAA passages, then filling in the largest remaining blocks (allowing those exposures to cross the day-night boundary), and filling in the remaining small opportunities last. The noise contributions from sky background and scattered light, dark current, and readout were computed for each exposure, and used together to compute the overall limiting magnitude for the campaign in each band. As various changes were made to the initial schedule (e.g. several gaps had to be inserted in the campaign to carry out time-critical observations for other HST programs), the effect on the S/N could be assessed. After the sequence of filters was laid down, the sequence of small pointing offsets (over a total span of 2.6 arcsec) was determined. This dithering reduces the photometric errors due to flat-fielding uncertainties and also allows reconstruction of a higher-resolution image, because sources are sampled in different portions of a pixel at each dither position. For HDF-N, concerns about the ease and efficiency of cosmic-ray removal led us to try to take at least five exposures at each dither position, and to keep the observations taken at the same dither position nearly contiguous in time. The observing sequence was initially laid down about 6 months before the HDF-N campaign, with the detailed start and stop times adjusted about 6 weeks prior to the observations to take into account the shifts in the SAA passages in the most up-to-date orbit ephemeris.
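A compressed illustration of the limiting-magnitude bookkeeping is given below. It rolls per-exposure sky, scattered-light, dark-current, and read-noise terms into a stacked point-source signal-to-noise and solves for the magnitude at which the S/N reaches a chosen threshold. The zeropoint, aperture size, and noise rates are placeholder numbers, and the code is a sketch of the idea rather than the SEAM-based tool actually used.

\begin{verbatim}
# Hedged sketch (not the actual SEAM-based scheduler) of how per-exposure
# noise terms combine into a campaign limiting magnitude.  The zeropoint,
# aperture size and noise rates are placeholders.
import math

ZEROPOINT = 22.9     # mag of a source giving 1 count/s (placeholder)
NPIX = 12            # pixels in the photometric aperture (placeholder)

def campaign_limit(exposures, snr=10.0):
    """exposures: dicts with exptime [s], sky, scatter, dark [counts/s/pix]
    and read_noise [counts/pix].  Returns the magnitude at which the stacked
    point-source S/N equals `snr`."""
    variance, exptime = 0.0, 0.0
    for e in exposures:
        background = e["sky"] + e["scatter"] + e["dark"]
        variance += NPIX * (background * e["exptime"] + e["read_noise"] ** 2)
        exptime += e["exptime"]
    # Solve S/N = C / sqrt(C + variance) for the total source counts C.
    counts = 0.5 * (snr**2 + math.sqrt(snr**4 + 4 * snr**2 * variance))
    return ZEROPOINT - 2.5 * math.log10(counts / exptime)

# Illustrative mix of dark-time and bright-time exposures.
dark = {"exptime": 2500, "sky": 0.03, "scatter": 0.00, "dark": 0.004, "read_noise": 5.0}
bright = {"exptime": 2500, "sky": 0.03, "scatter": 0.05, "dark": 0.004, "read_noise": 5.0}
print(f"10-sigma limit: {campaign_limit([dark] * 40 + [bright] * 20):.2f} mag")
\end{verbatim}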

The initial sequencing of observations was carried out with a series of awk scripts. The first few scripts took care of sequencing the observations based on a list of exposure-time goals, some simple rules for deciding which filter to schedule at a given opportunity, and a timeline of SAA and day-night passages. The output of the program was a list of start and stop times for each exposure with each filter, together with detailed S/N statistics. This list was hand-edited to rearrange a few exposures, and then another awk script turned the list into a fully formatted phase-2 proposal that could be fed to the scheduling system.
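For illustration, a present-day re-implementation of that sequencing logic might look like the Python sketch below: walk a timeline of observing windows, skip SAA passages, and at each opportunity assign the filter with the largest remaining exposure deficit, reserving dark time preferentially for the bluer bands, where scattered earthshine matters most. The filter names are the real WFPC2 bands used for the HDF, but the scheduling rule, exposure goals, and toy timeline are simplifications, not the actual awk logic.

\begin{verbatim}
# Illustrative greedy sequencer in the spirit of the awk scripts (a
# simplification, not the actual HDF logic).  Dark windows go preferentially
# to the blue filters, which suffer most from scattered earthshine.
def sequence(windows, goals, blue=("F300W", "F450W")):
    """windows: list of (length_s, is_dark, in_saa); goals: filter ->
    remaining exposure time [s].  Returns (window index, filter, seconds)."""
    remaining = dict(goals)
    plan = []
    for i, (length, is_dark, in_saa) in enumerate(windows):
        pool = [f for f in remaining if remaining[f] > 0]
        if in_saa or not pool:
            continue                                  # unusable window, or done
        if is_dark:
            pool = [f for f in pool if f in blue] or pool
        choice = max(pool, key=lambda f: remaining[f])   # biggest deficit first
        used = min(length, remaining[choice])
        remaining[choice] -= used
        plan.append((i, choice, used))
    return plan

goals = {"F300W": 40000, "F450W": 30000, "F606W": 30000, "F814W": 30000}
windows = [(2400, i % 3 != 0, i % 7 == 6) for i in range(60)]   # toy timeline
for idx, filt, sec in sequence(windows, goals)[:5]:
    print(f"window {idx:2d}: {filt} for {sec:.0f} s")
\end{verbatim}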

Scheduling was considerably more complicated for HDF-S. A sample portion of the timeline is shown in Figure 2. As for HDF-N, the practicalities of using the HST time efficiently partly drove the scientific decisions about which modes to use. For example, while one could make a compelling case that 150 orbits of UV imaging or UV spectroscopy of the QSO with STIS would be extremely exciting, in practice only half that exposure time could be achieved because the UV MAMA detectors could not be operated in orbits containing SAA passages. Similarly, a 150 orbit CCD exposure with STIS would be extremely interesting, but half the time would be largely wasted due to the bright sky background on the day side of the orbit. The final set of exposure times with the different filters and gratings was determined after extensive consideration of what could be done most efficiently by HST during the campaign.

Figure: A portion of the HDF-S timeline. The curve near the bottom shows the model-predicted background in the WFPC2 F606W filter. The various line segments are (from bottom to top) the WFPC, NICMOS and STIS observing modes (and data-set IDs in small type), various spacecraft activities, such as a guide-star re-acquisition, a South Atlantic Anomaly passage, and the times of HST communications contact with the TDRS satellite.
\begin{figure}
\epsscale{1.0}
\plotone{O1-01b.ps}
\end{figure}

The dithering strategy for HDF-S was completely different from HDF-N. The desire to keep the QSO near the center of the STIS field meant that the different dither positions for the WFPC2 and NICMOS imaging had to be accomplished largely by rotating around the STIS position. Based on favorable experience with combining dithered images since the HDF-N, relatively large dithering motions were considered acceptable for WFPC-2 and STIS, and indeed preferable for NICMOS to allow construction of a sky flat. Also, considerable experience had been gained in doing cosmic ray rejection as an iterative part of the image reconstruction process. It was thus no longer considered a requirement to have multiple frames at the same pointing position.

The overhead for changing the telescope boresight roll angle is considerably longer than the overhead for small motions; the overall program therefore had to be orchestrated to try to keep the number of orientation changes small, while still giving adequate sampling across NICMOS and WFPC-2. Scripts similar to those used for HDF-N were used to lay down the exposure sequences, but, partly due to the more complicated interactions and overheads in dealing with three separate instruments, considerably more iterations through the entire HST scheduling system and more hand-editing of the phase-2 proposal were needed to arrive at a final schedule.

In the end, both HDF-N and HDF-S spent about 70% of the available time with the shutters open taking images and spectra, roughly a factor of two higher than the standard HST efficiency when measured the same way.

5. The HDF and Community Projects

The 1995 decision by STScI director Bob Williams to devote 150 orbits of his discretionary time to observations of distant galaxies was a topic of heated discussion both at STScI and elsewhere. Williams reasoned that such observations were going to be one of the great legacies of the observatory, and that dedicating a large block of observing time sooner rather than later could fulfill a variety of goals. First, it could stimulate some creative thinking on how best to use HST for studies of galaxy evolution; second, the large amount of HST time could perhaps be used to leverage time on other facilities to add value to the HST observations; and third, it would ensure that these legacy images got taken, should HST fail a year or two later. Aside from the guideline that the time be devoted to studies of galaxy evolution (guaranteed to be controversial given the demands on HST time from other fields), the decisions of what field or fields to observe and how to allocate the time were also intensely debated. While it was not at the outset obvious that HST should observe a field that had not been previously studied, the use of the CVZ to double the observing efficiency was too attractive to pass up. That decision, endorsed by an external advisory committee, basically dictated that HST should concentrate on a field-galaxy survey and should choose a new field. The decision to make the survey data non-proprietary, also endorsed by the advisory panel, ensured that the data would be rapidly disseminated, and also that STScI could play scheduling constraints against science goals to try to maximize the scientific output of the observations.

Judged by the number of papers and citations, the HDF strategy paid off in producing a significant scientific impact. While the observations may not have ``solved'' any one problem, it is clear they have contributed in great measure to the growing body of knowledge on the distant universe and on Galactic structure. The devotion of considerable amounts of observing time by Keck, ISO, the VLA, BIMA, MERLIN, KPNO, JCMT, and other facilities added tremendous value to the HST observations, and to a great extent vindicated the decision to choose a new patch of sky that did not already have someone's stamp of ownership on it.

Other observatories have followed suit with HDF-style campaigns. The ESO Imaging Survey (EIS) and the NOAO Deep Wide-Field Survey both promised rapid release of reduced data to the general astronomical community. The SIRTF legacy program will consist of large science investigations, carried out for a broad community, with no proprietary data rights for the investigation team. With such legacy-style programs becoming more popular, it is important to consider not only the scientific interest of the investigations, but also the practicalities of how they are carried out. There are several ways to ensure that such a program will have a wide impact:

  1. Ensure that the data taken for a legacy program are scientifically unique and serve multiple uses,
  2. Publicize the project widely and solicit community input on the science goals and observing strategy,
  3. Couple the observing and the scheduling strategies to make the best scientific use of the capabilities of the observatory,
  4. Take the data, if possible, in campaign-mode, rather than stretching the observations over a long period of time, and
  5. Release fully reduced data, in a timely manner.

Not all observing programs can or should be done as HDF-style projects. The model of a principal investigator having exclusive access to his or her data for a set period of time is still a good one, and can help ensure that proper care is taken in the data reduction and interpretation. The occasional major non-proprietary campaign can, however, remind people of the benefit of national and international facilities, and can often serve science better than if the same observations had been first closely held and analyzed by a small group.

This work was based in part on observations with the NASA/ESA Hubble Space Telescope obtained at the Space Telescope Science Institute, which is operated by the Association of Universities for Research in Astronomy, Incorporated, under NASA contract NAS5-26555.


References


Abraham, R. G., et al. 1996, MNRAS, 279, L47

Arnouts, S., et al. 1999, MNRAS, 310, 540

Aussel, H., et al. 1999, A&A, 342, 313

Baugh, C., et al. 1998, ApJ, 498, 504

Calzetti, D., & Heckman, T. M. 1999, ApJ, 519, 27

Cole, S., et al. 1994, MNRAS, 271, 781

Colley, W. N., et al. 1996, ApJ, 473, L63

Connolly, A. J., Szalay, A. S., & Brunner, R. J. 1999, ApJ, 499, L125

Cowie, L. L., et al. 1988, ApJ, 332, L29

de Jong, R. S., & Lacey, C. 1999 in ASP Conf. Ser., Vol. 170, The Low Surface Brightness Universe, ed. J. I. Davies, C. Impey, & S. Phillipps (San Francisco: ASP), 52

Dickinson, M. E. 1998, in The Hubble Deep Field, ed. M. Livio, S. Fall, & P. Madau (Cambridge: Cambridge Univ. Press), 219

Elson, R. A. W., Santiago, B. X., & Gilmore, G. F. 1996, New Astron., 1, 1

Fall, S. M., Charlot, S., & Pei, Y. C. 1996, ApJ, 464, L43

Ferguson, H. C. 1999, in Photometric Redshifts and the Detection of High-Redshift Galaxies, ed. R. Weymann, L. Storrie-Lombardi, M. Sawicki, & R. Brunner (San Francisco: ASP), 51

Ferguson, H. C., & Babul, A. 1998, MNRAS, 296, 585

Flynn, C., Gould, A., & Bahcall, J. N. 1996, ApJ, 466, L5

Goudfrooij, P., Gorgas, J., & Jablonka, P. 1999, astro-ph/9910020

Hansen, B. M. S. 1998, Nature, 394, 860

Hansen, B. M. S. 1999, ApJ, 520, 680

Hook, R. N., & Fruchter, A. 1997, in ASP Conf. Ser., Vol. 125, Astronomical Data Analysis Software and Systems VI, ed. G. Hunt & H. E. Payne (San Francisco: ASP), 147

Hughes, D. H., et al. 1998, Nature, 394, 241

Ibata, R., et al. 1999, ApJ, 524, L95

Lanzetta, K. M., Yahil, A., & Fernández-Soto, A. 1996, Nature, 381, 759

Lilly, S. J., et al. 1996, ApJ, 460, L1

Lowenthal, J. D., et al. 1997, ApJ, 481, 673

Madau, P., et al. 1996, MNRAS, 283, 1388

Marleau, F. R., & Simard L. 1998, ApJ, 507, 585

Mendez, R. A., et al. 1996, MNRAS, 283, 666

Mendez, R. A., & Minniti, D. 1999, astro-ph/9908330

Meurer, G. R., Heckman, T. M., & Calzetti, D. 1999, ApJ, 521, 64

Metcalfe, N., et al. 1996, Nature, 383, 236

Mushotzky, R., & Loewenstein, M. 1997, ApJ, 481, L63

Pei, Y. C., Fall, S. M., & Hauser, M. G. 1999, ApJ, 522, 604

Pettini M., et al. 1998, ApJ, 508, 539

Ratcliffe, A., et al. 1998, MNRAS, 296, 173

Renzini, A. 1997, ApJ, 488, 35

Renzini, A. 1999, in The Formation of Bulges, ed. C. M. Carollo, H. C. Ferguson, & R. F. G. Wyse (New York: Cambridge Univ. Press), in press

Roche, N., et al. 1998, MNRAS, 293, 157

Sawicki, M., & Yee, H. K. C. 1998, AJ, 115, 1329

Simard, L., et al. 1999, ApJ, 519, 563

Steidel, C. C., et al. 1996, AJ, 112, 352

Steidel, C. C., et al. 1999, ApJ, 519, 1

Tucker, D. L., et al. 1997, MNRAS, 285, L5

Villumsen, J., Freudling, W., & da Costa, L. N. 1997, ApJ, 481, 578

White, S. D. M., & Frenk, C. 1991, ApJ, 379, 52

Williams, R. E., et al. 1996, AJ, 112, 1335

Williams, R. E., et al. 2000, in preparation

