We present an algorithm to sift through multiple reads of an
image and to identify and reject cosmic-ray events and other glitches. The
resulting image is then compressed, first with a lossy compression
algorithm and then with a lossless compression algorithm. The final
compression ratio is of order 4n (where n is the number of reads) for simulated data. This
order of data compression is required to fit the NGST data into the
anticipated downlink bandwidth. The computational requirements are modest,
suggesting that the key limitation may be the bus from the A-D converter to the
computer rather than the computation itself.

First, saturated data are flagged and excluded. Next, for each pixel, the set of reads (64) is fit to a straight line. The interval with the largest deviation from the line (in either direction) is compared with the expected noise. If it exceeds a threshold (optimized for the test case), the interval is excluded from the fit and the process is repeated. Most of the processing time is spent on cosmic-ray rejection.

Next, a weighted fit is applied to the remaining data. The optimum fit depends on the signal level. High-signal uncertainties are dominated by photon (electron) counting noise, and the optimum fit weights the endpoints; low-signal uncertainties are dominated by readout noise, and the optimum fit uses uniform weighting. We calculate the weights for these two regimes and for 6 intermediate signal/noise ratios, and choose the best weighting scheme for the signal. By precomputing the weight table for all 8 signal/noise levels and all possible segment lengths, we save time in the weighted fit.
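The two limiting weighting schemes can be written down explicitly; the intermediate rows of the 8-level table would come from the full read-covariance matrix, which is omitted here. A sketch (function names assumed), with each weight row applied as a dot product against the reads as in the paper's lookup scheme:

```c
#include <assert.h>
#include <math.h>

/* Ordinary least-squares slope weights: optimal when readout noise
 * dominates, since every read then carries equal, independent error. */
void ols_weights(double *w, int n)
{
    double tbar = (n - 1) / 2.0, ss = 0.0;
    int i;
    for (i = 0; i < n; i++) ss += (i - tbar) * (i - tbar);
    for (i = 0; i < n; i++) w[i] = (i - tbar) / ss;
}

/* Endpoint weights: optimal when photon noise dominates, since the
 * accumulating reads are then strongly correlated and the first and
 * last reads carry essentially all the slope information. */
void endpoint_weights(double *w, int n)
{
    int i;
    for (i = 0; i < n; i++) w[i] = 0.0;
    w[0]     = -1.0 / (n - 1);
    w[n - 1] =  1.0 / (n - 1);
}

/* Slope estimate = dot product of the reads with the chosen weight row. */
double weighted_slope(const double *reads, const double *w, int n)
{
    double s = 0.0;
    int i;
    for (i = 0; i < n; i++) s += reads[i] * w[i];
    return s;
}
```

Both rows recover the exact slope of a noiseless ramp; they differ only in how measurement noise propagates into the estimate.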

After the fit, we reduce the dynamic range and equalize the noise across pixels by taking the square root of the slope plus an offset that compensates for the readout noise. Finally, an adjustable scaling allows retention of n_b bits of noise after conversion to an integer. Thus N (64) 16-bit reads are converted to a single 8-bit byte. This is further compressed without loss (see Nieto-Santisteban et al. 1999) to approximately 4 bits per pixel (if we keep 2 bits of noise).

The final results are robust. Even integration times long enough that most pixels are affected by cosmic rays can be cleaned effectively, allowing longer integration times than are practical with Fowler sampling.

```c
void cr_rej(              // Reject cosmic rays and perform linear fit of data.
  float **values,         // Input: Data cube (M x M x Numimg)
  int nr, int nc, int N,  // Input: data cube dimensions
  int Full,               // Input: count for full-well
  image *data, image *err)// Output: Image, CR count
{
  register int t,b,k;
  int p,T[M];
  register float s,x,y;
  float z,*R,*S,*U,*W;
  for(p=0;p<nr*nc;p++){R=values[p];                     // all pixels
    if(R[1]>Full){s=0;b=N;}                             // bad pixel
    else {
      t=n;b=0;while(R[t--]>Full);while((T[b++]=++t)<n); // Saturated
      s=(R[*T]-*R)/ *T;                                 // Average sig
      while(1){x=b=0;W=R;do{S=R+T[b++];                 // all segments
          while(W!=S)if((y*=y=s+*W-*++W)>x){U=W;x=y;}   // worst dif^2
        }while(W++<R+n);                                // full list
        if(x<(s+VP)*Kp)break;                           // No more CR
        s+=(s-*U+*--U)/(n-b);t=U-R;                     // zap CR sig
        while((T[b]=T[b-1])>t&&--b);T[b]=t;}            // file new CR
      for(k=K;k>0&&s<SVals[--k];);W=WT[k][*T];          // S/N ratio
      if(b<3)for(U=W+N,s=F;W!=U;s+=*R++* *W++);         // 0,1 CR: fit
      else{for(s=y=t=b=0;t<n;){U=W+T[b]-t+1;            // all segments
          z=WT[k][N-2-T[b]+t][N]+W[N];y+=W[N];          // Row weights
          for(x=0;W<U;x+=*R++* *W++);                   // sum segment
          s+=x*z;t=T[b++]+1;W=WT[k][T[b]-t];}s=s/y+F;}} // sum sig
    err->setval(p,--b);                                 // Record # CR
    data->setval(p,(s>0)?int(np*sqrt(s)):0);}           // Record data
}
```

|                   | CR Processed          | Fowler Processed      |
|-------------------|-----------------------|-----------------------|
| Total input       | 10.7 GB               | 10.7 GB               |
| Max data rate     | 10 MB/sec             | 100 MB/sec            |
| Input time        | 5170 sec (162 op/Pix) | 5170 sec (162 op/Pix) |
| Process time      | 1150 sec (36 op/Pix)  | 200 sec (6 op/Pix)    |
| CR Identification | 725 sec (23 op/Pix)   |                       |
| Weighted Fit      | 420 sec (13 op/Pix)   |                       |
| Compression time  | 70 sec (2 op/Pix)     |                       |
| Output time       | 100 sec (3 op/Pix)    | 250 sec (8 op/Pix)    |
| Total output      | 50 MB                 | 128 MB                |
| Total time        | 6440 sec (203 op/Pix) | 5600 sec (176 op/Pix) |

The Consultative Committee for Space Data Systems 1997, Lossless Data Compression, CCSDS Blue Book 121.0-B-1

The NGST Study Team, 1997, The Next Generation Space Telescope: Visiting a Time When Galaxies Were Young, ed. H.S. Stockman. Available at http://oposite.stsci.edu/ngst/initial-study

Im, M., Stockman, H. S. 1998, Science with the NGST, ASP Conference Series, 133, 263, eds. E.P. Smith, A. Koratkar

Nieto-Santisteban, M. A. et al. 1999, in ASP Conf. Ser., Vol. 172, Astronomical Data Analysis Software and Systems VIII, ed. D. M. Mehringer, R. L. Plante, & D. A. Roberts (San Francisco: ASP), 137

Offenberg, J. D. et al. 1999, in ASP Conf. Ser., Vol. 172, Astronomical Data Analysis Software and Systems VIII, ed. D. M. Mehringer, R. L. Plante, & D. A. Roberts (San Francisco: ASP), 141

Stockman, H. S. et al. 1998, Cosmic Ray Rejection and Image Processing Aboard the Next Generation Space Telescope, NGST Workshop (in press). Available at http://ngst.gsfc.nasa.gov/public/doc_172_2/index.html

- ^{1} Fixsen, Raytheon ITSS
- ^{2} Hanisch, Space Telescope Science Institute
- ^{3} Mather, NASA Goddard Space Flight Center
- ^{4} Nieto-Santisteban, Space Telescope Science Institute
- ^{5} Offenberg, Raytheon ITSS
- ^{6} Sengupta, Raytheon ITSS
- ^{7} Stockman, Space Telescope Science Institute

© Copyright 2000 Astronomical Society of the Pacific, 390 Ashton Avenue, San Francisco, California 94112, USA
