Observation

class Data_Reduction.DSN.Malargue.Observation(parent=None, name=None, dss=None, date=None, start=None, end=None, project=None)

Bases: Data_Reduction.DSN.Observation

Class for observations based on open-loop recordings made at DSA-3 (DSS-84)

The arguments for the superclass initialization are:

parent  (typically ``self`` or ``None``),
name    (will be set to a YEAR/DOY default if not provided),
dss     (required),
date    (YEAR/DOY required),
start   (optional, usually taken from file header),
end     (optional, inferred from header and number of records), and
project (required).
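
A minimal construction might look like the following sketch; the station, date, and project values are hypothetical placeholders, and the date format is assumed here to be a YEAR/DOY string:

    from Data_Reduction.DSN.Malargue import Observation

    # hypothetical values; dss, date (YEAR/DOY), and project are required
    obs = Observation(dss=84,             # DSA-3 / DSS-84
                      date="2020/163",    # assumed YEAR/DOY string
                      project="SolarPatrol")
    # name defaults to a YEAR/DOY string; start and end are normally
    # taken from the data file headers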

Methods Summary

load_file([num_recs, datafile, schdfile, …])

loads data from an OLR file

Methods Documentation

load_file(num_recs=5, datafile=None, schdfile=None, catlfile=None)

loads data from an OLR file

This is Malargue-specific because of the catalog and schedule file formats.

We load five records at a time; otherwise Python bogs down badly.
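
As a minimal sketch of this chunked reading (the record layout and the sum-square reduction shown are hypothetical; the real record structure is defined by the file header):

    import numpy as np

    # hypothetical record layout for illustration only
    record_dtype = np.dtype([("time", "f8"), ("samples", "f4", 1024)])

    def reduce_file(path, num_recs=5):
        """Read records num_recs at a time and reduce each chunk to its
        sum-square average (a stand-in for the real processing)."""
        averages = []
        with open(path, "rb") as f:
            while True:
                chunk = np.fromfile(f, dtype=record_dtype, count=num_recs)
                if chunk.size == 0:
                    break
                # sum-square average of the samples in each record
                averages.extend((chunk["samples"]**2).mean(axis=1))
        return np.array(averages)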

We proceed as follows:

  1. Get a list of data files in the directory.

  2. We parse the *.obs file in the directory to get:

    1. a list of scan numbers,

    2. the base file name for each scan (without scan or channel number),

    3. a list of channel numbers from the first scan.

  3. For each scan (which is a map):

    1. Ignore the scan if there is no corresponding data file (NET*.prd).

    2. Find the corresponding schedule (sch*.txt) and load the times and position names.

    3. Open the matching catalog (cat*.txt) and for each position get the coordinates.

    4. For each channel:

      1. Form the file name NET4n%03dtSsMG12rOPc%02d*.prd, where the first formatted item is the scan number and the second is the channel number (see the sketch after this list).

      2. Process the ordered list of data files found with the above mask.

        1. Read and parse the file header.

        2. For each record (in groups of five records at a time, for efficiency):

          1. For the first record of the file only:

            1. Put the time and coordinates in the first row of the structured numpy array.

          2. Read the record data from the datafile.

          3. Process the data (e.g. sum-square average, power spectrum, spectrogram).

          4. Save the processed data for each record in the dict of numpy arrays keyed on channel number.

    5. Save the numpy array of reduced data.
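
A minimal sketch of the file-name mask and the per-scan, per-channel file lookup in steps 3 and 4 (the directory, scan list, and channel list shown are hypothetical; the header parsing and reduction are elided):

    import glob
    import os.path

    def data_files_for(workdir, scans, channels):
        """Illustrative only: yield (scan, channel, path) for every data
        file matching the NET4n%03dtSsMG12rOPc%02d*.prd mask."""
        for scan in scans:
            # skip scans with no corresponding data file (step 3.1)
            if not glob.glob(os.path.join(workdir, "NET4n%03d*.prd" % scan)):
                continue
            for channel in channels:
                mask = "NET4n%03dtSsMG12rOPc%02d*.prd" % (scan, channel)
                for path in sorted(glob.glob(os.path.join(workdir, mask))):
                    yield scan, channel, path

    # usage sketch with hypothetical directory, scans, and channels:
    # for scan, channel, path in data_files_for("/data/Malargue", [1, 2, 3], [22, 24]):
    #     ...read the header, then the records in groups of five...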