CReSIS Toolbox Guide


Return to Main page or CReSIS Toolbox Setup.

The cresis-toolbox is a collection of software routines (mostly Matlab) for processing and analyzing radar sounder data. The toolbox was originally developed to support CReSIS radar sounders for the cryosphere, but it can be used to process other radar systems once an ingest interface is developed (the "preprocess" step in the toolbox). The toolbox also provides a set of tools to work with radar sounder data and supporting data (e.g. GPS, IMU, LIDAR, DEMs); these tools support browsing, tracking layers, and posting data products. The toolbox repository is hosted at https://github.com/CReSIS/ and the setup is described in CReSIS Toolbox Setup. The documentation is hosted on a Wiki at https://ops.cresis.ku.edu/wiki/.

The cresis-toolbox development group is interested in supporting other radar systems and programming languages over time, with the end goal of community collaboration on tools that help establish a standard way to process and store radar sounder data and thereby facilitate easier and better use of those data. This CReSIS Toolbox Guide describes the processing steps and file layout, followed by detailed information on various aspects of the toolbox as laid out in the table of contents below.

The toolbox installation is described here CReSIS Toolbox Setup. This step includes setting up a "startup" script which defines a global variable called gRadar that the toolbox commands use to determine where files are located. The startup script also adds the cresis-toolbox commands to your command path.
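Below is a minimal sketch of what such a startup script might look like, assuming a typical installation. The paths are placeholders, and any field names beyond path_override, support_path, and data_support_path (which appear later in this guide) are assumptions; use the template described in CReSIS Toolbox Setup as the authoritative reference.

  % startup.m sketch (illustrative only; use the template from CReSIS Toolbox Setup)
  global gRadar;

  % All paths below are placeholders; adjust to your own installation.
  gRadar.path              = 'C:/scripts/cresis-toolbox/cresis-toolbox/'; % toolbox source code
  gRadar.path_override     = 'C:/scripts/matlab/';                        % local run_* scripts
  gRadar.support_path      = 'D:/csarp_support/';                         % records, frames, and GPS files
  gRadar.data_support_path = 'D:/metadata/';                              % raw GPS/INS metadata
  gRadar.data_path         = 'D:/data/';                                  % raw radar data (assumed field name)
  gRadar.out_path          = 'D:/output/';                                % processed outputs (assumed field name)

  % Add the toolbox and the local override directory to the Matlab path.
  addpath(genpath(gRadar.path));
  addpath(genpath(gRadar.path_override));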

The cresis-toolbox data processing software takes raw data and GPS data (metadata) and produces support files (csarp_support), calibration files, radar image files, and layer tracking files. The output files can be turned into distributable data products with the post command. The produced files are described in the file guides. The data processing software uses parameter spreadsheets to store information about the radar settings and how to process the data to produce these files.

Radar data are organized into radar systems ("accum", "kaband", "kuband", "rds", and "snow"). The choice of which radar system best fits your data is somewhat arbitrary, but we generally recommend the following guidelines:

  • accum: broadband VHF to UHF (usually 30-3000 MHz) radar sounders that penetrate >100 m in dry snow/cold ice. Usually the aim is to detect layers for estimating accumulation and generally not to sound the ice bottom. More recent versions of these radars can be, and are, used to sound the ice bottom, so the distinction between "accum" and "rds" is often based on the legacy of the instrument.
  • kaband: shallow ice radar sounder 26.5-40 GHz
  • kuband: shallow ice radar sounder 12-18 GHz
  • rds: HF to UHF (usually 3 MHz-1000 MHz) radar sounder that is designed to detect the ice bottom
  • snow: shallow ice radar 2-8 GHz (we include 2-18 GHz radar data here)

Radar data for a particular system are divided into seasons. The season name is unique and is structured as YYYY_LOCATION_PLATFORM where YYYY is the year (Gregorian calendar) that the field season started, LOCATION is the primary location of where the data were collected, and PLATFORM refers to the platform used to collect the data.

Radar data for a particular system and season are divided into segments. Each segment represents the data collected during one recording period (the time between pressing "record on" and "record off" on the radar system). This is generally one flight of data, but if multiple settings were used or if there were radar issues then there may be more than one data segment per flight. Segments are uniquely named YYYYMMDD_SS where MM is the month (01-12), DD is the day (01-31), and SS refers to the segment (the count starts at 01 each day and increases by 1 for each segment in chronological order). The definition of which "day" the data were collected on is not critical (e.g. it could be the UTC date at the usual start of the waking day), but it should be consistent so that all segments are unique, and it should match the conventions of other data collected on the mission (such as GPS and LIDAR data) to simplify synchronization of these data sources.

Since segments are long, the data are further divided into frames for viewing. During processing, a single image file is produced for each data frame. Frames are uniquely named YYYYMMDD_SS_FFF where FFF refers to the frame number, which starts at 001 for each segment and increases by 1 for each frame in chronological order.
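For reference, a frame ID following this convention can be built or parsed with standard Matlab string functions. The snippet below is illustrative only; the toolbox has its own helpers for handling segment and frame IDs.

  % Build a frame ID following the YYYYMMDD_SS_FFF convention (example values).
  day_str = '20181014';                                  % YYYYMMDD
  seg     = 2;                                           % segment number (starts at 01 each day)
  frm     = 35;                                          % frame number (starts at 001 each segment)
  frm_id  = sprintf('%s_%02d_%03d', day_str, seg, frm);  % '20181014_02_035'

  % Parse a frame ID back into its parts.
  tokens  = regexp(frm_id, '^(\d{8})_(\d{2})_(\d{3})$', 'tokens', 'once');
  day_str = tokens{1}; seg = str2double(tokens{2}); frm = str2double(tokens{3});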

Echograms and tracked layers can be browsed with the imb.picker image browser which also provides tools for semi-automatically tracking the data.

Please direct questions, help requests, suggestions, and error notifications to cresis_data@cresis.ku.edu.

[Workflow diagrams: The Field Workflow, The Data Center Workflow, Process Data Workflow, Processing Steps, Field Processing Steps]


Processing Steps

Note that when testing the processing for the first time, you should be careful about overwriting data products that have already been generated. This is most relevant for the make_gps_SEASON.m, records_create.m, qlook.m, analysis.m, sar.m, and array.m processing steps.

Field Processing Steps

Setup field computer

See Field Computer Setup

Create configs file

These are the configurations used by the radar to collect data, so this step is only required before the data have been collected. Create create_configs_SEASONNAME_RADARNAME.m according to the guidelines in Create configs. This step is currently only required for some radars (e.g. mcords3, mcords5, mcords5-accum, mcords6, accum3). The NI based snow radars do not require this file since they are not configurable from an XML file. Important: ensure that parameters are updated as required, especially param.max_DDS_amp.

Create default radar settings file

Create default_radar_params_SEASONNAME_RADARNAME.m, which is used during preprocessing and during settings creation. See the Create configs step for details on how to create this default radar parameters script. Important: ensure that parameters are updated as required, especially param.config.max_tx, since an incorrect value could lead to power amplifier damage for systems that must limit the waveform generator output.

Create your parameter spreadsheet

Parameter spreadsheets are stored in the cresis-toolbox/ct_params git repository. The naming convention is SYSTEM_param_SEASON.xls where SYSTEM must be accum, kaband, kuband, rds, or snow and SEASON follows the format YYYY_LOCATION_PLATFORM (e.g. 2015_Greenland_Polar6, 2018_Antarctica_Ground, 2018_Antarctica_TObas). The "YYYY" year for Antarctica missions should use the year at the beginning of the Antarctic season. In other words, we use "2018" to refer to the 2018-2019 season even when the entire radar season is in Jan-Feb 2019. This filename is never parsed, so the naming convention is a guideline rather than a requirement.

  • The naming convention for the "param.config.date_strs" field in preprocess.m and the "Date" field in the parameter spreadsheet is to choose the UTC day in which you began the data collection for that waking day. In other words, if the flights cross two or more UTC days, then the first day is used in this field.
  • One method is to copy the spreadsheet from another season and delete the old segments; another is to use the "blank_param.xls" spreadsheet file.
  • Data from each day is divided into segments. This is done later during the run_preprocess.m step.
  • Segment boundaries are caused by two things: 1) a time gap in the recording and 2) a change in parameters that prevents processing the data in one segment. The naming and ordering of the segments is explained in Command Worksheet. See Records Worksheet for instructions on how to create segments.
  • Check leap seconds. Before making GPS files or records files, double check to make sure leap seconds have not changed (read help and follow instructions in utc_leap_seconds.m).

    Set lever arms

    Modify the lever_arm.m function to include the lever arms from the point to which the GPS data are processed (e.g. ATM processes data to the GPS antenna) to each of the radars' antenna phase centers. Note that the particular lever arm is determined by looking at the records.gps_source field, which comes from gps.gps_source and is set in the make_gps_SEASON.m file. The season and radar type are also used. Details of how to update the lever_arm.m function for a new radar system are given in the comments of lever_arm.m.

    Set param.radar.wfs(wf).system_dB

    Set system_dB to account for the nominal (or measured, if available) system component of the radar equation:

    • 10*log10(Pt*Gt*Gr*lambda_c^2 / (8*pi)^2 * Z0)

    where Pt is the transmit power, Gt is the transmit antenna gain, Gr is the receive antenna gain for a single channel, lambda_c is the wavelength at the center frequency, and Z0 is the characteristic impedance.

    For cases where the transmit power is turned off (e.g. when doing noise calibration), enter the value used during regular operation rather than -inf (since Pt == 0 when the transmitter is off and 10*log10(0) == -inf). This will allow the data to produce finite values when processed.

    If system_dB is a scalar, the single value is used for all receive paths. If system_dB is a vector, then the rx_path determines which system_dB applies. During data_load.m, the raw data are divided by 10^(system_dB/20). In addition to other corrections (such as hardware presums, quantization, and receiver gain), the resulting signal should be proportional to 1/R^2 for a specular perfectly reflecting target such as a metal sheet or liquid water (at most radio frequencies, metal and liquid water both have a reflection coefficient ~= 1 and behave similarly to perfect electrical conductors).

    Any deviation from 1/R^2 for a perfectly flat liquid water (or metal) reflector in the final image data products should be corrected by the param.qlook.radiometric_corr_dB and param.array.radiometric_corr_dB terms. The radiometric_corr_dB term is applied right before the final output is created and is there to account for any imperfections in the preceding steps.
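    As a worked example of the formula above, system_dB might be computed from nominal system parameters and applied as in the sketch below. The numbers are placeholders and data_raw is an assumed variable; the toolbox applies this correction internally in data_load.m.

    % Illustrative computation of system_dB from nominal system parameters.
    c        = 299792458;       % speed of light (m/s)
    Pt       = 500;             % transmit power (W), placeholder value
    Gt       = 10^(10/10);      % transmit antenna gain (linear), placeholder: 10 dBi
    Gr       = 10^(10/10);      % receive antenna gain per channel (linear), placeholder: 10 dBi
    fc       = 195e6;           % center frequency (Hz), placeholder value
    lambda_c = c/fc;            % wavelength at the center frequency (m)
    Z0       = 50;              % characteristic impedance (ohms)

    system_dB = 10*log10(Pt*Gt*Gr*lambda_c^2 / (8*pi)^2 * Z0);

    % data_load.m divides the raw data by 10^(system_dB/20), so the corrected signal
    % should be proportional to 1/R^2 over a flat specular reflector.
    data_corrected = data_raw / 10^(system_dB/20);  % data_raw: raw complex samples (assumed variable)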

    Create segments (Preprocess)

    This step is run after collecting data each day.

    Run the run_preprocess.m script to populate segments in the parameter spreadsheet and to extract header information from the raw data files. This step should be done in the field.

    Usually run_preprocess.m calls a preprocess setup script with a name formatted as run_preprocess_NAME.m. If a preprocess setup script does not exist for your dataset, use one of the existing scripts as a template.

    Create GPS files

    The first time the GPS files are created, you will need to create a make_gps_SEASON_NAME.m script. See the section Creating GPS Files which describes the whole process.

    This step is run after preprocessing is started. With some radar systems, the preprocessing step copies the GPS data recorded by the radar into the metadata directory.

    Run gps_plot.m on each of the GPS files and check for errors as described in the GPS Files for Post Processing section.

    Setup run_master.m script

    Create a local copy of run_master.m in gRadar.path_override (i.e. scripts/matlab/) and edit it to use the parameter spreadsheet for this season.

    Create records and frames

    Create the records and frames files:

    • records_YYYYMMDD_SS.mat file in gRadar.support_path (i.e. csarp_support/records/RADAR_NAME/SEASON/records_YYYYMMDD_SS.mat)
    • frames_YYYYMMDD_SS.mat file in gRadar.support_path (i.e. csarp_support/frames/RADAR_NAME/SEASON/frames_YYYYMMDD_SS.mat)

    To create the records and frames files, enable "cmd.records" in the cmd worksheet of the parameter spreadsheet for the segments you want to run, and then run run_master.m or run_records_create.m. See Records Worksheet for parameters that control the creation of these files.

    • For ground based data that you plan to SAR process, you may need to manually edit the records files to mask out long sections of stationary data because the SAR processor could run out of memory in these sections. Run run_records_bit_mask.m to help with this. This step can be completed during post processing if not doing SAR processing in the field.
    • Any time GPS files for this segment are updated in the csarp_support/gps directory, the records files should be updated with records_update.m. Generally speaking, all output data products should be regenerated if GPS data are changed, although this is usually not necessary for field processing. run_update_data_files.m can be used as a faster temporary solution.
    • To adjust or view frames, use run_frames_create.m (see the Create frames guide).

    Pre-Science mission calibration

    These calibration steps can be skipped and done in post processing if time does not allow for them to be completed. However, transmit equalization should always be done carefully for multichannel systems when possible.

    See Calibration and In Flight Routines for more detail.

    The steps in order of application are:

    1. Deramp systems: Use BW_window_gen.m to determine the param.radar.wfs.BW_window field. Adjust the BW_window_guard for BW_window_gen to remove bad portions of the raw data at the start and stop of the waveform. The t_ref should be relatively close to its correct value for this to work properly so a preliminary check of the system time delay is needed. Use run_load_data.m to load raw data from a single strong target. Determine the time duration where the signal is present and then find the corresponding RF frequencies for this time axis. Adjust BW_window_guard so that the BW_window_gen function will choose an RF bandwidth in the good region.
    2. All systems: Estimate coherent noise and update parameter spreadsheets fields radar.wfs(wf).coh_noise_method, radar.wfs(wf).coh_noise_arg to remove coherent noise data during loading. This step may be skipped if coherent noise removal does not improve the data. When setting coherent noise removal parameters, it may be necessary to produce qlook data products without coherent noise removal first so that they can be compared with qlook data products produced with coherent noise removal. For complex coherent noise, several iterations testing different parameters may be necessary.
    3. All systems: Generate qlook products with coherent noise removal enabled. Track the surface if it has not been tracked yet.
    4. All systems: Estimate system time delay coefficients and update parameter spreadsheet fields radar.wfs(wf).Tadc (only radars without ADC time fields), radar.wfs(wf).Tadc_adjust (all radars), and radar.wfs(wf).t_ref (deramp-only) to correct time delays during loading. The qlook data products must be available and the surface must be tracked before this step can be run. The GPS time offset/lag is also estimated and should be corrected in this step. For deramp-only systems, the signal nyquist zone is also set in this step and the check_surface.save_records_en parameter should be enabled during check_surface.m after the system time delay and GPS time offsets are fixed.
    5. Multichannel Only: Estimate and apply transmit equalization coefficients (time-delay, amplitude, and phase for each transmit channel) to the transmit waveforms. This step is critical and should be done as precisely as possible since it cannot be updated later. Usually a single set of coefficients is used for the whole season.
    6. Multichannel Only: Estimate receiver equalization coefficients and update parameter spreadsheet fields radar.wfs(wf).Tsys, radar.wfs(wf).chan_equal_dB, and radar.wfs(wf).chan_equal_deg to equalize data during loading. Usually a single set of coefficients is used for the whole season.
    7. All Radars: Check radiation pattern if data were collected over a sufficiently flat and consistent scattering surface while rolling the aircraft. If not, this step can be skipped.
    8. All systems: Estimate deconvolution impulse response. If deconvolution response is good, update parameter spreadsheets field radar.wfs(wf).deconv to deconvolve the data during loading. This step may be skipped if the deconvolution method does not improve the data.
    9. Radiometric calibration from deconvolution to radiometrically correct data during loading. An approximate method must be used (power gain only) if deconvolution is not possible.
    10. Noise and saturation analysis (Not Completed)

    Commit, pull, and push the cresis-toolbox

    If an internet connection is available, consider keeping the cresis-toolbox up to date with the latest bug fixes.

    Commit and push any changes to the parameter spreadsheets

    If an internet connection is available, keep the primary repository up to date with the fieldwork.

    Create output data products

    When field processing set the motion compensation parameters as:

    qlook.motion_comp = false; % Default setting
    sar.mocomp.en = false; % Default setting
    sar.mocomp.uniform_en = true; % Default setting
    
    1. CSARP_qlook (All Radars): In the cmd worksheet enable qlook for the segments you want to run and then run run_master.m or run_qlook.m
      You can create qlook products for specific frames by passing in a vector of frames. For example, [1 2 5] would process frames 1, 2, and 5.
      The default output is CSARP_qlook. When multiple people are using the same file system to store data, be careful not to inadvertently overwrite data products: disable surface tracking by setting surf.en to false (otherwise qlook would overwrite the surface in the layer data files) and specify the out_path variable to point somewhere other than "qlook" (e.g. YOURNAME_qlook); see the sketch after this list.
      To verify outputs, use imb.picker.
      1. Occasionally products should be produced with different stages of processing when testing calibration. The recommended convention is: CSARP_qlook (without coherent noise removal, equalization, or deconvolution), CSARP_qlook_noise (without equalization or deconvolution), CSARP_qlook_equal (multichannel only; without deconvolution), CSARP_qlook_deconv (all calibration steps performed).
      2. If multiple images are combined together in fast time (e.g. low-gain and high-gain channels which are common for the radar depth sounders), then the surface tracking must be correct for the combine to work properly. After qlook processing, check that the surface is tracked properly everywhere. Any frame that has its surface corrected during this checking should have run_img_combine_update.m run on it to recombine the images with the new surface information.
    2. CSARP_sar (All Radars): Usually SAR processing is only done on specific frames because it is CPU intensive. In the cmd worksheet enable sar for the segments you want to run and then run run_master.m or run_sar.m
      If the layer source (usually CSARP_layerData) has not been updated with surface altimetry, you'll need to run qlook with surface tracking enabled or run_layer_tracker.m AND correct any mistakes in the surface with the picker before processing the data with sar. One way to verify outputs quickly is to generate images with the surface layer drawn on them using run_post.m with param.post.echo_en and param.post.layer_en enabled.
      The default output is CSARP_sar (it writes the folders named fk_data_FFF_SA_FA if using f-k migration). When multiple people are using the same file system to store data: be careful to not inadvertently overwrite data products by specifying the out_path variable to point somewhere other than "sar" (e.g. YOURNAME_sar). Update the array worksheet in_path fields to match this modified sar_path.
      To verify outputs, see run_load_sar_data.m
    3. CSARP_standard (All Radars), CSARP_mvdr (Multichannel Radars): In the cmd worksheet enable array for the segments you want to run and then run run_master.m or run_array.m
      The default input is CSARP_sar (it reads the folders named fk_data_FFF_SA_FA if using f-k migration).
      The default output is CSARP_METHOD where METHOD is one of the array processing methods (csarp-combined, standard, mvdr, or music). When multiple people are using the same file system to store data: be careful to not inadvertently overwrite data products by specifying the out_path variable to point somewhere other than the default (e.g. YOURNAME_standard, YOURNAME_mvdr, etc.).
      To verify outputs, use imb.picker.
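    As referenced in the CSARP_qlook item above, a local run script can reduce the risk of clobbering shared products by overriding the frame list, surface tracking, and output paths. The sketch below uses a param_override structure in the style of the toolbox run scripts; the field names follow the worksheet fields mentioned above, but verify them against your local run_qlook.m/run_master.m.

    % Illustrative overrides for processing on a shared file system (sketch only).
    param_override = [];
    param_override.cmd.frms       = [1 2 5];             % only process frames 1, 2, and 5
    param_override.qlook.surf.en  = false;               % do not overwrite the surface in the layer data files
    param_override.qlook.out_path = 'YOURNAME_qlook';    % write to CSARP_YOURNAME_qlook instead of CSARP_qlook
    param_override.sar.out_path   = 'YOURNAME_sar';      % write to CSARP_YOURNAME_sar instead of CSARP_sar
    param_override.array.out_path = 'YOURNAME_standard'; % write array output somewhere other than the default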

    Verify data

    Use imb.picker to validate the data.

    Post files to the CSARP_post directory

    Post the files by making a local copy of run_post, setting the parameters of the local copy, and then running it.

    For the FMCW (snow/kuband) radars, the posting should use ASCAT imagery (map.type is "ascat" in the posting worksheet), or natural-earth Arctic imagery if ASCAT is not available.

    If a local copy of the data products is to be made in the field, the following script can be used to copy the files: backup_data_products_in_field.m

    If an internet connection is available and sufficiently high speed, copy the files to the ftp site. The recommended priority is:

    1. KML files
    2. JPEG echogram and map files (consider just copying a subset)
    3. Echogram files (often much larger than JPEG files)

    Copy to CReSIS data.cresis.ku.edu ftp site:

    If copying files to CReSIS: First, make sure permissions are correct. If some files are not owned by you and do not have the right permissions, please ask helpdesk@cresis.ku.edu to run these commands for you.

    # Set user/group/other permissions
    chmod u=rwx,g=rwx,o=rx -R /cresis/snfs1/dataproducts/ct_data/RADAR/SEASON/
    # Set group to mdce
    chgrp mdce -R /cresis/snfs1/dataproducts/ct_data/RADAR/SEASON/
    # Set the set-group-ID bit on directories
    find /cresis/snfs1/dataproducts/ct_data/RADAR/SEASON/ -type d -exec chmod g+s {} \;
    

    Second, copy data (we recommend filezilla).

    Documentation

    Ensure that the final parameter spreadsheet is committed at the end of the field season and that the Processing Notes wiki page is updated.

    Post processing steps

    Note that when testing the processing for the first time, please be careful about overwriting data products that have already been generated. This is most relevant for the make_gps_SEASON.m, records_create.m, qlook.m, analysis.m, sar.m, and array.m processing steps.

    Ensure that all field processing steps are completed before post processing. Often there are days that are not fully processed because of time limitations, especially at the end of the field season. First, verify that the parameter spreadsheet file in the ct_params.git repository is up to date. Then check that all the steps have been run (gps, records, and frames files especially). run_check_data_products.m can be used to track down all files that are needed.

    Multiple commands and segments can be submitted simultaneously by run_master. Some commands run locally (e.g. records_create and custom generic commands) while the rest use the cluster interface.
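    A typical pattern for selecting which commands and segments run inside a local run_master.m is shown below. This is a sketch: the spreadsheet name and segment are placeholders, and the helper functions (read_param_xls, ct_filename_param, ct_set_params) should be verified against the run scripts shipped with the toolbox.

    % Illustrative command/segment selection in the style of run_master.m.
    params = read_param_xls(ct_filename_param('rds_param_2018_Antarctica_Ground.xls')); % placeholder spreadsheet

    % Disable all generic commands, then enable the command for one segment.
    params = ct_set_params(params,'cmd.generic',0);
    params = ct_set_params(params,'cmd.generic',1,'day_seg','20181014_02'); % placeholder segment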

    Update GPS files

    See Creating GPS Files.

    Update lever arms

    In case lever arm measurements are updated or not entered yet, verify lever_arm.m is up to date. See Modify lever arms.

    Update run_master.m script

    If not already done during field processing, create a local copy of run_master.m in gRadar.path_override (i.e. scripts/matlab/) and edit it to use the parameter spreadsheet for this season.

    Update records and frames

    Update the records and frames files:

    • records_YYYYMMDD_SS.mat file in gRadar.support_path (i.e. csarp_support/records/RADAR_NAME/SEASON/records_YYYYMMDD_SS.mat)
    • frames_YYYYMMDD_SS.mat file in gRadar.support_path (i.e. csarp_support/frames/RADAR_NAME/SEASON/frames_YYYYMMDD_SS.mat)

    To update the records and frames files, in the cmd worksheet of the parameter spreadsheet, enable "cmd.generic" for the segments you want to run and then run run_master.m or run_records_update.m. See Records Worksheet for parameters that control the updating of files.

    • Note: Please do not remake records files without coordinating with the primary data manager for the dataset.
    • For ground based data that you plan to SAR process, you may need to manually edit the records files to mask out long sections of stationary data because the SAR processor could run out of memory in these sections. Run run_records_bit_mask.m to help with this.
    • Since GPS files are generally updated for post processing, the records files should be updated with records_update.m after the new gps files are generated in csarp_support/gps. Generally speaking, all output data products should be regenerated if GPS data are changed.
    • To adjust or view frames, use run_frames_create.m (see the Create frames guide).

    Update cresis-toolbox

    Commit, pull and push any changes to the toolbox before starting the final processing. Before processing final output data products, the toolbox working directory should be in a known state (i.e. no changes in the working directory). This means that any changes should be committed to the toolbox and pushed. Also, make sure no files are in scripts/matlab that might shadow processing files in the toolbox path. Usually "run_*" scripts are fine to have in the scripts/matlab directory, but the functions/scripts that the run scripts call should not be. All output data products will be marked with the git hash tag in the sw_version variable (e.g. param_qlook.sw_version). The hash tag should represent the code that was actually used which means no working directory changes should be present when processing the final data products.

    Update parameter spreadsheet

    Before processing final data products, ensure that segments that will not be processed are marked as follows:

    1. Add the text "Do not process." to the cmd.notes field. Also add the reason for not processing. Common examples are: "No GPS data.", "Bad GPS data.", "Test data (no targets of interest)."
    2. Highlight the frms, records_create, qlook, sar, array, and generic cells for the segment on the cmd worksheet by setting the fill to the standard Excel "orange" color (red: 255, green: 192, blue: 0). Do not use "orange" for anything else. Also, do not ever use the "red" fill color since Excel uses this to indicate changes when the document is differenced with other documents.
    3. More details are provided in Parameter Spreadsheet Guide.

    Post-processing mission calibration

    Before processing final data products follow the steps described in each of the relevant sections in Calibration and In Flight Routines. If new/good GPS data are available, set the motion compensation parameters for post-processing as described in the "Create output data products" step below. Use the Processing Notes page to document settings used for each calibration. The calibration steps in the usual order of application are:

    1. Deramp systems: Use BW_window_gen.m to determine the param.radar.wfs.BW_window field. Adjust the BW_window_guard for BW_window_gen to remove bad portions of the raw data at the start and stop of the waveform. The t_ref should be relatively close to its correct value for this to work properly so a preliminary check of the system time delay is needed. Use run_load_data.m to load raw data from a single strong target. Determine the time duration where the signal is present and then find the corresponding RF frequencies for this time axis. Adjust BW_window_guard so that the BW_window_gen function will choose an RF bandwidth in the good region.
    2. All systems: Estimate coherent noise and update parameter spreadsheets fields radar.wfs(wf).coh_noise_method, radar.wfs(wf).coh_noise_arg to remove coherent noise data during loading. This step may be skipped if coherent noise removal does not improve the data. When setting coherent noise removal parameters, it may be necessary to produce qlook data products without coherent noise removal first so that they can be compared with qlook data products produced with coherent noise removal. For complex coherent noise, several iterations testing different parameters may be necessary.
    3. All systems: Generate qlook products with coherent noise removal enabled. Track the surface if it has not been tracked yet.
    4. All systems: Estimate system time delay coefficients and update parameter spreadsheet fields radar.wfs(wf).Tadc (only radars without ADC time fields), radar.wfs(wf).Tadc_adjust (all radars), and radar.wfs(wf).t_ref (deramp-only) to correct time delays during loading. The GPS time offset/lag is also estimated and should be corrected in this step. For deramp-only systems, the signal nyquist zone is also set in this step and the check_surface.save_records_en parameter should be enabled during check_surface.m after the system time delay and GPS time offsets are fixed.
    5. Multichannel Only: Estimate receiver equalization coefficients, update parameter spreadsheet fields radar.wfs(wf).Tsys, radar.wfs(wf).chan_equal_dB, and radar.wfs(wf).chan_equal_deg to equalize data during loading.
    6. All Radars: Check radiation pattern if data were collected over a sufficiently flat and consistent scattering surface while rolling the aircraft. If not, this step can be skipped.
    7. All systems: Estimate burst noise and update parameter spreadsheets field radar.wfs(wf).burst_noise to compensate for burst noise during loading. (NOT IMPLEMENTED)
    8. Deramp systems: Estimate prepulse compression deconvolution and update parameter spreadsheets field radar.wfs(wf).prepulse_H to compensate for pre-pulse compression filter during data loading.
    9. Non-deramp systems: Estimate Fast Time Gain and update parameter spreadsheets field radar.wfs(wf).gain to compensate for fast time gain variation during loading.
    10. All systems: Estimate deconvolution impulse response. If deconvolution response is good, update parameter spreadsheets field radar.wfs(wf).deconv to deconvolve the data during loading.
    11. Radiometric calibration from deconvolution (NOT IMPLEMENTED FOR SAR) to radiometrically correct data during loading.
    12. Multi-image Only: Check image combination or relative calibration between images with img_combine_check.m to relatively correct different images that are combined during loading.
    13. Multichannel Only: Create and use Steering Vector Generation outputs; this will eventually be run after SAR processing (NOT FULLY IMPLEMENTED)

    Update output data products

    During post-processing, regenerate output data products with post-processed GPS data, and set the motion compensation parameters as:

    qlook.motion_comp = true;
    sar.mocomp.en = true;
    sar.mocomp.type = 2; % Default setting
    sar.mocomp.filter = {@butter,2,0.1}; % Default setting
    sar.mocomp.uniform_en = true; % Default setting when mocomp.en is true
    

    Usually the data should be over-sampled by two in the post processing:

    qlook.resample= [2 1; 1 1];
    array.ft_oversample = 2;
    


    See notes from Create output data products.

    Insert into OPS

    If using the Open Polar Server (OPS): Use the runOpsBulkInsert.m script to load data in bulk from the CReSIS filesystem into the OPS. Several user inputs should be set: settings.runType, settings.paramFn, settings.location, settings.sysName, and settings.layerDataPath. You may also set settings.pathSpacing (the default value is 15 m). To avoid an 'OutOfMemoryError: Java heap space' error when running this script, you may need to increase the Java heap size in MATLAB: open 'Preferences', go to the 'General' tab, select 'Java Heap Memory', increase the Java heap size to at least 384 MB using the slider, then click 'Apply' and 'OK'.
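    A hedged sketch of the user settings block for runOpsBulkInsert.m is shown below. The field names are the ones listed above; the values are placeholders and should be checked against the help text in the script.

    % Illustrative user settings for runOpsBulkInsert.m (placeholder values).
    settings = [];
    settings.runType       = 1;                                      % run type (see the script help for valid values)
    settings.paramFn       = 'rds_param_2018_Antarctica_Ground.xls'; % parameter spreadsheet filename (placeholder)
    settings.location      = 'antarctic';                            % 'arctic' or 'antarctic' (assumed values)
    settings.sysName       = 'rds';                                  % accum, kaband, kuband, rds, or snow
    settings.layerDataPath = 'CSARP_layerData';                      % layer data source (placeholder)
    settings.pathSpacing   = 15;                                     % path spacing in meters (default is 15 m)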

    Check OPS data

    If using the Open Polar Server (OPS): Use the opsCheckData.m script to check whether segments/frames have been inserted and whether the specified layers are picked. Three inputs should be specified: the system name, the param spreadsheet name, and the location. The script generates a report including the segment name, insertion flag, percentage of the surface layer picked, and percentage of the bottom layer picked.

    2D image analysis and posting

    See Posting Output Data Products section.

    3D/tomographic processing

    See Generating 3D Images, 3D Surface Tracking, and 3D DEM Generation. This step is only done for multichannel RDS radars (e.g. mcrds, mcords).

    Documentation

    Ensure that the final parameter spreadsheet is committed and the Processing Notes are updated.

    Creating GPS files

    The first step is to ensure the GPS and INS data are copied to the gRadar.data_support_path/SEASON directory. Once the files are copied there, use the make_gps_SEASON function to produce the YYYYMMDD.mat GPS files in csarp_support (YYYYMMDD is year, month, and day with zero padding). The date in the GPS filename needs to match the date of the data segments for which it will be used. If there are multiple flights and/or segments on the same day, there should be just one daily GPS YYYYMMDD.mat file for all of the data segments. The make_gps_SEASON function should ensure this unless the param.records.gps.fn field is used to override the default GPS filename. This sometimes requires concatenating raw GPS sources together to create a single daily file. All radars on a platform usually share the same daily GPS .mat file unless the param.records.gps.fn field is set.

    The "GPS .mat files" are stored in gRadar.support_path/gps/SEASON/YYYYMMDD_SS.mat files. To create these files, you need to get the GPS/INS information. For field processing this is often just a file containing the NMEA strings. For each season, a script called "make_gps_SEASON.m" should be created and place it in the cresis-toolbox/gps/missions/ repository directory. This script sets up all the parameters for a call to "gps_make.m" and should follow the same format as the other make_gps_SEASON functions in the directory. Any hand or manual corrections that need to be made to the GPS files to make them accurate must be codified and put into this script. The emphasis here is on automation! Please do not make hand corrected GPS files that require any scripts besides the make_gps_SEASON scripts.

    Often multiple GPS datasets will be available for one season. For example, you may use NMEA files for the field processing and then Applanix PPP corrected files for post processing. An example of the setup of a make_gps_SEASON.m script is shown below.

    For default operation, there should be ONE GPS file per day (e.g. even if two missions are flown on the same day, do not create 20131021a and 20131021b... just create 20131021). To merge multiple files using gps_make, set "in_fns{}" and "in_fns_ins{}" to a cell vector of each of the files that need to be merged in chronological order. If the gps-parameters are not the same for each file, then a cell array of gps-parameter structures can be passed in through "params{}" that should be the same length as the cell array of files. Sometimes custom combining is required due to overlap between the GPS files. In this case, it may be necessary to create files gps_YYYYMMDD_source1.mat and gps_YYYYMMDD_source2.mat and then combine them into gps_YYYYMMDD.mat after calling gps_make. The combine code should be placed at the end of the make_gps_SEASON.m script.

    Multiple input file example:

      in_fns{file_idx} = {fullfile(in_base_path,'GPS_20130921a.txt'), fullfile(in_base_path,'GPS_20130921b.txt')};
    

    Here is an example make_gps_SEASON.m file:

    % script make_gps_2013_antarctica_P3
    %
    % Makes the GPS files for 2013 Antarctica P3 field season
    
    tic;
    
    global gRadar;
    
    support_path = '';
    data_support_path = '';
    
    if isempty(support_path)
      support_path = gRadar.support_path;
    end
    
    gps_path = fullfile(support_path,'gps','2013_Antarctica_P3');
    if ~exist(gps_path,'dir')
      fprintf('Making directory %s\n', gps_path);
      fprintf('  Press a key to proceed\n');
      pause;
      mkdir(gps_path);
    end
    
    if isempty(data_support_path)
      data_support_path = gRadar.data_support_path;
    end
    
    % ======================================================================
    % User Settings
    % ======================================================================
    debug_level = 1;
    
    in_base_path = fullfile(data_support_path,'2013_Antarctica_P3');
    
    file_idx = 0; in_fns = {}; out_fns = {}; file_type = {}; params = {}; gps_source = {};
    sync_flag = {}; sync_fns = {}; sync_params = {};
    
    gps_source_to_use = 'ATM';
    if strcmpi(gps_source_to_use,'ATM')
      file_idx = file_idx + 1;
      in_fns{file_idx} = fullfile(in_base_path,'BD960_15Mar13_PPPK_P21Mar13.out');
      out_fns{file_idx} = 'gps_20130315.mat';
      file_type{file_idx} = 'Applanix';
      params{file_idx} = struct('year',2013,'month',03,'day',15,'time_reference','utc');
      gps_source{file_idx} = 'ATM-field';
      sync_flag{file_idx} = 0;
    
    elseif strcmpi(gps_source_to_use,'NMEA')
        file_idx = file_idx + 1;
        in_fns{file_idx} = fullfile(in_base_path,'20130426NMEA.TXT');
        out_fns{file_idx} = 'gps_20130426.mat';
        file_type{file_idx} = 'NMEA';
        params{file_idx} = struct('year',2013,'month',04,'day',26,'format',1,'time_reference','utc');
        gps_source{file_idx} = 'nmea-field';
        sync_flag{file_idx} = 1;
        sync_fns{file_idx} = get_filenames(in_base_path,'accum2_20130426_combined','','.gps');
        sync_params{file_idx} = struct('year',2013,'month',04,'day',26,'time_reference','utc','format',3);
    end
    
    % ======================================================================
    % Read and translate files according to user settings
    % ======================================================================
    gps_make;
    

    A couple of the radar systems store only radar time in the radar files (MCRDS, accum2, and arena at the time of writing). The gps_make script can only support a single such radar, and separate gps files are required if more than one radar uses this method of synchronization. The entries must be modified as follows to allow the synchronization information to be loaded as well; the sync_fns, sync_params, and sync_flag variables must be defined.

      file_idx = file_idx + 1;
      in_fns{file_idx} = fullfile(in_base_path,'accum2_20140212_170433.gps');
      out_fns{file_idx} = 'gps_20140212.mat';
      file_type{file_idx} = 'NMEA';
      params{file_idx} = struct('year',2014,'month',02,'day',12,'format',3,'time_reference','utc');
      gps_source{file_idx} = 'nmea-field';
      sync_flag{file_idx} = 1;
      sync_fns{file_idx} = get_filenames(in_base_path,'accum2_20140212','','.gps');
      sync_params{file_idx} = struct('year',2014,'month',02,'day',12,'time_reference','utc','format',3);
    

    Many types of GPS files are supported and the particular type is specified by the "file_type" variable. See gps_make_fh.m for a list. At the time of writing these are:

    • Applanix: Applanix .out binary files (a common format used by many groups)
    • arena: Remote Sensing Solutions Arena binary file
    • awi_netcdf (GPS only) or awi_netcdf+awi_netcdf (GPS and INS): Very general support for netcdf files containing GPS and INS data, including AWI netcdf GPS and INS files.
    • cresis: A custom ascii format produced by the Novatel Inertial Explorer Software (used for all cresis GPS collections starting with 2013_Antarctica_Basler)
    • DMSraw: Raw GPS/INS data produced by the DMS group from their Applanix system
    • General_ASCII: This is a very general interface for supporting many different ascii formats including CSV, tab delimited, etc.
    • Litton: Litton INS format from the ATM group
    • Litton_DGPS: Litton DGPS format from the ATM group
    • NMEA: Standard NMEA files as well as ones produced by MCRDS and accum2 which have special computer and radar time stamps at the end of the line
    • Novatel: GPS/INS data from CReSIS's Novatel system (new version is "cresis"; this is only for old seasons)
    • Reveal: NASA DC-8 GPS/INS system format
    • Novatel_RPYGGA: GPS/INS from CReSIS's Novatel system (new version is "cresis"; this is only for old seasons)
    • Traj: Another ATM group format (Traj+Litton, Traj+Litton_DGPS, and Traj+General_ASCII are supported for including INS data)
    • TXT: A text format from Sander's Geophysics group
    • csv: Supports two formats, one is the format output by the flight_tracker.m program from the cresis toolbox and the other is Canadian CSRS_PPP xlsx files that have been converted to CSV.

    GPS .mat files are described in the GPS File Guide page.

    To verify a GPS file (generally this should be done every day), use gps_plot.m. For more complete instructions on verification see GPS Files for Post Processing. For example:

    gps_plot('/cresis/snfs1/dataproducts/csarp_support/gps/2014_Alaska_TOnrl/gps_20130921.mat')
    

    Updating GPS

    Sometimes the GPS information needs to be updated. Common reasons are that the GPS product has improved or the GPS time offset (param spreadsheet->records worksheet->gps.time_offset) needs to be corrected. The steps are:

    1. Update the gps file by editing and rerunning make_gps_SEASON.m.
    2. Update the parameter spreadsheet with the correct records.gps.time_offset in the records worksheet.
    3. Run run_records_update.m. This will cause the records files to be updated with the newest GPS data in gRadar.support_path (e.g. csarp_support/gps/SEASON/gps_YYYYMMDD.mat). You can also run run_records_create.m to update the records file, but that is usually slower and will overwrite any other updates that have been made to the records file since it was created.
    4. When GPS data are updated, the processing should be rerun. For a faster temporary solution, modify and run update_data_files.m to update the data files, but these temporarily corrected data files should not be used for final data products.

    GPS Files for Post Processing

    1. Get DGPS/INS data. For IceBridge campaigns, Applanix .out files obtained from NASA's ATM group using PPP processing are the default files to use when available. The location on the plane that the trajectory data represent (often the location of the GPS antenna), the INS definitions, and the time reference (UTC or GPS) are often the same for each season, but this should be verified each time data are requested in case there were changes.
    2. Copy GPS data to /cresis/projects/metadata/SEASON and include a README.txt file in the directory to explain the files. The README.txt file should contain correspondence about the files and important information like the location on the plane that the trajectory data represent, the INS definitions for heading/roll/pitch, and the time reference.
    3. Update lever arms. Get the absolute location of the reference point that the GPS data were processed to. This position becomes the origin of the coordinate system used to define lever arm variables to each radar systems' phase centers. Update the lever_arm.m function with this information. Lever arms are specific to each system, season, and GPS source.
    4. Make GPS files for cresis-toolbox. Create and run a make_gps_SEASON.m file (see make_gps_2019_Greenland_P3.m for example). For the case of making GPS files from Applanix DGPS/INS data product, make the following variable assignments:  file_type{file_idx} = 'Applanix', gps_source{file_idx} = 'ATM-final_YYYYMMDD' where YYYYMMDD is the date that post-processing of the GPS files was completed or the day we received them from another group. Outputs have the form 'gps_YYYYMMDD.mat' and are stored at '.../csarp_support/gps/SEASON/'.
    5. Verify GPS files created above using gps_plot.m. This is important because the assumption of the processing code is that all trajectory and attitude problems are taken care of in the "gps_YYYYMMDD.mat" files which are generated by gps_make. Any problems that are not caught will directly affect the quality of the data products.
        [Figure: Example of high frequency ATM attitude noise and filtering.]
      • Check for high frequency INS errors. For example, Applanix data from the ATM and DMS teams on NASA OIB often need the following added to the end of the make_gps_SEASON file to filter the INS data:
        for idx = 1:length(file_type)
          out_fn = fullfile(gps_path,out_fns{idx});
          
          gps = load(out_fn);
          if regexpi(gps.gps_source,'atm')
            
            warning('Smoothing INS data: %s', out_fn);
            
            gps.roll = sgolayfilt(gps.roll,2,101); % Adjust filter length as needed to remove high frequency noise
            gps.pitch = sgolayfilt(gps.pitch,2,101); % Adjust filter length as needed to remove high frequency noise
            gps.heading  = sgolayfilt(gps.heading,2,101); % Adjust filter length as needed to remove high frequency noise
             
            save(out_fn,'-append','-struct','gps','roll','pitch','heading');
          end
        end
        
      • Check for non-positive time jumps and large gaps and add code to the make_gps_SEASON script to fix these. The gps_time variable must be monotonically increasing. For example, the DC-8 Reveal system often needs the following added to the make_gps_SEASON file:
        for idx = 1:length(file_type)
          out_fn = fullfile(gps_path,out_fns{idx});
          
          gps = load(out_fn);
          if regexp(gps.gps_source,'reveal')
            
            warning('Fixing non-monotonic GPS data in reveal file: %s', out_fn);
            
            [gps,error_flag] = make_gps_monotonic(gps);
            
            if error_flag
              save(out_fn,'-append','-struct','gps');
            end
          end
        end
        
      • Check for nonphysical trajectories and attitudes and add code to the make_gps_SEASON script to fix these.
      • For some data sources such as GPS data stored by Arena digital system, it is useful to extrapolate the data a little at the beginning and end to account for radar data being collected just before and just after the GPS data.
          gps = load(out_fn,'gps_source');
          if regexpi(gps.gps_source,'arena')
            % Extrapolation is necessary because GPS data starts after/stops before
            % the beginning/end of the radar data.
            warning('Extrapolating arena GPS data: %s', out_fn);
            gps = load(out_fn);
            
            if length(gps.lat) >= 2
              new_gps_time = [gps.gps_time(1)-10, gps.gps_time,gps.gps_time(end)+10];
              gps.lat = interp1(gps.gps_time,gps.lat,new_gps_time,'linear','extrap');
              gps.lon = interp1(gps.gps_time,gps.lon,new_gps_time,'linear','extrap');
              gps.elev = interp1(gps.gps_time,gps.elev,new_gps_time,'linear','extrap');
              gps.roll = interp1(gps.gps_time,gps.roll,new_gps_time,'linear','extrap');
              gps.pitch = interp1(gps.gps_time,gps.pitch,new_gps_time,'linear','extrap');
              gps.heading = interp1(gps.gps_time,gps.heading,new_gps_time,'linear','extrap');
              gps.gps_time = new_gps_time;
              
              save(out_fn,'-append','-struct','gps','gps_time','lat','lon','elev','roll','pitch','heading');
            end
          end
        
      • NMEA Data which only has 1-m elevation resolution should be smoothed. For example, the Arena digital system usually stores NMEA data and here is example code to apply the elevation smoothing:
          
          gps = load(out_fn,'gps_source');
          if ~isempty(regexp(gps.gps_source,'arena'))
            gps = load(out_fn,'elev');
            warning('Filtering elevation: %s', out_fn);
            gps.elev = fir_dec(gps.elev,ones(1,101)/101,1);
            save(out_fn,'-append','-struct','gps','elev');
          end
        
    6. Update records files. See run_records_update.m, which updates the records files with the new GPS files.

    Lever Arms

    The lever arm is the vector which points from the position on the aircraft where GPS data are processed (usually the IMU, but it can also be a GPS antenna or sensor if the GPS data were processed with a lever arm) to the phase centers of the radar antennas. There are two sets of vectors: one vector for each transmit phase center and another for each receive phase center. The coordinate system that is used is explained in "lever_arm.m". Methods for entering lever arms are also detailed in lever_arm.m, and usually there is a unique lever arm for each (GPS source, radar, season) set. If there are different receive antenna configurations, each receive configuration may need a separate phase center listed in lever_arm.m. Sometimes different GPS sources are used in one season; in this case, multiple entries for the GPS location in lever_arm.m would be required to handle the processing, and each GPS file's gps_source field would need to indicate the appropriate GPS source.

    For example: Let us say there are two receiver configurations. Four channels are individually sampled in one configuration. In the other configuration the four channels are combined and then the combined result is sampled. This would require 5 phase centers (also known as rx_paths in the radar worksheet): one for each of the four channels and one for the combination of all channels.
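    A hypothetical lever arm entry for this two-configuration example is sketched below. The variable names LArx/LAtx and the column-per-phase-center layout follow the style used in lever_arm.m, but the coordinate convention, signs, and actual structure must be taken from the comments in lever_arm.m itself; all numbers here are made up.

    % Hypothetical phase center table: columns are rx_paths, rows are [x; y; z]
    % offsets in meters from the GPS trajectory reference point (sign conventions
    % and coordinate system must be checked against lever_arm.m).
    LArx = zeros(3,5);
    LArx(:,1) = [-5.2; -1.5; 1.2];   % receive channel 1
    LArx(:,2) = [-5.2; -0.5; 1.2];   % receive channel 2
    LArx(:,3) = [-5.2;  0.5; 1.2];   % receive channel 3
    LArx(:,4) = [-5.2;  1.5; 1.2];   % receive channel 4
    LArx(:,5) = mean(LArx(:,1:4),2); % combined configuration: average of the four feeds (5th rx_path)
    LAtx = LArx(:,1:4);              % transmit phase centers (example: same antennas used to transmit)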

    Measuring Lever Arms

    The software needs to know the location of the radar antenna phase centers. For regular dipoles, this is at the central feed point. For dipoles in an array with a single feed, this is the center of the array or average of all the dipole feed locations. For dipoles mounted on a metal surface, it is the center of the feed projected onto the metal surface (from image theory we know that the phase center will actually lie on the metal surface). For horn antennas it is approximately in the center of the waveguide feed as it begins to widen.

    The distances between the GPS antenna phase center and the radar antenna phase center(s) must be known in the IMU coordinate system. For an IMU to work properly, you must also know the position and orientation of the IMU relative to the GPS antenna phase center, and roll, pitch, and heading biases should be accounted for in the make_gps_SEASON script.

    Typically this is done with a total station survey. Optimally, we would also measure the IMU position and orientation (e.g. 3 points on the lid) in this survey. However, the IMU is sometimes out of sight and we have to rely on less accurate methods to measure its lever arm to the GPS antenna.

    It is critical that the IMU orientation be known. Typically this is done by ensuring the survey of the GPS and radar antennas can be transformed to the aircraft body coordinate system (x=forward, y=right, z=down) or something parallel to it. The IMU is usually mounted parallel to the aircraft body coordinate system so its coordinate system matches. One way to do this is to measure points along the center line of the plane and then three points on an object whose orientation is aligned with the IMU (e.g. a rack inside the cabin which can be seen from outside of the plane where the survey is being done). The larger the separation between the points, the more accurate the orientation will generally be. For example: 1) measure the front and back of the plane for the most accurate center line measurement, 2) measure two points on a rack separated in elevation by as much as possible by opening an aircraft door, 3) measure the same height on two separate racks to increase the separation. The measurements for the IMU (which is usually not visible from the outside) can then be done relative to the coordinate system defined by the points on the racks.

    Posting Output Data Products

    After creating all the CSARP_* data products that are required for post processing, the following steps should be followed:

    Step 1. Check the Radar System Time Delay, GPS time offsets, and errors in the Nyquist zone

    This step is described in the System time delay section found under Calibration and In Flight Routines.

    Errors in the radar system time delay cause a constant vertical shift or bias in the echogram time axis which results in a constant elevation error bias. System time delay values should be set so that the surface radar echo aligns with the actual surface location. Pulsed systems (rds, accum) use Tadc_adjust for this large scale adjustment (applied to all receive paths equally and applied through an offset to the time axis). For some old MCoRDS datasets, Tadc also has to be set manually in the parameter spreadsheet to account for the ADC settings because the setting was not stored by the digital system. FMCW systems (snow, kuband, kaband) use Tsys. The pulsed systems (rds, accum) use Tsys for small scale adjustments. The small scale adjustments provide a separate control for each receive path and are applied with the pulse compression matched filter. These relative adjustments should have been determined during receiver equalization and not during this step. runOpsLoadLayers is a good function for loading and comparing surface values to determine the system time delay. A quick check for radars is to use the surface multiple(s), but ground control is the preferred method because it is more accurate. The most convenient ground control is a comparison with LIDAR data (e.g. ATM) or surface DEM data (e.g. GIMP, BEDMAP2).
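    For example, when comparing against LIDAR ground control, the expected surface two way travel time can be computed from the radar phase center elevation and the LIDAR surface elevation; a constant offset between this and the radar-tracked surface indicates a system time delay error. The sketch below is illustrative and all variable names are assumptions.

    % Illustrative comparison of the radar-tracked surface against LIDAR ground control.
    c = 299792458;                                  % speed of light in vacuum (m/s)
    % elev: radar phase center elevation (m, WGS-84); lidar_surf: LIDAR surface elevation (m, WGS-84);
    % surface_twtt: radar-tracked surface two way travel time (s); all interpolated to the same positions.
    twtt_expected = 2*(elev - lidar_surf)/c;        % expected surface two way travel time (s)
    twtt_error    = surface_twtt - twtt_expected;   % a constant component indicates a time delay error
    fprintf('Mean system time delay error: %.2f ns\n', 1e9*mean(twtt_error,'omitnan'));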

    One common issue with the deramp on receive data (kaband, kuband, and snow radars) is that the wrong Nyquist zone gets selected (wrong selection on IF filter bank). Side note: With the newer CReSIS deramp on receive radars (2013+ snow radar and kuband radar), digitally controlled filter banks are used and the Nyquist zone is usually correct. Incorrect Nyquist zones are identified by upside down echograms or by large errors in the surface two way travel time. You can override the Nyquist zone by setting the records.settings.nyquist_zone in the records file. This allows record level control over the Nyquist zone. An automated function for doing this is run_check_surface.m. This allows either lidar data loaded from opsLoadLayers.m (ATM or AWI LIDAR) or surface/land digital elevation model (GIMP, DTU mean sea surface, and Bedmap 2) data to be used to set the Nyquist zone. There are two separate settings that must be correct: 1) the hardware Nyquist zone which tells the processing which IF filter was chosen and 2) the signal Nyquist zone which tells which Nyquist zone the signal actually came in. Ideally these two settings are the same for each record, but it is possible that the wrong hardware filter was selected. This commonly occurs when the surface elevation is changing rapidly (e.g. in mountains and fjords).

    Step 2. Fix Surface Tracking Errors

    To correct surface tracking errors, usually re-running tracking with run_layer_tracker.m (echogram_source set to the standard SAR processed data product) and then hand correcting remaining frames is sufficient. Hand correction can be done with imb.picker. Automated surface tracking parameters in qlook worksheet should be updated if they are not tracking the surface very well. run_layer_tracker.m runs the same tracker as qlook and can be used to fine tune these parameters. Note that run_layer_tracker.m has a mode for initializing the surface with the GIMP/Geoid which is helpful when the surface is very hard to track due to a poor quality signal.

    imb.picker can be used for layer information stored in OPS or stored in CSARP_layerData files. run_layer_tracker.m can work with any layer storage location (ops, CSARP_layerData, records, or echogram files).

    Important: Once surface has been corrected:

    1. If using OPS, copy the new surface from ops to records and CSARP_layerData using runOpsCopyLayers.m
    2. If not using OPS, copy the new surface from CSARP_layerData to records using runOpsCopyLayers.m
    3. Rerun SAR processing on all echograms where substantial corrections were done and recreate CSARP_standard and CSARP_mvdr data products.
    4. In the parameter spreadsheets:
      1. Set param.qlook.surf.en to false. This prevents qlook from overwriting the hand corrected surface tracking.
      2. Set param.qlook.img_comb_layer_params to struct('name','surface','source','layerdata','layerdata_source','layerData'). This is only necessary if img_comb is used and forces the image combination step to use the hand corrected surface for combining images rather than the automatically tracked version.
      3. Set param.array.img_comb_layer_params to struct('name','surface','source','layerdata','layerdata_source','layerData'). This is only necessary if SAR processing is done and img_comb is used. It forces the image combination step to use the hand corrected surface for combining images rather than the surface that is passed in from the SAR processed files (which may be old). (There is currently no method of updating the surface layer in the SAR processed files besides re-running SAR processing, because the surface stored in the SAR files is supposed to represent the surface used during SAR processing to account for refraction and therefore should not be updated without rerunning SAR processing.)
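
    A sketch of the layer copy in steps 1 and 2 is shown below. The copy_param field names follow the examples in runOpsCopyLayers.m as best recalled here and should be treated as assumptions; consult that script for the authoritative set of options.

    param = struct('radar_name','mcords5','season_name','2015_Greenland_Polar6','day_seg','20150730_01');
    copy_param = [];
    copy_param.layer_source.name   = 'surface';
    copy_param.layer_source.source = 'layerdata';     % or 'ops' if OPS is in use
    copy_param.layer_source.layerdata_source = 'layerData';
    copy_param.layer_dest.name     = 'surface';
    copy_param.layer_dest.source   = 'records';
    copy_param.copy_method         = 'overwrite';     % assumed option name
    copy_param.gaps_fill.method    = 'interp_finite'; % assumed option name
    opsCopyLayers(param,copy_param);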

    It can be helpful to insert a DEM or LIDAR layer to compare to the radar surface into the layer data source (OPS or CSARP_layerData) when using imb.picker. Use runOpsInsertLayer.m to do this. runOpsLoadLayers.m has examples of comparing these layers once they are inserted.

    run_check_surface.m should be run at this point to fine tune the system time delay. It can also be used to do comparisons. It uses a reference layer (usually LIDAR) and DEM data to compare with radar data. For deramp on receive systems, it is used to correct the Nyquist zone settings on a per record basis. Any frames that have their Nyquist zone modified, should be reprocessed.

    Step 3: Compress Echograms (kuband, snow, kaband only)

    This step is for FMCW radars only and is run from run_compress_echograms.m. The surface is tracked and the data are truncated according to the posting depth_axis field.

    Step 4. Marking artifacts and bad frames (optional for all radars, required for sea ice kuband, snow, kaband)

    Optional for all radars: You can mark bad frames using the proc_mode variable in the frames file (see Frame file guide), but this is generally not done. The script run_quality_control.m is an interactive tool for doing this. It is a simple command line tool for browsing and marking frames. A base 10 mask is used to mark and organize the frames and the frm_types field in the parameter spreadsheets gives control over how certain frames are processed and posted. The most common values are "00" for good frame and "10" for bad frame (second decimal digit indicates good/bad data frame).

    For sea ice segments with kuband, snow, and kaband: These segments should have artifacts identified and these artifacts should be recorded in the "quality" field of the frames file. You can mark artifacts in the data with the run_quality_control.m script. One of the output NSIDC data products is a file containing which artifacts are seen in each frame.

    Step 5: Check Data Products

    Modify run_check_data_products.m for your radar, season, and output files and then run with the output post directory set to an empty string (since posting has not been done yet). This script will tell you if there are any missing files and if there are any files that should not be in good segment directories. All files that should not be there will be removed if delete_bad_files is set to true. One way to check for missing files is to copy and paste the output of the script into an editor such as Notepad++ and search for an exclamation mark "!". Since all errors are followed by exclamation marks, this will allow you to search for each error. It is often best to run check_data_products twice with delete_bad_files set to true: the first run removes all the bad files, so the output from the second run will only have missing-file errors since the bad-file errors will have been taken care of.

    To run for the whole season, set enable_all_without_do_not_process to true. To check a subset of segments, set enable_all_without_do_not_process to false and use the generic flag in the parameter spreadsheet.

    Run records_check.m and frames_check.m for the whole season to ensure there are no problems with the records and frames files.

    Where possible, check results against previous measurements (e.g. repeat flight tracks from previous seasons) and other instruments (surface elevation from LIDAR).

    Step 6: Pick Layers and Quality Control Data Products

    The steps below are for the radar depth sounder (or any radars where layer picking is required). Currently, only the surface is tracked for accum, snow, and kuband, and a simple spreadsheet plus JPG images is probably sufficient for quality control of those systems.

    Create layer data or insert into the OPS

    If using OPS to store layer data, use runOpsBulkInsert.m. You can use imb.picker to browse and track layers in the data.

    If not using OPS, you will store the layer data in layerData files and use run_picker.m to browse and track layers in the data. The qlook.m surface tracking process should have created layerData files already. It is very important that the layerData files exist at the final desired resolution. If the qlook output is at the final desired resolution, then the layerData files generated during qlook.m surface tracking are fine. If the SAR processed data are finer resolution, then use runOpsCopyLayers.m to copy the layerData: first move CSARP_layerData to CSARP_layerData_qlook, then copy CSARP_layerData_qlook to CSARP_layerData (this directory should not exist yet or the copy will not have the desired effect) with echogram_source set to the SAR processed data (usually CSARP_standard). You may wish to retrack the surface using run_layer_tracker.m with CSARP_standard. You can also create layerData files directly with run_make_layer_files.m. The default settings overwrite existing layer files, so be careful with the settings and ensure backups are made if needed. Typical settings that need to be changed are:

    param.param_fn = ct_filename_param('RADAR_param_SEASON.xls'); % Set "generic" column for segments that you want to create layer data for
    param.echogram_input = 'standard'; % Usually standard for accum, RDS and qlook for other radar types
    

    Create a new picking/tracking spreadsheet for layer tracking

    1. Add a new worksheet for the season. For example, name it ‘2006_Greenland_TO_picking’ for picking season ‘2006_Greenland_TO’. Contact cresis_data@cresis.ku.edu for access to the Google Spreadsheet that is used by CReSIS.
    2. Add the following fields in this spreadsheet.
    Fields in the picking spreadsheet.png

    3. Add the specific segments/frames to be picked in Column A. If the whole season is to be picked, use create_picking_spreadsheet.m with the ascii (default) option: in the parameter spreadsheet for the season, set the generic field to 1 or true for the segments that should be picked. Sometimes the data in a specific glacier or area needs to be picked. In this case, first obtain the area of interest (one polygon), for example from the OPS Geoportal, and then call opsgetFramesWithinPolygon.m in MATLAB to get all the frames within this polygon.

    4. When you select frames to pick (preferably select an entire segment at a time), put your OPS username in the "owner" column for those frames. When you are done fixing the automated surface tracker, put a "1" in the surface column. When you are done picking the bottom, put a "1" in the bottom column. When you are done with your self QC, put a "1" in the "Self QC Complete" column. If you are assigned a QC role, put your name next to frames that you will QC and that have been self QC'd. After you QC them (which will be the second time that the frame is QC'd), put a "1" in the "QC done" column. If there are any problems or questions you encounter when you pick the data, please add them to the "QC Notes" column.
    Picking spreadsheet.png


    Browse the output data products with the picker and correct surface and bottom picks. For details on running the picker, please see Data Picking Tutorial.

    Step 7: Post Files to the CSARP_post Directory

    Post the files by making a local copy of run_post, setting the parameters of the local copy, and then running it.

    For the non-RDS radars, post_csv_kml.m should be run.

    For the FMCW (snow/kuband) radars, the posting should use ASCAT imagery (map.type is "ascat" in the posting worksheet) or Natural Earth Arctic imagery if ASCAT imagery is not available.

    Step 8: Check Data Products in Posting Directory

    Run run_check_data_products.m on the posting directory by setting the output posting directory. To indicate directories that should not be posted, add the string "do not process" to the cmd.notes field in the parameter spreadsheet.

    Step 9: Copy files to archival site

    To copy/update files to the archive in the field:

    Run backup_data_products_in_field.m

    To move files to the data.cresis.ku.edu ftp site:

    First, make sure permissions are correct. If some files are not owned by you and do not have the right permissions, please ask helpdesk@cresis.ku.edu to run these commands for you.

    # Set user/group/other permissions
    chmod u=rwx,g=rwx,o=rx -R /cresis/snfs1/dataproducts/ct_data/RADAR/SEASON/
    # Set group to mdce
    chgrp mdce -R /cresis/snfs1/dataproducts/ct_data/RADAR/SEASON/
    # Set the set-group-ID bit on directories
    find /cresis/snfs1/dataproducts/ct_data/RADAR/SEASON/ -type d -exec chmod g+s {} \;
    

    Second, move data:

    # As the data user (run "su data" first), move the old folder if it exists and create a new one
    mkdir /cresis/snfs1/dataproducts/public/data/RADAR/old_versions
    mv /cresis/snfs1/dataproducts/public/data/RADAR/SEASON /cresis/snfs1/dataproducts/public/data/RADAR/old_versions
    mkdir /cresis/snfs1/dataproducts/public/data/RADAR/SEASON
    chmod o+rwx /cresis/snfs1/dataproducts/public/data/RADAR/SEASON
    
    # The user who owns the folders in ct_data should do the following
    mv /cresis/snfs1/dataproducts/ct_data/RADAR/SEASON/CSARP_post/* /cresis/snfs1/dataproducts/public/data/RADAR/SEASON/
    rmdir /cresis/snfs1/dataproducts/ct_data/RADAR/SEASON/CSARP_post/
    
    # For snow/kuband, also move the CSARP_noise and CSARP_deconv folders to the public season folder
    mv /cresis/snfs1/dataproducts/ct_data/RADAR/SEASON/CSARP_noise /cresis/snfs1/dataproducts/public/data/RADAR/SEASON/
    mv /cresis/snfs1/dataproducts/ct_data/RADAR/SEASON/CSARP_deconv /cresis/snfs1/dataproducts/public/data/RADAR/SEASON/
    
    # If there are any files left in the season folder on ct_data, move them to a folder for deletion
    mkdir /cresis/snfs1/dataproducts/ct_data/RADAR/SEASON/deleteThis 
    mv /cresis/snfs1/dataproducts/ct_data/RADAR/SEASON/* /cresis/snfs1/dataproducts/ct_data/RADAR/SEASON/deleteThis/
    
    # Do NOT delete the "deleteThis" folder. This will be done during a scheduled cleanup after file owners have been notified.
    
    # The season folder on ct_data should now be empty except for deleteThis; add a symbolic link to the new CSARP_post location
    ln -s /cresis/snfs1/dataproducts/public/data/RADAR/SEASON/ /cresis/snfs1/dataproducts/ct_data/RADAR/SEASON/CSARP_post 
    
    # After verifying new data products are good and if the old version is not needed, remove the old folder as the data user
    rm -rf /cresis/snfs1/dataproducts/public/data/RADAR/old_versions/SEASON
    

    Then send an email to helpdesk@cresis.ku.edu asking them to change the owner of the files in the new directory:

    chmod o-w -R /cresis/snfs1/dataproducts/public/data/RADAR/SEASON
    chown data:data -R /cresis/snfs1/dataproducts/public/data/RADAR/SEASON
    

    Step 10: Post to NSIDC

    For posting to NSIDC, follow the NSIDC posting guide.

    Step 11: Update KML file links and coverage maps (currently just RDS)

    From the public "rds" directory (e.g. for CReSIS this is "/cresis/dataproducts/public/data/rds"), run the script "./create_kml_dir" as user "data".

    Run coverage_maps_master_fig_files.m after adding the new season to the script in the right category (greenland/canada or antarctica). The script needs to be run for each location that data were collected in (e.g. Canada, Greenland, Antarctica). This script creates an interactive figure file with all the data collected in the specified region.

    Run make_coverage_maps.m after adding the new season in the right category (greenland/canada or antarctica). The script needs to be run for each location that data were collected in (e.g. Canada, Greenland, Antarctica). This script creates an image for the coverage of the individual season.

    Startup.m: File Locations and Cluster Setup

    The startup.m file creates the global gRadar variable, sets the path, and lists hidden dependencies to the cluster_compile.m function. Note that "clear all" and "clear global gRadar" will remove gRadar completely from memory and path(pathdef) will reset the path.

    Scheduler, Paths, and Hidden Dependencies (File Locations)

    The gRadar fields are listed below; a minimal startup.m sketch follows the list:

    • path_override: This folder path is first in the path so it overrides all other folders in the path. Use this directory to store temporary/personal files.
    • path (ct_path): This is the location of the cresis toolbox.
    • param_path: This is the location of your parameter spreadsheets repository (ct_params.git).
    • tmp_file_path: This is the location where personal temporary files will be stored by the cresis toolbox.
    • ct_tmp_file_path: This is the location of shared temporary files.
    • data_path: This is the location of raw radar data. This is generally not used and the records.base_path is expected to give the full path always.
    • data_support_path: Points to the raw "metadata" directory (radar system measurements, GPS/INS data, etc).
    • support_path: Points to the "csarp_support" directory. The raw radar data and raw metadata files are converted into standard formats and stored in csarp_support in gps, records, and frames files.
      • gps/SEASON/gps_YYYYMMDD.mat
      • records/RADAR_NAME/SEASON/records_YYYYMMDD_SS.mat
      • frames/RADAR_NAME/SEASON/frames_YYYYMMDD_SS.mat

      SEASON is YYYY_LOCATION_PLATFORM (e.g. 2011_Greenland_P3)
      RADAR_NAME is the system name given by ct_output_dir (e.g. accum, kaband, kuband, rds, or snow)
      YYYYMMDD is zero padded year month day
      SS is zero padded segment

    • out_path: The output data products are stored here
      • CSARP_qlook: Quick look files (output of qlook.m)
      • CSARP_deconv: Quick look with deconvolution files (output of qlook.m)
      • CSARP_analysis: Data analysis files (output of analysis.m):
        • Coherent noise files (output of analysis for coherent noise tracking)
        • Equalization files (output of analysis for surface/equalization tracking)
        • Specular leads and deconvolution files (output of analysis for specular targets)
      • CSARP_sar: SAR processing files (output of sar.m)
      • CSARP_standard: SAR processed and array processed outputs using standard (periodogram) array processing (output of array.m)
      • CSARP_mvdr: SAR processed and array processed outputs using Minimum Variance Distortionless Response (MVDR) array processing (output of array.m)
      • CSARP_music: SAR processed and array processed outputs using Multiple Signal Classification (MUSIC) array processing (output of array.m)
      • CSARP_post: Posted files (output of post.m)
    • gis_path: Points to the GIS directory with the standard locations for GIS files that the cresis toolbox uses.
    • cluster: Structure containing default cluster (parallel processing) settings
      • NEED TO COMPLETE THIS SECTION
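
    A minimal startup.m sketch using the fields above is shown below. The path values are illustrative, the cluster settings are omitted, and the exact field names (e.g. path vs. ct_path) should be checked against the startup template provided with the toolbox setup.

    % Minimal startup.m sketch (illustrative paths, no cluster settings).
    global gRadar;
    gRadar.path_override     = '/users/USERNAME/scripts/matlab/';
    gRadar.path              = '/users/USERNAME/scripts/cresis-toolbox/cresis-toolbox/';
    gRadar.param_path        = '/users/USERNAME/scripts/params/';
    gRadar.tmp_file_path     = '/cresis/snfs1/scratch/USERNAME/mdce_tmp/';
    gRadar.ct_tmp_file_path  = '/cresis/snfs1/dataproducts/ct_data/ct_tmp/';
    gRadar.data_path         = '/cresis/snfs1/data/';
    gRadar.data_support_path = '/cresis/snfs1/dataproducts/metadata/';
    gRadar.support_path      = '/cresis/snfs1/dataproducts/csarp_support/';
    gRadar.out_path          = '/cresis/snfs1/dataproducts/ct_data/';
    gRadar.gis_path          = '/cresis/snfs1/dataproducts/GIS_data/';
    addpath(genpath(gRadar.path));   % toolbox commands
    addpath(gRadar.path_override);   % added last so that it is first in the path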

    File Paths

    Please use the ct_filename_*.m functions for creating file paths in the cresis toolbox. These functions are designed to take the parameter structure from a parameter spreadsheet read in with read_param_xls.m. Also, always use fullfile.m when creating paths to ensure cross platform compatibility.

    • ct_filename_ct_tmp: Function for creating shared temporary file paths (gRadar.ct_tmp_file_path).
      param = struct('radar_name','mcords5','season_name','2015_Greenland_Polar6','day_seg','20150730_01');
      ct_filename_ct_tmp(param,'','headers', fullfile('chan1', ['test' '.mat']))
      
      /cresis/snfs1/dataproducts/ct_data/ct_tmp/headers/rds/2015_Greenland_Polar6/test_20150730_01.mat
      
    • ct_filename_data: Function for creating raw radar file paths (gRadar.data_path).
      Use get_segment_file_list.m rather than directly calling this function.
    • ct_filename_gis: Function for creating GIS directory paths (gRadar.gis_path).
      ct_filename_gis([],'world\egm96_geoid\WW15MGH.DAC')
      /cresis/snfs1/dataproducts/GIS_data/world/egm96_geoid/WW15MGH.DAC
      
    • ct_filename_out: Function for creating data product file paths (gRadar.out_path).
      param = struct('radar_name','mcords5','season_name','2015_Greenland_Polar6','day_seg','20150730_01');
      ct_filename_out(param,'CSARP_post/mvdr','')
      
      /cresis/snfs1/dataproducts/ct_data/rds/2015_Greenland_Polar6/CSARP_post/CSARP_mvdr/20150730_01
      
      param = struct('radar_name','mcords5','season_name','2015_Greenland_Polar6','day_seg','20150730_01');
      ct_filename_out(param,'qlook','')
      
      /cresis/snfs1/dataproducts/ct_data/rds/2015_Greenland_Polar6/CSARP_qlook/20150730_01
      
      param = struct('radar_name','mcords5','season_name','2015_Greenland_Polar6','day_seg','20150730_01');
      ct_filename_out(param,'noise','',1)
      
      /cresis/snfs1/dataproducts/ct_data/rds/2015_Greenland_Polar6/CSARP_noise
      

      Two further examples create the full path to a combined echogram and to a sub-image echogram:

      param = struct('radar_name','mcords5','season_name','2015_Greenland_Polar6','day_seg','20150730_01');
      frm = 1;
      fullfile(ct_filename_out(param,'qlook',''),sprintf('Data_%s_%03d.mat', param.day_seg, frm))
      
      frm = 1; img = 1;
      fullfile(ct_filename_out(param,'qlook',''),sprintf('Data_img_%02d_%s_%03d.mat', img, param.day_seg, frm))
      
    • ct_filename_param: Function for creating parameter spreadsheet file paths (gRadar.param_path).
      ct_filename_param('rds_param_2011_Greenland_P3.xls');
      
      /users/paden/scripts/params/rds_param_2011_Greenland_P3.xls
      
    • ct_filename_support: Function for creating csarp_support file paths (gRadar.support_path).
      param = struct('radar_name','mcords5','season_name','2015_Greenland_Polar6','day_seg','20150730_01');
      ct_filename_support(param,'','records')
      
      /cresis/snfs1/dataproducts/csarp_support/records/rds/2015_Greenland_Polar6/records_20150730_01.mat
      
      param = struct('radar_name','mcords5','season_name','2015_Greenland_Polar6','day_seg','20150730_01');
      ct_filename_support(param,'','gps',1)
      
      /cresis/snfs1/dataproducts/csarp_support/gps/2015_Greenland_Polar6/gps_20150730.mat
      
    • ct_filename_tmp: Function for creating personal temporary file paths (gRadar.tmp_file_path).
      param = struct('radar_name','mcords5','season_name','2015_Greenland_Polar6','day_seg','20150730_01');
      ct_filename_tmp(param,'','headers', fullfile('chan1', ['test' '.mat']))
      
      /cresis/snfs1/scratch/paden/mdce_tmp/headers/rds/2015_Greenland_Polar6/test_20150730_01.mat
      

    File Guides

    File Type Location Functions to Access
    Coherent Noise gRadar.out_path/RADAR_NAME/SEASON_NAME/CSARP_analysis/coh_noise_simp_YYYYMMDD_SS_wf_W_adc_A.mat ct_filename_out(param,'analysis')
    CSV (Layer) gRadar.out_path/RADAR_NAME/SEASON_NAME/CSARP_post/csv ct_filename_out(param,'post')
    DEM gRadar.out_path/RADAR_NAME/SEASON_NAME/CSARP_DEM ct_filename_out(param,'DEM')
    Echogram gRadar.out_path/RADAR_NAME/SEASON_NAME/CSARP_$type ct_filename_out(param,$type), load_L1B
    Frames gRadar.support_path/frames/RADAR_NAME/SEASON_NAME ct_filename_support(param,'','frames'), frames_load(param)
    GPS gRadar.support_path/gps/param.SEASON ct_filename_support(param,'','gps',true), gps = load(gps_fn)
    GPS (Raw) gRadar.data_support_path/SEASON_NAME/* read_gps_*.m, make_gps_SEASON_NAME.m
    Layer Data (2D) gRadar.out_path/RADAR_NAME/SEASON_NAME/CSARP_layerData ct_filename_out(param,'layerData'), layerdata.m
    XML (National Instruments) gRadar.data_path/* read_cresis_xml
    Parameter Spreadsheet gRadar.param_path ct_filename_param('SYSTEM_param_SEASON.xls'), read_param_xls
    Raw Radar Data gRadar.data_support_path/SEASON get_segment_file_list, get_raw_files
    Records gRadar.support_path/records/RADAR_NAME/SEASON_NAME/records_YYYYMMDD_SS.mat ct_filename_support(param,'','records'), records_load(param)
    SAR gRadar.out_path/RADAR_NAME/SEASON_NAME/CSARP_sar/YYYYMMDD_SS/ ct_filename_out(param,'out'), load_sar_data
    Surface Data (3D) gRadar.out_path/RADAR_NAME/SEASON_NAME/CSARP_surfData ct_filename_out(param,'surfData'), tomo.surfdata.m
    Tracks gRadar.support_path/tracks/track_LOCATION_SEASON_NAME.mat ct_filename_support(param,'','tracks'), track_load(param)
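
    The sketch below ties several of these access functions together for one segment: it builds the param struct the same way the ct_filename_* examples above do, loads the records and frames files, and then loads and displays one echogram frame. The frame number is illustrative and records_load/frames_load are assumed to return the loaded structures.

    param = struct('radar_name','mcords5','season_name','2015_Greenland_Polar6','day_seg','20150730_01');
    records = records_load(param);   % trajectory, gps_time, raw file offsets
    frames  = frames_load(param);    % frame boundaries for the segment
    
    frm = 1;                         % illustrative frame number
    echo_fn = fullfile(ct_filename_out(param,'qlook',''), ...
      sprintf('Data_%s_%03d.mat', param.day_seg, frm));
    mdata = load_L1B(echo_fn);       % echogram (Data, Time, GPS_time, ...)
    imagesc(1:size(mdata.Data,2), mdata.Time*1e6, 10*log10(mdata.Data));
    xlabel('Range line'); ylabel('Two way travel time (us)');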

    Radar Guides

    This general classification of radars is used to distinguish the various radar sounders. The categories roughly follow the different radar systems developed at CReSIS. The "boundaries" between radars are not black and white. For example, a radar used primarily for accumulation mapping could have a frequency range below 500 MHz, so the categorization should consider both the intended purpose of the radar and the frequency range, and accept that some radars belong in more than one category. The ct_output_dir.m command returns the radar classification information.

    Radar Frequency Range1 Description2
    Accumulation Radar 500 to 2000 MHz Detecting shallow layers in the top 100's of meters. Usually a pulsed direct capture radar because of the long time gate and narrow bandwidth.
    Radar Depth Sounder 1 to 500 MHz Detecting shallow to deep layers and the ice bottom. Usually a pulsed direct capture radar because of the long time gate and narrow bandwidth.
    Snow Radar 2000 to 8000 MHz Detecting shallow layers in the top 10's of meters. Deeper penetration and less clutter and signal extinction in wet snow conditions. More recent versions of the CReSIS Snow Radar extend from 2000 to 18000 MHz. Usually a deramp on receive radar because of the wide bandwidth and short time gate.
    Ku-band Radar 12000 to 18000 MHz Detecting shallow layers in the top 10's of meters. Comparable to satellite altimeters (13.25-13.75 GHz band). Usually a deramp on receive radar because of the wide bandwidth and short time gate.
    Ka-band Radar 32000 to 38000 MHz Detecting shallow layers in the top 10's of meters. Comparable to the potential satellite altimeter 35.5-36 GHz band. Less penetration and more sensitive to just the air-snow surface scattering. Usually a deramp on receive radar because of the wide bandwidth and short time gate.

    1 Approximate frequency range for each category of radars. There are exceptions.
    2 Usual primary purpose for this type of radar. There are exceptions.

    Calibration and In Flight Monitoring

    In Flight Monitoring

    1. flight_tracker.m Flight tracker (can read serial input or monitor a GPS file). This is not a calibration step, but is a useful tool during flight.
    2. Signal statistics Use the analysis.m statistics command to extract noise power in the time and frequency domain and then use run_collate_noise_mean.m and run_collate_noise_psd.m to generate plots to check. For inflight verification to ensure proper system functionality, use run_noise_analysis.m (NOT IMPLEMENTED), which automates all of the steps. This command should be run routinely throughout the campaign to ensure proper system functionality.
    3. Receiver equalization For multiple receive channel radars only. Use the analysis.m waveform command to extract surface or layer targets and then collate_equal.m to estimate the time, phase, and amplitude equalization coefficients. For inflight verification to ensure proper system functionality, use run_rx_equalization.m (NOT IMPLEMENTED), which automates all of the steps. This command should be run routinely throughout the campaign to ensure proper system functionality.
    4. run_preprocess_online.m Online generation of headers. This is similar to run_preprocess.m except that it does not generate headers for every file to avoid overburdening the file system during collection of radar data. It is used to ensure that radar files are good quality during data collection. (NOT IMPLEMENTED).
    5. Transmit equalization For multiple transmit channel radars only. Use the analysis.m waveform command to extract surface or layer targets and then collate_equal.m to estimate the time, phase, and amplitude equalization coefficients. This is usually run in flight since it requires iterating radar transmission parameters. For inflight verification to ensure proper system functionality, use run_tx_equalization.m (NOT IMPLEMENTED), which automates all of the steps. Ideally, this command should be run routinely throughout the campaign to ensure proper system functionality, but this might not be possible because it usually requires a special operation mode.

    After Flight Calibration

    1. Array Calibration For estimating the steering vectors for array processing. Use analysis.m snapshot command to extract snapshots and then collate_snapshot.m to sort the snapshots into bins. (NOT IMPLEMENTED)
    2. Burst Noise For removal of noise that is localized in the time and/or space domains. Use the analysis.m burst_noise command to detect burst noise (abrupt transients in the time-space domain) and then collate_burst_noise.m to create burst noise removal files. This routine should be run before final post processing. (NOT IMPLEMENTED)
    3. Check Image Combination Check that images are combined properly in the final imagery by running img_combine_check.m. This command must be run after the echogram generation, but should be run as soon as possible.
    4. Coherent Noise Use analysis.m coh_noise command to extract coherent noise then collate_coh_noise.m to create coherent noise removal files. This routine should be run before final post processing.
    5. Deconvolution Use analysis.m specular command to extract specular target then collate_deconv.m to create deconvolution files. This routine should be run before final post processing, but after final INS/GPS is created and coherent and burst noise is removed.
    6. Fast Time Gain Use analysis.m waveform command to extract a fast time gain measurement then collate_gain.m to create fast time gain correction files. This routine should be run before coherent noise analysis.
    7. Prepulse compression deconvolution For deconvolution using an inverse filter applied before pulse compression. This routine should be run before coherent noise analysis.
    8. Radiation Pattern Radiation pattern measurement. Using the results from generate_complex_svLUT, run the script plot_rad_patterns.m. Use analysis.m waveform command to extract surface or layer targets and then collate_radiation_pattern.m to create radiation patterns. This routine requires INS data and should be run as soon as possible for system verification.
    9. Radiometric For radiometric calibration, the analysis.m specular command with collate_deconv.m is used to set the wf-adc specific system_dB terms, followed by a final inspection of the images at the same specular targets to set the final radiometric corrections for the combined images. For routine monitoring, the analysis.m statistics command is used to extract the surface signal power and then collate_stat_peak.m is used to generate plots to check.
    10. Receiver equalization Same as in flight processing, but with final INS and GPS data.
    11. Signal statistics Use analysis.m statistics command to extract signal and noise statistics then collate_noise.m to create noise characterization files. This routine should be run after coherent noise analysis. The get_echogram_stats command can be used to retrieve information about reflection variation in the final echograms.
    12. Steering Vector Generation For multiple channel radars only. Use analysis.m waveform command to extract surface or layer targets and then collate_svLUT.m to create a complex-valued steering vector lookup table. This routine requires INS data and should be run as soon as possible for system verification. A second run is required after final INS/GPS is created and deconvolution is completed.
    13. System time delay Find the correct start for fast time (the two way travel time of the first sample). Create echogram image data products with the surface layer tracked and then use check_surface.m.
    14. Transmit equalization Same as in flight processing, but with final INS and GPS data.

    Maintenance and Utility Functions

    Layer Management (e.g. layer data in OPS, echograms, layerdata, or records)

    • imb.picker.m: Function for browsing imagery and editing layers. User Guide-Developer Guide
    • layer_data.m: Class for loading and manipulating layer information from layerdata files.
    • layer_tracker.m: Function for re-running the automated surface tracker. Includes an interactive debugging mode. See run_layer_tracker.m for common use cases.
    • block_data.m: Function to create arbitrary blocking of layer and data files. Used for breaking data up for processing in external programs such as layer tracking. unblock_data.m copies the layer data generated by the external programs back into the regular data structures.
    • make_layer_files.m: Function to create layer files from echograms and update the GPS information. See run_make_layer_files.m for common use cases. runOpsCopyLayers.m can be used to create layer files too.
    • opsCopyLayers.m: Function for copying layers from one data source to another (not just for OPS). Can be used to update records files, echogram files, layerData files, OPS, etc. Can be used to overwrite a layer, merge two layers, and apply functions on layers (including twtt shifts). The source of the data can be LIDAR (ATM) or custom data. See runOpsCopyLayers.m for common use cases.
    • opsDeleteLayer.m: Function to delete layers in a data source. Currently only supports OPS. This function also needs a runOpsDeleteLayers.m file.
    • opsInsertLayer.m: Function to insert a new layer using interpolation of a point cloud or raster dataset. See runOpsInsertLayer.m for common use cases.
    • opsLoadLayer.m: Function for loading layers. Can be used to load LIDAR (ATM) data as well. See runOpsLoadLayers.m for common use cases. This script has examples for aligning layers to a common time axis (e.g. lidar, records, OPS, layerdata, echogram, different radar systems).
    • opsScatterPlotExample.m: Load all points in a WKT polygon. You can use https://ops.cresis.ku.edu/ web client to create the WKT polygon.
    • update_surface_twtt_delta.m: Function to update echograms with a new system time delay (radar worksheet radar.wfs(wf).Tsys and radar.wfs(wf).t_ref), ADC gain, or ADC time delay (radar worksheet wfs(wf).Tadc and wfs(wf).Tadc_adjust). This affects the time axis, Time field, and the layer variables, Surface and Bottom. If the ADC gain is changed, then the Data variable is affected too. The new t_ref, Tsys, Tadc, and Tadc_adjust fields are also updated in the param_{qlook,sar} field. For quick look products this update is sufficient for final products. For SAR products, the data should be reprocessed if the time offset is too large since this affects the focusing. Also, this is only for time shift corrections that are applied equally to all channels in the data product being updated (otherwise the script reports that it cannot make the updates and fails). After running this function, you may need to also update the records file, layerData files, and OPS using opsCopyLayers.

    Metadata Management (e.g. frame, gps, records)

    • create_frames.m: Update the frame boundaries in a frames file using a Map GUI.
    • records_create_aux_files.m: Creates auxiliary netcdf records files. These are used because Matlab's netcdf interface allows partial variable loading. Whenever a .mat records file is modified, this function should be called to regenerate the auxiliary file. (update_records.m, records_create_sync.m already call this function when the records file is written.)
    • get_frame_id.m: Function for getting the segment, frame, and records associated with a set of GPS times.
    • get_raw_files.m: Function for getting a list of the raw files for a particular frame or segment.
    • get_segment_file_list.m: Mostly a support function for the vectors and records creation steps, but can be used to get all the files for a segment or segments (see run_get_segment_file_list.m).
    • quality_control.m: Update the quality control field and processing mode field in the frames file. Uses a simple interactive command line interface and displays echograms from either an image or mat file. Can be used to compare any number of different processed outputs. Designed to be quick (by using jpeg images as opposed to full resolution mat files) for quality control, but can also load full resolution mat files.
    • update_records.m: Run this to update the GPS information in the records file. This is useful when you have received new and improved GPS files, created your csarp_support/gps files, and want to update the records files as well. This usually happens every time we return from the field and receive improved GPS data. It is also useful when the records worksheet "gps.time_offset" is changed for GPS synchronization to radar data. This function can also update old version 3 records files that refer to param_records.vector. This function is usually called from the script run_update_records.m.

    Data File Management (e.g. quick look, sar, array products)

    • update_data_files.m: Run this to update the GPS information in the echogram files without having to recreate them. Useful when the records file GPS information is updated. Typically this should only be used as a temporary solution: reprocessing with the accurate GPS information improves the processing results, and the phase center information in the updated files may also be less accurate.
    • run_update_surface_twtt_delta.m: Run this function to update the Tsys, t_ref, Tadc_adjust, Tadc, or adc_gain_dB fields in the radar worksheet. As long as all the wf-adc pairs involved in creating a particular image are changed in the same way, then the data echogram can be updated with this function. If different changes occur for any of these parameters, this function will throw an error and the data will need to be reprocessed.

    Loading and plotting echogram files

    Function Description
    imb.picker.m Function for browsing imagery and editing layers.
    echo_detrend Function for detrending echogram data.
    echo_filt Function for applying filtering to echogram data.
    echo_flatten Function for flattening an echogram for changes in elevation, surface, or a set of layers.
    echo_load Function for loading echograms.
    echo_mult_suppress Function for applying surface multiple suppression to echogram data.
    echo_noise Function for estimating the noise power of echogram data.
    echo_norm Function for normalizing echogram data. Applies a linear scale and offset to the data. Useful for transforming data to specific ranges such as [0,1] or [0,255] for image processing or saving in an image format.
    echo_param Function which returns the parameter structure that generated the echogram.
    echo_stats Function which extracts useful statistics from echograms.
    echo_stats_layer Function which extracts useful statistics from a layer in an echogram.
    echo_xcorr Applies a fast-time cross-correlation operation on echogram data. Useful for applying a unit step cross correlation operation.
    echo_xcorr_profile Support function for echo_xcorr.
    load_L1B Function for loading L1B echogram data (handles netcdf, .mat, and .mat compressed files)
    load_L1B_L2 Function for loading L1B echogram and L2 layer data (handles netcdf, .mat, and .mat compressed files). Provides some basic along-track filtering options.
    plot_L1B Script demonstrating how to plot L1B echogram data (calls load_L1B) and L2 layer data, applying elevation correction and handling dielectric changes with depth.
    publish_echogram Function for plotting L1B echograms in standard posting format. Can also plot L2 layer data on the echogram. See run_publish_echogram.m for example.
    publish_map Function for plotting L1B maps of echograms in standard posting format.
    run_load_data_by_gps_time Function for loading sensor echogram data based on GPS time (useful for quickly comparing data from different systems and data from overlapping or adjacent flight lines). It includes various system time delay corrections and detrending options.

    Echogram data manipulation

    hsv_plot.m
    For plotting angle of complex data (e.g. interferograms)
    fir_dec.m
    For applying an along-track FIR filter and decimating while handling edge conditions (by renormalizing the filter) and introducing no filter delay. The most common usage is stacking data, which has a simple form when stacking N records: fir_dec(data,N).
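
    For example, stacking a synthetic fast-time by along-track matrix by 10 looks like the sketch below. The along-track dimension is assumed to be the second dimension and the three-argument form is noted only as recalled, so check fir_dec.m for the exact interface.

    data = randn(500,2000) + 1i*randn(500,2000);  % fast-time x along-track (synthetic)
    stacked = fir_dec(data,10);                   % 10x coherent averaging and decimation
    % Arbitrary filter coefficients with a separate decimation factor are also
    % supported, e.g. fir_dec(data,B,dec) with B a normalized FIR filter
    % (argument order as recalled; see fir_dec.m).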

    OPS Functions

    • runOpsBulkInsert.m: Insert new flight paths, layers, or ATM data. This script calls the function opsBulkInsert.m. Modify the run script for the particular season (there are lots of comments so this is usually straightforward), enable the generic column in the spreadsheet for each segment to be inserted, and then run the script. This can often take days to finish, so be prepared to leave a Matlab session up and running for that long and ensure that IT knows that it should not be interrupted. It is best to run this on the server which is running the database (ops.cresis.ku.edu at KU).
    • runOpsDeleteBulk.m: Bulk delete flight paths, layers, or ATM data from OPS. This script does not exist yet. This script should call opsDeleteBulk.m.
    • opsShiftTwtt.m: Shift layer two way travel time (in other words: vertical shift) in OPS
    • opsShiftGpsTime.m: Shift layer GPS time (in other words: horizontal shift) in OPS
    • runOpsCopyLayers.m: See above.
    • runOpsLoadLayers.m: See above.

    Simulator

    • DOA simulator. This is a 1-D simulator (direction of arrival).
    • Cross-track simulator. This is a 2-D simulator (cross-track slices, handles direction of arrival and range). Has limited 3-D support through snapshots.
    • Full simulator. This is a 3-D simulator. In progress.

    Multipass

    Tracking 2D and 3D images

    Tracking the surface

    Tracking layers in 2D imagery

    Tracking layers in 3D imagery

    Parameter Spreadsheet

    There are eight standard worksheets in the spreadsheet that control the data processing. For creating and reading parameter spreadsheets see the Parameter Spreadsheet Guide. The eight worksheets are:

    1. cmd worksheet: Specifies operations and the data segments to apply those operations to.
    2. records worksheet: Parameters for the "create records" and "create frames" operations. Also used to find raw radar data.
    3. qlook worksheet: Parameters for the "qlook" operation
    4. sar worksheet: Parameters for the "sar" operation
    5. array worksheet: Parameters for the "array" operation
    6. radar worksheet: Radar settings used in the data loading, pulse compression, and array processing steps
    7. post worksheet: Data posting settings
    8. analysis worksheet: Data analysis function settings

    Other custom worksheets can be added as well.

    Command Worksheet

    cmd-records-qlook-sar-array-radar-post-analysis

    The command worksheet (cmd worksheet) specifies which operations are going to run and which data segments and frames to run the operations on. The four primary operations are:

    1. records: This step reads all the header information from every radar file in the segment (usually already extracted during run_preprocess.m) and creates a "records" file in the gRadar.support_path (gRadar.support_path/records/RADAR_NAME/SEASON/records_YYYYMMDD_SS.mat) containing this information. This step also creates a frames file if records.frames.mode is not zero. When not zero, the frames are automatically generated if the frames file does not exist already. For records.frames.mode==1, the create_frames.m GUI is opened and the frame breaks may be manually edited. The frames file is stored in the support path (gRadar.support_path/frames/RADAR_NAME/SEASON/frames_YYYYMMDD_SS.mat). The frames file defines the data frame boundaries and may contain information that controls how the frames are processed.
    2. qlook: Quick look processing and basic altimetry product. The default output directory is gRadar.out_path/RADAR_NAME/SEASON/CSARP_qlook. The records file is optionally updated with the surface altimetry information.
    3. sar: SAR processor; This step SAR processes the data. The default output directory is gRadar.out_path/RADAR_NAME/SEASON/CSARP_sar.
    4. array: Combine data streams (wf-adc pairs) into images and optionally images into a combined image. Combines the SAR processor outputs for each data stream using array processing to form output image data products. The default path is dependent on the method. For example, the "standard" method outputs in gRadar.out_path/RADAR_NAME/SEASON/CSARP_standard.

    There is also support for external add-on functions which use the "generic" interface. Tools which use the generic interface are:

    • run_analysis.m
    • run_check_data_products.m
    • check_records.m
    • run_post.m
    • run_make_layer_files.m
    • run_update_records.m
    • update_data_files.m

    The cmd worksheet looks like this:

    Rds command worksheet.jpg
    Header Information:

    • Version: 4.0 (refers to the parameters spreadsheet format version)
    • Radar: Radar name (e.g. accum, accum2, accum3, kuband, kuband2, kuband3, mcords, mcords2, mcords3, mcords4, mcords5, mcrds, rds, snow, snow2, snow3)
    • Season: Season name (e.g. 2012_Greenland_P3, 2013_Antarctica_P3, 2013_Antarctica_Basler, 2018_Antarctica_TObas, 2018_Antarctica_GPR)

    Columns:

    • Date YYYYMMDD: Specifies the date of the data collection. This date controls all the naming conventions in the system. In some cases (especially McMurdo, Antarctica), you will have flights that cross days (in UTC time). In this case, you need to choose one day to reference all the segments for that flight from. For example, say you flew UTC dates Dec 12 and Dec 13 in 2009. Then you would name all the segments 20091212 for that flight. However, for any segments starting on 20091213, you will probably need to adjust the gps.time_offset in the records worksheet to be +86400 larger than you would have otherwise (this is because the radars store only seconds of day, and if they start in 20091213 they have no way of knowing that a day wrap has happened). The idea is that you tell the radar what day it is by a combination of this date field and the gps.time_offset field. This can be tricky because you have to know whether or not the radar recognized the day wrap for each segment. The radar always recognizes the day wrap for segments that cross the date boundary... the problem is segments which start in the new day. Only the accum2 radar is capable of seeing these day wrap boundaries for all segments... and this is only possible if the accum2 .gps file was recorded continuously across the date boundary. If the radar (e.g. accum2) recognizes the day wrap, then you don't want to add in the +/-86400 seconds. Finally, it is not necessary to (and you should not) create a new segment at the day boundary... segments should only be created when the radar stops/starts data recording or if settings are changed that require a new segment to be started.
    • Segment: This specifies the segment number for the data. Segment numbers start at 1 and are always in the order that the data were collected.
    • Frames to process: This specifies a vector of frames to process. To process all frames, leave the field blank.
    • Create records (and frames), qlook, sar, and array: Placing a non-empty and non-zero value in this field will cause master.m to execute this command on the corresponding segment.
    • Generic: Used by functions using the generic interface and master.m when it contains a string or cell. There are two methods for using this column.

    Method 1: Functions will look for a logical value in this column to determine which segments to run on (blank or empty is equivalent to zero/false); a sketch of this usage follows at the end of this section.
    Method 2: Generic functions can be run through run_master.m/master.m. These generic functions must support the (param,param_override) argument interface. The format for the generic column is a cell array and each cell in the array should be:

    • Cell array containing two entries: a string and a cell array. (e.g. {{'analysis',{'analysis_noise','analysis'; 'radar_noise', 'radar'}}, {'analysis',{'analysis_spec','analysis'}}} )
      • The string is the name of the function to call (e.g. "sar", "qlook", "array", "analysis", "post").
      • The cell array should be Nx2 in size where the first column gives the worksheets to load and the second column gives the variable name to use for each worksheet.
      • The example {{'analysis',{'analysis_noise','analysis'; 'radar_noise', 'radar'}}, {'analysis',{'analysis_spec','analysis'}}} will call analysis and will load "analysis_noise" worksheet into the "param.analysis" field and the "radar_noise" worksheet into the "param.radar" field. Then it will call analysis again and will load "analysis_spec" worksheet into the "analysis" variable.
    • There is a simplified interface where leaving out cell arrays or certain options causes default behaviors to happen. If instead of the cell array of cell arrays, you pass in:
      • String:
        'analysis'
        --> Gets converted to -->
        {{'analysis',{[],[]}}}
      • Cell array of functions to run without generic worksheet loading:
        {'analysis','post'}
        --> Gets converted to -->
        {{'analysis',{[],[]}},{'post',{[],[]}}}
      • Cell array of functions and generic worksheets without renaming
        { {'analysis','analysis'}}
        --> Gets converted to -->
        {{'analysis',{'analysis','analysis'}}}
    • Mission name: Name of this flight. For OIB, each day there is a single flight with a specific name and this column matches that. For other missions, without specific names, a basic regional description should be used (e.g. Byrd, Jakobshavn, Helheim, etc.)
    • Notes: This field must be text and is used for notes about the corresponding segment. If the segment should not be processed the phrase "do not process" should be contained in the notes.

    Segments which are not to be processed should be filled with an orange background as described in the Parameter spreadsheet guide.
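
    A sketch of Method 1 is shown below: read the parameter spreadsheet with the functions named in this guide and loop over the segments whose generic cell is enabled, skipping segments marked "do not process" in the notes. read_param_xls is assumed to return one param struct per segment row, which is how the toolbox run scripts use it.

    params = read_param_xls(ct_filename_param('rds_param_2011_Greenland_P3.xls'));
    for param_idx = 1:length(params)
      param = params(param_idx);
      gen = param.cmd.generic;
      if isempty(gen) || (isnumeric(gen) && (isnan(gen(1)) || gen(1) == 0))
        continue; % generic cell blank/zero: segment not enabled
      end
      if ischar(param.cmd.notes) && ~isempty(regexpi(param.cmd.notes,'do not process','once'))
        continue; % segment marked "do not process" in the notes column
      end
      fprintf('Running on %s\n', param.day_seg);
      % ... call the desired processing or utility function with param here ...
    end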

    Records Worksheet

    cmd-records-qlook-sar-array-radar-post-analysis

    The records worksheet controls the creation of the records and frames files in the gRadar.support_path folder. These files describe which raw data files belong to each segment, where each record is in these data files, GPS information for each record, and how the data files are to be broken into data frames for processing and visualization. Please coordinate with the primary data manager before recreating records files.

    Normally run_preprocess.m is used to populate the records worksheet. Because some radars do not have explicit segment boundaries (e.g. kuband and snow radars), the segments may need to be adjusted manually since the preprocess step guesses where these boundaries occur based on time gaps in the data. When operating these radars it is useful to record a list of segments (e.g. start and stop filenames) to check against.

    An example records worksheet is shown below; a sketch of typical field values, written as the equivalent param fields, follows the numbered list:

    Rds records worksheet.jpg

    1. Each segment is listed in the first two columns in the order that it was collected. The segment ID is formed as YYYYMMDD_SS where YYYY is the year, MM is the month, DD is the day, and SS is the segment number. The ordering must match the command worksheet.
    2. data_map: Cell array of matrices of size Nprofiles by 4. Each board defined in file.boards must have an entry in data_map. This matrix does not need to be defined for many systems. It allows arbitrary mappings from the hardware wf-adc pairs (NI digital systems) or mode-subchannels (Arena digital systems) to the wf-adc pairs used in processing. The first column is the hardware wf/mode, the second column is the hardware adc/subchannel, the third column is the processing wf, and the fourth column is the processing adc.
    3. file: Structure of fields that describe where and what the raw radar files are. A search is performed using these fields and all files matching the search are returned in alphabetical order using get_filenames.m. The files to be used in the segment are then selected with the "file.start_idx" and "file.stop_idx" which are both 1-indexed.
    4. file.version: Specifies the raw file version for this radar. Required field.
    5. file.boards: N_boards element cell array specifying the subfolder where raw data are stored for a particular "board". A "board" represents one or more data streams which are recorded to a single file. This is usually left empty or undefined if there is only a single recorded file stream. Most CReSIS systems are stored either in "chan#" directories (e.g. for two channels: {'chan1','chan2'}) or in "board#" directories (e.g. for four channels: {'board0','board1','board2','board3'}).
      • file.start_idx: N_boards element vector of start indices into the search result vector that indicates the first raw file in the segment. Each entry corresponds to a particular board.
      • file.stop_idx N_boards element vector of stop indices into the search result vector that indicates the last raw file in the segment. Each entry corresponds to a particular board.
      • file.base_dir contains a string which should go to the base of the dataset; ideally this string should be identical for every segment (it is used to make it easy to change the path for all segments when the raw data files are moved around)
      • file.board_folder_name contains a string which gives the rest of the path (i.e. it is appended to file.base_dir). Anywhere "%b" occurs in the string, this will be replaced by the file.boards cell array string. For example, if "/1703301101/UWBM/chan1" and "/1703301101/UWBM/chan2" are where data are stored, then this field would be '/1703301101/UWBM/%b' and file.boards would be {'chan1','chan2'}.
      • file.prefix Optional string containing the data filename prefix
      • file.suffix Optional string containing the data filename suffix
      • file.regexp Optional string containing a regular expression that the data filename must match
    6. file.clk: The clock to use when interpreting headers in the file
      • gps.time_offset contains the offset (in units of seconds) that will be added to the UTC time header fields in the raw data files. For a perfect radar system this field would be zero or +/-86400*N seconds to deal with segments that start on different days than their name implies (see the command worksheet for more detail on segment naming). Each version of each digital system has a different standard offset (i.e. the offset is not always zero). When the GPS time is bad in the radar data, you may have to use a non-standard offset. If the surface terrain is known, run_check_surface.m can help find a non-standard offset. If there are geocoded features that show up in the radar echogram (e.g. calving front, sea ice lead), then these can also be used to manually determine a non-standard offset by adjusting the time offset until the feature shows up in the correct spot in the radar echogram.
      • gps.utc_time_halved is a boolean field that only applies to the kuband radar. The utc time values read in from the raw radar data files will have their utc times halved if this is true.
    7. gps.en: Include GPS data in the records file (default is true)
    8. frames.geotiff_fn: This is the background geotiff that is used by the create_frames.m GUI.
    9. frames.mode: Specifies the frame mode (default is 0):
      • [0] Do not create frames file
      • [1] Automatically generate frames file if it does not exist and then run create_frames.m GUI
      • [2] Automatically generate frames file
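
    The sketch below shows illustrative records worksheet entries written as the equivalent param fields (run_preprocess.m normally fills these in). The values are hypothetical and only show the shape of each field described above.

    param.records.file.version           = 7;                     % raw file version (hypothetical)
    param.records.file.boards            = {'chan1','chan2'};     % one subfolder per recorded data stream
    param.records.file.start_idx         = [1 1];                 % first raw file for each board
    param.records.file.stop_idx          = [118 118];             % last raw file for each board
    param.records.file.base_dir          = '/cresis/snfs1/data/SnowRadar/2017_Greenland_P3/'; % hypothetical
    param.records.file.board_folder_name = '/1703301101/UWBM/%b'; % %b is replaced by each file.boards entry
    param.records.file.prefix            = 'snow';                % optional filename prefix filter (hypothetical)
    param.records.gps.time_offset        = 0;                     % seconds added to the raw UTC time fields
    param.records.frames.geotiff_fn      = 'greenland/Landsat-7/Greenland_natural_90m.tif'; % background geotiff (hypothetical)
    param.records.frames.mode            = 2;                     % automatically generate the frames file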

    OLD DEPRECATED INFORMATION: See the Common FMCW radar header errors guide.

    Qlook Worksheet

    cmd-records-qlook-sar-array-radar-post-analysis

    An example qlook worksheet is shown below:

    Qlook worksheet.jpg

    1. Each segment is listed in the first two columns in the order that it was collected. The segment ID is formed as YYYYMMDD_SS where YYYY is the year, MM is the month, DD is the day, and SS is the segment number. The ordering must match the command worksheet.
    2. B_filter Double vector. Specifies the along-track (coherent averaging or stacking) FIR filter weights. These filter weights will be normalized so that they sum to one for radiometric accuracy. This along-track filter is applied so that it has zero delay. To avoid having to resample the data in along-track, this zero delay means that B_filter must be odd length (aka even order filter). The filter handles edge effects by loading in additional data at the beginning and end of each block when available or by renormalizing the filter coefficients at segment edges. The default B_filter is ones(1,dec)/dec where dec is the qlook.dec parameter field. Note that the number of coherent integrations should not exceed the unfocused SAR aperture length, i.e. the along-track distance over which the round-trip phase to a point target at range R changes by about [math]\pi/4[/math], which is approximately [math]\sqrt{\lambda R/2}[/math]. If the raw radar data are sampled at [math]\lambda/4[/math], which is Nyquist sampled for isotropic antennas, then the number of averages is [math]\sqrt{8R/\lambda}[/math].
      • For 195 MHz and R = 500 m, the recommended filter length is 50. If the filter length is set to 50, point targets closer than 500 m will be over-averaged and suffer from destructive interference so that the signal amplitude is decreased. Point targets at greater than 500 m range will have no problems and the signal amplitude will not be changed by the coherent averaging. More coherent averaging is advantageous because the noise power decreases. A short worked example of this calculation follows this list.
    3. block_size: Positive scalar integer which specifies the number of range lines to load in each block of data that is processed. block_size controls the number of records that are processed at a time in qlook and should be set based on memory needs and cluster efficiency (e.g. 20000 for RDS, 2000 for accum2, and 2000 for ka-band, ku-band, and snow radars).
    4. dec: specifies the decimation to apply after along track filtering with the FIR filter specified by B_filter. In the special case where "B_filter" has elements that are all the same and this field is equal to it in length, a short cut is taken and the processing is more efficient (called simple_firdec in the source code).
    5. frm_types is a five element cell array specifying which frame types to process. See the frames file's proc_mode field (Frame file guide) for the different frame types. The five elements correspond to the different decimal masks used in the processing mode {R1, R2, R3, R4, U}. Each entry in the cell array is a vector of the allowed frame types. Including a negative frame type causes that entry to be ignored. For example, setting frm_types to {0,[0 1],0,0,-1} means that R1 must be zero, R2 can be 0 or 1, R3 must be zero, R4 must be zero, and U can be anything.
    6. img_comb See array worksheet's "img_comb" parameter.
    7. imgs specifies the waveform-adc pair list for each image.
    8. inc_dec integer scalar, number of incoherent averages to apply (also decimates by this number). If set to < 1, complex data are returned. Setting to 1 causes the data to be power detected (i.e. become incoherent), but no averaging is done.
    9. inc_B_filter Double vector, FIR filter coefficients. Used for multilooking or averaging after power detection. Will be applied before incoherent average decimation. If not defined or empty, then inc_B_filter is set to ones(1,inc_dec)/inc_dec.
    10. motion_comp Logical. Default is false. If true, relative true time delays are added in before coherently combining multiple wf-adc pairs, coherently combining multiple along-track samples (i.e. presumming or stacking with qlook.B_filter), or incoherent along-track multi-looking (i.e. qlook.inc_B_filter).
    11. out_path should be left blank so the default output is used unless you are doing special processing. For example, entering "special" into this field will cause the outputs to be placed in CSARP_special. ct_filename_out is used to transform this field into the actual output path used.
    12. surf Structure defining how the layer_tracker will operate. This "surf" field overwrites the param.layer_tracker.track field.
    13. resample 2x2 positive integer matrix as [p_ft q_ft; p_st q_st]. Default is [1 1; 1 1]. The first row, [p_ft q_ft], defines the p/q resampling ratio for the fast time axis. The second row, [p_st q_st], defines the p/q resampling ratio for the slow time axis. Final data products should have [2 1; 1 1] to over sample by 2 in fast-time.
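
    The following is a minimal Matlab sketch of the unfocused-aperture limit on coherent averaging and the default B_filter form described in items 2 and 4 above. The variable names (fc, R, N_max, etc.) are illustrative and are not toolbox fields.

      % Minimal sketch (not toolbox code): along-track coherent averaging limit
      c      = 3e8;                      % speed of light (m/s)
      fc     = 195e6;                    % center frequency (Hz), example from above
      R      = 500;                      % minimum range of interest (m)
      lambda = c/fc;                     % wavelength (m)

      L_unfoc = sqrt(lambda*R/2);        % unfocused SAR aperture length (m)
      N_max   = floor(sqrt(8*R/lambda)); % max coherent averages at lambda/4 spacing (~50)

      % Default qlook along-track filter: boxcar of length dec, normalized to sum to one
      dec      = min(50, N_max);
      B_filter = ones(1,dec)/dec;        % same form as the qlook.B_filter default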

    Qlook Processing Steps

    The qlook worksheet controls the quick look product generation and the automated tracking of the surface (calls layer_tracker.m).

    The quick look product is an unfocused synthetic aperture radar (SAR) product. It involves time delay shifts for elevation compensation, but the along-track averaging that is set with B_filter does not apply any time delay correction for focusing a point target as is done in SAR processing.

    The processing steps are split between qlook.m, qlook_task.m, and qlook_combine_task.m.

    qlook.m

    1. Load the records information (trajectory, attitude, radar settings, and raw data record information)
    2. Break each desired output image into blocks based on input range lines
    3. Determine what input data is required for each block and create tasks for each block

    qlook_task.m

    The processing steps:

    1. Load raw data.
    2. Trajectory information is loaded for each wf-adc pair
    3. Convert quantized samples into the equivalent voltage at the antenna receiver. This accounts for ADC quantizer (radar.Vpp_scale, radar.adc_bits) and nominal receiver gain (radar.wfs(wf).Tadc_gain_dB).
      • For example, let the received signal at the antenna be a -100 dBm sinusoid. The peak-to-peak voltage of this sine wave is Vpp = sqrt(10.^((-100-30)/10) * 50)*sqrt(2) * 2 = 6.325e-6 Vpp. The conversion steps are: 1. convert dBm to dBW by subtracting 30, 2. convert from dBW to W, 3. multiply by the characteristic impedance of the system (almost always 50 Ohms) and take the square root to obtain the RMS voltage, 4. convert from RMS voltage to sine wave amplitude by multiplying by sqrt(2), 5. convert from amplitude to peak-to-peak by multiplying by 2. Assume the receiver gain is 55 dB so that the signal is -45 dBm at the ADC input, or 3.557 mVpp. Let the ADC be a 14 bit quantizer with a full scale voltage of 2 Vpp or 1 V amplitude (10 dBm). The quantized peak-to-peak would then be 3.557e-3/2*2^14 = 29.135 quantization steps, or +/-14.568 quantization steps about the quantizer's DC value. This step therefore takes the 29.135 quantized peak-to-peak steps and converts them to 6.325e-6 Vpp by multiplying by 2/2^14/10^(55/20), which first converts the quantized steps into voltage at the ADC input (2/2^14) and then removes the receiver gain (10^(55/20)). A short Matlab sketch of this conversion is given after these steps.
    4. The nominal amplitude and phase equalization coefficients, param.radar.wfs(wf).chan_equal_dB(rx_path), from Analysis equalization are applied to each channel.
    5. Fast-time gain compensation: time domain gain and delay effects are corrected. This is crucial for systems with sensitivity time control (STC), systems with a slow transmit/receive switch, and so on.
    6. Time varying amplitude and phase equalization coefficients are applied.
    7. Coherent noise is removed: no effect on signal amplitude
    8. Pre-pulse compression deconvolution: corrects signal amplitude. This is most important for deramp-on-receive systems where the signal amplitude is affected by the IF filter and is a function of time delay/IF frequency.
    9. Pulse compression: the matched filter used for pulse compression normalizes the output so that the signal amplitude is not changed. Pulse compression is done by 1. down converting to complex baseband, 2. applying a frequency domain matched filter with zero padding (to prevent circular convolution) and windowing (default is Hanning) to reduce side lobes, 3. resampling using a polyphase anti-aliasing filter. The data are always complex at this point. The time axis generated includes the system time delay correction, radar.wfs(wf).Tadc_offset, that affects all channels. Frequency domain time shifts, applied during pulse compression, account for the smaller individual data stream delays found during Analysis equalization that are independent for each wf-adc pair, param.radar.wfs(wf).Tsys(rx_path). The time axis and pulse compression are also adjusted to always create time sampling that is a multiple of the desired output sampling frequency so that all waveforms and channels have aligned time axes.
    10. Post-pulse compression deconvolution: the deconvolution filter is normalized to not change the signal amplitude [NOTE: current implementation applies radiometric correction at this step when radiometric correction is done, but it is not always done. We plan to drop this radiometric correction step in the future.]
    11. The signal is elevation compensated by introducing delays that are opposite to delays caused by elevation changes in the radar's phase center: no change in signal amplitude
    12. The signal is averaged in along-track with the filter described by B_filter and decimated by dec. The filter is normalized so that the signal amplitude does not change.
    13. The signal is over sampled in fast time and slow time according to the resample parameter.
    14. All wf-adc pairs in each image are combined for the corresponding image by taking the mean across the wf-adc pairs, which is equivalent to boxcar windowing. Variations in time delay due to the individual trajectories of each wf-adc measurement phase center are compensated before the channels are combined. Note: if this combining is not enabled here, amplitude and phase equalization is not applied at this step because it will be applied in combine.m.
    15. The signal is squared to convert to power (characteristic impedance of 1 is assumed and not 50).
    16. The power signal is averaged in along-track with the filter described by inc_B_filter and decimated by inc_dec.
    17. A parameter param_qlook.radar.Rsquared is stored which allows the signal amplitudes in the final output to be converted to be proportional to 1/R^2 losses such that if R = 1, the signal value would be 1. This constant scaling accounts for transmit antenna gain, receive antenna gain, transmit power, actual characteristic impedance of the system, and other system effects not accounted for in previous steps. [NOT IMPLEMENTED: current implementation applies correction during post-pulse compression deconvolution]
      • For example, if the radar is 500 m from a perfectly reflecting (0 dB Fresnel reflection coefficient) specular target, the qlook data product can be multiplied by the param_qlook.radar.Rsquared term and the specular target will ideally have a value of 10*log10(1/R^2) = -53.979 dB.
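
    The quantization-to-voltage conversion in step 3 can be sketched as follows. This is a minimal Matlab illustration of the arithmetic in the worked example above; the variable names are illustrative and are not toolbox fields.

      % Sketch of step 3: quantization counts -> volts at the antenna (illustrative only)
      P_dBm      = -100;                      % signal power at the antenna
      Z0         = 50;                        % characteristic impedance (Ohms)
      rx_gain_dB = 55;                        % nominal receiver gain
      adc_bits   = 14;                        % quantizer bits
      Vpp_scale  = 2;                         % ADC full scale (volts peak-to-peak)

      Vpp_antenna = sqrt(10^((P_dBm-30)/10) * Z0) * sqrt(2) * 2;   % 6.325e-6 Vpp
      Vpp_adc     = Vpp_antenna * 10^(rx_gain_dB/20);              % 3.557e-3 Vpp
      counts_pp   = Vpp_adc / Vpp_scale * 2^adc_bits;              % ~29.1 quantization steps

      % Inverse conversion applied during loading: counts -> volts at the antenna
      scale   = Vpp_scale/2^adc_bits / 10^(rx_gain_dB/20);
      Vpp_out = counts_pp * scale;                                 % recovers 6.325e-6 Vpp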

    qlook_combine_task.m

    This step takes the block outputs from each of the tasks and concatenates them together into frame images, one per image specified in qlook.imgs. If image combining is enabled with qlook.img_comb, then the images are combined in the fast-time dimension to form a combined image. This image is truncated to remove pulse compression roll off for non-deramp-on-receive systems; the default is to cut off at 50% support or a 3 dB drop. Aside from the roll off on the edges, the signal amplitude is not affected during qlook_combine_task.

    Radar images formed using CSARP_qlook will have:

    • For specular targets: 1/R^2 losses, and the signal power in the image will be equal to the signal power at the antenna input (1/R^2).
    • For point targets: 1/R^4 losses, and the signal power in the image will be equal to the signal power at the antenna input (1/R^4). This assumes that the number of coherent integrations does not exceed the maximum unfocused SAR aperture length of [math]\sqrt{\lambda R/2}[/math].

    There are several functions that allow the surface to be updated if the qlook surface is in error:

    • run_layer_tracker.m (rerun automatic tracking with different settings)
    • runOpsCopyLayers.m (copy surface from another data source)
    • imb.picker.m (manually correct surface)

    SAR Worksheet

    (Parameter spreadsheet worksheets: cmd, records, qlook, sar, array, radar, post, analysis)

    [Image: example SAR worksheet (Rds csarp worksheet img1of2.jpg)]

    Data in each frame are broken into blocks (called "chunks" in the code) and processed. The blocks overlap so that each pixel in the output image has full support of the input (this means that data from adjacent frames are used except for the start of the first data frame and the end of the last data frame where no data are available).

    1. out_path: String specifying the output directory (note that the final directory name will always have "CSARP_" prepended). Default is "sar", which will place outputs in the "CSARP_sar" directory.
    2. imgs: specifies the waveform-adc pair list for each image.
    3. frm_types: See the qlook worksheet description.
    4. chunk_len: Numeric scalar specifying the length of the output of each processing block (chunk) in meters. Default is 2500 m (typical is 5000 m for lower bandwidth systems and less for higher bandwidth systems to keep the memory requirements of each task lower). Longer chunk lengths are more computationally efficient, but for FFT based methods like f-k migration very long chunks can be a problem because the flight path is approximated by a straight line.
    5. combine_rx: Scalar logical. Default is false. If true, all wf-adc pairs are combined during loading. Receiver equalization coefficients in radar worksheet are applied. No motion compensation or lever arm is applied. This is for field processing only when a quick data product is needed. Because no lever arm is applied, the equalization coefficients in radar.wfs(wf) need to be ones generated with no lever arm correction also. In other words, the regular equalization coefficients are generally DIFFERENT than what is needed when combine_rx is true.
    6. time_of_full_support: Numeric scalar that specifies the maximum time range to use when computing the processing block overlap. To speed up processing, the chunk overlap can be decreased by not requiring pixels beyond this range-time to have full support. Default is inf which guarantees full support for all time ranges.
    7. presums: Positive integer indicating the number of presums to do when loading the data. Default is 1. The loader will do additional presums before passing the data to the SAR processor. This is helpful to speed up processing and reduce memory requirements when the along-track spacing is smaller than necessary. This can be the case with accum2 radar system ground based data where the maximum hardware presums is 254, which may lead to very small along-track spacings and therefore very large memory requirements to load all the data for the SAR aperture.
    8. mocomp.en: logical scalar to enable/disable motion compensation. Default is false.
    9. mocomp.filter: Leave empty to disable. Default is {@butter,2,0.1}. Enabling this causes the dx and drange motion compensation vectors to be filtered. This should probably always be set (setting it automatically enables the feature). The format is a 3-element cell vector which is used in motion_comp.m as fcs.filter{1}(fcs.filter{2},fcs.filter{3}), e.g. {@butter,2,0.1} (see the short sketch after this list).
      • First element is function handle to function that returns [B,A] where B is feedforward and A is feedback coefficients of filter to be used with call to filtfilt (remember to include this function in gRadar.sched.hidden_depend_funs so the mcc compiler will see it).
      • Second element is usually the filter order
      • Third element is usually the filter bandwidth
    10. mocomp.tukey: Positive scalar from 0 to 1. Default is 0.05. Tukey window to apply when undoing motion compensation after fk-migration to prevent circular convolution artifacts. A value of 0 disables the window.
    11. mocomp.type: Integer from 0 to 3 indicating the motion compensation method. Default is 2.
      3: compensation to straight line fitted to reference trajectory (works well when the trajectory is straight)
      2: compensation to a piecewise line (segments of SAR aperture length, Lsar) fitted to the reference trajectory (works well when the trajectory is not straight)
      0 or 1: compensation to reference trajectory. Flight path variations about a straight line fit are not compensated for and this is not ideal for FFT based methods like 'fk' that expect the data to represent a straight line.
    12. mocomp.uniform_en: Logical scalar to enable/disable uniform along-track resampling. Default is true.
    13. mocomp.uniform_mask_en: Logical scalar to mask bad records based on the data_load hdr.bad_recs field. Masked records are not included in the sinc interpolation. Default is true.
    14. sar_type: 'fk' or 'tdbp'. Default is 'fk'.
    15. sigma_x: along-track resolution of output image
    16. sub_aperture_steering: vector of normalized doppler centroids for subaperture processing (0 is zero doppler). For example, [-2:0.5:2] would produce 9 subapertures with 50% overlap in Doppler.
    17. st_wind: slow time window function handle (e.g. @hanning) to be applied in the wavenumber (Doppler) domain
    18. start_eps: ice dielectric to use for medium below surface. Default is 3.15. For air, use 1.
    19. refraction_method: tdbp parameter
    20. skip_surf: tdbp parameter
    21. start_range_bin_above_surf: tdbp parameter
    22. start_range_bin: tdbp parameter
    23. end_range_bin: tdbp parameter
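
    As a minimal illustration of the mocomp.filter format in item 9 above, the cell array is evaluated roughly as shown below. The drange vector here is a placeholder for the motion compensation vector computed in motion_comp.m; it is not loaded from the toolbox.

      % Sketch of how the mocomp.filter cell array is applied (illustrative only)
      mocomp.filter = {@butter, 2, 0.1};   % {function handle, filter order, normalized bandwidth}

      % Design the filter: returns feedforward (B) and feedback (A) coefficients
      [B,A] = mocomp.filter{1}(mocomp.filter{2}, mocomp.filter{3});

      drange = randn(1,1000);                    % placeholder motion compensation vector
      drange_filtered = filtfilt(B, A, drange);  % zero-phase filtering as in motion_comp.m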

    SAR Processing Steps

    The processing steps are split between sar.m, sar_coord_task.m, and sar_task.m.

    sar.m

    1. Load the records information (trajectory, attitude, radar settings, and raw data record information)
    2. Break each desired output image into blocks (called chunks in the source code) based on input range lines
    3. Determine what input data is required for each block and create tasks for each block

    sar_coord_task.m

    1. Load the records information (trajectory, attitude, radar settings, and raw data record information)
    2. Create a SAR coordinate system for the entire segment based on the reference point of the antenna array. The reference point is set in lever_arm.m and should be in the center of the array. Create a smoothed surface for the entire segment.
      1. The lever arm from the GPS trajectory to the reference point is loaded (lever_arm.m)
      2. The trajectory is loaded and updated with the platform attitude from the inertial navigation system and the lever arm to find the radar's phase center for the reference point (trajectory_with_leverarm.m)
      3. Create the SAR coordinate system (called the fcs or flight coordinate system data structure in the source code) from the reference phase center (SAR_coord_system.m). The FCS is the SAR coordinate system which gives the instantaneous along-track vector (x), the elevation vector (z), which is the local up vector projected orthogonal to x, and the cross-track vector (y), which points left and is the cross product of z with x. The origin is at the reference point of the antenna array as defined in lever_arm.m and should be centrally located in the array. (A conceptual sketch of these axes is given after these steps.)
        • Each range line in the output SAR image has its own coordinate system. The coordinate system is defined by a first order polynomial fit to the trajectory from all the range lines that contribute to that output position (the x-axis is the polynomial).
        • The range lines which contribute are approximated using param.sar.Lsar (length of SAR aperture).
      4. The surface is smoothed with a Savitzky-Golay filter and a continuous slope is guaranteed by approximating the slope with a piecewise low-order polynomial (spline). These outputs are stored in the cresis-toolbox sar output directory (sar_coord.mat) so they only need to be generated one time unless the SAR processing parameters are changed.
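
    The FCS axis construction in step 3 can be sketched as follows. This is a conceptual illustration only (not SAR_coord_system.m code): the trajectory is synthetic and the along-track direction is simplified to the endpoint difference rather than the first-order polynomial fit used by the toolbox.

      % Conceptual sketch of one FCS axis set (illustrative, not toolbox code)
      N   = 101;
      ref = [linspace(0,1000,N).', 5*sin(linspace(0,2*pi,N)).', 500*ones(N,1)];  % Nx3 phase center positions (m)

      % Along-track x-axis (toolbox: first-order polynomial fit; simplified here)
      x_fcs = (ref(end,:) - ref(1,:)).';
      x_fcs = x_fcs/norm(x_fcs);

      % Elevation z-axis: local up vector projected orthogonal to x
      up    = [0 0 1].';
      z_fcs = up - (up.'*x_fcs)*x_fcs;
      z_fcs = z_fcs/norm(z_fcs);

      % Cross-track y-axis (points left): cross product of z with x
      y_fcs = cross(z_fcs, x_fcs);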

    sar_task.m

    All the same steps as qlook_task.m are run up to, but not including, elevation compensation. At this point, the processing methods diverge and either f-k migration or time domain back projection are run.

    F-k Migration Steps

    sar_type is "fk"

    1. Fast time FFT. The signal amplitude is not affected.
    2. Determine motion compensation parameters (sar_motion_comp.m). This step does not involve any operations on the signal.
      • For each input range line, the location of the reference phase center and the measurement phase center are converted to the SAR coordinate system which corresponds to the along track position of the current range line. The motion compensation parameters are "dx" which stores differences in the along-track vector (x-dimension) and "drange" which stores differences in the range dimension (determined by the squint vector) relative to the smoothed reference trajectory. The smoothed trajectory is smoothed using a piecewise polynomial filtering of the actual trajectory with a smoothing length equal to the SAR aperture for a mid-range-gate target. Range direction is determined by the squint angle. The correction is perfect for the measurements in the center of the aperture, but degrades towards the edges of the aperture. The degradation worsens as the SAR beamwidth is increased.
      • The smoothed reference trajectory is a first order polynomial fit to the input range lines that contribute to the output range line closest to the current input range line.
    3. Apply motion compensation which applies a time shift to remove range-dimension deviations from the smoothed SAR trajectory.
    4. Data are resampled in slow-time to be uniformly sampled (arbitrary_resample.m). A windowed sinc-interpolation kernel is used to handle non-uniform sampling. Any motion compensation in the along-track dimension is also applied (i.e. to compensate for situations when one antenna is in front or behind another antenna). The signal amplitude is not affected.
      • Zero-padding is done at this step at the beginning and end of a data segment to prevent circular convolution from occurring. For all blocks not at the beginning or end, data from neighboring chunks are loaded to prevent circular convolution. The zero-padding has the effect of reducing the signal power at the beginning and end of the segment by 3 dB since the aperture is only 50% supported.
    5. Slow time FFT. The signal amplitude is not affected.
    6. Apply frequency domain along-track position shifts caused by lever arm deviation relative to the reference point of the antenna array so that all SAR processed channels will be aligned in along-track.
    7. Zero pad in along track if at the beginning or end of the segment to prevent circular convolution effects from contaminating the output
    8. F-k migration SAR processor (f-k migration for layered media as described in Leuschen et al. 2000 [1]).
      • The total signal power in the scene is not changed, but the signal power is focused. For a point target this leads to an R^1 increase in signal power since the synthetic aperture grows proportionally with range and therefore collects more signal power. Because there is also a 1/R^4 spherical spreading loss for point targets, the net effect is a 1/R^3 signal loss with range. For an area-distributed nadir target which is pulse limited in the cross-track dimension, the net effect is 1/R^2.5 since the area of the nadir scatterer grows as R^0.5. Apart from these focusing effects, the signal power does not change: the focusing effectively shifts energy from multiple unfocused pixels into a single pixel, and poor focusing due to position errors or directional scattering may cause results to deviate from the ideal case. A specular target, which has a 1/R^2 spherical spreading loss, has perfect directivity, so there is no R^1 focusing gain and its output power remains 1/R^2. The noise power is normalized so that it is not range dependent. Going from unfocused raw data to SAR data, specular target SNR does not change and distributed target SNR increases in proportion to R (a target twice as far from the radar will see its per-pixel SNR increase by R since the energy from the whole aperture is focused to a single point; in the raw phase history image, the energy is spread over many pixels, usually in the shape of a hyperbola, and any one of these pixels has an SNR that is R lower than the SAR processed pixel for that target).
      • Supports subaperture processing. Subaperture processing does not change the signal amplitude, but directional scattering and loss effects will cause variations between subapertures.
      • Supports wavenumber windowing to reduce sidelobes. The default is a Hanning window. This will reduce the focusing gain for point targets, but does not affect specular targets.
    9. IFFT is applied in fast time and slow time.
    10. Remove fast-time motion compensation (this is necessary for array processing) and store the single look complex (SLC) SAR images to disk.

    [1] Leuschen, Carl, S. Gogineni, and Dilip Tammana. "SAR processing of radar echo sounder data." In IGARSS 2000. IEEE 2000 International Geoscience and Remote Sensing Symposium. Taking the Pulse of the Planet: The Role of Remote Sensing in Managing the Environment. Proceedings (Cat. No. 00CH37120), vol. 6, pp. 2570-2572. IEEE, 2000.

    Time Domain Back Projection (TDBP) Steps

    sar_type is "tdbp"

    1. Repeat the steps for each output image pixel (target)
    2. For targets at and above the ice surface, apply differential time delays to each range line contributing to the target using a truncated sinc interpolation kernel
    3. For targets below the ice surface, determine the refraction point and then compute the time delay in each medium

    For time delay calculations, use true 3D coordinates always, but limit the search for the refraction point and the interpretation of the surface in the following two ways:

    1. Consider refraction points in a way that is convenient for solving the problem. The simplest way to do this is to assume that the plane of incidence is the XZ-plane of the SAR coordinate system at the radar location projected onto the line connecting the target and radar. In other words, the plane of incidence always contains the line connecting the target and radar. Since this leaves an infinite number of choices, we choose the one that is closest to the XZ-plane of the SAR coordinate system, i.e. the new XZ-plane has a Z parallel to the SAR coordinate system Z at the radar.
    2. Assume all surface variation is along the SAR coordinate system x-axis only. Since this is ill defined because the x-axis is continuously changing, here is a more detailed process: when finding the surface elevation, simply scale the new XZ-plane x coordinate to match the SAR coordinate system x difference and use that scaling when accessing the surface elevation.

    Steps for full 3D processor (NOT IMPLEMENTED):

    1. Inside sar_task.m: Load DEM using custom binary reader that only loads the part of the DEM required for the current block of data. DEM loader should handle fine resolution tiled DEMs such as GIMP and ArcticDEM (e.g. 2 to 15 m DEMs of Antarctica and Greenland).
    2. Smooth DEM to ensure continuous slopes
    3. If it is helpful in debugging or potentially in speeding up computations, translate coordinates of target, radar, and surface into the most convenient coordinate system for solving the problem.
    4. STAGE 1: Replace step 1 with 2D linear interpolation to compute surface between target and radar
    5. STAGE 2: Replace time delay calculation with 3D solver that can handle slope orthogonal to the flight line

    Array Worksheet

    (Parameter spreadsheet worksheets: cmd, records, qlook, sar, array, radar, post, analysis)

    An example array worksheet is shown below:

    [Image: example array worksheet (Rds combinewfchan worksheet img1of2.jpg)]

    1. Each segment is listed in the first two columns in the order that it was collected. The segment ID is formed as YYYYMMDD_SS where YYYY is the year, MM is the month, DD is the day, and SS is the segment number. The ordering must match the command worksheet.
    2. bin_rng: Range bin range to use in multilooking (i.e. neighboring pixel snapshots).
    3. dbin: Range bin decimation rate (1 is no decimation). The default is 1.
    4. DCM.bin_rng: Only used for MVDR processing. The DCM is constructed using this range bin range. The multilooking is still done with array.bin_rng.
    5. DCM.line_rng: Only used for MVDR processing. The DCM is constructed using this range line range. The multilooking is still done with array.line_rng.
    6. diag_load: Optional argument for data covariance matrix methods (can be left empty to do no diagonal loading).
    7. dline: Range line decimation rate (1 is no decimation). The default is to decimate by half the length of line_rng.
    8. frm_types: See qlook worksheet description.
    9. img_comb: See img_comb fields below.
    10. imgs: specifies the waveform-adc pair list for each image.
    11. in_path: Should match the "out_path" from sar. If empty, the default is in_path = 'out' (CSARP_out).
    12. line_rng: Range line range to use in multilooking (i.e. neighboring pixel snapshots). Set the same way as bin_rng. A setting of [-5:5] would grab the 5 range lines before the current pixel being processed and the 5 range lines after it, for a total of 11 multilooks in along-track (see the short sketch after this list).
    13. method: array processing method. The options are:
      1. combine_rx: The wf-adc pairs are actually combined before SAR processing in the sar stage. This option must be used if and only if "combine_rx" in the sar worksheet is true. This is the typical field processing array method.
      2. standard: Periodogram (delay and sum) method of wf-adc pair combination (i.e. add together the array elements after applying the window specified in the "window" field)
      3. mvdr: Use minimum variance distortionless response (MVDR) algorithm to combine wf-adc pairs
      4. mvdr_robust: Use MVDR algorithm with a robust constraint on the desired direction of arrival to combine wf-adc pairs
      5. music: Use the multiple signal classification (MUSIC) algorithm to combine wf-adc pairs (Ralph Schmidt, Multiple Emitter Location and Signal Parameter Estimation, 1986).
      6. eig: Use peig Matlab function to combine wf-adc pairs
      7. risr: Use reiterative super resolution (RISR) (IEEE Transactions on Aerospace and Electronic Systems, 2011).
      8. geonull: Use maximum likelihood estimation beam former which places perfect nulls on surface clutter based on a priori surface information. The param.array.surf_layer.source should be set to surfData for best results.
      9. gslc: Generalized sidelobe canceler which uses MVDR with null constraints on the surface clutter similar to geonull.
      10. music_doa: direction of arrival method based on MUSIC (Ralph Schmidt, Multiple Emitter Location and Signal Parameter Estimation, 1986).
      11. mle: direction of arrival method using maximum likelihood estimation (mostly based on Wax and Ziskind Maximum Likelihood Localization of Multiple Sources by Alternating Projection 1988).
      12. dcm: direction of arrival method using a data covariance correlation method (Theresa Stumpf MS Thesis University of Kansas 2015).
      13. pf: direction of arrival method using the particle filter (Mohanad Al-Ibadi Dissertation University of Kansas 2019).
    14. Nsrc: Number of signals to assume for MUSIC processing.
    15. Nsv: Number of steering vectors (1 is the minimum). This value is passed as an argument to the steering vector function handle sv_fh (e.g. a value of 64 is passed to array_proc_sv).
    16. out_path: should be left empty for regular processing so the default directory is used. If special processing is done, enter a special folder name here. The default directory depends on the "array method" and this field:
      1. If this field contains "my_special_processing", the output directory will be CSARP_my_special_processing
      2. If this field is empty and the array method is $METHOD, the output directory is CSARP_$METHOD
    17. surf_layer: Layer filenames to use for aiding tomographic processing. If left blank, the default is "layerData". The default path for layer data files is used, so if "myLayer" is entered here, the software will look in the CSARP_myLayer directory inside the directory returned by ct_filename_out.m.
    18. sv_fh: Steering vector function handle (produces the steering vectors that are used for array processing). This is typically set to array_proc_sv.
    19. theta_rng: 1x2 vector which specifies the range of theta (in radians) to select the maximum value from. It can also be set to a single scalar which picks the closest theta-bin that matches that scalar. The default is "0", which grabs the nadir cross track frequency bin.
    20. tomo_en: Enable three dimensional (tomographic) processing. Default is true when DOA method is used or Nsv > 1. Otherwise the default is false. See Generating 3D Images, 3D Surface Tracking, and 3D DEM Generation.
    21. update_surf: Logical specifying whether or not to use img_comb_layer_params (required to be true for get_heights/qlook).
    22. window: Standard beam forming uses this to determine the cross-track window to apply (e.g. @hanning).
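
    The following is a rough conceptual sketch of how bin_rng and line_rng select the snapshots that are averaged (multilooked) around one output pixel. It uses placeholder data and is not the array_proc.m implementation.

      % Conceptual multilooking sketch with bin_rng/line_rng (not array_proc.m code)
      bin_rng  = -1:1;        % neighboring range bins used as snapshots
      line_rng = -5:5;        % neighboring range lines used as snapshots (11 looks)

      slc = randn(500,200) + 1i*randn(500,200);   % placeholder single-channel SLC image

      bin  = 100;             % output pixel range bin
      line = 50;              % output pixel range line
      snapshots = slc(bin + bin_rng, line + line_rng);

      % Delay-and-sum style power estimate: average power over all snapshots
      pixel_power = mean(abs(snapshots(:)).^2);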

    img_comb fields

    One image is formed for each wf-adc list in the imgs field. By default, these images are stored in files with the string "img_01", "img_02", and so on added to the filename. These may be combined into a single image with img_combine.m, which is called from qlook_combine_task.m and array_combine_task.m. This combined image is referred to as image zero and has the "img_II" string omitted from the filename. Except when array.tomo.en==true, if only a single image is specified in imgs, then no "img_01" file is created and the output is stored in the image zero file from the beginning.

    The behavior of image filenames follows this logic (a short sketch of this decision logic is given after the list):

    1. If array.tomo.en==true (tomography), then the output will be stored in "img_II" files and img_combine.m will not be called. The img_comb field is ignored and no combined image file is made.
    2. elseif length(imgs) == 1, then the output will be stored in the combined image file (no "img_II" tag) and no "img_01" file will be made. Whether or not img_combine.m is called is determined by:
      1. If a deramp system, the img_combine.m command will not be called
      2. If a non-deramp system, the img_combine.m command will be called (used to trim off the start/end of the image in fast time where the pulse compression overlap is low)
    3. elseif length(imgs) > 1 and img_comb is not empty, then each image is stored in the corresponding "img_II" filename and a combined image will be created with img_combine.m with a filename which omits "img_II" (the "zero" image).
    4. elseif length(imgs) > 1 and img_comb is empty, then each image is stored in the corresponding "img_II" filename and img_combine.m is not called
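
    The decision logic above can be summarized with the following Matlab-style sketch. The variable names (tomo_en, imgs, img_comb, is_deramp) and the example values are placeholders for the corresponding worksheet fields; this is illustrative, not toolbox code.

      % Sketch of the image file naming/combining logic (illustrative only)
      tomo_en   = false;               % array.tomo.en
      imgs      = {1, 2};              % two wf-adc pair lists (two images)
      img_comb  = [3e-6 -inf 1e-6];    % combine parameters (empty disables combining)
      is_deramp = false;               % deramp-on-receive system or not

      if tomo_en
        % Store img_II files only; img_combine.m is not called, img_comb is ignored
        make_img_files = true;   call_img_combine = false;
      elseif numel(imgs) == 1
        % Output goes straight into the combined ("image zero") file
        make_img_files = false;  call_img_combine = ~is_deramp;  % non-deramp: trim edges
      elseif ~isempty(img_comb)
        % img_II files plus a combined image created by img_combine.m
        make_img_files = true;   call_img_combine = true;
      else
        % img_II files only; img_combine.m is not called
        make_img_files = true;   call_img_combine = false;
      end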

    The img_comb parameters of param.qlook and param.array are:

    1. img_comb specifies the points at which images are combined. If undefined or empty, images are not combined. When there are N images, you may specify an (N-1)*3 length vector for this field and the images will be combined. Image 2 is combined with image 1 using the first three entries of the vector, image 3 is combined with that result using entries 4 through 6, and so on. Each combination uses three numbers which we call [T_comb, T_blank, T_guard]. The combined image always starts with image 1 at the top of the echogram, and then switches to image 2 after max(T_blank, T_comb+td_surface) seconds, where td_surface is the time delay to the surface for the particular range line in question. If the surface two way travel time is unknown (a value of NaN), then a surface return of td_surface = 0 seconds is used. If the surface return is too far away, it is possible that image 1's time gate will end before image 2 is supposed to be combined; the latest transition point is therefore T_end - T_guard, where T_end is the end of the time gate for image 1. Usually T_comb is set equal to the pulse duration of image 2 (to avoid using the saturated surface response in image 2) and T_guard is set equal to the pulse duration of image 1 (to avoid roll off from incomplete support during pulse compression). T_blank is usually set equal to the receiver blanking setting for the radar. If no blanking is used, then T_blank <= 0 (e.g. -inf or 0 are often used for T_blank). A short sketch of this transition time computation is given after this list.
      1. Example for radar with no blanking: For a 3 waveform collection with 1 us, 3 us, and 10 us waveforms, we would probably set this field to [3e-6 -inf 1e-6 10e-6 -inf 3e-6]. This normally results in image 2 being combined with image 1 3e-6 seconds after the surface return and image 3 being combined with image 2 10e-6 seconds after the surface return. The other numbers ("-inf 1e-6" and "-inf 3e-6") are normally not used. Sometimes T_comb can be reduced slightly as long as the pulse compression sidelobes do not become a problem (e.g. [2.75e-6 -inf 1e-6 9e-6 -inf 3e-6] may be valid and would let the higher sensitivity image start sooner in fast time).
    2. img_comb_layer_params If update_surf is true, then this specifies the surface layer to load for combining. The format follows the opsLoadLayers parameter structure. For example: struct('name','surface','source','layerdata','layerdata_source','CSARP_post/layerData')
    3. img_comb_weights This field applies a weight (specified in dB) to each image being combined. numel(img_comb_weights) must equal numel(imgs). Normally this field should not be used, because these weights should be applied using the adc_gains field (gains that depend on ADC) or the chan_equal_dB field (gains which depend on the receiver path).
    4. img_comb_weights_mode If this field is set to the string "auto", the intensity difference between the images at the transition point will be estimated using the transition bins (the blend bins specified in img_comb_bins) to estimate a weight to apply to the second image when combining. If it is not equal to "auto", then no weighting is applied by this step before combining. Note that img_comb_weights are applied regardless of this img_comb_weights_mode setting. Normally this field should only be used for visualization purposes since it introduces a time varying gain correction for each frame. It is helpful when there are errors in the adc_gains and chan_equal_dB fields or when a smoother transition between image intensities along the combine edge is desired.
    5. img_comb_mult This field affects the T_comb value in img_comb. It has been used when there is along-track Doppler aliasing of the surface that shows up at a fixed constant multiple of the surface time delay. If this field is specified, the transition time becomes the minimum of td_surface+T_comb and td_surface*img_comb_mult.
    6. img_comb_bins This is a positive integer scalar specifying the number of bins to use when blending two images together. The default is no blend or a value of 1 which results in an abrupt transition from one image to the next.
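
    The transition time described in the img_comb entry above can be sketched as follows for a single range line. The values are illustrative, and the img_comb_mult behavior is shown only as a comment since its exact interaction with T_blank is not spelled out above.

      % Sketch of the image 1 -> image 2 transition time for one range line
      T_comb  = 3e-6;        % typically the pulse duration of image 2
      T_blank = -inf;        % receiver blank time (no blanking here)
      T_guard = 1e-6;        % typically the pulse duration of image 1

      td_surface = 3.5e-6;   % surface two-way travel time for this range line
      T_end      = 12e-6;    % end of image 1's time gate (illustrative)

      if isnan(td_surface)
        td_surface = 0;      % unknown surface: use 0 seconds
      end

      T_transition = max(T_blank, td_surface + T_comb);
      % If img_comb_mult is specified (see entry 5), the transition would instead use
      % min(td_surface + T_comb, td_surface*img_comb_mult) in place of td_surface + T_comb.

      % Never transition later than T_guard before the end of image 1's time gate
      T_transition = min(T_transition, T_end - T_guard);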

    NOT IMPLEMENTED: It should be possible to create a specific "img_II" file without recreating all the images. Right now, to just create the "img_02" file, you would have to create the "img_01" file again because there is no way to just do "img_02". This could probably be an "imgs_en_mask" field that is a logical array the same size as "imgs".

    Array Processing Steps

    The main purpose of array.m is to create output power intensity images using SAR processed data as inputs. Usually this involves combining multiple wf-adc pairs in array processing to form low-gain/high-gain images that are then combined into a final image.

    The processing steps are split between array.m, array_task.m, and array_combine_task.m.

    array.m

    1. Load the records information (trajectory, attitude, radar settings, and raw data record information)
    2. Break each desired output image into blocks (called chunks in the source code) based on input range lines
    3. Determine what input data is required for each block and create tasks for each block

    array_task.m

    Loads SAR SLC images and applies array processing for each output image pixel.

    1. Load single look complex SAR images from SAR processing. All the wf-adc pairs specified for the current image are loaded for one chunk (one block of along-track data). These are read in from the in_path directory.
    2. Array processing is applied to these pairs and an image is generated.
      • The "CSARP_standard" data product uses delay and sum beam forming with incoherent power multilooking (a.k.a. snapshots). The signal power is not affected by this process.
      • The "CSARP_mvdr" data product uses the minimum variance distortionless response adaptive filter (adaptive because the signal model covariance matrix is estimated from the data). While a perfect covariance matrix does not affect the signal power, in practice, the number of snapshots is too low and self nulling will always occur and the signal power will be reduced by some unknown amount. This means that MVDR is not good for radiometric work.
      • The "CSARP_music" data product outputs the MUSIC "cepstrum" which is the inverse of the noise eigenvector array pattern and therefore is not representative of signal power at all. However, it is loosely speaking proportional to signal quality so that higher SNR results in larger cepstrum values in general.

    array_proc.m

    The array processor, array_proc.m, is called by array_task.m. The function takes multichannel array data and a param structure and processes the measurements in the cross-track dimension. The function is a large switch yard that offers multiple array processing methods based on the assignment of param.array.method. The multichannel SAR measurements may be either beamformed into a combined two-dimensional image or used to obtain estimates of the arrival angles of sources incident on the array (needed for three-dimensional imaging). The following is a list of Beamformers and DOA estimators currently supported in array_proc.m.

    For more detailed information on the beamformers and DOA estimators supported by array_proc.m, see the array_proc page.

    array_combine_task.m

    This step takes the block/chunk outputs from each of the tasks and concatenates them together into frame images, one per image specified in array.imgs. If image combining is enabled with array.img_comb, then the images are combined in the fast-time dimension to form a combined image. This image is truncated to remove pulse compression roll off for non-deramp-on-receive systems; the default is to cut off at 50% support or a 3 dB drop. Aside from the roll off on the edges, the signal amplitude is not affected during array_combine_task.

    Radar images formed using the standard method (i.e. CSARP_standard) will have:

    • For specular targets: 1/R^2 losses, and the signal power in the image will be equal to the signal power at the antenna input (1/R^2).
    • For point targets: 1/R^3 losses, i.e. the 1/R^4 signal power at the antenna input combined with the R^1 focusing gain.

    Radar Worksheet

    (Parameter spreadsheet worksheets: cmd, records, qlook, sar, array, radar, post, analysis)

    The radar worksheet specifies the radar parameters which are not set when loading the raw data and also the processing parameters that are common for all processing methods. Most of this worksheet's fields are common across all radar systems, but some fields depend on the radar system.

    An example radar worksheet is shown below for a radar depth sounder.

    All waveform-adc pairs (wf-adc) that will be combined to create a single image, must have the same sampling rate and the same time gate. Therefore, the radar worksheet input sampling parameters (e.g. fs, wfs(wf).DDC_dec, wfs(wf).Tadc_adjust, wfs(wf).Tadc) and the processing time parameters (e.g. wfs(wf).BW_window and wfs(wf).ft_dec) must be set to make this happen. Typically, these fields will be the same for all waveforms in a segment.

    [Images: example radar worksheet including the wfs fields (Radar worksheet.jpg, Radar worksheet wfs1.jpg, Radar worksheet wfs2.jpg)]

    Field Description
    Date/YYYYMMDD and 1/Segment: Each segment is listed in the first two columns in the order that it was collected. The segment ID is formed as YYYYMMDD_SS where YYYY is the year, MM is the month, DD is the day, and SS is the segment number. The ordering must match the command worksheet.
    fs double scalar that specifies the sampling frequency (Hz), must be specified
    PRF double scalar that specifies the pulse repetition frequency (Hz). This does not consider hardware presumming. For example, if the PRF is 12000 with 12 presums, then the effective PRF (EPRF) is 1000 Hz; that is, 1000 records are stored per second. The PRF field would be specified as 12000 Hz in this case.
    adc_bits positive integer scalar that specifies the number of ADC bits (used to convert quantization to voltage). The quantized data are divided by 2^adc_bits. Default is 0.
    Vpp_scale double scalar that specifies the full scale voltage for the ADCs (used to convert quantization to voltage). The quantized data are multiplied by Vpp_scale. Default is 1.
    lever_arm_fh function handle that specifies the lever arm function used to translate GPS/IMU trajectories/attitude to the radar wf-adc pair phase center. This field should not be set if GPS data are not available. This field should always be @lever_arm if it is set. For new radar campaigns, the lever_arm.m function must be modified. See lever_arm.m for more detail.
    wfs structure array describing each waveform transmitted by the radar system. See #wfs fields below. Since this structure has many fields it is usually specified using the special structure array format.
    size/ar:wfs Specifies a structure array called "wfs". The "size" field specifies the number of waveforms for this segment (i.e. number of elements in the "wfs" structure array). Default is 1.

    Radar Worksheet: wfs

    This section describes the fields in the wfs structure array. The transmit waveform is always assumed to be a chirp with start and stop frequencies given by [wfs(wf).f0 wfs(wf).f1]*wfs(wf).fmult + wfs(wf).fLO. Up-chirps and down-chirps are supported. fmult and fLO are provided for convenience only: f0 and f1 can be used alone since fmult defaults to 1 and fLO defaults to 0. The fast time axis of the raw data is assumed to be wfs(wf).Tadc + wfs(wf).Tadc_adjust + dt*(0:Nt-1).' where dt = 1/fs*wfs(wf).DDC_dec. The data are shifted by wfs(wf).Tsys during pulse compression, so this acts a little like an adjustment to the time axis except that it is applied to the data instead of the time axis and can be specified independently for each wf-adc pair. A short sketch of these relationships is given below.
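
    The following minimal sketch evaluates these two relationships with illustrative values (the fs, Nt, and wfs numbers are examples, not defaults):

      % Sketch of the assumed transmit chirp frequencies and raw fast-time axis
      wf = 1;
      wfs(wf).f0 = 180e6;  wfs(wf).f1 = 210e6;      % chirp start/stop (Hz), illustrative
      wfs(wf).fmult = 1;   wfs(wf).fLO = 0;         % defaults
      wfs(wf).Tadc = 2e-6; wfs(wf).Tadc_adjust = 0; % record start time and adjustment (s)
      wfs(wf).DDC_dec = 1;
      fs = 500e6;  Nt = 4096;                       % sampling frequency (Hz) and samples

      % Transmitted start/stop frequencies
      f_tx = [wfs(wf).f0 wfs(wf).f1]*wfs(wf).fmult + wfs(wf).fLO;

      % Raw data fast-time axis
      dt   = 1/fs * wfs(wf).DDC_dec;
      time = wfs(wf).Tadc + wfs(wf).Tadc_adjust + dt*(0:Nt-1).';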

    Field Description
    adc_gains_dB Double Nc length vector which specifies the receive gain for each ADC in power-dB. For radiometric calibration, this is set to normalize a specular reflection 1 m from the radar and includes all system effects (e.g. transmit power, antenna gain, etc.). The default is all zeros.
    bad_value The fill value to use when a bad record is loaded or for undefined range bins that occur when the time gate is allowed to vary (e.g. as with deramp radars that track the surface return). Typically this will be set to zero or NaN. The default is zero. When NaN is used, it may be combined with nan_dec in qlook.m to produce results that take the bad records into consideration.
    bit_shifts Nc length integer vector indicating the number of hardware right bit shifts for each adc (used to convert quantization to voltage). The digital receiver hardware presum circuitry may presum enough samples to overflow the accumulator register. To handle this situation, bit shifts are usually applied to keep this from happening. Usually this field is read from the raw data file header. Occasionally the file is incorrect and some data formats do not include this information and this allows the software operator to provide the correct value. The default for each ADC is the value stored in records.settings.wfs(wf).bit_shifts or 0 if that is not set. A single right bit shift in hardware is equivalent to dividing the signal by 2 hence the raw data are multiplied by 2^bit_shifts.
    blank Two element double vector used to indicate the region of the signal to be blanked. This region of the signal will be set to zero after loading. The blank time is the larger of two numbers passed in:
    • Number 1 is added to surface time delay and is usually equal to pulse duration
    • Number 2 is unmodified and is usually equal to hardware blank setting
    • Set either number to -inf to disable. The default is -inf for both numbers.
    BW_window 2 element double array that specifies the lower and upper frequency of the frequency domain window. The default is to match the start and stop frequency of the chirp. This field is typically used with a low time-bandwidth product chirp which has substantial energy outside the start and stop frequencies. It is also used for deramp on receive radars to remove bad data at the start and stop of the chirp. (Typically there is a substantial amount of the RF signal that is lost because the deramp LO is not present unless stretch processing is used.) Note: For deramp on receive radars, this value must be an integer multiple of two times the chirp rate divided by the greatest common divisor of the possible sampling frequencies (e.g. in a system that has variable decimation rates). data_pulse_compress.m checks to make sure that wfs.BW_window is an integer multiple of 2*wfs.chirp_rate/wfs.fs_dec. For example, for a 6e9/250e-6 = 2.4e13 Hz/sec chirp rate and a 125e6 sampling frequency with no decimation allowed, the BW window must be a multiple of 2*6e9/250e-6/125e6 = 384 kHz. The reason for this constraint on BW_window is that many functions use BW_window directly to infer the precise bandwidth or sampling frequency of the pulse compressed data. If BW_window is not an exact multiple, then there will be an inconsistency between the fast-time axis of the data matrix and the expected sampling frequency of the pulse compressed data. For systems that allow different decimation rates, the BW_window should be chosen so that all decimation rates satisfy the constraint. The function run_BW_window_gen.m can be used to determine the best BW_window settings. (A short sketch of this check is given after this field list.)
    chan_equal_dB Double Nc length vector which specifies the power level of this wf-adc channel relative to the reference (a value of +3 will cause the corresponding wf-adc pair to have its power weighted by -3 dB). The default is all zeros. See Receiver equalization for setting the chan_equal_dB field.
    chan_equal_deg Double Nc length vector which specifies the phase weights for each path in degrees (a value of +15 will cause this channel to be weighted -15 deg; i.e. have its phase delayed by 15 deg). The default is all zeros. See Receiver equalization for setting the chan_equal_deg field.
    coh_noise_arg Structure controlling coherent noise removal. See coh_noise_arg for more detail.
    coh_noise_method String containing '', 'estimated', or 'analysis'. If empty, then no coherent noise removal is done. 'estimated' causes the coherent noise to be estimated on each loaded block of data independently; this is the quicker and less accurate coherent noise removal. 'analysis' uses the method described in Coherent noise. The default is empty. See coh_noise_arg for more detail.
    conjugate Logical indicating that this waveform should be conjugated when loaded.
    DDC_dec Scalar positive integer that specifies the amount of decimation that was applied inside the digital receiver before storage. The effective sampling rate is fs = param.radar.fs/wfs(wf).DDC_dec. The default is 1 (no decimation).
    DDC_dec_max The maximum allowed value for DDC_dec. This is used by the deramp pulse compression to ensure that the output time sampling (which is affected by the DDC_dec) always starts on a multiple of the coarsest sample spacing. (The maximum DDC_dec value produces the coarsest sample spacing.) The default value is to set this field equal to DDC_dec.
    DDC_freq Scalar double that specifies the NCO frequency (in Hz) use to down convert the data in the digital receiver. The default is zero (no complex baseband process).
    DDC_NCO_delay Time delay to apply to the NCO's time axis to ensure that the phases align for deramp on receive followed by FIR decimation (DDC_dec ~= 1).
    deconv Structure controlling the post-pulse compression deconvolution. See deconv section for more detail.
    f0 double scalar start frequency (Hz)
    f1 double scalar stop frequency (Hz)
    fLO double scalar local oscillator frequency (shift in frequency). Default is 0. E.g. with a 12-18 GHz DDS output, the Ku-band Radar could have an LO setting of zero so that the transmitted signal is 12-18 GHz, and the Snow Radar would have an LO setting of -10 GHz. Normally not defined in the spreadsheet.
    fmult double scalar frequency multiplication. Default is 1. Normally not defined in the spreadsheet except for radars that use a PLO or frequency multiplier to generate the transmit chirp (e.g. the old Snow Radar).
    ft_dec Used for fast time decimation after pulse compression. This is a 1x2 vector of positive integers which represents the resampling ratio. The Matlab function resample is used to apply a P/Q resampling polyphase anti-aliasing filter to the fast-time data. This vector contains the [p q] arguments to the resample function. Also check whether the DDC has reduced the bandwidth (e.g. by additional decimation), since this may change the required ft_dec. If not specified, the default values are [p q] = rat((wfs(wf).f1-wfs(wf).f0)/fs*wfs(wf).DDC_dec). Usually it is best to over sample by 2 times the bandwidth for power detected products since power detection doubles the bandwidth.
    ft_wind Function handle to fast time frequency domain window (e.g. @hanning).
    ft_wind_time Function handle to fast time time domain window (e.g. @hanning). Note that this is usually not set and the default is an empty array [] for no time domain windowing. The tukey time domain window is applied separately from this ft_wind_time function.
    gain_dir Specifies the location of the fast time gain directory. The default is 'analysis' for CSARP_analysis.
    gain_en Logical enabling fast time gain correction. Default is false.
    next Nc by 2 wf-adc pair list with one entry for each adc in this waveform. This allows wf-adc loading to link to and automatically load other wf-adc pairs. If no additional wf-adc pairs need to be loaded, then the field can be left undefined or [NaN NaN] placed in the [wf adc] pair for the corresponding ADC. The default is an array filled with NaNs (i.e. no linking is done). For example, with a 4-channel ADC system, the field for wfs(1) might be [2 1; 2 2; 2 3; 2 4]. Now, when wf-adc "1-1" is loaded, it will automatically load wf-adc "2-1". This is useful when zero-pi modulation (0 and 180 deg) or IQ (0 and 90 deg) transmission is stored in separate wf-adc pairs so that the two pairs always need to be loaded together. Usually this field is used in conjunction with the "weight" field.
    num_sam Scalar positive integer. Usually this field is read from the raw data file header. Occasionally the file is incorrect and some data formats do not include this information and this allows the software operator to provide the correct value. The default is the value stored in records.settings.wfs(wf).num_sam. It is an error if it is not set in one of these places (raw data file header, records file settings, or radar worksheet).
    nz_trim Nyquist zone trim. Allows different numbers of samples to be trimmed from each Nyquist zone. Only used for deramp on receive pulse compression. Specified as a cell array of 2-element vectors. The index into the cell array is the Nyquist zone (1-indexed). The first entry in the 2-element vector is number of samples to trim from the start of the range window and the second entry is the number of samples to trim from the end of the range window. This is done after pulse compression. The default is [0 0] for each Nyquist zone (i.e. no trimming).
    prepulse_H Prepulse compression deconvolution. The transfer function of the system is deconvolved using an inverse filter. The software assumes an S2P file is available. (The system transfer function is usually measured with a network analyzer.) See Prepulse compression deconvolution. See prepulse_H fields below.
    presums Scalar double indicating the number of hardware presums for this waveform (used to convert quantization to voltage). Usually this field is read from the raw data file header. Occasionally the file is incorrect and some data formats do not include this information and this allows the software operator to provide the correct value. The default is the value stored in records.settings.wfs(wf).presums or 1 if that is not set. The raw data are divided by the number of presums.
    presum_threshold Scalar double indicating the fraction of software presums that must be good records. This refers to the special mode in data_load that allows additional software presumming during loading. The default is 0.5, which allows about 50% of the records to be bad before marking the whole presummed result bad. The software presumming is set by qlook.presums, sar.presums, and analysis.presums which all have defaults of 1 (no software presumming).
    quantization_to_V_dynamic Logical scalar that enables or disables dynamic quantization to voltage conversions. For some systems the bit shifts are changed during recording and so the conversion has to be determined during file loading. Normally this field is not defined and its default value is automatically determined based on the file version.
    ref_fn Old deconvolution method. Generation of these files is not supported any longer, but old files should still work.
    rx_paths See Waveform-adc pair for an explanation of receiver path (rx_paths) settings. This is a positive integer Nc-length vector which specifies which receiver paths each adc is connected to. Nc is the number of ADCs for this waveform. If a receiver path is set to NaN, then it is assumed to not be connected. The default is a length 1 vector equal to [1]. This field must be specified if Nc > 1. It is acceptable to have multiple ADCs have the same rx_paths entry. This would indicate that the receiver path was split and sent to two separate ADCs (e.g. in the case of a low gain/high gain receiver setup where two ADCs are used to sample a low gain version and a high gain version of a signal coming from a common receive antenna).
    t_ref Double scalar indicating the delay of the reference LO signal for deramp receivers. This is the field used to delay all ADCs for a particular waveform for deramp receivers to capture system time delay (Tadc_adjust delays the raw data and t_ref delays the pulse compressed data for deramp systems). A positive value indicates that the t_ref starts after the transmit waveform. The default value is zero.
    Tadc Double scalar override of the record start time (sec) which is normally specified in the data file header or metadata file. This field should reflect the radar control software settings. This should be left blank or not included except when the file format requires this field to be provided or in rare cases where the file header is in error. Normally, the system time delay should be removed with the Tadc_adjust and Tsys variables and not this field. The fast-time vector will be affected by this field. This field defaults to zero.
    Tadc_adjust Double scalar used to remove the system time delay from all adcs. This should be used to remove the average system time delay. Tsys should only remove relative delay between channels. Tadc_adjust is added to the start time and has opposite sign to Tsys. For example, if a target is showing up at 4e-6 and should be showing up at 3e-6, then set Tadc_adjust to -1e-6. The fast-time vector will be affected by this field. Generally this field should be constant for all wf-adc pairs. See System time delay for details on setting the system time delay.
    td_mean Double scalar indicating the mean time delay for deramp on receive system. Deskew is done relative to this time delay. The default is 3e-6 sec.
    time_raw_trim 2 element positive integer vector indicating the number of samples to remove from the start and the number of samples to remove from the end of the raw data. Currently only implemented in data_load.m for some file versions. The default is [0 0] or no trimming.
    Tpd Double scalar indicating the pulse duration (sec). This field must be specified.
    Tsys Double Nc-length vector which specifies the system time delay for each receiver path. This should not be used to remove a constant delay that applies to all channels; use Tadc_adjust for that. The Tsys variable should only be used to remove relative time delays between channels. A value of +10 ns means that 10 ns is removed (i.e. targets will shift 10 ns closer to the radar). These values are calculated from the receiver equalization process. See System time delay for details on setting the system time delay. See Receiver equalization for setting the Tsys field. For example, if the surface is at 3.5 us and the surface multiple is at 6.5 us, we can deduce the absolute offset to be td = 2*3.5 - 6.5 = 0.5 us (see the sketch after this table). The default is all zeros.
    tukey Double scalar that specifies the alpha in tukeywin(N,alpha) for the time-domain Tukey weighting used on transmit. This Tukey window is applied to the reference function in the time domain and has a length, N, equal to the length of the reference function.
    tx_weights Double vector of voltage-scale amplitudes for each transmit antenna phase center (indexing must be aligned with lever_arm.m function). There should be one entry for each transmit phase center identified in lever_arm.m.
    zero_pad Scalar integer indicating how much additional zero padding to add. It used to be specified in seconds and is now specified in samples. The default is zero additional zero padding, which means the zero padding equals the length of the reference function minus one (the length of the reference function defaults to the number of samples needed to cover the pulse duration, Tpd). When the time-bandwidth product is small and/or the fast-time window has abrupt edges (e.g. a boxcar window), the frequency-domain decimation can have a very long impulse response that causes circular convolution artifacts; increasing the zero padding may help.
    weight Nc-element double vector specifying a weight to apply to each wf-adc pair. This is usually used in conjunction with the "next" field to allow different wf-adc pairs to be automatically combined during loading. The default is an all ones vector (i.e. no weighting). For example, if zero-pi modulation (0/180 deg transmission) is used where the 0 deg transmission is stored in wf-adc 1-1 and the 180 deg transmission is stored in wf-adc 2-1, then the weight for 1-1 should be 0.5 and the weight for 2-1 should be -0.5. The two wf-adc pairs should be linked by setting the next field of the first wf-adc pair, 1-1. When the 1-1 wf-adc pair is specified in qlook, sar, or analysis, both the 1-1 and 2-1 data are loaded and combined using the weights 0.5 and -0.5.
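
    As a worked example of the Tsys and Tadc_adjust conventions above (the sketch referenced in the Tsys row), here is a minimal Matlab sketch of the surface-multiple deduction. The variable names t_surf, t_mult, and td are illustrative only, and the commented assignments simply indicate where such a result would go, with placeholder values:

      % Surface return observed at 3.5 us and surface multiple at 6.5 us.
      % The multiple makes the radar-to-surface trip twice, so forming
      % 2*t_surf - t_mult cancels the true two-way surface time and leaves
      % the absolute system delay td:
      t_surf = 3.5e-6;             % observed surface return time (sec)
      t_mult = 6.5e-6;             % observed surface multiple time (sec)
      td     = 2*t_surf - t_mult;  % absolute system delay = 0.5e-6 sec
      % Remove the average (absolute) delay with Tadc_adjust (opposite sign):
      %   param.radar.wfs(wf).Tadc_adjust = -td;
      % Remove only channel-to-channel differences with Tsys (placeholder values):
      %   param.radar.wfs(wf).Tsys = [0 0.5 -1.2]*1e-9;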

    Radar Worksheet: wfs.coh_noise_arg

    Structure containing fields which control the coherent noise removal. See Coherent noise for more detail.

    Field Description
    fn Directory containing coherent noise removal files. The default is 'analysis' for CSARP_analysis. Used when 'type' == 'analysis'.
    DC_remove_en Default is true. Enables DC removal. Used when 'type' == 'estimated'.
    B_coh_noise Default is [1] (no feed-forward filtering). Specifies the feed-forward (numerator) coefficients of the filter. Used when 'type' == 'estimated'.
    A_coh_noise Default is [1] (no feedback filtering). Specifies the feedback (denominator) coefficients of the filter. Used when 'type' == 'estimated'. See the sketch after this table.
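
    The B_coh_noise and A_coh_noise coefficients appear to follow the usual Matlab filter(B,A,x) convention (B holds the feed-forward/numerator taps, A the feedback/denominator taps). The sketch below is purely illustrative: the coefficient values are not toolbox defaults, and the DC-blocking filter shown is just one possible choice.

      % Illustrative DC-blocking (high-pass) IIR filter; the defaults
      % B = [1] and A = [1] apply no filtering at all.
      B_coh_noise = [1 -1];        % feed-forward (numerator) coefficients
      A_coh_noise = [1 -0.999];    % feedback (denominator) coefficients
      x = 1 + 0.01*randn(1,1000);  % example sequence with a DC offset
      y = filter(B_coh_noise, A_coh_noise, x);  % DC component is suppressed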

    Radar Worksheet: wfs.deconv

    Structure containing fields which control the post-pulse compression deconvolution. See Deconvolution for more detail.

    Field Description
    en Default is false. Logical that enables deconvolution.
    fn Directory containing deconvolution files. The default is 'analysis' for CSARP_analysis.
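
    A minimal sketch of enabling deconvolution with the two fields above (wf is the waveform index; the directory value shown is the stated default):

      param.radar.wfs(wf).deconv.en = true;        % enable deconvolution
      param.radar.wfs(wf).deconv.fn = 'analysis';  % CSARP_analysis directory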

    Radar Worksheet: wfs.prepulse_H

    Fields for wfs(wf).prepulse_H structure. See Prepulse compression deconvolution for more details.

    Field Description
    dir Directory containing deconvolution files. The default is 'analysis' for CSARP_analysis. Used when 'type' == 'inverse_filter'.
    fn Filename for deconvolution files. The default is 'prepulse_H'. Used when 'type' == 'inverse_filter'.
    type String containing the type of prepulse compression deconvolution. Default is an empty string, '', for no deconvolution.

    Options for wfs(wf).prepulse_H.type field. If left as an empty matrix, then no prepulse deconvolution is done.

    prepulse_H.type Description
    filtfilt Deconvolution using filtfilt function.
    inverse_filter Deconvolution using inverse filter specified in deconvolution files.
    NI_DDC_2019 Hardcoded delay/phase correction for the AWI version of the Snow Radar.
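
    A minimal sketch of selecting the inverse-filter option with the fields above (wf is the waveform index; the dir and fn values shown are the stated defaults):

      param.radar.wfs(wf).prepulse_H.type = 'inverse_filter';
      param.radar.wfs(wf).prepulse_H.dir  = 'analysis';    % CSARP_analysis
      param.radar.wfs(wf).prepulse_H.fn   = 'prepulse_H';  % deconvolution filename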

    Post Worksheet

    cmd-records-qlook-sar-array-radar-post-analysis

    The post worksheet specifies the parameters used by the post.m function, which is run from run_post.m or run_master.m. Posting creates all the outputs in the CSARP_post directory. It is used for the following tasks:

    • Create radar echogram image files (e.g. jpg files). There are a number of format parameters to help visualize the data in the best way. These are stored in the images directory.
    • Create radar echogram PDF files. Very similar to image file generation. These are stored in the pdf directory.
    • Copy L1B radar echogram data files from their regular location into the CSARP_post directory. These are stored in the same directory name that they came from.
    • Create CSARP_layerData files in the CSARP_post directory. Layer information can come from CSARP_layerData or from OPS.
    • Create CSV and KML files for the surface and bottom layer information. The csv and kml directories contain all the data even when no layer data exist (nonexistent values are set to -9999). The csv_good and kml_good directories are just versions that have data points for both the surface and bottom layers. Layer information can come from CSARP_layerData or from OPS. You must use run_post.m to create the concatenated KML and CSV files. run_master.m will not run the final step that is required.
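
    A minimal sketch of enabling the most common posting outputs across all selected segments with ct_set_params (the Boolean fields are described in the parameter list below; verify the exact field paths against your own parameter spreadsheet):

      % Enable maps, echograms with layers, CSV/KML output, and concatenation:
      params = ct_set_params(params,'post.maps_en',true);
      params = ct_set_params(params,'post.echo_en',true);
      params = ct_set_params(params,'post.layers_en',true);
      params = ct_set_params(params,'post.csv_en',true);
      params = ct_set_params(params,'post.concat_en',true);
      params = ct_set_params(params,'post.img_type','jpg');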

    An example post worksheet is shown below for the accumulation radar:
    Accum post worksheet img1of2.jpg Accum post worksheet img2of2.jpg

    An example post worksheet is shown below for the radar depth sounder:
    Rds post worksheet img1of2.jpg Rds post worksheet img2of2.jpg

    An example post worksheet is shown below for the snow radar:
    Snow post worksheet img1of2.jpg Snow post worksheet img2of2.jpg

    Each segment is listed in the first two columns in the order that it was collected. The segment ID is formed as YYYYMMDD_SS where YYYY is the year, MM is the month, DD is the day, and SS is the segment number. The ordering must match the command worksheet.

    • out_path: output directory (overrides CSARP_post), leave empty for default
    • data_dirs: cell vector of strings containing the output directories that will be posted, the first one listed is the "master" data which will be used to create images, pdfs, etc.
      • For RDS field processing {'csarp-combined','qlook'}
      • For RDS post-processing {'mvdr','standard','qlook'}
      • For other radars {'qlook'}
    • layer_dir: Leave empty when using OPS. Set to 'layerData' for RDS posting. If left empty, the "master" data will be used for layer information (typical setting for non-RDS data). To create layerData, use run_make_layer_files.m.
    • layers: A struct array of the layers to be plotted. The format of the struct array should match the format required by opsLoadLayers.m. Default is:
        param.post.layers = [struct('name', 'surface', 'source', 'layerData') ...
          struct('name', 'bottom', 'source', 'layerData')];
      

      To use OPS:

      layers = struct('name', 'surface', 'source', 'ops');
      layers(end+1) = struct('name', 'bottom', 'source', 'ops');
      params = ct_set_params(params,'post.layers',layers);
      
    • surface_source: A struct denoting the layer to use as the surface. Default is:
        param.post.surface_source = struct('name', 'surface', 'source', 'layerData', 'existence_check', false);
      

      To use OPS:

        param.post.surface_source = struct('name', 'surface', 'source', 'ops', 'existence_check', false);
      
    • maps_en: Boolean, create map images
    • echo_en: Boolean, create echogram images
    • layers_en: Boolean, create echogram images with layers (only takes effect for images when echo_en is also enabled); enabling this always creates layerData files
    • data_en: Boolean, copy data files
    • csv_en: Boolean, create csv files of layer information
    • concat_en: Boolean, create concatenated csv files
    • pdf_en: Boolean, create pdf files from the image files (image file creation must also be enabled when this is enabled)
    • img_type: set to "jpg" or "png", typically this should be 'jpg'
    • img_dpi: dpi of output images
    • map.location: 'Arctic', 'Greenland', 'Canada', 'Norway', 'Antarctica', or 'geotiff'
      • Default for Sea Ice in the Arctic is 'Arctic'
      • Default for Greenland in the Arctic is 'Greenland'
      • Default for Canada in the Arctic is 'Canada'
      • Default for Sea or Land Ice in the Antarctic is 'Antarctica'
    • map.type: 'contour', 'combined', or 'ascat' (run ascat_to_geotiff.m before using 'ascat'; otherwise the required geotiffs may not have been downloaded yet).
      • Default for RDS and accum is 'combined'
      • Default for FMCW radars land ice is 'contour'
      • Default for FMCW radars sea ice is 'ascat' (unless it is not available, then use 'contour')
      • For map.location == 'geotiff', map.type should also be set to 'geotiff'. Also, a new field or column called "map.geotiff" of type "r" should be added. This should be a 2 element cell vector where the first element is a string containing the geotiff path using the standard ct_filename_gis convention (i.e. relative paths will be appended to gRadar.gis_path). The second element is a string containing the map title.
    • echo.*: This structure is passed directly to publish_echogram.m in the param input argument for this function.
    • echo.detrend: Optional structure that controls detrending. For example:
      struct('mode','polynomial','poly_order',7,'depth','[min(Surface_Depth)+0.5 max(Surface_Depth)+20]')
      
    • echo.filter: Optional structure that controls additional along-track multilooking. For example:
      @(data) fir_dec(data,ones(1,7)/7,1)
      
    • echo.caxis: Optional 1x2 vector that controls the caxis. The caxis is normally set to the full dynamic range. The two numbers specify the ratio of values to use; for example, [0.1 0.9] sets the caxis to cover the 10% to 90% range of the finite values (a sort operation is used), so the bottom and top 10% are saturated. Setting this field (even to [0 1]) has a special effect when polynomial detrending is enabled: the caxis is then based only on the values within the detrending depth range. For all other detrend modes, or with no detrending, [0 1] has no effect because it simply uses 100% of the image pixel values to determine the dynamic range. See the sketch after this list for a usage example.
    • echo.elev_comp: Type of elevation compensation to apply when images are created. Settings are:
      • 0 to do no compensation
      • 1 to compensate for elevation variations and use the mean surface for the origin of the y-axis
      • 2 to compensate for surface elevation changes and set the origin of the y-axis to the mean surface
      • 3 to plot on WGS-84 axis
    • echo.depth: specifies the range on the depth axis that will be shown in the images, typical settings are:
      • FMCW Sea Ice missions: [min(Surface_Depth)-3 max(Surface_Depth) + 4]
      • FMCW Land Ice missions: [min(Surface_Depth)-10 max(Surface_Depth) + 40]
      • Accum Sea Ice missions: [min(Surface_Depth)-10 max(Surface_Depth) + 40]
      • Accum Land Ice missions: [min(Surface_Depth)-50 max(Surface_Depth) + 300]
      • RDS Land Ice missions: [publish_echogram_switch(Bbad,0.25,Surface_Elev,-3500,DBottom,-100),max(Surface_Elev+100)]
    • echo.er_ice: the relative permittivity that will be used for the depth axis
    • echo.plot_params: a 1x2 cell vector of two cell vectors which will be passed to the set command of the plot
    • ops.en: Enable use of open polar server (gets layer data from server rather than layer data files)
    • ops.location: arctic or antarctic
    • ops.gaps_dist: 1x2 double vector passed to data_gaps_check.m. First number specifies the maximum gap between two layer points in meters that will be interpolated. The second number specifies how far interpolation will be performed on edges of gaps.
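
    As an illustration of the echo.* and ops.* fields above, here is a hedged sketch in the style of an FMCW land-ice posting; all values are illustrative examples rather than recommendations:

      % Elevation compensation to the mean surface, ice permittivity of 3.15,
      % land-ice depth range, and mild caxis clipping:
      params = ct_set_params(params,'post.echo.elev_comp',2);
      params = ct_set_params(params,'post.echo.er_ice',3.15);
      params = ct_set_params(params,'post.echo.depth','[min(Surface_Depth)-10 max(Surface_Depth)+40]');
      params = ct_set_params(params,'post.echo.caxis',[0.1 0.9]);
      % Along-track multilooking with a 7 range-line boxcar (the echo.filter example above):
      params = ct_set_params(params,'post.echo.filter',@(data) fir_dec(data,ones(1,7)/7,1));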

    Analysis Worksheet

    cmd-records-qlook-sar-array-radar-post-analysis

    See the analysis page for details on running analysis.

    Radiometric Calibration Worksheet (OLD/DEFUNCT)

    THIS SECTION IS OLD/DEFUNCT. Radiometric calibration is now done as part of deconvolution. The param.radar.wfs(wf).system_dB(rx_path) field takes care of wf-adc specific system effects (it solves the radar equation to remove all system effects except spherical spreading and channel losses). The param.qlook.radiometric_corr_dB and param.array.radiometric_corr_dB fields provide a final image correction to account for any imperfections in the preceding steps.
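
    A minimal sketch of the current fields named above (all values are placeholders; the real values come from the calibration/deconvolution results):

      param.radar.wfs(wf).system_dB(rx_path) = 0;  % per wf-adc system correction (dB)
      param.qlook.radiometric_corr_dB = 0;         % final qlook image correction (dB)
      param.array.radiometric_corr_dB = 0;         % final array image correction (dB)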

    --

    OLD TEXT:

    The Radiometric Calibration Worksheet specifies the parameters required to apply radiometric corrections to products created by other toolbox processes. This worksheet is implemented as a generic worksheet and must adhere to the convention that all variable names are listed in the first row of the worksheet and that all fields are interpreted as commands.

    An example of the Radiometric Calibration Worksheet is available in mcrds_param_2008_Greenland_TO.xls

    1. Each segment is listed in the first two columns in the order that it was collected. The segment ID is formed as YYYYMMDD_SS where YYYY is the year, MM is the month, DD is the day, and SS is the segment number. The ordering and content must exactly match the command worksheet.
    2. "in_path": input base directory for data, leave empty for default
    3. "out_path": output directory, leave empty for default (./CSARP_radCal/)
    4. "data_dirs": cell vector of strings containing the specific input data directory and output directories that will be created. Each string is handled sequentially in processing and there is no interdependence or order expected. This string will deternime the input directory postfix as well as the output subdirectory. In example, a string of 'qlook' will look for input data in the CSARP_qlook folder and create an equivelenat output folder under CSARP_radCal/qlook/.
    5. "type": determines the reference geometry for the calibration corrections. Expected values are 'specular' and 'distributed'.
    6. "ice_loss_constant": a single numeric value representing the ice loss in dB/km. If this value is empty, the function will search for a loss profile as given in the next field.
    7. "ice_loss_fh": file handle for the permeability profile written as a function. It is expected that this file fits a standard format as similar files in the toolbox and that the file is in the path.