
Stellwagen is the sediment transport group's location for distributing published (released) data. Our primary data product is netCDF files that follow the EPIC convention; these are later converted to the CF-1.6 convention and served elsewhere. Data may be released on stellwagen only after it has been through the process described in OFR 2007-1194. This page provides more detail about how the Best Basic Version (BBV) review and the subsequent data release are accomplished. Reviewed data from /BBV_reviewed is used to generate the catalog, maps, and KML, so much of the process can't start until all the data files have been reviewed and corrected.
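
For a quick check of which convention a released file declares, a short Python snippet can read the global Conventions attribute. This is a minimal sketch, not part of the official workflow; the file name is a hypothetical placeholder, and it assumes the netCDF4-python package is installed:

    # Minimal sketch: report which convention a netCDF file declares.
    # The file name is a hypothetical placeholder.
    from netCDF4 import Dataset

    with Dataset("10011whp-a.nc") as nc:
        conventions = getattr(nc, "Conventions", "none declared")
        print("Conventions:", conventions)  # e.g. "PMEL/EPIC" or "CF-1.6"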

The general steps for taking processed data to release on stellwagen:

  • get files for BBV approval from stg-proc in some_experiment_name/candidate_final_files
  • evaluate and fix issues found in files (see paragraph below)
  • save BBV approved files to some_experiment_name/BBV_reviewed
  • make KML files for each experiment (do_expName_to_ge.m)
  • make a docx version of the experiment page and generate the catalog HTML pages (mk_catalog.htmls)
    • put the docx, KML, and HTML for the experiment in a new directory on Google Drive, and share it with Andrea
    • submit the web form to get the experiment page peer reviewed
  • put the reviewed data files where Rob Wertz can get them and post them on stellwagen on CMGDS
    • ask him to put the data in a new directory on the server
  • copy the reviewed data to silt.whoi.edu in Data/expName
    • add a line for the experiment to project_metadata.csv (see the first sketch after this list)
    • run collect.py to generate a CF-1.6 version of the data
  • get a dataset DOI for the new experiment page from Andrea and add the citation info to the bottom of the experiment page
  • address reviewer comments
  • generate an ArcGIS interactive map from the mini-KML
  • cut and paste the reviewed material into an existing HTML page, then add the ArcGIS link to the interactive map
  • add a link to the new experiment page to index.html
  • check that both the EPIC and CF versions of the data are available via THREDDS (see the second sketch after this list)
  • once all components are in place, ask Andrea to check the page and then make it "live"
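
For the project_metadata.csv step above, a sketch like the following appends a new row before collect.py is run. The column names here are illustrative guesses, not the file's actual schema; match the header that is already in the file:

    # Minimal sketch: append an experiment's row to project_metadata.csv.
    # The field names are hypothetical -- match the existing CSV header.
    import csv

    row = {"project": "MYEXPT", "datapath": "Data/MYEXPT"}
    with open("project_metadata.csv", "a", newline="") as f:
        csv.DictWriter(f, fieldnames=list(row)).writerow(row)
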
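For the THREDDS step, a sketch like this confirms that both versions of a file open over OPeNDAP. The URLs are hypothetical placeholders for the actual catalog paths:

    # Minimal sketch: verify both the EPIC and CF copies open via OPeNDAP.
    # The URLs are hypothetical placeholders.
    from netCDF4 import Dataset

    urls = [
        "https://example.gov/thredds/dodsC/EPIC/MYEXPT/10011whp-a.nc",
        "https://example.gov/thredds/dodsC/CF/MYEXPT/10011whp-a.nc",
    ]
    for url in urls:
        try:
            with Dataset(url) as nc:
                print(url, "-> OK,", len(nc.variables), "variables")
        except OSError as err:
            print(url, "-> FAILED:", err)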

I get the files to review from stg-proc/data/experiment_name/candidate_final_files and return the reviewed files to stg-proc/data/experiment_name/BBV_reviewed. Scripts to check and plot the data are in stg-proc/data/experiment_name/qc_plots_scripts.

What the reviewer examines during the review is a separate topic. Basically, I ask "does the metadata match the mooring log?" so that the data will be located correctly in 3-D space and time, and "is the data reasonable, with bad data replaced by the appropriate _FillValue?". I try to make overplots to check reasonableness: if the wiggles look the same in ncBrowse, I don't always pick up on the order-of-magnitude differences that sometimes occur (a minimal sketch of such an overplot follows). We can go over this in greater detail. I have some scripts that check, but a lot still requires human eyeballs to verify. I keep two *BBV*.xls files to track which files have been checked in the review and whether any fixes are needed. Those files get uploaded to stg-proc/data/experiment as the review progresses; they provide a record of what was checked and what was changed.
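
Here is a minimal sketch of that kind of overplot, assuming matplotlib and netCDF4-python; the file and variable names are hypothetical:

    # Minimal sketch: overplot one variable from the candidate and reviewed
    # files so order-of-magnitude differences stand out. Names hypothetical.
    import matplotlib.pyplot as plt
    from netCDF4 import Dataset

    with Dataset("candidate_final_files/10011whp-a.nc") as a, \
         Dataset("BBV_reviewed/10011whp-a.nc") as b:
        plt.plot(a.variables["T_28"][:].squeeze(), label="candidate")
        plt.plot(b.variables["T_28"][:].squeeze(), label="reviewed")
    # netCDF4 masks values equal to _FillValue by default, so wild values
    # that survive in the reviewed curve suggest bad data never replaced.
    plt.legend()
    plt.title("T_28 overplot")
    plt.show()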

Should you need to prepare the web pages and maps to go with a data release after the BBV is done, here's what to do.