
Project Journal

Christina Bandaragoda edited this page Apr 5, 2019 · 46 revisions

SCL LANDSLIDE PROJECT JOURNAL

Purpose: to track progress and planning for project activities. NOTE: moved most recent to top for ease of follow-up

Intentions for 4/16/2019

  0. Run the model for the last 10 years

  1. Finish GIS inputs (on HydroShare and ready to use in a Notebook); be ready to edit the Landslide component
  2. Figure out code workflow needs for May work: big.asc on HydroShare with Python code to generate animations; clip Skagit row/col to Landlab row/col
  3. Finalize the HydroShare DHSVM appendix from the Skagit Report
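The row/col clipping step above can be sketched with plain NumPy slicing; the window offsets, grid sizes, and array contents here are placeholders, not the project's actual Skagit or Landlab dimensions.

```python
import numpy as np

# Stand-in for data read from big.asc: a Skagit-extent grid.
skagit = np.arange(100, dtype=float).reshape(10, 10)

# Hypothetical origin and size of the Landlab landslide window
# within the Skagit grid (placeholder values, not the real ones).
row_off, col_off = 2, 3
n_rows, n_cols = 4, 5

# Clip Skagit row/col to Landlab row/col with a slice (a view, no copy).
landlab_window = skagit[row_off:row_off + n_rows,
                        col_off:col_off + n_cols]
print(landlab_window.shape)  # (4, 5)
```

The same offsets would be reused frame-by-frame when generating animations, so every clipped frame stays aligned to the Landlab grid.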

Housekeeping:

  1. Update and Add new Github issues (realignment process, collaboration culture review,...)
  2. Add due dates and milestones to the issues.
  3. Set next sprints in calendar.
  4. Amend contract

4/5/2019

  1. Look at DHSVM results DONE
  2. Upload GIS files to HydroShare DONE
     - Mapped depth to water table, soil, and ice for sample outputs.
     - Investigated inconsistencies in the Okanogan soil data.
     - Cleaned up the merged soil polygon file for mapping DHSVM parameter inputs.
     - Reviewed soil depth and soil parameter processes from previous work; started a new GitHub issue.
     - Finished the study area outline and boundary (on HydroShare).
     - Discussed a strategy to subset the landslide model as a clip within the Skagit DHSVM domain.

4/4/2019

Notice that the check marks are faint but definitive. DONE!
Ronda and Christina cleaned up this Github repo, closed or assigned remaining issues and dropped cards into Project.

Landslide Sprint #1

10/1/2018 Nicoleta, Christina, and Ronda met at eScience. Reviewed code to print dates for DHSVM model runs. Discussed model runs on Mox/Hyak and transfer formats. Outlined the paper:

Is landslide probability more sensitive to climate change (recharge) or fire (cohesion)? Use the Goodell Fire footprint, August 2015.

10/1/2018 Nicoleta and Christina met to review 9-24 decisions and plans for next steps - deliver list of peak annual saturation events in DHSVM formatted dates for all provided time periods, scenarios, and models (N) to start Skagit model runs on AWS (C).

9-24-2018 Nicoleta and Christina met to review code for selecting storms. Reviewed sediment peaks and temperature model availability, and updated GitHub with all available saturation_extent.txt files. Later work for the paper discussion and figures will look at peak events from streamflow and precipitation.

It would take 2 terabytes of space to run the Sauk temperature model for 1915-2099 for 20 models (10 GCMs; 2 scenarios). Can we change the DHSVM temperature model to print saturation_extent but not the other large Outflow.Only output? To get a continuous saturation_extent time series, we need to rerun the temperature model for the missing parts on Pogolinux.
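As a back-of-envelope check on the 2 TB figure above (assuming output is split evenly across runs; the per-run size is inferred, not measured):

```python
# 10 GCMs x 2 scenarios over 1915-2099, ~2 TB of total output.
n_gcms = 10
n_scenarios = 2
n_runs = n_gcms * n_scenarios          # 20 model runs
total_tb = 2.0
gb_per_run = total_tb * 1000 / n_runs  # assumes an even split across runs
print(n_runs, gb_per_run)              # 20 100.0
```

At roughly 100 GB per run, suppressing the large Outflow.Only file (as asked above) is the obvious lever for reducing the footprint.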

Christina and Ronda met to review the contract. Files from the DHSVM-RBM Sauk model results for historic and future were committed to this repo under the folder 'saturation_extent_files'. 'What is a Storm?' Issue was updated.

8-30-2018 Nicoleta presented draft code and results for saturation extent - Jupyter Notebook is on this repo (see Code).

8-2018 Christina worked on Skagit and Sauk model instance updates. The temperature model must be 'On' for the saturation extent file to be printed. This also prints a very large Outflow.Only file, which gives streamflow at every timestep for each link in the network (1300 for Skagit; 400 for Sauk). The plan is to use the Sauk saturation extent output file to extract the date of the annual maximum peak saturation extent, and use this to print DHSVM model outputs. Pseudocode and the plan were discussed with Nicoleta. The limitation is that the Sauk report and model runs include only the 1960-2010 and 2030-2099 time series. Our goal is to have an annual time series from which to develop a distribution. TBD: do we need to rerun the Sauk model, or the Skagit model, for 1915-2099?
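The annual-peak extraction step described above could look like the following pandas sketch; the synthetic series and its layout are assumptions, since the real saturation_extent.txt format is not shown here.

```python
import numpy as np
import pandas as pd

# Synthetic stand-in for a daily saturation extent time series
# (the real series would be parsed from saturation_extent.txt).
dates = pd.date_range("1960-01-01", "1961-12-31", freq="D")
rng = np.random.default_rng(0)
sat = pd.Series(rng.random(len(dates)), index=dates, name="sat_extent")

# Date of the annual maximum peak saturation extent, one per year;
# these dates would then drive which DHSVM outputs get printed.
annual_peak_dates = sat.groupby(sat.index.year).idxmax()
print(annual_peak_dates)
```

The resulting dates can be reformatted into DHSVM's expected date strings before being handed to the model run scripts.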

Christina worked on setting up a purchase order and a virtual machine instance of the correct size for the Skagit model runs on Amazon Web Services. The cost per run for the Skagit is estimated at $1000 (actual cost TBD).

Hyak has been overhauled with a new operating system, and the batch job submission code needs to be rewritten. Christina investigated with the Computational Hydrology lab (Bart) and has an 8/31 meeting set up with Oriana to learn the new code needed to run the Skagit model on Hyak. Currently, however, there is not sufficient storage space to run the model for 1915-2099 (for all future runs; how many do we really need?).

7-2018 Rebudgeted project to include Nicoleta Cristea and Crystal Raymond.

6-2018 Hired Nicoleta Cristea as Freshwater-eScience research scientist

5-2018 Developed project slide show and Ronda presented to the Climate Impacts Group.

4-19-18

Added an Issue for code publication with a checklist of the plan. A PPT of the project overview was added to GitHub. Christina will share it with CEE collaborators (Jeff, Jessica, Erkan) to see how Jeff's paper contributes (or not).

What is a metadata management plan? Who is funding? Contributing? Collaborating? So that research products are accurately annotated.

Deliverable management plan: list of research products = data, code, papers, maps, reports, Storymap (a list similar to the NSF prior-results section's "demonstration that products are available"), plus a FAIR statement

Data management plan: list of datasets - an archive of the inventory with HydroShare DOIs/hyperlinks

Code management plan: Github -> collaborating Githubs e.g. PNNL -> JOSS publication(s)

Provenance/metadata management plan: funder, author, owner, contributor, institution, communication

4-16-18 Developed the PowerPoint for the CIG brownbag presentation, given this date. Feedback from the presentation includes:

  * Some wanted more information about uncertainty and were confused about how the probability accounts for uncertainty; this needs to be made explicit/clearer.
  * Amy asked what I am most worried about; said just getting this done: allocation of time for this project, and some anxiety about using DHSVM as a new hydrologic model input.
  * Rob could help improve the maps; consider rolling up to the HUC 12 watershed level in the end for communication.
  * How are we planning on comparing? Said possibly 'percent change'.
  * Should think about making a 'story map' for this project using Esri's app, with a time slider to roll over from the historical map to the future map.

Put the PowerPoint slides on GitHub.

3-6-18 Upload signed Data Use Agreement as a new HydroShare project. Add description to a new Wiki page. Document project management approach on new Wiki page. Decide that next priority is to schedule and prepare for a CIG project introduction presentation. Agenda for next week: Draft presentation. Upcoming data tasks: develop saturation time series python code

2-27-18 Upload DHSVM model outputs for the saturation extent time series. (See Code folder in this repository) Develop Issues and next steps for linking DHSVM to Landlab landslide component.

2-20-18 Reviewed the HydroShare and GitHub project setup. Added a Project in GitHub with tasks.
