
PREEVENTS hydromet flood forcing sensitivity on geomorphology: Jeff Keck Paper #1 Project Log


10/4/19

Discussed:

Observed trends in roughness relative to flow

Best way to code roughness into DHSVM for sediment transport modeling

Conclusion:

Since the empirically derived hydraulic geometry relation adequately models the peak flows and causes only minor attenuation of the flood hydrograph, use the 150 stream classes with nb computed from the hydraulic geometry. Calibrate using the current calibration scheme, without including channel roughness as a calibration parameter. The attenuation of the larger flood hydrographs at the outlet could be reduced by setting the roughness in the hydraulic geometry relation to the mean value over all flows at or above bankfull; however, bankfull flow is generally thought to transport the most sediment and to control river morphology, so accurate roughness values at bankfull may be the most useful for sediment transport modeling. If there is time, add the coefficient and exponent of the power relation as calibration parameters: the empirically derived hydraulic geometry likely does not correctly predict roughness in the upper reaches of the basin. By including the coefficient and exponent of the power relation, evaluating the fit of the power function to the three observations, and keeping only those fits with a sufficiently high correlation coefficient, roughness in the low-order streams can be corrected through calibration (see the sketch below). This calibration may be part of a study on the best ways to model low flows.
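
A minimal sketch of that last step, assuming the roughness observations follow a power relation n = a * Q^b fit by log-log linear regression; the function name, the three-observation example values, and the r^2 cutoff of 0.9 are illustrative assumptions, not project settings.

```python
# Sketch: fit a power-law hydraulic geometry relation n = a * Q**b to
# per-reach roughness observations and keep only well-correlated fits.
# Values and the min_r2 cutoff are illustrative assumptions.
import numpy as np
from scipy import stats

def fit_roughness_power_law(Q, n, min_r2=0.9):
    """Fit log(n) = log(a) + b*log(Q); return (a, b) or None if the fit is poor."""
    slope, intercept, r, _, _ = stats.linregress(np.log(Q), np.log(n))
    if r**2 < min_r2:
        return None                       # poorly correlated; exclude this reach
    return np.exp(intercept), slope       # a, b in n = a * Q**b

# Example: three observations (e.g., low flow, bankfull, flood) for one reach
Q_obs = np.array([5.0, 50.0, 150.0])      # discharge, m^3/s (hypothetical)
n_obs = np.array([0.08, 0.045, 0.035])    # Manning's n (hypothetical)
fit = fit_roughness_power_law(Q_obs, n_obs)
if fit is not None:
    a, b = fit
    print(f"n = {a:.3f} * Q^{b:.3f}")
```

Reaches that fail the cutoff could keep their hydraulic geometry nb values and be handled through calibration instead.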

7/18/19

Preliminary calibration runs are complete. Now working on determining the channel routing parameters and refining the calibration script before implementing the final calibration runs.

3/18/19 meeting summary

Discussed: Calibration strategy:

1 month calibration: calibrate DHSVM to an early fall, warm rainfall event (no snow) using only soil parameters.

3 year calibration: calibrate DHSVM to a 3 year period representative of the entire dataset using the snow and rainfall transition parameters and the most sensitive soil parameters. (Or calibrate using only the snow and rainfall parameters?) The literature reports snow at air temperatures up to 6 deg C when the temperature data are biased.

Question from Erkan: Can multiple calibration scripts run on AWS at once, or only one at a time?

A question I have: which grid cell elevation does DHSVM use to downscale the met data? The DEM elevation at the coordinates of the met station, the elevation listed in the config file, or the mean elevation of the grid cell?

1/14/19 meeting summary

Discussed:

How to approximate basal shear stress and present it in a way that can be applied to other rivers/grain distributions

Tasks:

Read Tucker and Bras, 2000; Yetemen et al., 2015; Solyom and Tucker, 2004; Istanbulluoglu et al., 2003; and Istanbulluoglu and Bras, 2006

Using the uniform precipitation rate, do the following:

For each of the ~300 storm events:

compute storm volume and baseflow volume

change the CDFs to exceedance probability plots

compute shear stress using the regime relationship, assuming a parabolic channel

convert shear stress to non-dimensional form

compute non-dimensional sediment transport (see the sketch below).

Once the method for presenting the uniform precipitation results is complete, use all PNNL grid points in the basin to finalize the results
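
A minimal sketch of the shear stress and transport steps above, assuming flow depth from a regime relation h = c * Q^f and the Meyer-Peter and Mueller transport relation; for brevity the parabolic channel is replaced here by a wide-channel approximation (hydraulic radius ~ depth), and all coefficients and inputs are hypothetical.

```python
# Sketch: per-storm shear stress, Shields stress, and dimensionless transport.
# Regime coefficients (c, f), slope, grain size, and discharge are assumptions.
import numpy as np

RHO_W, RHO_S, G = 1000.0, 2650.0, 9.81    # water/sediment density (kg/m^3), gravity

def shields_stress(Q, S, D50, c=0.4, f=0.4):
    """Shear stress (Pa) and Shields stress for discharge Q (m^3/s)."""
    h = c * Q**f                          # depth from regime relation (assumed c, f)
    tau = RHO_W * G * h * S               # wide-channel approximation: R ~ h
    tau_star = tau / ((RHO_S - RHO_W) * G * D50)
    return tau, tau_star

def dimensionless_transport(tau_star, tau_star_crit=0.047):
    """Meyer-Peter and Mueller dimensionless transport rate q*."""
    excess = np.maximum(tau_star - tau_star_crit, 0.0)
    return 8.0 * excess**1.5

tau, tau_star = shields_stress(Q=120.0, S=0.005, D50=0.064)   # hypothetical storm peak
print(tau, tau_star, dimensionless_transport(tau_star))
```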

1/9/19 meeting summary

Discussed:

Motivation for paper 1: recent studies infer climate change effects on landslide and flood rates without accounting for how the temporal resolution and temperature bias of the forcing data may affect model results. Possibly focus only on temporal resolution in the first paper and examine the temperature bias effect in the second paper.

Develop a conceptual diagram for how temporal resolution of forcing data affects modeled flow and soil water results

Incorporate Solyom and Tucker 2004 and Istanbulluoglu and Bras 2006 into introduction

Tasks:

Send recent papers that do not account for temporal resolution of forcing data

Change CDF plots to exceedance probability plots (see the sketch after this list)

Complete analysis for all temporal resolutions of PNNL data

Add sediment volume transported by storm event, consider varying plot for different bed grain sizes

Compute the saturated area fraction for each storm event and develop an exceedance probability plot
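
A minimal sketch of the CDF-to-exceedance-probability conversion, assuming the Weibull plotting position; the metric values are random stand-ins for the ~300 storm events.

```python
# Sketch: convert an empirical CDF to an exceedance probability plot.
import numpy as np
import matplotlib.pyplot as plt

def exceedance(values):
    """Return values sorted largest-first with exceedance probabilities P(X >= x)."""
    x = np.sort(values)[::-1]
    p = np.arange(1, len(x) + 1) / (len(x) + 1.0)   # Weibull plotting position
    return x, p

peaks = np.random.lognormal(mean=3.0, sigma=0.8, size=300)   # stand-in storm metric
x, p = exceedance(peaks)
plt.semilogy(x, p)
plt.xlabel("storm metric value")
plt.ylabel("exceedance probability")
plt.show()
```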

1/7/19 meeting summary

Discussed:

How to interpret the comparison of point observations to grid-meteorology data averages. Adding SNOTEL and other observations to increase the point sample size, plus details on the aspect and installation of the sensors, will strengthen this argument and the motivation for examining the effect of temperature bias on modeled hydrology.

DHSVM debugging and preparing initial model state

Presently using the ASCE 2005 method to determine daily net longwave and net shortwave radiation for the Salathe and Livneh datasets; consider changing to MetSim (https://github.com/UW-Hydro/MetSim) and its physics.py methods (see the sketch below)
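
For reference, a sketch of the daily net longwave term of the ASCE (2005) standardized method mentioned above; the example inputs are hypothetical.

```python
# Sketch: ASCE (2005) standardized daily net longwave radiation, MJ m^-2 d^-1.
# Inputs: daily Tmax/Tmin (deg C), actual vapor pressure ea (kPa), and measured
# vs. clear-sky solar radiation Rs, Rso (MJ m^-2 d^-1).
SIGMA = 4.901e-9   # Stefan-Boltzmann constant, MJ K^-4 m^-2 d^-1

def net_longwave_asce(tmax_c, tmin_c, ea_kpa, rs, rso):
    tmax_k, tmin_k = tmax_c + 273.16, tmin_c + 273.16
    ratio = min(rs / rso, 1.0)
    fcd = min(max(1.35 * ratio - 0.35, 0.05), 1.0)   # bounded cloudiness function
    net_emissivity = 0.34 - 0.14 * ea_kpa**0.5
    return SIGMA * fcd * net_emissivity * (tmax_k**4 + tmin_k**4) / 2.0

print(net_longwave_asce(tmax_c=12.0, tmin_c=3.0, ea_kpa=0.9, rs=14.0, rso=20.0))
```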

Tasks:

Fix bugs in forcing data

Look into MetSim

12/3/18 meeting summary

Discussed: The Salathe 2014 and PNNL 2018 WRF datasets are too cold in the late winter and spring months. The problem is especially apparent at the higher elevations. Estimates of high-elevation temperature from low-elevation stations plus a lapse rate are also too cold. The consistent cold bias is likely due to cold air pockets in the valleys that throw off the lapse rate used to develop the WRF gridded datasets.

TO DO: Understand how the bias-corrected Salathe dataset was developed. Check whether low-atmosphere temperature outputs from the PNNL dataset are available. Run the Sauk DHSVM setup using all three datasets. Does the cold bias in the late winter and spring months change the hydrograph characteristics of the large, channel-forming flows? (A lapse-rate check is sketched below.)
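
A minimal sketch of the lapse-rate estimate being compared against; the -6.5 deg C/km standard-atmosphere rate and the elevations are assumptions, and cold-air pooling in the valleys is exactly what can break this relation.

```python
# Sketch: extrapolate a low-elevation station temperature to a higher elevation
# with a fixed lapse rate; values are hypothetical.
LAPSE_RATE = -6.5 / 1000.0   # deg C per m (standard atmosphere, assumed)

def extrapolate_temp(t_station_c, z_station_m, z_target_m, lapse=LAPSE_RATE):
    return t_station_c + lapse * (z_target_m - z_station_m)

# e.g., a station at 300 m reading 4 deg C, a grid cell at 1500 m
print(extrapolate_temp(4.0, 300.0, 1500.0))   # -> -3.8 deg C
```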

11/27/18 meeting summary

Discussed:

Livneh 2013, Salathe 2014, and PNNL 2018 temperature minima, maxima, and spread relative to observations at 1000 to 2000 m elevation in the Cascades near the Sauk watershed

Problems with surface temperature in the physically modeled meteorology datasets

All datasets are too cold, BUT the spread of the Salathe and PNNL datasets is closer to observations than that of Livneh, and the PNNL dataset is warmer than the Salathe dataset. Of the three gridded datasets, PNNL may be the best.

To Do:

Add bias corrected Salathe data set to comparison

Add 3 low elevation temperature readings plus lapse rate (Justin Minder 2010) to comparison

Finalize the water budget table for the 3 datasets

Consider PRISM temperature time series?

The final temperature time series can be converted to hourly using the VIC pre-processing method

Are there physical reasons to select the PNNL dataset over Salathe? Ask Guillaume

11/13/18 Meeting Summary

Discussed:

Bias correction of the Salathe WRF data; listing grid dimensions in the same units

Study scope - use a nested study rather than a single large basin - add a small basin, such as Thunder Creek, and a basin east of the divide

To Do:

(1) Select the best PET for evaluating the water balance of the basin, where the best PET is computed from the dataset that most closely matches observations

(2) Also use observations to aid in deciding which dataset to rule out

(3) Use the Cristea et al., 2012 regression-equation approximation of P-M PET to avoid using Tmin and Tmax

(4) Compare a single storm:

  • Run DHSVM with each of the 3 gridded datasets => use the mean value and a single equivalent grid location, or use all grid points in the basin
  • compare input P and T with modeled R and observed R

(5) Are maps of the spatial distribution of P still needed?

11/06/18 Meeting Summary

Discussed:

Use the regression equations from Cristea et al., 2012 to compute PET; compare to PET from the VIC output and a Priestley-Taylor estimate (sketched below)

Importance of water year
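
A minimal sketch of the Priestley-Taylor estimate named above, using the standard alpha = 1.26 and the Tetens slope of the saturation vapor pressure curve; the inputs and the assumption of zero soil heat flux at the daily step are illustrative.

```python
# Sketch: daily Priestley-Taylor PET (mm/day) from mean air temperature (deg C)
# and net radiation Rn (MJ m^-2 d^-1).
import math

ALPHA, GAMMA, LAMBDA = 1.26, 0.066, 2.45   # PT coefficient, psychrometric constant (kPa/C), latent heat (MJ/kg)

def pet_priestley_taylor(t_mean_c, rn, g=0.0):
    es = 0.6108 * math.exp(17.27 * t_mean_c / (t_mean_c + 237.3))   # sat. vapor pressure, kPa
    delta = 4098.0 * es / (t_mean_c + 237.3)**2                     # slope, kPa/C
    return ALPHA * (delta / (delta + GAMMA)) * (rn - g) / LAMBDA    # mm/day

print(pet_priestley_taylor(t_mean_c=10.0, rn=12.0))
```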

Next steps:

(1) Add a water year version of gridclimdict to OGH (or, in the short term, use own script to summarize the Livneh 2013 and Salathe 2014 data)

(2) Finalize the 1989 to 2009 plot of mean annual precipitation in the Sauk watershed for the Livneh, Salathe, and PNNL data, with plots of PET from the regression equation and the VIC output

(3) Create Budyko plots with the updated estimate of PET (see the sketch after this list)

(4) Prepare a plot of a single storm; talk to Ryan to confirm the methods described in Currier et al., 2017 for separating solid and liquid precipitation
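
A minimal sketch of step (3), plotting the Budyko (1974) curve with one point per dataset, where AET is approximated as P - R; the mm/yr values are placeholders, not project results.

```python
# Sketch: Budyko curve plus per-dataset (PET/P, AET/P) points.
import numpy as np
import matplotlib.pyplot as plt

def budyko(phi):
    """Budyko (1974) curve: evaporative index E/P vs. aridity index PET/P."""
    return np.sqrt(phi * np.tanh(1.0 / phi) * (1.0 - np.exp(-phi)))

phi = np.linspace(0.05, 3.0, 200)
plt.plot(phi, budyko(phi), label="Budyko curve")
for name, p, pet, r in [("Livneh", 2900.0, 700.0, 2300.0),    # mm/yr, placeholders
                        ("Salathe", 3100.0, 680.0, 2400.0),
                        ("PNNL", 3000.0, 690.0, 2350.0)]:
    plt.scatter(pet / p, (p - r) / p, label=name)
plt.xlabel("PET / P"); plt.ylabel("AET / P"); plt.legend(); plt.show()
```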

11/01/18 Meeting Summary

Discussed:

(1) Comparison of vertical and spatial distribution of accumulated precipitation in Sauk based on PNNL 2018, Livneh and Salathe datasets

(2) Methods from past studies that can be used for structuring comparison of different gridded datasets

Next Steps

(1) Review Currier et al., 2017 for methods for determining the liquid and solid components of precipitation

(2) Review Henn et al., 2018 and Cristea et al., 2012 and 2013 for methods for estimating PET

(3) Compare potential ET and observed ET (gridded dataset P minus observed R) for the Sauk basin for 1981 to 2010 using the water year

(4) Compare the Budyko plot of each dataset

(5) Begin preparing the plot of differences in the spatial distribution of the precipitation datasets relative to the mean value; the datasets need to be resampled to a single grid system

10/22/18, 10/23/18 Meeting Summary

Discussed:

(1) Method for defining spatial extent used to crop PNNL data from indices in OGH functions

(2) Annual accumulated precipitation in the basin computed using a weighted average differs by less than one percent from the value computed using a simple average of the grid cells in the basin (see the sketch after this list)

(3) Except for 1995 and 1997 in the Livneh dataset, late-1990s runoff is generally 10 to 30 percent less than precipitation in the Livneh and Salathe (NCEP-forced WRF simulation) datasets
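
A minimal sketch of the weighted-average check in item (2); the per-cell precipitation values and basin-area fractions are hypothetical.

```python
# Sketch: area-weighted basin precipitation vs. simple grid-cell mean.
import numpy as np

precip = np.array([2800.0, 3100.0, 3300.0, 2950.0])   # annual precip per cell, mm
frac_in_basin = np.array([1.0, 1.0, 0.6, 0.3])        # fraction of each cell in basin

weighted = np.sum(precip * frac_in_basin) / np.sum(frac_in_basin)
simple = precip.mean()
print(weighted, simple, 100.0 * abs(weighted - simple) / weighted)   # % difference
```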

Next Steps:

(1) Find references for ET to compare against observations and gridded climate data

(2) Finish modifying OGH and OXL functions to be compatible with PNNL netCDF files

(3) Compare distribution of precipitation and temperature with elevation for Livneh, Salathe and PNNL datasets

10/02/18 Meeting Summary

Discussed:

(1) new python tools for downloading PNNL 2018 WRF data

(2) determining the weighted average of accumulated precip in basin with python

(3) CDF of baseflow at beginning of storm and CDF of storm volume

(4) Plots of depth to watertable

Next Steps

(1) Visualize downloaded PNNL data using python OGH functions

(2) Improve approximation of annual precip in basin with OGH grid_clim_dict function by using only points in basin. Check result against weighted average manually computed using ArcGIS tools.

(3) Fix how the end of a storm is selected and how the K coefficient is determined in the MATLAB script (see the sketch after this list)

(4) Change the scale of the CDFs; improve titles, units, and the overall documentation of what's plotted

(5) Add contours to the plots of depth to water table; plot the depth to water table itself, not just the difference in depth to water table.
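
A sketch of one way to make the K coefficient in item (3) explicit, assuming it is the baseflow recession constant in Q_t = Q_0 * K^t; the flow series and falling-limb window are hypothetical, and the MATLAB script may define K differently.

```python
# Sketch: estimate the recession coefficient K from a falling-limb window.
import numpy as np

def recession_k(q_falling):
    """Fit log(Q_t) = log(Q_0) + t*log(K) over a falling limb (t in days)."""
    t = np.arange(len(q_falling), dtype=float)
    slope, _ = np.polyfit(t, np.log(q_falling), 1)
    return np.exp(slope)

q = np.array([80.0, 62.0, 49.0, 39.0, 31.0])   # daily flows after a peak, m^3/s
print(recession_k(q))                          # about 0.79 for this series
```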

7/18/18 Meeting Summary

Attending: Jeff, Christina

Discussed changes to Hyak batch submission - a new issue will be needed to understand how to run DHSVM on the new Hyak batch system.

Updated the Mapping Issue by breaking the postprocessing scripts into beginner, advanced, and more advanced steps for converting multiple folders of model outputs, for multiple model variables, into ASCII formats.

7/18/18 Meeting Summary

Attending: Jeff, Jessica

Assigned following tasks for next (7/30/18) meeting

  • make CDFs of each flow metric (hydrograph slope, peak, and duration) for the 6 month and 10 year model runs for each forcing data temporal resolution (1hr, 3hr, 6hr, 12hr, 24hr) - done
  • For the modeled flow from each forcing dataset, compare the values of the 75th and 95th percentile flow metrics (see the sketch after this list). Does the temporal resolution of the forcing data affect the interpretation of how modeled flow is expected to affect sediment transport?
  • Does the difference in model output change by season?
  • Specific, extreme storm events? Compare differences between the WRF datasets
  • Use the OGH accumulated precipitation function
  • Add a tool for reading the PNNL WRF run
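
A minimal sketch of the percentile comparison in the second item above; the DataFrame layout and the lognormal stand-in metrics are assumptions.

```python
# Sketch: 75th/95th percentile of a flow metric per forcing-data resolution.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
metrics = pd.DataFrame({
    res: rng.lognormal(3.0, 0.7, 200)          # stand-in peak flows per resolution
    for res in ["1hr", "3hr", "6hr", "12hr", "24hr"]
})
print(metrics.quantile([0.75, 0.95]))          # row per percentile, column per resolution
```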

June 2018 Meetings

Attending: Jeff, Christina

Code and processing steps were shared and described. See issue updates at https://github.com/Freshwater-Initiative/pyDHSVM/issues/8

5/15/18 Meeting Summary

Attending: Jeff, Erkan, Jessica, Christina, Guillaume

Summary: 1st study proposal

Sensitivity of modeled sediment forcing thresholds to temporal precision of gridded precipitation inputs

Objective: Evaluate how the temporal resolution of precipitation data affects modeled sediment production and transport rates relative to several assumed soil conditions and two storm types

Methods

  1. Organize the PNNL hourly WRF dataset that includes the November 2006 atmospheric river flood into hourly, 3 hour, and daily time series
  2. Create a time series of forcing data that represents a summer convective storm system and organize it into hourly, 3 hour, and daily time series
  3. Prepare 2 or 3 sets of soil conditions
  4. Run DHSVM on the Sauk watershed using the forcing datasets and the different soil conditions. For each run, create the following output:

Hillslope

  • Map of the watershed shaded by the duration that depth to water table exceeds some threshold
  • Slope vs. area plot of the watershed, with points where depth to water table exceeds the threshold shaded

Channel

  • Map of the channel network shaded according to: storm hydrograph duration and symmetry, the unsteadiness parameter, basal shear stress, and stream power (stream power is sketched below)
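
A minimal sketch of the stream power metrics listed above, using total stream power Omega = rho * g * Q * S and unit stream power omega = Omega / w; discharge, slope, and width are hypothetical, and the unsteadiness parameter is omitted.

```python
# Sketch: total (W/m) and unit (W/m^2) stream power for a reach.
RHO_W, G = 1000.0, 9.81   # water density (kg/m^3), gravity (m/s^2)

def stream_power(Q, S, width):
    omega_total = RHO_W * G * Q * S            # total stream power per unit channel length
    return omega_total, omega_total / width    # plus unit stream power

print(stream_power(Q=120.0, S=0.005, width=25.0))
```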

Results: Describe the dependence (or lack of dependence) of modeled hillslope and channel sediment transport rates on the temporal resolution of precipitation data, relative to soil thickness and storm type (AR vs. convective storm)

Next steps: See link: https://github.com/Freshwater-Initiative/pyDHSVM/issues/8

2nd study proposal

Sensitivity of modeled sediment forcing thresholds to the spatial precision of gridded precipitation inputs

Examine DHSVM-modeled depth to water table and flow using several different gridded precipitation datasets, including the Livneh et al., 2013 data and several WRF datasets.

Issue created: Designing maps for gridded precip effect on forcing thresholds