Raster Workflow #82

mwdunlap2004 opened this issue Aug 13, 2024 · 14 comments

mwdunlap2004 commented Aug 13, 2024

Goals

  • Integrate temporal disaggregation step into wdm workflow. #86
  • Test processing the full life cycle of isolating raster values for a given coverage, re-classifying, and re-sampling to the template resolution (for now indicated by varkey daymet_mod_daily).
  • Export snapshots of daily and hourly values at several time periods and at different points in the process as TIFFs, to verify that we are getting the full extent of data when resampling.
  • Verify the water balance for rasters that are temporally disaggregated from daily to hourly (see the sketch after this list). This is best done with un-clipped data, since we cannot expect the resampled-then-clipped data to be the same; in fact, we are hoping that resampling and then clipping will lead to somewhat DIFFERENT data.
  • Experiment with resampling and clipping larger time frames (1 year to the full time period). Note: we will likely NOT want to test storing resampled and 24-hour disaggregated tables, as these could impact hard drive storage, although small watersheds should be safe for testing.
  • Experiment with creating a table with many daily records by querying multiple source rasters (i.e., daymet and PRISM) based on the ratings file for that watershed and the best-fit method. See Raster Mashups #55
    • mixed table records should all end up at daymet resolution (using the raster_templates table for the source)
    • mixed table should not include nldas2 records; since those records don't need to be processed, we will handle them at a later point in the workflow
    • We need to include a varid for the final set of rasters, based on the scenario -- those are NOT yet created, but we will create them, and they will be the means for us to store data in the dh_timeseries_weather table
  • Insert the clipped raster for the best fit, attached to the coverage in question
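
A minimal sketch of that water-balance check, assuming the temp tables created in the single-day workflow below (tmp_prism_01665500 for the daily PRISM raster, tmp_prism_fraction_01665500_raster for the 24 disaggregated hourly rasters); the two totals should roughly agree, with the un-clipped caveat above in mind:

-- compare the daily raster total against the sum of the 24 hourly rasters
select
  (select (ST_SummaryStats(rast, 1)).sum
     from tmp_prism_01665500 limit 1) as daily_total,
  (select (ST_SummaryStats(ST_Union(rast, 'SUM'), 1)).sum
     from tmp_prism_fraction_01665500_raster) as hourly_total;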

This is the order of steps for our workflow: it downloads all of our data, clips it to our new extent, does our calculations, resamples to daymet's resolution, and then clips to the correct watershed size.

Downloads our full nldas2 data for 1 day

create temp table tmp_nldas2_hourly as
select a.tid, a.tstime, a.tsendtime, b.hydroid as varid, a.featureid, a.entity_type, a.rast
from dh_timeseries_weather as a
left outer join dh_variabledefinition as b
on (a.varid = b.hydroid and b.varkey='nldas2_obs_hourly')
WHERE extract(year from to_timestamp(tsendtime)) = 2023
AND extract(month from to_timestamp(tsendtime)) = 2
AND extract(day from to_timestamp(tsendtime)) = 12
AND b.varkey = 'nldas2_obs_hourly';

Downloads our full prism data for one day

create temp table tmp_prism_feb12_daily as
select a.tid, a.tstime, a.tsendtime, b.hydroid as varid, a.featureid, a.entity_type, a.rast
from dh_timeseries_weather as a
left outer join dh_variabledefinition as b
on (a.varid = b.hydroid and b.varkey='prism_mod_daily')
WHERE extract(year from to_timestamp(tsendtime)) = 2023
AND extract(month from to_timestamp(tsendtime)) = 2
AND extract(day from to_timestamp(tsendtime)) = 12
AND b.varkey = 'prism_mod_daily';

Downloads our full daymet data for one day

create temp table tmp_daymet_feb12_daily as
select a.tid, a.tstime, a.tsendtime, b.hydroid as varid, a.featureid, a.entity_type, a.rast
from dh_timeseries_weather as a
left outer join dh_variabledefinition as b
on (a.varid = b.hydroid and b.varkey='daymet_mod_daily')
WHERE extract(year from to_timestamp(tsendtime)) = 2023
AND extract(month from to_timestamp(tsendtime)) = 2
AND extract(day from to_timestamp(tsendtime)) = 12
AND b.varkey = 'daymet_mod_daily';

Reclass our rasters, changing our value of -9999 to NULL for all of our datasets

UPDATE tmp_nldas2_hourly SET rast = ST_Reclass(rast,1, '(-999999-0):0-0,[0-999999]:0-999999', '64BF', -9999);
UPDATE tmp_prism_feb12_daily SET rast = ST_Reclass(rast,1, '(-999999-0):0-0,[0-999999]:0-999999', '64BF', -9999);
UPDATE tmp_daymet_feb12_daily SET rast = ST_Reclass(rast,1, '(-999999-0):0-0,[0-999999]:0-999999', '64BF', -9999);

Clips our nldas2 to our larger extent

create temp table tmp_nldas2_01665500_hourly as
select  w.tid, w.tstime, w.tsendtime, w.varid, w.featureid, w.entity_type, 
st_clip(w.rast, st_buffer(st_envelope(fgeo.dh_geofield_geom),st_Scalex(w.rast))) as rast
from dh_feature as f
left outer join field_data_dh_geofield as fgeo
on (
  fgeo.entity_id = f.hydroid
  and fgeo.entity_type = 'dh_feature')
left outer join tmp_nldas2_hourly as w
on (
  1=1
)
where f.hydrocode = 'usgs_ws_01665500'
and w.tid is not null;

Creates our daily precip for nldas2

create temp table tmp_nldas2_daily_01665500_precip as
select min(a.tstime) as tstime, max(a.tsendtime) as tsendtime, a.featureid, a.entity_type, ST_Union(a.rast, 'SUM') as rast
from tmp_nldas2_01665500_hourly as a
GROUP BY a.featureid, a.entity_type;

Clips our daymet to the nldas2 extent

create temp table tmp_daymet_01665500 as
select  w.tid, w.tstime, w.tsendtime, w.varid, w.featureid, w.entity_type, 
st_clip(w.rast, st_buffer(st_envelope(fgeo.dh_geofield_geom),st_Scalex(nldas2.rast))) as rast
from dh_feature as f
left outer join field_data_dh_geofield as fgeo
on (
  fgeo.entity_id = f.hydroid
  and fgeo.entity_type = 'dh_feature')
left outer join tmp_daymet_feb12_daily as w
on (
  1=1
)
left outer join tmp_nldas2_daily_01665500_precip as nldas2
on (
  1=1
)
where f.hydrocode = 'usgs_ws_01665500'
and w.tid is not null;

Clips our prism to the nldas2 extent

create temp table tmp_prism_01665500 as
select  w.tid, w.tstime, w.tsendtime, w.varid, w.featureid, w.entity_type, 
st_clip(w.rast, st_buffer(st_envelope(fgeo.dh_geofield_geom),st_Scalex(nldas2.rast))) as rast
from dh_feature as f
left outer join field_data_dh_geofield as fgeo
on (
  fgeo.entity_id = f.hydroid
  and fgeo.entity_type = 'dh_feature')
left outer join tmp_prism_feb12_daily as w
on (
  1=1
)
left outer join tmp_nldas2_daily_01665500_precip as nldas2
on (
  1=1
)
where f.hydrocode = 'usgs_ws_01665500'
and w.tid is not null;

This creates our nldas2 fraction raster that we use for multiplication later

create temp table tmp_fraction_raster_01665500 as
SELECT hourly_table.tstime, hourly_table.tsendtime, st_mapalgebra(hourly_table.rast, 1, ST_SetBandNoDataValue(daily_table.rast,0), 1, '[rast1] / [rast2]') as rast
from tmp_nldas2_01665500_hourly as hourly_table
left outer join tmp_nldas2_daily_01665500_precip as daily_table
on (1=1);

Multiplies our nldas2 fraction by our prism data

create temp table tmp_prism_fraction_01665500_raster as
SELECT tid, fraction_table.tstime, fraction_table.tsendtime, st_mapalgebra(
  ST_Resample(
    st_clip(
      fraction_table.rast,
      st_envelope(daily_table.rast),
    TRUE), 
    daily_table.rast
  ),1, 
  ST_SetBandNoDataValue(daily_table.rast,-9999), 1, 
  '[rast1] * [rast2]'
) as rast
from tmp_fraction_raster_01665500 as fraction_table
left outer join tmp_prism_01665500 as daily_table
on (1=1);

Multiplies daymet by our nldas2 fraction

create temp table tmp_daymet_fraction_01665500_raster as
SELECT tid, fraction_table.tstime, fraction_table.tsendtime, st_mapalgebra(
  ST_Resample(
    st_clip(
      fraction_table.rast,
      st_envelope(daily_table.rast),
    TRUE), 
    daily_table.rast
  ),1, 
  ST_SetBandNoDataValue(daily_table.rast,-9999), 1, 
  '[rast1] * [rast2]'
) as rast
from tmp_fraction_raster_01665500 as fraction_table
left outer join tmp_daymet_01665500 as daily_table
on (1=1);

Resamples prism and nldas2 to be at daymet's resolution

create temp table tmp_prism_resample_01665500 as
select prism.tstime, prism.tsendtime, ST_Resample(prism.rast, reference.rast) as rast
from tmp_prism_fraction_01665500_raster as prism, raster_templates as reference where varkey = 'daymet_mod_daily';
create temp table tmp_nldas2_resample_01665500 as
select nldas2.tstime, nldas2.tsendtime, ST_Resample(nldas2.rast, reference.rast) as rast
from  tmp_nldas2_01665500_hourly as nldas2, raster_templates as reference where varkey = 'daymet_mod_daily';

Clips all of our datasets to the correct watershed limits.

create temp table tmp_prism_01665500_clipped as
select w.tstime, w.tsendtime,  st_clip(w.rast, fgeo.dh_geofield_geom) as rast
from dh_feature as f
left outer join field_data_dh_geofield as fgeo
on (
  fgeo.entity_id = f.hydroid
  and fgeo.entity_type = 'dh_feature')
left outer join tmp_prism_resample_01665500 as w
on (
  1=1
)
where f.hydrocode = 'usgs_ws_01665500';
create temp table tmp_daymet_01665500_clipped as
select w.tstime, w.tsendtime,  st_clip(w.rast, fgeo.dh_geofield_geom) as rast
from dh_feature as f
left outer join field_data_dh_geofield as fgeo
on (
  fgeo.entity_id = f.hydroid
  and fgeo.entity_type = 'dh_feature')
left outer join tmp_daymet_fraction_01665500_raster as w
on (
  1=1
)
where f.hydrocode = 'usgs_ws_01665500';
create temp table tmp_nldas2_01665500_clipped as
select w.tstime, w.tsendtime,  st_clip(w.rast, fgeo.dh_geofield_geom) as rast
from dh_feature as f
left outer join field_data_dh_geofield as fgeo
on (
  fgeo.entity_id = f.hydroid
  and fgeo.entity_type = 'dh_feature')
left outer join tmp_nldas2_resample_01665500 as w
on (
  1=1
)
where f.hydrocode = 'usgs_ws_01665500';

mwdunlap2004 commented Aug 13, 2024

-- Sum the disaggregated hourly PRISM rasters back into a single daily raster
create temp table tmp_prism_daily_01665500_precip_clipped as
select min(a.tstime) as tstime, max(a.tsendtime) as tsendtime, ST_Union(a.rast, 'SUM') as rast
from tmp_prism_01665500_clipped as a;
-- Write the raster out as a TIFF into a new large object; lowrite returns the byte count
select tid, lowrite(lo_open(tid, 131072), tiff) As num_bytes
FROM
( VALUES (lo_create(0),
ST_AsTIFF( (SELECT rast FROM tmp_prism_daily_01665500_precip_clipped))
) ) As v(tid,tiff);

-- Export the large object (using the tid returned above), clean it up, and copy the file into place
\lo_export 10133143 '/tmp/prism_01665500_daily_clipped.tiff'
SELECT lo_unlink(10133143);
install -D /tmp/prism_01665500_daily_clipped.tiff /media/model/met/test/rasters/prism_01665500_daily_clipped.tiff


mwdunlap2004 commented Aug 13, 2024

@rburghol @COBrogan This is my take on a workflow for our raster methods. At this moment, these steps download our three datasets, clip them to the extent of the nldas2 raster with the buffers, do our conversion to hourly data, resample nldas2 and prism to daymet's resolution, and then clip them to the actual watershed extent.
These are three rasters, all from the end of the process, after resampling to daymet's resolution and clipping to the correct extent. The first image is the PRISM raster, the second is the daymet raster, and the last one is nldas2.

[Screenshots 2024-08-13: PRISM, daymet, and nldas2 rasters after resampling and clipping]

I'm not sure how to feel about these rasters. Obviously we can't just create more nldas2 data, but I thought resampling would lead to it having at least the same extent as daymet; instead all we have is two squares. I will say, the pro of this method is that we have the full extent of daymet and prism, which we did not have last week.

@rburghol

Hey @mwdunlap2004 thanks for working through this. I would say that your workflow is really close, and as to your final raster, this just suggests that we need to tweak something in the final steps, as it is clearly not getting the resample right. In other words, our proposed workflow will result in a daymet-style raster; we've just got something a little out of order here.

By the way, that spatial variation in the daymet raster looks really stark in contrast to the other rasters, indicating the potential benefit -- we just have to figure out what went wrong in the processing to end up with 2 blocks. I'm going to step through your process now and get back to you with suggestions. This is timely as I am going to try to put this into the meta_model steps today.


durelles commented Aug 14, 2024 via email


rburghol commented Aug 14, 2024

@mwdunlap2004 these look really good - I should say that from my vantage point the code works as desired for PRISM and daymet, successfully creating a 24-hour time series clipped to the boundary. My sense is that the issue you are having with the resampled NLDAS2 is due to clipping the nldas2 dataset while it is still at nldas2 resolution. In other words, you're starting with only 2 cells from the NLDAS2 data, and thus the only daymet info you'll get is from cells that overlap those two cells, so it is bound to look like 2 blocks (even if there is variation in sub-cells within those 2 large blocks).

I am thinking that you will need to resample before you clip, or buffer the extent of the watershed before you clip (as @COBrogan suggested earlier when we were having a similar issue with PRISM). The same sequence tweak will also be required to ensure that the PRISM captures the full data extent. One thing that will help us understand where things are happening is topic headers before each of your code blocks, so that we know what you are trying to accomplish at each step.

Note: given our growing understanding of the memory requirements needed to store the 24-hour daymet data, these resampling steps will need to be moved to the final export phase, i.e., the wdm workflow outlined here: #72
I think that we might do this resampling step first, then do the multiplication step against the 24-hour fraction factors as the final step in our workflow, exported directly to a CSV file... but we have to lay out the pros and cons of that and decide.

If you want to explore the code needed to compute the buffer in a robust fashion, I will prepare to leverage some of this code in the wdm workflow and we can review everything in our meeting this morning.
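
A minimal sketch of the resample-before-clip sequencing, assuming the raster_templates table used later in the workflow and hypothetical temp table names; the hourly NLDAS2 rasters are resampled to the daymet template first, so the watershed clip operates on the finer cells (note the buffer then shrinks to one daymet cell and may need widening back to a full NLDAS2 cell):

-- resample the full NLDAS2 hourly rasters to the daymet template first
create temp table tmp_nldas2_hourly_daymet_res as
select w.tid, w.tstime, w.tsendtime, w.varid, w.featureid, w.entity_type,
  ST_Resample(w.rast, reference.rast) as rast
from tmp_nldas2_hourly as w, raster_templates as reference
where reference.varkey = 'daymet_mod_daily';

-- the watershed clip now works on daymet-resolution cells
create temp table tmp_nldas2_01665500_hourly_daymet_res as
select w.tid, w.tstime, w.tsendtime, w.varid, w.featureid, w.entity_type,
  st_clip(w.rast, st_buffer(st_envelope(fgeo.dh_geofield_geom), st_scalex(w.rast))) as rast
from dh_feature as f
left outer join field_data_dh_geofield as fgeo
on (fgeo.entity_id = f.hydroid and fgeo.entity_type = 'dh_feature')
left outer join tmp_nldas2_hourly_daymet_res as w
on (1=1)
where f.hydrocode = 'usgs_ws_01665500'
and w.tid is not null;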

@mwdunlap2004

@rburghol I resampled both prism and nldas2 to daymet's resolution in the step before clipping. Are you saying it might be better to try to do the resample and clipping step together? Or are you saying resample should be one of our first steps instead of one of the last?

@rburghol

@mwdunlap2004 I think you have options; either could work:

  • If you resample nldas2/prism before clipping (or in the same statement), your clip should operate on the smallest resolution, which should achieve the goal.
  • Or, if you use the buffer technique on nldas2/PRISM. I mistakenly thought you did not buffer first, since everything got clipped, but the buffer definitely should work, unless the buffer is insufficiently large.
create temp table tmp_prism_01665500 as
select  w.tid, w.tstime, w.tsendtime, w.varid, w.featureid, w.entity_type, 
st_clip(w.reclass_rast, st_buffer(st_envelope(fgeo.dh_geofield_geom),st_Scalex(nldas2.rast))) as rast
from dh_feature as f

Since both of these should work, we just have to choose which is more efficient for us. But first we need to figure out why it's not yet working -- or determine why the theory is in error :)
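
A hedged sketch of the first option, doing the resample and the clip in the same statement (again leaning on raster_templates as the daymet reference; the output table name is hypothetical):

create temp table tmp_nldas2_01665500_hourly_combined as
select w.tid, w.tstime, w.tsendtime, w.varid, w.featureid, w.entity_type,
  st_clip(
    ST_Resample(w.rast, reference.rast),
    -- buffer still uses the original NLDAS2 cell size
    st_buffer(st_envelope(fgeo.dh_geofield_geom), st_scalex(w.rast))
  ) as rast
from dh_feature as f
left outer join field_data_dh_geofield as fgeo
on (fgeo.entity_id = f.hydroid and fgeo.entity_type = 'dh_feature')
left outer join tmp_nldas2_hourly as w
on (1=1)
inner join raster_templates as reference
on (reference.varkey = 'daymet_mod_daily')
where f.hydrocode = 'usgs_ws_01665500'
and w.tid is not null;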

@mwdunlap2004

@rburghol The output raster looks the same... As in, we still only have two squares, and the values for those two squares match what they did in the previous rasters. Still not what we would have liked or expected to see. The colors look a little weird, but that's just because there are only two values; our previous rasters at hour 15 had values around 1.7.
[Screenshot 2024-08-14: output raster, still showing only two cells]

@COBrogan

Thanks for adding in the headers! I think that makes this a bit easier to understand at a glance. I also agree with Rob: something is clearly going wrong at the resampling stage, so focusing in on the st_resample would be a worthwhile endeavor. These rasters should all end up at daymet's resolution. It would also be helpful to start including the watershed GIS file on these plots so we can see the raster compared to the watershed boundary. You can pull the field_data_dh_geofield.field_data_dh_geofield field for the USGS gage drainage area. With this, we could see whether the two NLDAS cells are sufficient or whether we should be capturing more with our buffer query.
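
A small SQL sketch for pulling that geometry out as WKT for plotting, reusing the join from the clip queries above (the output column name is arbitrary):

select f.hydrocode, ST_AsText(fgeo.dh_geofield_geom) as wkt
from dh_feature as f
left outer join field_data_dh_geofield as fgeo
on (fgeo.entity_id = f.hydroid and fgeo.entity_type = 'dh_feature')
where f.hydrocode = 'usgs_ws_01665500';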

@mwdunlap2004

Yearly example of our workflow:

Downloads our full nldas2 data for 1 year

create temp table tmp_nldas2_hourly as
select a.tid, a.tstime, a.tsendtime, b.hydroid as varid, a.featureid, a.entity_type, a.rast
from dh_timeseries_weather as a
left outer join dh_variabledefinition as b
on (a.varid = b.hydroid and b.varkey='nldas2_obs_hourly')
WHERE extract(year from to_timestamp(tsendtime)) = 2022
AND b.varkey = 'nldas2_obs_hourly';

Downloads our full prism data for 1 year

create temp table tmp_prism_feb12_daily as
select a.tid, a.tstime, a.tsendtime, b.hydroid as varid, a.featureid, a.entity_type, a.rast
from dh_timeseries_weather as a
left outer join dh_variabledefinition as b
on (a.varid = b.hydroid and b.varkey='prism_mod_daily')
WHERE extract(year from to_timestamp(tsendtime)) = 2022
AND b.varkey = 'prism_mod_daily';

Downloads our full daymet data for 1 year

create temp table tmp_daymet_feb12_daily as
select a.tid, a.tstime, a.tsendtime, b.hydroid as varid, a.featureid, a.entity_type, a.rast
from dh_timeseries_weather as a
left outer join dh_variabledefinition as b
on (a.varid = b.hydroid and b.varkey='daymet_mod_daily')
WHERE extract(year from to_timestamp(tsendtime)) = 2022
AND b.varkey = 'daymet_mod_daily';

Reclass our rasters, changing our value of -9999 to NULL for all of our datasets

UPDATE tmp_nldas2_hourly SET rast = ST_Reclass(rast,1, '(-999999-0):0-0,[0-999999]:0-999999', '64BF', -9999);
UPDATE tmp_prism_feb12_daily SET rast = ST_Reclass(rast,1, '(-999999-0):0-0,[0-999999]:0-999999', '64BF', -9999);
UPDATE tmp_daymet_feb12_daily SET rast = ST_Reclass(rast,1, '(-999999-0):0-0,[0-999999]:0-999999', '64BF', -9999);

Clips our nldas2 to our larger extent

create temp table tmp_nldas2_01665500_hourly as
select  w.tid, w.tstime, w.tsendtime, w.varid, w.featureid, w.entity_type, 
st_clip(w.rast, st_buffer(st_envelope(fgeo.dh_geofield_geom),st_Scalex(w.rast))) as rast
from dh_feature as f
left outer join field_data_dh_geofield as fgeo
on (
  fgeo.entity_id = f.hydroid
  and fgeo.entity_type = 'dh_feature')
left outer join tmp_nldas2_hourly as w
on (
  1=1
)
where f.hydrocode = 'usgs_ws_01665500'
and w.tid is not null;

Creates our daily precip for nldas2

create temp table tmp_nldas2_daily_01665500_precip as
select min(a.tstime) as tstime, max(a.tsendtime) as tsendtime, a.featureid, a.entity_type, ST_Union(a.rast, 'SUM') as rast
from tmp_nldas2_01665500_hourly as a
GROUP BY a.featureid, a.entity_type, extract(day from to_timestamp(tsendtime)), extract(year from to_timestamp(tsendtime)), extract(month from to_timestamp(tsendtime));

Clips our daymet to the nldas2 extent

create temp table tmp_daymet_01665500 as
select  w.tid, w.tstime, w.tsendtime, w.varid, w.featureid, w.entity_type, 
st_clip(w.rast, st_buffer(st_envelope(fgeo.dh_geofield_geom),st_Scalex(nldas2.rast))) as rast
from dh_feature as f
left outer join field_data_dh_geofield as fgeo
on (
  fgeo.entity_id = f.hydroid
  and fgeo.entity_type = 'dh_feature')
left outer join tmp_daymet_feb12_daily as w
on (
  1=1
)
left outer join tmp_nldas2_daily_01665500_precip as nldas2
on (
  extract(day from to_timestamp(w.tstime)) = extract(day from to_timestamp(nldas2.tstime)) and
  extract(year from to_timestamp(w.tstime)) = extract(year from to_timestamp(nldas2.tstime)) and
  extract(month from to_timestamp(w.tstime)) = extract(month from to_timestamp(nldas2.tstime))
)
where f.hydrocode = 'usgs_ws_01665500'
and w.tid is not null;

Clips our prism to the nldas2 extent

create temp table tmp_prism_01665500 as
select  w.tid, w.tstime, w.tsendtime, w.varid, w.featureid, w.entity_type, 
st_clip(w.rast, st_buffer(st_envelope(fgeo.dh_geofield_geom),st_Scalex(nldas2.rast))) as rast
from dh_feature as f
left outer join field_data_dh_geofield as fgeo
on (
  fgeo.entity_id = f.hydroid
  and fgeo.entity_type = 'dh_feature')
left outer join tmp_prism_feb12_daily as w
on (
    1=1
)
left outer join tmp_nldas2_daily_01665500_precip as nldas2
on (extract(day from to_timestamp(w.tstime)) = extract(day from to_timestamp(nldas2.tstime)) and
  extract(year from to_timestamp(w.tstime)) = extract(year from to_timestamp(nldas2.tstime)) and
  extract(month from to_timestamp(w.tstime)) = extract(month from to_timestamp(nldas2.tstime))
)
where f.hydrocode = 'usgs_ws_01665500'
and w.tid is not null;

This creates our nldas2 fraction raster that we use for multiplication later

create temp table tmp_fraction_raster_01665500 as
SELECT hourly_table.tstime as tstime, hourly_table.tsendtime as tsendtime, st_mapalgebra(hourly_table.rast, 1, ST_SetBandNoDataValue(daily_table.rast,0), 1, '[rast1] / [rast2]') as rast
from tmp_nldas2_01665500_hourly as hourly_table
left outer join tmp_nldas2_daily_01665500_precip as daily_table
on (extract(day from to_timestamp(hourly_table.tstime)) = extract(day from to_timestamp(daily_table.tstime)) and
  extract(year from to_timestamp(hourly_table.tstime)) = extract(year from to_timestamp(daily_table.tstime)) and
  extract(month from to_timestamp(hourly_table.tstime)) = extract(month from to_timestamp(daily_table.tstime)));

Multiplies our nldas2 fraction by our prism data

create temp table tmp_prism_fraction_01665500_raster as
SELECT tid, fraction_table.tstime, fraction_table.tsendtime,st_mapalgebra(
  ST_Resample(
    st_clip(
      fraction_table.rast,
      st_envelope(daily_table.rast),
    TRUE), 
    daily_table.rast
  ),1, 
  ST_SetBandNoDataValue(daily_table.rast,-9999), 1, 
  '[rast1] * [rast2]'
) as rast
from tmp_fraction_raster_01665500 as fraction_table
left outer join tmp_prism_01665500 as daily_table
on (extract(day from to_timestamp(fraction_table.tstime)) = extract(day from to_timestamp(daily_table.tstime)) and
  extract(year from to_timestamp(fraction_table.tstime)) = extract(year from to_timestamp(daily_table.tstime)) and
  extract(month from to_timestamp(fraction_table.tstime)) = extract(month from to_timestamp(daily_table.tstime))
);

Multiplies daymet by our nldas2 fraction

create temp table tmp_daymet_fraction_01665500_raster as
SELECT tid, fraction_table.tstime, fraction_table.tsendtime, st_mapalgebra(
  ST_Resample(
    st_clip(
      fraction_table.rast,
      st_envelope(daily_table.rast),
    TRUE), 
    daily_table.rast
  ),1, 
  ST_SetBandNoDataValue(daily_table.rast,-9999), 1, 
  '[rast1] * [rast2]'
) as rast
from tmp_fraction_raster_01665500 as fraction_table
left outer join tmp_daymet_01665500 as daily_table
on (extract(day from to_timestamp(fraction_table.tstime)) = extract(day from to_timestamp(daily_table.tstime)) and
  extract(year from to_timestamp(fraction_table.tstime)) = extract(year from to_timestamp(daily_table.tstime)) and
  extract(month from to_timestamp(fraction_table.tstime)) = extract(month from to_timestamp(daily_table.tstime))
);

Resamples prism and nldas2 to be at daymet's resolution

create temp table tmp_prism_resample_01665500 as
select prism.tstime, prism.tsendtime, ST_Resample(prism.rast, reference.rast) as rast
from tmp_prism_fraction_01665500_raster as prism, raster_templates as reference where varkey = 'daymet_mod_daily';
create temp table tmp_nldas2_resample_01665500 as
select nldas2.tstime, nldas2.tsendtime, ST_Resample(nldas2.rast, reference.rast) as rast
from  tmp_nldas2_01665500_hourly as nldas2, raster_templates as reference where varkey = 'daymet_mod_daily';

Clips all of our datasets to the correct watershed limits.

create temp table tmp_prism_01665500_clipped as
select w.tstime, w.tsendtime,  st_clip(w.rast, fgeo.dh_geofield_geom) as rast
from dh_feature as f
left outer join field_data_dh_geofield as fgeo
on (
  fgeo.entity_id = f.hydroid
  and fgeo.entity_type = 'dh_feature')
left outer join tmp_prism_resample_01665500 as w
on (
 1=1
)
where f.hydrocode = 'usgs_ws_01665500';
create temp table tmp_daymet_01665500_clipped as
select w.tstime, w.tsendtime,  st_clip(w.rast, fgeo.dh_geofield_geom) as rast
from dh_feature as f
left outer join field_data_dh_geofield as fgeo
on (
  fgeo.entity_id = f.hydroid
  and fgeo.entity_type = 'dh_feature')
left outer join tmp_daymet_fraction_01665500_raster as w
on (
  1=1
)
where f.hydrocode = 'usgs_ws_01665500';
create temp table tmp_nldas2_01665500_clipped as
select w.tstime, w.tsendtime,  st_clip(w.rast, fgeo.dh_geofield_geom) as rast
from dh_feature as f
left outer join field_data_dh_geofield as fgeo
on (
  fgeo.entity_id = f.hydroid
  and fgeo.entity_type = 'dh_feature')
left outer join tmp_nldas2_resample_01665500 as w
on (
  1=1
)
where f.hydrocode = 'usgs_ws_01665500';

@mwdunlap2004

@rburghol overall our yearly workflow seems pretty good, but when I run the tmp_fraction_raster step (the one that creates our nldas2 fraction raster) I get a bunch of notices about null values. Of course, I set our null value to 0 for that step since we can't divide by zero, so that could be why it's telling me that. But I just wanted to mention this so it's on our minds when we meet later today.

But the rasters I took a look at during the process looked exactly like we would have expected! So assuming those notices are just informing us that there were numerous zeros in our process, then I'm not concerned, and I think this is a good method.
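
One possible sketch for sidestepping the divide-by-zero notices: guard the map algebra expression itself instead of setting the daily nodata value to 0 (same tables as the fraction step above; the output table name is hypothetical, and whether dry days should come out as 0 or NULL is still an open question):

create temp table tmp_fraction_raster_01665500_guarded as
SELECT hourly_table.tstime as tstime, hourly_table.tsendtime as tsendtime,
  st_mapalgebra(hourly_table.rast, 1, daily_table.rast, 1,
    'CASE WHEN [rast2] = 0 THEN 0 ELSE [rast1] / [rast2] END') as rast
from tmp_nldas2_01665500_hourly as hourly_table
left outer join tmp_nldas2_daily_01665500_precip as daily_table
on (extract(day from to_timestamp(hourly_table.tstime)) = extract(day from to_timestamp(daily_table.tstime)) and
  extract(year from to_timestamp(hourly_table.tstime)) = extract(year from to_timestamp(daily_table.tstime)) and
  extract(month from to_timestamp(hourly_table.tstime)) = extract(month from to_timestamp(daily_table.tstime)));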

@rburghol

@mwdunlap2004 this is great news! I still don't have a clear sense of the best way to handle NULL in the fraction raster step, so that will be one area where we need to ensure we create a set of excellent case studies to validate the math and the water balance. I think zooming in on a small watershed like the one you've been targeting would be a good plan.

@COBrogan

COBrogan commented Aug 27, 2024

Just an FYI for when we get you the gage watershed geometries as WKT, you can read them in and plot them with:

#Load the packages used below: sf for vector data, raster for gridded data
library(sf)
library(raster)

#Read in your well known text (WKT) using read.delim, read.csv, etc.
wktWatershed <- read.delim("poundCreek.txt",header = FALSE)
#Turn the WKT column into an sfc object with the appropriate coordinate system. I like to think of these as "raw" spatial files that don't have info besides geometries
watershed <- sf::st_as_sfc(wktWatershed$V1,crs=4326)
#Turn the sfc into sf, giving it attributes and stuff we can reference easily in R
watershedSF <- sf::st_as_sf(watershed)

#Read in an arbitrary raster file:
nldas <- raster("http://deq1.bse.vt.edu:81/met/PRISM/2015/004/PRISM_ppt_stable_4kmD2_20150104_bil.bil_CBP.gtiff")

#Crop the raster to speed up plotting
watershedbbox <- st_as_sfc(sf::st_bbox(watershedSF))
watershedBuffer <- st_buffer(watershedbbox,15000)
cropNLDAS <- raster::crop(nldas,st_as_sf(watershedBuffer))

#Create a plot, setting all margins to 2
par(mar=c(2,2,2,2))
plot(cropNLDAS,axes = TRUE)
plot(add = TRUE, watershedSF,lwd = 2)

@rburghol

rburghol commented Sep 3, 2024

@COBrogan - we can pull the WKT in via REST as well; that may be the easiest way when showing an individual segment like you are demoing here.
