PostgreSQL settings and raster performance #80

Open
Tracked by #54
rburghol opened this issue Aug 7, 2024 · 5 comments


rburghol commented Aug 7, 2024

Comparison Summary

  • Base settings:
    • Single lseg, 5 years:
      • Before clipping to CBP rectangle: Time: 65521.824 ms (01:05.522)
      • After clipping to CBP rectangle: 33915.735 ms (00:33.916)
    • Single lseg, 40 years:
    • 2 lsegs, 40 years:
      • Before clipping: ~26 minutes (estimated; exact time not recorded)
      • After clipping nldas2: Time: 875047.339 ms (14:35.047)
  • Recommended by enterprisedb.com (see details in comment below)
    • Single lseg, 5 years: Time: 68054.561 ms (01:08.055)
    • 2 lsegs, 40 years Time: 1483813.021 ms (24:43.813)
  • Setting conf values (see postgresql docs here):
    • Use SET work_mem = 'XXMB'; to change the config for the current connection only (takes effect immediately)
    • Use ALTER SYSTEM SET work_mem ... to change the config semi-permanently
      • This writes to a file called postgresql.auto.conf, which persists until reset by an admin
    • To make the above ALTER SYSTEM SET ... changes take effect for new connections, call SELECT pg_reload_conf();
    • Use ALTER SYSTEM RESET ALL; SELECT pg_reload_conf(); to restore default settings (undoes previous ALTER SYSTEM ... commands)
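Putting those steps together, a minimal psql session for trying out a setting might look like this (the 256MB value is illustrative, not a recommendation):

```sql
-- Session-only change: applies immediately, discarded when the connection closes
SET work_mem = '256MB';

-- Semi-permanent change: written to postgresql.auto.conf
ALTER SYSTEM SET work_mem = '256MB';
SELECT pg_reload_conf();  -- reload so new connections pick up the change

-- Undo all ALTER SYSTEM changes and reload the defaults
ALTER SYSTEM RESET ALL;
SELECT pg_reload_conf();
```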

Set Up Base Table and Timing

\set met_covid 617850
\set met_varid 1451
\timing on
\set start_year 2001
\set end_year 2005

drop table if exists tmp_raster_export;
create temp table tmp_raster_export(tid serial, featureid integer, tstime bigint, tsendtime bigint, varid integer, precip float8);

Change Working memory

SHOW work_mem;
 work_mem
----------
 4MB

ALTER SYSTEM SET work_mem = '256MB';
-- this forces non-startup configs to take effect for new connections
SELECT pg_reload_conf();

show shared_buffers;
 shared_buffers
----------------
 128MB

ALTER SYSTEM SET shared_buffers = '8GB';
-- note: shared_buffers only takes effect after a full server restart;
-- pg_reload_conf() alone is not enough for this parameter
SELECT pg_reload_conf();

Test with 5-year Sample

-- Gives precip in inches
insert into tmp_raster_export(featureid, varid, tstime, tsendtime, precip)
select f.hydroid, w.varid, w.tstime, w.tsendtime, 
  0.0393701 * (ST_SummaryStats(ST_Resample(st_clip(w.rast, fgeo.dh_geofield_geom), tpr.rast), 1, TRUE)).mean as precip
from dh_feature as f
left outer join field_data_dh_geofield as fgeo
on (
  fgeo.entity_id = f.hydroid
  and fgeo.entity_type = 'dh_feature'
)
left outer join dh_timeseries_weather as w
on (
  w.featureid = :met_covid
  and w.varid = :met_varid 
  and extract(year from to_timestamp(w.tstime)) >= :start_year
  and extract(year from to_timestamp(w.tsendtime)) <= :end_year
)
left outer join raster_templates as tpr
on ( tpr.varkey = 'daymet_mod_daily')
where f.hydrocode = 'N51101'
  and f.ftype = 'cbp6_landseg'
  and f.bundle = 'landunit'
  and w.tid is not null
  AND w.featureid = :met_covid 
;
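The 0.0393701 multiplier in the query above is the millimeters-to-inches conversion factor (1/25.4); a quick sanity check runnable in any psql session:

```sql
-- 25.4 mm = 1 inch, so the round trip should come out to ~1.0
SELECT 25.4 * 0.0393701 AS inches_per_25_4_mm;
```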


5-years, 1 landseg, with clip after resample

  • This version clips twice: first to the landseg envelope at the original resolution, then resamples to daymet resolution, then clips to the landseg geometry again.
    • This approach was chosen because doing only the innermost clip to domain bounds resulted in a 14-minute query
      • Time: 856076.663 ms (14:16.077)
  • The innermost st_clip() is simply there to avoid the out-of-memory error, because the PRISM extent is wonky.
  • This version is very, very slow, which suggests that st_resample() might be more memory intensive than previously thought.
-- Gives precip in inches
insert into tmp_raster_export(featureid, varid, tstime, tsendtime, precip)
select f.hydroid, w.varid, w.tstime, w.tsendtime, 
  0.0393701 * (ST_SummaryStats(st_clip(ST_Resample(st_clip(w.rast, st_envelope(fgeo.dh_geofield_geom)), tpr.rast), fgeo.dh_geofield_geom), 1, TRUE)).mean as precip
from dh_feature as f
left outer join field_data_dh_geofield as fgeo
on (
  fgeo.entity_id = f.hydroid
  and fgeo.entity_type = 'dh_feature'
)
left outer join dh_timeseries_weather as w
on (
  w.featureid = :met_covid
  and w.varid = :met_varid 
  and extract(year from to_timestamp(w.tstime)) >= :start_year
  and extract(year from to_timestamp(w.tsendtime)) <= :end_year
)
left outer join raster_templates as tpr
on ( tpr.varkey = 'daymet_mod_daily')
where f.hydrocode = 'N51101'
  and f.ftype = 'cbp6_landseg'
  and f.bundle = 'landunit'
  and w.tid is not null
  AND w.featureid = :met_covid 
;

Just Clip for testing

Just resample for testing


create temp table tmp_raster_resample(featureid, varid, tstime, tsendtime, rast) as
select f.hydroid, w.varid, w.tstime, w.tsendtime, 
  ST_Resample(st_clip(w.rast, st_envelope(fgeo.dh_geofield_geom)), tpr.rast) as rast
from dh_feature as f
left outer join field_data_dh_geofield as fgeo
on (
  fgeo.entity_id = f.hydroid
  and fgeo.entity_type = 'dh_feature'
)
left outer join dh_timeseries_weather as w
on (
  w.featureid = :met_covid
  and w.varid = :met_varid 
  and extract(year from to_timestamp(w.tstime)) >= :start_year
  and extract(year from to_timestamp(w.tsendtime)) <= :end_year
)
left outer join raster_templates as tpr
on ( tpr.varkey = 'daymet_mod_daily')
where f.hydrocode = 'N51101'
  and f.ftype = 'cbp6_landseg'
  and f.bundle = 'landunit'
  and w.tid is not null
  AND w.featureid = :met_covid 
;

Two landsegs at the same time

  • After clipping nldas2: Time: 875047.339 ms (14:35.047)

-- Gives precip in inches
insert into tmp_raster_export(featureid, varid, tstime, tsendtime, precip)
select f.hydroid, w.varid, w.tstime, w.tsendtime, 
  0.0393701 * (ST_SummaryStats(st_clip(w.rast, fgeo.dh_geofield_geom), 1, TRUE)).mean as precip
from dh_feature as f
left outer join field_data_dh_geofield as fgeo
on (
  fgeo.entity_id = f.hydroid
  and fgeo.entity_type = 'dh_feature'
)
left outer join dh_timeseries_weather as w
on (
  w.featureid = :met_covid
  and w.varid = :met_varid 
)
where f.hydrocode in ('N51033', 'N51177')
  and f.ftype = 'cbp6_landseg'
  and f.bundle = 'landunit'
  and w.tid is not null
  AND w.featureid = :met_covid 
;

@rburghol rburghol mentioned this issue Aug 7, 2024

rburghol commented Aug 8, 2024

Try the following:

  • ALTER SYSTEM SET work_mem = '5GB';
  • ALTER SYSTEM SET maintenance_work_mem TO '8GB';
  • ALTER SYSTEM SET shared_buffers = '48GB';
  • ALTER SYSTEM SET maintenance_work_mem = '9 GB';
  • ALTER SYSTEM SET effective_cache_size = '96GB';
  • SELECT pg_reload_conf();

From https://www.enterprisedb.com/postgres-tutorials/how-tune-postgresql-memory

  • work_mem:

Setting the correct value of work_mem parameter can result in less disk-swapping, and therefore far quicker queries.
We can use the formula below to calculate the optimal work_mem value for the database server:
Total RAM * 0.25 / max_connections

  • shared_buffers

The value should be set to 15% to 25% of the machine’s total RAM. For example: if your machine’s RAM size is 32 GB, then the recommended value for shared_buffers is 8 GB. Please note that the database server needs to be restarted after this change.

  • maintenance_work_mem: The maintenance_work_mem parameter provides the maximum amount of memory to be used by maintenance operations like vacuum, create index, and alter table add foreign key.
    The default value is set in postgresql.conf. It's recommended to set this value higher than work_mem; this can improve performance for vacuuming. In general it should be:

Total RAM * 0.05

  • effective_cache_size (integer): The effective_cache_size parameter estimates how much memory is available for disk caching, by the operating system and within the database itself. The query planner uses it when costing plans: higher values make index scans more likely to be chosen, while lower values favor sequential scans.

Recommendations are to set effective_cache_size to 50% of the machine's total RAM.
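As a sketch, those sizing formulas can be evaluated directly in SQL. The 128 GB total RAM and max_connections = 100 below are illustrative assumptions, not values from this issue:

```sql
-- Illustrative sizing for a hypothetical 128 GB server with max_connections = 100
SELECT
  pg_size_pretty((128 * 0.25 / 100 * 1024^3)::bigint) AS work_mem,             -- Total RAM * 0.25 / max_connections
  pg_size_pretty((128 * 0.20 * 1024^3)::bigint)       AS shared_buffers,       -- 15-25% of total RAM
  pg_size_pretty((128 * 0.05 * 1024^3)::bigint)       AS maintenance_work_mem, -- Total RAM * 0.05
  pg_size_pretty((128 * 0.50 * 1024^3)::bigint)       AS effective_cache_size; -- 50% of total RAM
```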


rburghol commented Aug 8, 2024

FYI @COBrogan @mwdunlap2004 - The system database settings had ZERO effect on query time (see results above), but after clipping, the time to process these basic land segment summaries was cut in half.

@mwdunlap2004
Contributor

It is amazing to think that so many of our slow run times can be traced to us trying to use the entire globe in all of our functions, but it's really good to see that we seem to have a solution to that issue. I wonder how many of our assumptions about run time can be reevaluated based on this, like our statement about st_resample being slow.


rburghol commented Aug 8, 2024

@mwdunlap2004 Hundred percent agree. I expect some important improvements across the board.


rburghol commented Aug 8, 2024

Also @mwdunlap2004 - without the breakthrough on extents with yesterday's TIFF exports, I wouldn't have even messed with this and found the efficiency increase.
