Mitigate disk space needed #80

Open
frost242 opened this issue Nov 27, 2024 · 1 comment
@frost242 (Contributor)

Hello,

To share a bit of my experience: I started a fresh install from scratch, but with a list of 338 instances to monitor, all added from the start. That led me to another issue: running out of disk space, because all the instances filled powa_statements_history_current and their rows were aggregated into powa_statements_history at the same time. As a result, powa_statements_history_current grew to over 50 GB while only a few aggregations had completed.

To spread the aggregations over time and help autovacuum reuse the space freed in the powa_statements_history_current table, I applied the following query:

-- Offset each remote server's snapshot counter so that aggregations
-- fire at different times instead of all at once.
UPDATE powa_snapshot_metas psm
   SET coalesce_seq = ( psm.coalesce_seq + ps.powa_coalesce ) % psm.srvid
  FROM powa_servers ps
 WHERE psm.srvid = ps.id
   AND psm.srvid <> 0;
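To illustrate why offsetting the counters helps, here is a minimal simulation (not PoWA code), assuming each snapshot increments coalesce_seq and an aggregation fires once it reaches powa_coalesce (default 100). The per-server offsets below use `srvid % powa_coalesce` as a simple stand-in for whatever values the modulo UPDATE above produces:

```python
# Minimal simulation (not PoWA code): one snapshot per tick per server; when a
# server's coalesce_seq reaches powa_coalesce (default 100), it aggregates its
# *_history_current rows and the counter resets.
POWA_COALESCE = 100

def first_aggregation_tick(start_seq, powa_coalesce=POWA_COALESCE):
    """Tick number at which a server starting at start_seq first aggregates."""
    seq = start_seq
    tick = 0
    while True:
        seq += 1
        if seq >= powa_coalesce:
            return tick
        tick += 1

servers = (96, 97, 98)
# All servers starting at coalesce_seq = 0 aggregate on the same tick...
same = {srvid: first_aggregation_tick(0) for srvid in servers}
# ...while per-server offsets spread the first aggregations out over time.
staggered = {srvid: first_aggregation_tick(srvid % POWA_COALESCE)
             for srvid in servers}
print(same)       # {96: 99, 97: 99, 98: 99}
print(staggered)  # {96: 3, 97: 2, 98: 1}
```

With identical counters, every server's big DELETE of history rows lands in the same window; with staggered counters, the deletes (and the space autovacuum can then reclaim) are spread across snapshots.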

Hope this helps future users doing the migration.

@frost242 (Contributor, Author)

To show the effect:

[powa5]# select srvid, coalesce_seq, aggts from powa_snapshot_metas order by aggts desc;
  srvid | coalesce_seq |             aggts
-------+--------------+-------------------------------
   195 |          100 | 2024-11-27 11:42:16.193694+01
    96 |          100 | 2024-11-27 11:39:02.532022+01
   296 |          100 | 2024-11-27 11:38:10.201984+01
   196 |          101 | 2024-11-27 11:37:22.197631+01
   297 |          101 | 2024-11-27 11:34:34.863152+01
   197 |          101 | 2024-11-27 11:33:52.965844+01
    97 |          102 | 2024-11-27 11:29:49.198538+01
   298 |          103 | 2024-11-27 11:27:03.202051+01
    98 |          103 | 2024-11-27 11:25:51.527593+01
   198 |          103 | 2024-11-27 11:25:07.84587+01
    99 |          104 | 2024-11-27 11:20:59.86121+01
   299 |          104 | 2024-11-27 11:20:22.197239+01
   199 |          104 | 2024-11-27 11:20:11.195823+01
   (...)

Autovacuum is doing its job well:

[powa5]# select * from pg_stat_user_tables where relname LIKE 'powa_statements_history_current';
-[ RECORD 1 ]-------+--------------------------------
relid               | 183066571
schemaname          | public
relname             | powa_statements_history_current
seq_scan            | 2
last_seq_scan       | 2024-11-27 10:20:04.514596+01
seq_tup_read        | 9873983
idx_scan            | 1042
last_idx_scan       | 2024-11-27 11:39:06.445797+01
idx_tup_fetch       | 257065554
n_tup_ins           | 31668137
n_tup_upd           | 0
n_tup_del           | 128532777
n_tup_hot_upd       | 0
n_tup_newpage_upd   | 0
n_live_tup          | 25572682
n_dead_tup          | 0
n_mod_since_analyze | 216021
n_ins_since_vacuum  | 0
last_vacuum         | 2024-11-27 09:29:24.536267+01
last_autovacuum     | 2024-11-27 11:41:44.306681+01
last_analyze        | [null]
last_autoanalyze    | 2024-11-27 11:41:03.433668+01
vacuum_count        | 3
autovacuum_count    | 70
analyze_count       | 0
autoanalyze_count   | 75
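A quick sanity check on those counters (plain arithmetic on the figures above): deletes far outnumber inserts since the statistics were last reset, yet no dead tuples remain, which confirms autovacuum is reclaiming the space freed by the aggregations.

```python
# Figures copied from the pg_stat_user_tables output above.
n_tup_ins = 31_668_137
n_tup_del = 128_532_777
n_live_tup = 25_572_682
n_dead_tup = 0

# Deletes exceed inserts over the counters' lifetime, yet nothing is left dead:
# the aggregation DELETEs are being vacuumed away as they happen.
assert n_tup_del > n_tup_ins
dead_fraction = n_dead_tup / (n_live_tup + n_dead_tup)
print(f"dead tuple fraction: {dead_fraction:.1%}")  # dead tuple fraction: 0.0%
```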
