Currently, the aggregation job creator will read a large number of reports (5000, at time of writing) in order to create aggregation jobs. For VDAFs with large reports, this can require a significant amount of memory.
Instead, the aggregation job creator could read only the report IDs and the other (small) metadata required to create aggregation jobs, then use a SQL query that causes Postgres to copy the data directly from the relevant `client_reports` row into the relevant `report_aggregations` row. This would decouple the memory usage of the aggregation job creator from the report size of the relevant VDAFs.
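For concreteness, the server-side copy could look something like the sketch below. The table names come from this issue, but every column name (`task_id`, `client_timestamp`, `state`, `encrypted_input_share`) and the parameter layout are illustrative assumptions, not Janus's actual schema:

```sql
-- Sketch only: column names and parameters are assumed for illustration,
-- not taken from Janus's real schema.
INSERT INTO report_aggregations
    (aggregation_job_id, task_id, report_id, client_timestamp, state,
     encrypted_input_share)
SELECT
    $1,                        -- ID of the newly created aggregation job
    task_id,
    report_id,
    client_timestamp,
    'START',                   -- initial per-report aggregation state
    encrypted_input_share      -- large payload, copied entirely inside Postgres
FROM client_reports
WHERE task_id = $2
  AND report_id = ANY($3);    -- report IDs selected by the job creator
```

Because the `SELECT` feeds the `INSERT` directly, the encrypted share never leaves the database, so the job creator's memory footprint would depend only on the per-report metadata it reads up front.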
The aggregation job creator and its SQL proxy are currently outliers in CPU consumption. Doing this copying within the database will improve performance and give us more headroom before we have to start sharding the aggregation job creator.