Improve transaction check in refresh_cagg #7566
Conversation
@erimatnor, @gayyappan: please review this pull request.
@staticlibs thanks a lot for the PR. I'll have a look at it!
Codecov Report: All modified and coverable lines are covered by tests ✅

Coverage diff (main vs. #7566):

|          | main   | #7566  | +/-    |
|----------|--------|--------|--------|
| Coverage | 80.06% | 82.24% | +2.17% |
| Files    | 190    | 238    | +48    |
| Lines    | 37181  | 43848  | +6667  |
| Branches | 9450   | 11011  | +1561  |
| Hits     | 29770  | 36064  | +6294  |
| Misses   | 2997   | 3441   | +444   |
| Partials | 4414   | 4343   | -71    |

View full report in Codecov by Sentry.
Force-pushed from 5ecfc69 to ac22176.
@fabriziomello thanks! I've just added a few formatting and spelling fixes (no code changes) and added the changelog file to the `.unreleased` dir.
@staticlibs BTW very good catch... thanks a lot.
Force-pushed from 5091105 to 6a92da6.
Force-pushed from a8fbd28 to ca80649.
Force-pushed from ca80649 to cabf588.
Force-pushed from b272ce7 to 4137d41.
Procedures that use multiple transactions cannot be run in a transaction block (from a function, from dynamic SQL) or in a subtransaction (from a procedure block with an EXCEPTION clause). Such procedures use the PreventInTransactionBlock function to check whether they can be run.

However, such checks are currently incomplete, because PreventInTransactionBlock requires an isTopLevel argument to throw a consistent error when the call originates from a function. This isTopLevel flag (which is a bit poorly named, see below) is not readily available inside C procedures. The source of truth for it, the ProcessUtilityContext parameter, is passed to ProcessUtility hooks but is not included with the function calls. There is an undocumented SPI_inside_nonatomic_context function that would have been sufficient for the isTopLevel flag, but it currently returns false when the SPI connection is absent (which is a valid scenario when C procedures are called from top-level SQL instead of from PL/pgSQL procedures or DO blocks), so it cannot be used.

To work around this, the value of the ProcessUtilityContext parameter is saved when the TS ProcessUtility hook is entered and can be accessed from C procedures using the new ts_process_utility_is_context_nonatomic function. The result is called "non-atomic" instead of "top-level" because the way the isTopLevel flag is determined from the ProcessUtilityContext value in standard_ProcessUtility is insufficient for C procedures: it excludes the PROCESS_UTILITY_QUERY_NONATOMIC value (used when called from a PL/pgSQL procedure without an EXCEPTION clause), which is a valid use case for C procedures with transactions. See details in the description of the ExecuteCallStmt function.

It is expected that calls to C procedures are done with CALL and always pass through the ProcessUtility hook. The ProcessUtilityContext parameter is set to the PROCESS_UTILITY_TOPLEVEL value by default. In the unlikely case when a C procedure is called without passing through the ProcessUtility hook and the call is done in an atomic context, the PreventInTransactionBlock checks will pass, but SPI_commit will fail when checking that all current active snapshots are portal-owned snapshots (the same behaviour that was observed before this change). In an atomic context there will be an additional snapshot set in _SPI_execute_plan; see the snapshot handling invariants description in that function.

Closes timescale#6533.
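As a rough illustration of the approach described in the commit message above, here is a minimal, hypothetical C sketch (not the PR's actual code) of a ProcessUtility hook that saves the ProcessUtilityContext, plus a getter that treats both PROCESS_UTILITY_TOPLEVEL and PROCESS_UTILITY_QUERY_NONATOMIC as non-atomic. The hook signature is the PostgreSQL 14+ one; the `sketch_*` names are made up for the example, while `ProcessUtility_hook`, `standard_ProcessUtility` and the context values are regular PostgreSQL symbols.

```c
#include "postgres.h"
#include "fmgr.h"
#include "nodes/params.h"
#include "tcop/utility.h"

PG_MODULE_MAGIC;

/* Defaults to PROCESS_UTILITY_TOPLEVEL, as described above. */
static ProcessUtilityContext saved_utility_context = PROCESS_UTILITY_TOPLEVEL;
static ProcessUtility_hook_type prev_ProcessUtility = NULL;

static void
sketch_process_utility(PlannedStmt *pstmt, const char *query_string,
					   bool readonly_tree, ProcessUtilityContext context,
					   ParamListInfo params, QueryEnvironment *query_env,
					   DestReceiver *dest, QueryCompletion *qc)
{
	/* Remember the context so C procedures invoked via CALL under this
	 * statement can consult it. */
	saved_utility_context = context;

	if (prev_ProcessUtility != NULL)
		prev_ProcessUtility(pstmt, query_string, readonly_tree, context,
							params, query_env, dest, qc);
	else
		standard_ProcessUtility(pstmt, query_string, readonly_tree, context,
								params, query_env, dest, qc);

	/* Reset on normal exit; an ereport() exit skips this line, which is why
	 * the PR also describes an explicit reset function. */
	saved_utility_context = PROCESS_UTILITY_TOPLEVEL;
}

/* "Non-atomic" rather than "top-level": PROCESS_UTILITY_QUERY_NONATOMIC
 * (CALL from a PL/pgSQL procedure without an EXCEPTION clause) is also a
 * valid context for multi-transaction C procedures. */
bool
sketch_is_context_nonatomic(void)
{
	return saved_utility_context == PROCESS_UTILITY_TOPLEVEL ||
		   saved_utility_context == PROCESS_UTILITY_QUERY_NONATOMIC;
}

void
_PG_init(void)
{
	prev_ProcessUtility = ProcessUtility_hook;
	ProcessUtility_hook = sketch_process_utility;
}
```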
Force-pushed from 1bd8e0d to 2f38d9c.
This release contains performance improvements and bug fixes since the 2.17.2 release. We recommend that you upgrade at the next available opportunity.

**Features**
* timescale#6901: Add hypertable support for transition tables.
* timescale#7104: Hypercore table access method.
* timescale#7271: Push down `order by` in real-time continuous aggregate queries.
* timescale#7295: Support `alter table set access method` on hypertable.
* timescale#7341: Vectorized aggregation with grouping by one fixed-size by-value compressed column.
* timescale#7390: Disable custom `hashagg` planner code.
* timescale#7411: Change parameter name to enable hypercore table access method.
* timescale#7412: Add GUC for `hypercore_use_access_method` default.
* timescale#7413: Add GUC for segmentwise recompression.
* timescale#7433: Add support for merging chunks.
* timescale#7436: Add index creation on orderby columns.
* timescale#7443: Add hypercore function and view aliases.
* timescale#7455: Support `drop not null` on compressed hypertables.
* timescale#7458: Support vectorized aggregation with aggregate `filter` clauses that are also vectorizable.
* timescale#7482: Optimize recompression of partially compressed chunks.
* timescale#7486: Prevent building against postgres versions with broken ABI.
* timescale#7521: Add optional `force` argument to `refresh_continuous_aggregate`.
* timescale#7528: Transform sorting on `time_bucket` to sorting on time for compressed chunks in some cases.
* timescale#7565: Add hint when hypertable creation fails.
* timescale#7587: Add `include_tiered_data` parameter to `add_continuous_aggregate_policy` API.

**Bugfixes**
* timescale#7378: Remove obsolete job referencing `policy_job_error_retention`.
* timescale#7409: Update `bgw_job` table when altering procedure.
* timescale#7410: Fix the `aggregated compressed column not found` error on aggregation query.
* timescale#7426: Fix `datetime` parsing error in chunk constraint creation.
* timescale#7432: Verify that the heap tuple is valid before using.
* timescale#7434: Fixes the segfault when internally setting the replica identity for a given chunk.
* timescale#7488: Emit error for transition table trigger on chunks.
* timescale#7514: Fix the error: `invalid child of chunk append`.
* timescale#7517: Fixes performance regression on `cagg_migrate` procedure.
* timescale#7527: Restart scheduler on error.
* timescale#7557: Fix null handling for in-memory tuple filtering.
* timescale#7566: Improve transaction check in CAgg refresh.
* timescale#7584: Fix NaN handling for vectorized aggregation.

**Thanks**
* @bharrisau for reporting the segfault when creating chunks.
* @k-rus for suggesting the improvement.
* @pgloader for reporting the issue in an internal background job.
* @staticlibs for sending the PR to improve transaction check in CAgg refresh.
* @uasiddiqi for reporting the `aggregated compressed column not found` error.
Intro: Hi, I was investigating an issue with the

`portal snapshots (0) did not account for all active snapshots (1)`

error inside another Postgres extension (wdb-97, unrelated to Timescale) and stumbled upon issue #6533 in Timescale, which was reporting the same error. I decided to have a deeper look into it.

This PR allows better error messages to be reported from `refresh_continuous_aggregate` when it is called from an atomic (no transactions allowed) context. One of the following messages:

`ERROR: refresh_continuous_aggregate() cannot run inside a transaction block`
`ERROR: refresh_continuous_aggregate() cannot be executed from a function`

is now reported instead of:

`ERROR: portal snapshots (N) did not account for all active snapshots (N+1)`

There are no other changes to the `refresh_continuous_aggregate` logic.

Longer description, also included in `process_utility.h`:
Procedures that use multiple transactions cannot be run in a transaction block (from a function, from dynamic SQL) or in a subtransaction (from a procedure block with an `EXCEPTION` clause). Such procedures use the `PreventInTransactionBlock` function to check whether they can be run.

However, such checks are currently incomplete, because `PreventInTransactionBlock` requires an `isTopLevel` argument to throw a consistent error when the call originates from a function. This `isTopLevel` flag (which is a bit poorly named, see below) is not readily available inside C procedures. The source of truth for it, the `ProcessUtilityContext` parameter, is passed to ProcessUtility hooks but is not included with the function calls. There is an undocumented `SPI_inside_nonatomic_context` function that would have been sufficient for the `isTopLevel` flag, but it currently returns false when the SPI connection is absent (which is a valid scenario when C procedures are called from top-level SQL instead of from PL/pgSQL procedures or `DO` blocks), so it cannot be used.

To work around this, the value of the `ProcessUtilityContext` parameter is saved when the TS ProcessUtility hook is entered and can be accessed from C procedures using the new `ts_process_utility_is_context_nonatomic` function. The result is called "non-atomic" instead of "top-level" because the way the `isTopLevel` flag is determined from the `ProcessUtilityContext` value in `standard_ProcessUtility` is insufficient for C procedures: it excludes the `PROCESS_UTILITY_QUERY_NONATOMIC` value (used when called from a PL/pgSQL procedure without an `EXCEPTION` clause), which is a valid use case for C procedures with transactions. See details in the description of the `ExecuteCallStmt` function.
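For illustration, here is a hedged C sketch of how a multi-transaction C procedure such as the continuous aggregate refresh could consume that flag. It assumes `ts_process_utility_is_context_nonatomic` takes no arguments and returns `bool` (the signature is not spelled out above), and the guard function name is made up for the example; `PreventInTransactionBlock` is the regular PostgreSQL function from `access/xact.h`.

```c
#include "postgres.h"
#include "access/xact.h"		/* PreventInTransactionBlock() */

/* Assumed signature of the new accessor described in this PR. */
extern bool ts_process_utility_is_context_nonatomic(void);

/*
 * Hypothetical guard at the start of a multi-transaction C procedure. With a
 * non-atomic flag derived from ProcessUtilityContext, the check raises the
 * proper error instead of failing later in SPI_commit:
 *
 *   - "... cannot run inside a transaction block" when a transaction block
 *     is open (e.g. CALL inside BEGIN ... COMMIT), or
 *   - "... cannot be executed from a function" when the flag is false
 *     (e.g. CALL issued from a function or from dynamic SQL).
 */
static void
prevent_refresh_in_atomic_context(void)
{
	bool		nonatomic = ts_process_utility_is_context_nonatomic();

	PreventInTransactionBlock(nonatomic, "refresh_continuous_aggregate()");
}
```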
It is expected that calls to C procedures are done with `CALL` and always pass through the `ProcessUtility` hook. The `ProcessUtilityContext` parameter is set to the `PROCESS_UTILITY_TOPLEVEL` value by default. In the unlikely case when a C procedure is called without passing through the `ProcessUtility` hook and the call is done in an atomic context, the `PreventInTransactionBlock` checks will pass, but `SPI_commit` will fail when checking that all current active snapshots are portal-owned snapshots (the same behaviour that was observed before this change). In an atomic context there will be an additional snapshot set in `_SPI_execute_plan`; see the snapshot handling invariants description in that function.

With the initial version of this PR, in the TS `ProcessUtility` hook the saved `ProcessUtilityContext` value is reset back to `PROCESS_UTILITY_TOPLEVEL` on normal exit but is NOT reset in case of an `ereport` exit. C procedures can call the `ts_process_utility_context_reset` function to reset the saved value before doing the checks that can result in an `ereport` exit. The scenario in which a more thorough reset may be necessary (when subsequent calls after the failed atomic call are not passed through the `ProcessUtility` hook) seems unlikely.
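A short sketch of how that ordering might look, revising the guard example earlier in this description. It assumes `ts_process_utility_context_reset` is a `void` function taking no arguments and that the flag is captured before the reset; both are assumptions, since the actual call sequence in the PR is not shown here.

```c
#include "postgres.h"
#include "access/xact.h"

/* Assumed signatures; neither is spelled out in the description above. */
extern bool ts_process_utility_is_context_nonatomic(void);
extern void ts_process_utility_context_reset(void);

static void
prevent_refresh_in_atomic_context(void)
{
	/* Capture the flag while the value saved by the hook is still valid. */
	bool		nonatomic = ts_process_utility_is_context_nonatomic();

	/* Restore the saved context to its PROCESS_UTILITY_TOPLEVEL default, so
	 * that an ereport() exit from the check below cannot leave a stale value
	 * behind for a later call that bypasses the ProcessUtility hook. */
	ts_process_utility_context_reset();

	/* Now run the check that may itself exit via ereport(). */
	PreventInTransactionBlock(nonatomic, "refresh_continuous_aggregate()");
}
```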
Closes #6533.

Disable-check: commit-count