
Merged the doc_ja_16 updates made after the 16.4 release (pg164tail) into doc_ja_17 #3090

Merged
64 commits merged into pgsql-jp:doc_ja_17 on Sep 27, 2024

Conversation

@noborus (Contributor) commented Sep 27, 2024

This is expected to be merged after the REL_17_0 merge from #3088.

michaelpq and others added 30 commits September 4, 2024 08:05
This is confusing, as it exports twice the same variable.  Oversight in
6782709 that has spread in more places afterwards.

Reported-by: Alvaro Herrera, Tom Lane
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 17
These tests depend on the test module injection_points to be installed,
but it may not be available as the contents of src/test/modules/ are not
installed by default.

This commit adds a workaround based on a scan of pg_available_extensions
to check if the extension is available, skipping the test if it is not.
This allows installcheck to work transparently.

There are more tests impacted by this problem on HEAD, but for now this
addresses only the tests that exist on HEAD and v17 as the release is
close by.

Reported-by: Maxim Orlov
Discussion: https://postgr.es/m/CACG=ezZkoT-pFz6a9XnyToiuR-Wg8fGELqHLoyBodr+2h-77qA@mail.gmail.com
Backpatch-through: 17
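
The workaround described above amounts to a catalog probe; a minimal
sketch of such a check (the exact query used by the test may differ):

    -- skip the test when the injection_points extension is unavailable
    SELECT count(*) > 0 AS has_injection_points
    FROM pg_available_extensions
    WHERE name = 'injection_points';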
This term was using an inconsistent casing between the code and the
documentation, using "CommitTsSLRU" in wait_event_names.txt and
"CommitTSSLRU" in the code.

Let's update the term in the code to reflect what's in the
documentation, "CommitTs" being more commonly used, so as
pg_stat_activity shows the same term as the documentation.

Oversight in 53c2a97.

Author: Alexander Lakhin
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 17
Since commit 2549f06, we reject an identifier immediately following
a numeric literal (without separating whitespace), because that risks
ambiguity with hex/octal/binary integers.  However, that patch used
token patterns like "{integer}{ident_start}", which is problematic
because {ident_start} matches only a single byte.  If the first
character after the integer is a multibyte character, this ends up
with flex reporting an error message that includes a partial multibyte
character.  That can cause assorted bad-encoding problems downstream,
both in the report to the client and in the postmaster log file.

To fix, use {identifier} not {ident_start} in the "junk" token
patterns, so that they will match complete multibyte characters.
This seems like a generally better user experience quite aside from the
encoding problem: for "123abc" the error message will now say that
the error appeared at or near "123abc" instead of "123a".

While at it, add some commentary about why these patterns exist
and how they work.

Report and patch by Karina Litskevich; review by Pavel Borisov.
Back-patch to v15 where the problem came in.

Discussion: https://postgr.es/m/CACiT8iZ_diop=0zJ7zuY3BXegJpkKK1Av-PU7xh0EDYHsa5+=g@mail.gmail.com
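
For example, the full junk token is now quoted in the error (the
message text below follows the behavior described above):

    SELECT 123abc;
    ERROR:  trailing junk after numeric literal at or near "123abc"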
The updated comment provides more helpful guidance by mentioning that
escontext should be set when soft error handling is needed.

Reported-by: Jian He <[email protected]>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
The deparsing code in get_json_expr_options() unnecessarily emitted
the default column-specific ON ERROR / EMPTY behavior when the
top-level ON ERROR behavior in JSON_TABLE was set to ERROR. Fix that
by not overriding the column-specific default, determined based on
the column's JsonExprOp in get_json_table_columns(), with
JSON_BEHAVIOR_ERROR when that is the top-level ON ERROR behavior.

Note that this only removes redundancy; the current deparsing output
is not incorrect, just redundant.

Reviewed-by: Jian He <[email protected]>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
Use EMPTY ARRAY instead of EMPTY.

This change does not affect the runtime behavior of JSON_TABLE(),
which continues to return an empty relation ON ERROR. It only alters
whether the default ON ERROR behavior is shown in the deparsed output.

Reported-by: Jian He <[email protected]>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
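
For context, both of these deparsing fixes concern queries of this
shape (a sketch with illustrative table and column names):

    SELECT * FROM JSON_TABLE('[{"a": 1}]', '$[*]'
        COLUMNS (a int PATH '$.a')
        ERROR ON ERROR) jt;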
When the ON ERROR / ON EMPTY behavior is to return NULL, returning
NULL directly from ExecEvalJsonExprPath() suffices. Therefore, there's
no need to create separate steps to check the error/empty flag or
those to evaluate the constant NULL expression.  This speeds up
common cases because the default ON ERROR / ON EMPTY behavior for
JSON_QUERY() and JSON_VALUE() is to return NULL.  However, these steps
are necessary if the RETURNING type is a domain, as constraints on the
domain may need to be checked.

Reported-by: Jian He <[email protected]>
Author: Jian He <[email protected]>
Author: Amit Langote <[email protected]>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
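
The common case being sped up is the implicit NULL default, e.g.
(a sketch):

    -- $.b matches nothing; the default ON EMPTY behavior returns NULL
    SELECT JSON_QUERY('{"a": 1}', '$.b');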
Reverts c88ce38, 5067c23, and e4e2797, because a few buildfarm
animals didn't like one or all of them.
The deparsing code in get_json_expr_options() unnecessarily emitted
the default column-specific ON ERROR / EMPTY behavior when the
top-level ON ERROR behavior in JSON_TABLE was set to ERROR. Fix that
by not overriding the column-specific default, determined based on
the column's JsonExprOp in get_json_table_columns(), with
JSON_BEHAVIOR_ERROR when that is the top-level ON ERROR behavior.

Note that this only removes redundancy; the current deparsing output
is not incorrect, just redundant.

Reviewed-by: Jian He <[email protected]>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
Use EMPTY ARRAY instead of EMPTY.

This change does not affect the runtime behavior of JSON_TABLE(),
which continues to return an empty relation ON ERROR. It only alters
whether the default ON ERROR behavior is shown in the deparsed output.

Reported-by: Jian He <[email protected]>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
pg_stat_get_io() applied TimestampTzGetDatum twice to the
stat_reset_timestamp value.  On 64-bit builds that's harmless because
TimestampTzGetDatum is a no-op, but on 32-bit builds it results in
displaying garbage in the stats_reset column of the pg_stat_io view.

Bug dates to commit a9c70b4 which introduced pg_stat_io, so
back-patch to v16 where that came in.

Bertrand Drouvot

Discussion: https://postgr.es/m/[email protected]
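
A reduced illustration of the bug pattern (not the exact source):

    /* buggy: wrapped twice; harmless on 64-bit builds where
     * TimestampTzGetDatum is a no-op, garbage on 32-bit builds */
    values[col] = TimestampTzGetDatum(TimestampTzGetDatum(reset_ts));

    /* fixed: wrap exactly once */
    values[col] = TimestampTzGetDatum(reset_ts);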
…essions

As introduced by f9900df, a REINDEX CONCURRENTLY job done for an
index with predicates or expressions would set PROC_IN_SAFE_IC in its
MyProc->statusFlags, causing it to be ignored by other concurrent
operations.

Such concurrent index rebuilds should never be ignored, as a predicate
or an expression could call a user-defined function that accesses a
different table than the table where the index is rebuilt.

A test that uses injection points is added, backpatched down to 17.
Michail has proposed a different test, but I have added something
simpler with more coverage.

Oversight in f9900df.

Author: Michail Nikolaev
Discussion: https://postgr.es/m/CANtu0oj9A3kZVduFTG0vrmGnKB+DCHgEpzOp0qAyOgmks84j0w@mail.gmail.com
Backpatch-through: 14
When the ON ERROR / ON EMPTY behavior is to return NULL, returning
NULL directly from ExecEvalJsonExprPath() suffices. Therefore, there's
no need to create separate steps to check the error/empty flag or
those to evaluate the constant NULL expression.  This speeds up
common cases because the default ON ERROR / ON EMPTY behavior for
JSON_QUERY() and JSON_VALUE() is to return NULL.  However, these steps
are necessary if the RETURNING type is a domain, as constraints on the
domain may need to be checked.

Reported-by: Jian He <[email protected]>
Author: Jian He <[email protected]>
Author: Amit Langote <[email protected]>
Discussion: https://postgr.es/m/CACJufxEo4sUjKCYtda0_qt9tazqqKPmF1cqhW9KBOUeJFqQd2g@mail.gmail.com
Backpatch-through: 17
We must drop whitespace while parsing the input, else libxml2
will include "blank" nodes that interfere with the desired
indentation behavior.  The end result is that we didn't indent
nodes separated by whitespace.

Also, it seems that libxml2 may add a trailing newline when working
in DOCUMENT mode.  This is semantically insignificant, so strip it.

This is in the gray area between being a bug fix and a definition
change.  However, the INDENT option is still pretty new (since v16),
so I think we can get away with changing this in stable branches.
Hence, back-patch to v16.

Jim Jones

Discussion: https://postgr.es/m/[email protected]
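
The INDENT behavior in question is exercised by queries like this
minimal sketch; with whitespace between nodes, the child elements
previously failed to indent:

    SELECT xmlserialize(DOCUMENT '<a> <b>1</b> <c>2</c> </a>'::xml
                        AS text INDENT);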
When building a JSON object, the code builds a hash table of keys, to
allow checking if the keys are unique. The uniqueness check and adding
the new key happens in json_unique_check_key(), but this assumes the
pointer to the key remains valid.

Unfortunately, two places passed pointers to keys in a buffer, while
also appending more data (additional key/value pairs) to the buffer.
With enough data the buffer is resized by enlargeStringInfo(), which
calls repalloc(), invalidating the earlier key pointers.

Due to this the uniqueness check may fail with both false negatives and
false positives, producing JSON objects with duplicate keys or failing
to produce a perfectly valid JSON object.

This affects multiple functions that enforce uniqueness of keys, all
introduced in PG16 with the new SQL/JSON support:

- json_object_agg_unique / jsonb_object_agg_unique
- json_object / jsonb_objectagg

Existing regression tests did not detect the issue, simply because the
initial buffer size is 1024 and the objects were small enough not to
require the repalloc.

With a sufficiently large object, AddressSanitizer reported the access
to invalid memory immediately. So would valgrind, of course.

Fixed by copying the key into the hash table memory context, and adding
regression tests with enough data to repalloc the buffer. Backpatch to
16, where the functions were introduced.

Reported by Alexander Lakhin. Investigation and initial fix by Junwang
Zhao, with various improvements and tests by me.

Reported-by: Alexander Lakhin
Author: Junwang Zhao, Tomas Vondra
Backpatch-through: 16
Discussion: https://postgr.es/m/[email protected]
Discussion: https://postgr.es/m/CAEG8a3JjH0ReJF2_O7-8LuEbO69BxPhYeXs95_x7+H9AMWF1gw@mail.gmail.com
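
The key-uniqueness enforcement at issue is reachable through WITH
UNIQUE KEYS; a small sketch (error wording approximate):

    SELECT JSON_OBJECT('a' VALUE 1, 'a' VALUE 2 WITH UNIQUE KEYS);
    ERROR:  duplicate JSON object key value: "a"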
check_agglevels_and_constraints() asserted that if we find an
aggregate function in an EXPR_KIND_FROM_SUBSELECT expression, the
expression must be in a LATERAL subquery.  Alexander Lakhin found a
case where that's not so: because of the odd scoping rules for NEW/OLD
within a rule, a reference to NEW/OLD could cause an aggregate to be
considered top-level even though it's in an unmarked sub-select.
The error message that would be thrown seems sufficiently on-point,
so just remove the Assert.  (Hence, this is not a bug for production
builds.)

This Assert was added by me in commit eaccfde (9.3 era).  It looks
like I put it in to cross-check that the new logic for detecting
misplaced aggregates (using agglevelsup) caught the same cases that a
previous check on p_lateral_active did.  So there might have been some
related misbehavior before eaccfde ... but that's very ancient
history by now, so I didn't dig any deeper.

Per bug #18608 from Alexander Lakhin.  Back-patch to all supported
branches.

Discussion: https://postgr.es/m/[email protected]
Currently, when WITH CONDITIONAL WRAPPER is specified, array wrappers
are applied even to a single SQL/JSON item if it is a scalar JSON
value, but this behavior does not comply with the standard.

To fix, apply wrappers only when there are multiple SQL/JSON items
in the result.

Reported-by: Peter Eisentraut <[email protected]>
Author: Peter Eisentraut <[email protected]>
Author: Amit Langote <[email protected]>
Reviewed-by: Andrew Dunstan <[email protected]>
Discussion: https://postgr.es/m/8022e067-818b-45d3-8fab-6e0d94d03626%40eisentraut.org
Backpatch-through: 17
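
With the fix, wrapping depends on the number of items in the result
(a sketch; outputs follow the standard behavior described above):

    -- a single scalar item: returned without a wrapper, i.e. "b"
    SELECT JSON_QUERY('{"a": "b"}', '$.a' WITH CONDITIONAL WRAPPER);

    -- multiple items: wrapped in an array, i.e. [1, 2, 3]
    SELECT JSON_QUERY('[1, 2, 3]', '$[*]' WITH CONDITIONAL WRAPPER);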
A few recent JSON aggregates have been added without much consideration
to the existing order.  Put these back in alphabetical order (with the
exception of the JSONB variant of each JSON aggregate).

Author: Wolfgang Walther <[email protected]>
Reviewed-by: Marlene Reiterer <[email protected]>
Discussion: https://postgr.es/m/6a7b910c-3feb-4006-b817-9b4759cb6bb6%40technowledgy.de
Backpatch-through: 16, where these aggregates were added
Discussion of commit ed055d2 revealed that we don't actually
want jsonpath's .string() method to depend on DateStyle, nor
TimeZone either, because the non-"_tz" jsonpath functions are
supposed to be immutable.  Potentially we could allow a TimeZone
dependency in the "_tz" variants, but it seems better to just
uniformly define this method as returning the same string that
jsonb text output would do.  That's easier to implement too,
saving a couple dozen lines.

Patch by me, per complaint from Peter Eisentraut.  Back-patch
to v17 where this feature came in (in 66ea94e).  Also
back-patch ed055d2 to provide test cases.

Discussion: https://postgr.es/m/[email protected]
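
Under the new definition, .string() yields the same text as jsonb
output, independent of DateStyle or TimeZone; a sketch:

    -- returns "1.23", exactly what jsonb text output would produce
    SELECT jsonb_path_query('1.23', '$.string()');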
Commit 4b82664 restricted a number of functions provided by
contrib modules to only relations that use the "heap" table access
method.  Sequences always use this table access method, but they do
not advertise as such in the pg_class system catalog, so the
aforementioned commit also (presumably unintentionally) removed
support for sequences from some of these functions.  This commit
reintroduces said support for sequences to these functions and adds
a couple of relevant tests.

Co-authored-by: Ayush Vatsa
Reviewed-by: Robert Haas, Michael Paquier, Matthias van de Meent
Discussion: https://postgr.es/m/CACX%2BKaP3i%2Bi9tdPLjF5JCHVv93xobEdcd_eB%2B638VDvZ3i%3DcQA%40mail.gmail.com
Backpatch-through: 12
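
One of the reintroduced usages, assuming pgstattuple is among the
affected contrib functions:

    CREATE SEQUENCE test_seq;
    SELECT * FROM pgstattuple('test_seq');  -- accepts sequences again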
Commit 2c27346 fixed the behavior of JSON_QUERY() when WITH
CONDITIONAL WRAPPER is used, but the documentation example wasn't
updated to reflect this change. This commit updates the example to
show the correct result.

Per off-list report from Andreas Ulbrich.

Backpatch-through: 17
I managed to break this test in two different ways in commit
05036a3.

First, the output of the new call to tuple_data_split() on the test
sequence is dependent on endianness.  This is fixed by setting a
special start value for the test sequence that produces the same
output regardless of the endianness of the machine.

Second, on versions older than v15, the new test case fails under
"force_parallel_mode = regress" with the following error:

	ERROR:  cannot access temporary tables during a parallel operation

This is because pageinspect's disk-accessing functions are
incorrectly marked PARALLEL SAFE on versions older than v15 (see
commit aeaaf52 for details).  This one is fixed by changing the
test sequence to be permanent.  The only reason it was previously
marked temporary was to avoid needing a DROP SEQUENCE command at
the end of the test.  Unlike some other tests in this file, the use
of a permanent sequence here shouldn't result in any test
instability like what was fixed by commit e2933a6.

Reviewed-by: Tom Lane
Discussion: https://postgr.es/m/ZuOKOut5hhDlf_bP%40nathan
Backpatch-through: 12
When we are building a hash index that is large enough to need
pre-sorting (larger than either maintenance_work_mem or NBuffers),
the initial sorting phase is interruptible, but the insertion
phase wasn't.  Add the missing CHECK_FOR_INTERRUPTS().

Per bug #18616 from Alexander Lakhin.  Back-patch to all
supported branches.

Pavel Borisov

Discussion: https://postgr.es/m/[email protected]
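
A sketch of the shape of the fix, illustrative rather than the exact
hash-build code:

    /* insertion phase: drain the sorted tuples into the index */
    while ((itup = tuplesort_getindextuple(sortstate, true)) != NULL)
    {
        CHECK_FOR_INTERRUPTS();     /* the previously missing check */
        /* ... insert itup into the hash index ... */
    }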
Reported-by: Nathan Bossart

Discussion: https://postgr.es/m/Ztcuwbs0FGCPOEu9@nathan

Backpatch-through: 17 only
Also clarify wording.

Reported-by: Álvaro Herrera

Discussion: https://postgr.es/m/[email protected]

Backpatch-through: 17 only
tglsfdc and others added 25 commits September 15, 2024 13:33
In existing releases of libxml2, xmlXPathCompile can be driven
to stack overflow because it fails to protect itself against
too-deeply-nested input.  While there is an upstream fix as of
yesterday, it will take years for that to propagate into all
shipping versions.  In the meantime, we can protect our own
usages basically for free by calling xmlXPathCtxtCompile instead.

(The actual bug is that libxml2 keeps its nesting counter in the
xmlXPathContext, and its parsing code was willing to just skip
counting nesting levels if it didn't have a context.  So if we supply
a context, all is well.  It seems odd actually that it works at all
to not supply a context, because this means that XPath parsing does
not have access to XML namespace info.  Apparently libxml2 never
checks namespaces until runtime?  Anyway, this seems like good
future-proofing even if its only immediate effect is to dodge a bug.)

Sadly, this hack only offers protection with libxml2 2.9.11 and newer.
Before that there are multiple similar problems, so if you are
processing untrusted XML it behooves you to get a newer version.
But we have some pretty old libxml2 in the buildfarm, so it seems
impractical to add a regression test to verify this fix.

Per bug #18617 from Jingzhou Fu.  Back-patch to all supported
versions.

Discussion: https://postgr.es/m/[email protected]
Discussion: https://gitlab.gnome.org/GNOME/libxml2/-/issues/799
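
The substitution is mechanical; a minimal sketch of the safer call
(variable names are illustrative):

    #include <libxml/xpath.h>

    /* supplying a context lets libxml2 count and limit nesting depth */
    xmlXPathContextPtr ctxt = xmlXPathNewContext(doc);
    xmlXPathCompExprPtr expr =
        xmlXPathCtxtCompile(ctxt, (const xmlChar *) xpath_string);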
Commit 5bf748b, which enhanced nbtree ScalarArrayOp execution, made
parallel index scans work with the new design for arrays via explicit
scheduling of primitive index scans.  A backend that successfully
scheduled the scan's next primitive index scan saved its backend local
array keys in shared memory.  Any backend could pick up the scheduled
primitive scan within _bt_first.  This scheme decouples scheduling a
primitive scan from starting the scan (by performing another descent of
the index via a _bt_search call from _bt_first) to make things robust.

The scheme had a deadlock hazard, at least when the leader process
participated in the scan.  _bt_parallel_seize had a code path that made
backends that were not in an immediate position to start a scheduled
primitive index scan wait for some other backend to do so instead.
Under the right circumstances, the leader process could wait here
forever: the leader would wait for any other backend to start the
primitive scan, while every worker was busy waiting on the leader to
consume tuples from the scan's tuple queue.

To fix, don't wait for a scheduled primitive index scan to be started by
some other eligible backend from within _bt_parallel_seize (when the
calling backend isn't in a position to do so itself).  Return false
instead, while recording that the scan has a scheduled primitive index
scan in backend local state.  This leaves the backend in the same state
as the existing case where a backend schedules (or tries to schedule)
another primitive index scan from within _bt_advance_array_keys, before
calling _bt_parallel_seize.  _bt_parallel_seize already handles that
case by returning false without waiting, and without unsetting the
backend local state.  Leaving the backend in this state enables it to
start a previously scheduled primitive index scan once it gets back to
_bt_first.

Oversight in commit 5bf748b, which enhanced nbtree ScalarArrayOp
execution.

Matthias van de Meent, with tweaks by me.

Author: Matthias van de Meent <[email protected]>
Reported-By: Tomas Vondra <[email protected]>
Reviewed-By: Peter Geoghegan <[email protected]>
Discussion: https://postgr.es/m/CAH2-WzmMGaPa32u9x_FvEbPTUkP5e95i=QxR8054nvCRydP-sw@mail.gmail.com
Backpatch: 17-, where nbtree SAOP execution was enhanced.
Since we introduced unlogged sequences in v15, identity sequences
have defaulted to having the same persistence as their owning table.
However, it is possible to change that with ALTER SEQUENCE, and
pg_dump tries to preserve the logged-ness of sequences when it doesn't
match (as indeed it wouldn't for an unlogged table from before v15).

The fly in the ointment is that ALTER SEQUENCE SET [UN]LOGGED fails
in binary-upgrade mode, because it needs to assign a new relfilenode
which we cannot permit in that mode.  Thus, trying to pg_upgrade a
database containing a mismatching identity sequence failed.

To fix, add syntax to ADD/ALTER COLUMN GENERATED AS IDENTITY to allow
the sequence's persistence to be set correctly at creation, and use
that instead of ALTER SEQUENCE SET [UN]LOGGED in pg_dump.  (I tried to
make SET [UN]LOGGED work without any pg_dump modifications, but that
seems too fragile to be a desirable answer.  This way should be
markedly faster anyhow.)

In passing, document the previously-undocumented SEQUENCE NAME option
that pg_dump also relies on for identity sequences; I see no value
in trying to pretend it doesn't exist.

Per bug #18618 from Anthony Hsu.
Back-patch to v15 where we invented this stuff.

Discussion: https://postgr.es/m/[email protected]
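
A sketch of the kind of dump output this enables, assuming the
persistence keyword joins the parenthesized sequence-option list
alongside the SEQUENCE NAME option mentioned above (the exact grammar
is whatever the commit defines):

    ALTER TABLE t ALTER COLUMN id
        ADD GENERATED ALWAYS AS IDENTITY
        (SEQUENCE NAME public.t_id_seq UNLOGGED);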
Commits 041b968 and 6377e12 changed the interface of
scan_analyze_next_block() to take a ReadStream instead of a BlockNumber
and a BufferAccessStrategy, and to return a value to indicate when the
stream has run out of blocks.

This caused integration problems for at least one known extension that
uses specially encoded BlockNumber values that map to different
underlying storage, because acquire_sample_rows() sets up the stream so
that read_stream_next_buffer() reads blocks from the main fork of the
relation's SMgrRelation.

Provide read_stream_next_block(), as a way for such an extension to
access the stream of raw BlockNumbers directly and forward them to its
own ReadBuffer() calls after decoding, as it could in earlier releases.
The new function returns the BlockNumber and BufferAccessStrategy that
were previously passed directly to scan_analyze_next_block().
Alternatively, an extension could wrap the stream of BlockNumbers in
another ReadStream with a callback that performs any decoding required
to arrive at real storage manager BlockNumber values, so that it could
benefit from the I/O combining and concurrency provided by
read_stream.c.

Another class of table access method that does nothing in
scan_analyze_next_block() because it is not block-oriented could use
this function to control the number of block sampling loops.  It could
match the previous behavior with "return read_stream_next_block(stream,
&bas) != InvalidBlockNumber".

Ongoing work is expected to provide better ANALYZE support for table
access methods that don't behave like heapam with respect to storage
blocks, but that will be for future releases.

Back-patch to 17.

Reported-by: Mats Kindahl <[email protected]>
Reviewed-by: Mats Kindahl <[email protected]>
Discussion: https://postgr.es/m/CA%2B14425%2BCcm07ocG97Fp%2BFrD9xUXqmBKFvecp0p%2BgV2YYR258Q%40mail.gmail.com
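
Expanding slightly on the snippet quoted above, a sketch of the
drop-in usage:

    BufferAccessStrategy bas;
    BlockNumber blkno = read_stream_next_block(stream, &bas);

    if (blkno == InvalidBlockNumber)
        return false;   /* the stream has run out of blocks */
    /* decode blkno, then forward it to the extension's ReadBuffer() call */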
This commit adds query ID reports for two code paths when processing
extended query protocol messages:
- When receiving a bind message, setting it to the first Query retrieved
from a cached plan.
- When receiving an execute message, setting it to the first PlannedStmt
stored in a portal.

An advantage of this method is that it is able to cover all the types
of portals handled in the extended query protocol, particularly these
two when the report done in ExecutorStart() is not enough (neither is an
addition in ExecutorRun(), actually, for the second point):
- Multiple execute messages, with multiple ExecutorRun().
- Portal with execute/fetch messages, like a query with a RETURNING
clause and a fetch size that stores the tuples in a first execute
message going through ExecutorStart() and ExecutorRun(), followed by one
or more execute messages doing only fetches from the tuplestore created
in the first message.  This corresponds to the case where
execute_is_fetch is set, for example.

Note that the query ID reporting done in ExecutorStart() is still
necessary, as an EXECUTE requires it.  Query ID reporting is optimistic
and more calls to pgstat_report_query_id() don't matter as the first
report takes priority except if the report is forced.  The comment in
ExecutorStart() is adjusted to reflect better the reality with the
extended query protocol.

The test added in pg_stat_statements is a courtesy of Robert Haas.  This
uses psql's \bind metacommand, hence this part is backpatched down to
v16.

Reported-by:  Kaido Vaikla, Erik Wienhold
Author: Sami Imseih
Reviewed-by: Jian He, Andrei Lepikhov, Michael Paquier
Discussion: https://postgr.es/m/CA+427g8DiW3aZ6pOpVgkPbqK97ouBdf18VLiHFesea2jUk3XoQ@mail.gmail.com
Discussion: https://postgr.es/m/CA+TgmoZxtnf_jZ=VqBSyaU8hfUkkwoJCJ6ufy4LGpXaunKrjrg@mail.gmail.com
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 14
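
The psql metacommand exercised by the test attaches parameters to the
preceding query, e.g.:

    SELECT $1::int + $2::int \bind 1 2 \g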
Doing so caused the leader to hang in wait_event=ParallelFinish, which
required an immediate shutdown to resolve.  Back-patch to v12 (all
supported versions).

Francesco Degrassi

Discussion: https://postgr.es/m/CAC-SaSzHUKT=vZJ8MPxYdC_URPfax+yoA1hKTcF4ROz_Q6z0_Q@mail.gmail.com
…nter.

The previous documentation stated that num_timed reflects the number of
scheduled checkpoints performed. However, checkpoints may be skipped
if the server has been idle, and num_timed counts both skipped and completed
checkpoints. This commit clarifies the description to make it clear that
the counter includes both skipped and completed checkpoints.

Back-patch to v17 where pg_stat_checkpointer was added.

Author: Fujii Masao
Reviewed-by: Alexander Korotkov
Discussion: https://postgr.es/m/[email protected]
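
The clarified counter is visible directly in the view:

    -- num_timed includes both skipped and completed scheduled checkpoints
    SELECT num_timed, num_requested FROM pg_stat_checkpointer;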
In print output, there are too many commit links for footnotes in the
release notes to be useful.

Reported-by: Tom Lane

Discussion: https://postgr.es/m/[email protected]

Backpatch-through: 12
And suppress the symbol in print mode, where the section symbol does not
appear.

Discussion: https://postgr.es/m/[email protected]

Backpatch-through: 12
Calling \bind repeatedly would cause the memory allocated for the list
of bind parameters to be leaked after each call, as the list is reset
when beginning a single call.

This issue is fixed by making the cleanup of the bind parameter list
more aggressive, refactoring it into a single routine called after
processing a query and before running an individual \bind.

HEAD required more surgery and has been fixed by 87eeada.  Issue
introduced by 5b66de3.

Reported-by: Anthonin Bonnefoy
Discussion: https://postgr.es/m/[email protected]
Backpatch-through: 16
Reported-by: Andrew Dunstan

Discussion: https://postgr.es/m/[email protected]

Author: Andrew Dunstan

Backpatch-through: 12
FYI, during PDF builds, this link type generates a "Unresolved ID
reference found" warning because it is suppressed from the PDF output.

Backpatch-through: 12
Make paragraph empty instead of removing it.

Discussion: https://postgr.es/m/[email protected]

Backpatch-through: 12
Commit 5bf748b, which enhanced nbtree ScalarArrayOp execution, made
parallel index scans work with the new design for arrays via explicit
scheduling of primitive index scans.  Under this scheme a parallel index
scan with array keys will perform the same number of index descents as
an equivalent serial index scan (barring corner cases where an
individual parallel worker discovers that it can advance the scan's
array keys without anybody needing to perform another descent of the
index to get to the relevant page on the leaf level).

Despite all this, the pgstats accounting wasn't updated; it continued to
increment the total number of index scans for the rel once per _bt_first
call, no matter the details.  As a result, the number of (primitive)
index scans could be over-counted during parallel scans.

To fix, delay incrementing the count of index scans until after we've
established that another descent of the index (using either _bt_search
or _bt_endpoint) is required.  That way pg_stat_user_tables.idx_scan
always advances in the same way, regardless of whether or not the scan
makes use of parallelism.

Oversight in commit 5bf748b, which enhanced nbtree ScalarArrayOp
execution.

Author: Peter Geoghegan <[email protected]>
Reviewed-By: Tomas Vondra <[email protected]>
Discussion: https://postgr.es/m/CAH2-Wz=E7XrkvscBN0U6V81NK3Q-dQOmivvbEsjG-zwEfDdFpg@mail.gmail.com
Discussion: https://postgr.es/m/CAH2-WzkRqvaqR2CTNqTZP0z6FuL4-3ED6eQB0yx38XBNj1v-4Q@mail.gmail.com
Backpatch: 17-, where nbtree SAOP execution was enhanced.
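
The counter whose advancement is made consistent is the per-table one
(table name illustrative):

    -- now advances once per descent of the index, with or without parallelism
    SELECT relname, idx_scan FROM pg_stat_user_tables
    WHERE relname = 'my_table';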
This has always been possible, but the syntax is a bit obscure,
and our user-facing docs were not very helpful.  Spell it out
more clearly.

Per complaint from Dominique Devienne.  Back-patch to
all supported branches.

Discussion: https://postgr.es/m/CAFCRh-8JNEy+dV4SXFOrWca50u+d=--TO4cq=+ac1oBtfJy4AA@mail.gmail.com
Reported-by: Tom Lane

Discussion: https://postgr.es/m/[email protected]

Author: Jonathan Katz

Backpatch-through: 17 only
Also some trivial copy-editing.
Source-Git-URL: https://git.postgresql.org/git/pgtranslation/messages.git
Source-Git-Hash: 4b069f67b5be4227eb620a74c9900f079f2e59f4
Forward-ported the latest doc_ja_16 changes (2024-09-26).
@noborus added the "version 17" (targets version 17) label Sep 27, 2024
@KenichiroTanaka KenichiroTanaka merged commit 17d5ea4 into pgsql-jp:doc_ja_17 Sep 27, 2024
2 checks passed