(8.0) PS-7806: Column compression breaks async replication on PS #5162

Merged

@kamil-holubicki (Contributor) commented Nov 17, 2023

https://jira.percona.com/browse/PS-7806

Work based on the original patch for 8.0 by Nitendra Bhosle.

Problem:
When a statement on a partitioned table containing compressed BLOB
columns is replicated, the replica stops with an error,
e.g. DELETE FROM t1 WHERE d2 = 0.00000;

Cause:
Queries like the DELETE above are implemented in the following way:

  1. An index scan is performed.
  2. Every matching row is deleted.
  3. The query is rewritten using all columns in the WHERE clause.

For partitioned tables, during an ordered scan we read (and keep)
the next record from every partition, then order the cached records
and return the first one in order
(logic in Partition_helper::handle_ordered_index_scan()).
However, we use the common prebuilt->compression_heap, which is
cleaned up before every row read. As a result, the rows cached for
particular partitions are freed and overwritten by the next
partition's row during the row-read loop in
Partition_helper::handle_ordered_index_scan().
The query is then binlogged, but the BLOB pointer may be invalid,
pointing to overwritten memory, so the rewritten query contains a
wrong value for the BLOB column.
When the replica receives it, no such row exists and the replica
stops.

Solution:
Implemented a dedicated compression_heap for every partition, similar
to the already existing blob_heap.

@kamil-holubicki kamil-holubicki changed the title PS-7806: Column compression breaks async replication on PS (8.0) PS-7806: Column compression breaks async replication on PS Nov 17, 2023
@inikep force-pushed the release-8.0.35-27 branch 2 times, most recently from 2d4e004 to 8515e86 on November 23, 2023 12:57
@satya-bodapati (Contributor) left a comment:

@kamil-holubicki many commits came along, can you please check?

@kamil-holubicki merged commit 6797697 into percona:release-8.0.35-27 on Dec 7, 2023
21 checks passed
oleksandr-kachan pushed a commit to oleksandr-kachan/percona-server that referenced this pull request Dec 12, 2023
oleksandr-kachan pushed a commit that referenced this pull request Dec 12, 2023
inikep pushed a commit that referenced this pull request Jan 16, 2024
inikep pushed a commit to inikep/percona-server that referenced this pull request Jan 17, 2024
oleksandr-kachan pushed a commit to oleksandr-kachan/percona-server that referenced this pull request Jan 17, 2024
inikep pushed a commit to inikep/percona-server that referenced this pull request Jan 17, 2024
oleksandr-kachan pushed a commit to oleksandr-kachan/percona-server that referenced this pull request Jan 18, 2024
oleksandr-kachan pushed a commit to oleksandr-kachan/percona-server that referenced this pull request Jan 22, 2024
inikep pushed a commit to inikep/percona-server that referenced this pull request Jan 22, 2024
oleksandr-kachan pushed a commit to oleksandr-kachan/percona-server that referenced this pull request Jan 22, 2024
inikep pushed a commit that referenced this pull request Jan 23, 2024
oleksandr-kachan pushed a commit to oleksandr-kachan/percona-server that referenced this pull request Jan 23, 2024
oleksandr-kachan pushed a commit to oleksandr-kachan/percona-server that referenced this pull request Jan 23, 2024
oleksandr-kachan pushed a commit to oleksandr-kachan/percona-server that referenced this pull request Jan 26, 2024
oleksandr-kachan pushed a commit to oleksandr-kachan/percona-server that referenced this pull request Jan 30, 2024
oleksandr-kachan pushed a commit to oleksandr-kachan/percona-server that referenced this pull request Apr 12, 2024
oleksandr-kachan pushed a commit to oleksandr-kachan/percona-server that referenced this pull request Apr 12, 2024
dlenev pushed a commit to dlenev/percona-server that referenced this pull request Jul 30, 2024
PS-9218: Merge MySQL 8.4.0 (fix terminology in replication tests)

https://perconadev.atlassian.net/browse/PS-9218

mysql/mysql-server@44a77b5
dlenev pushed a commit to dlenev/percona-server that referenced this pull request Aug 21, 2024
dlenev pushed a commit to dlenev/percona-server that referenced this pull request Aug 28, 2024
dlenev pushed a commit to dlenev/percona-server that referenced this pull request Aug 30, 2024
inikep pushed a commit to inikep/percona-server that referenced this pull request Sep 10, 2024
inikep pushed a commit to inikep/percona-server that referenced this pull request Sep 11, 2024
inikep pushed a commit to inikep/percona-server that referenced this pull request Sep 12, 2024
inikep pushed a commit to inikep/percona-server that referenced this pull request Sep 17, 2024
inikep pushed a commit to inikep/percona-server that referenced this pull request Sep 17, 2024
inikep pushed a commit that referenced this pull request Sep 23, 2024
inikep pushed a commit that referenced this pull request Sep 25, 2024
inikep pushed a commit to inikep/percona-server that referenced this pull request Sep 25, 2024
dlenev pushed a commit to dlenev/percona-server that referenced this pull request Oct 1, 2024
dlenev pushed a commit to dlenev/percona-server that referenced this pull request Oct 17, 2024
dlenev pushed a commit to dlenev/percona-server that referenced this pull request Oct 17, 2024
dlenev pushed a commit to dlenev/percona-server that referenced this pull request Oct 22, 2024
inikep pushed a commit to inikep/percona-server that referenced this pull request Oct 28, 2024
inikep pushed a commit that referenced this pull request Oct 30, 2024
inikep pushed a commit to inikep/percona-server that referenced this pull request Nov 11, 2024
inikep pushed a commit to inikep/percona-server that referenced this pull request Nov 14, 2024
inikep pushed a commit to inikep/percona-server that referenced this pull request Nov 14, 2024
inikep pushed a commit to inikep/percona-server that referenced this pull request Nov 18, 2024
inikep pushed a commit to inikep/percona-server that referenced this pull request Jan 23, 2025
inikep pushed a commit to inikep/percona-server that referenced this pull request Jan 23, 2025
inikep pushed a commit to inikep/percona-server that referenced this pull request Jan 27, 2025
inikep pushed a commit to inikep/percona-server that referenced this pull request Jan 28, 2025
lukin-oleksiy pushed a commit to lukin-oleksiy/percona-server that referenced this pull request Jan 31, 2025
inikep pushed a commit to lukin-oleksiy/percona-server that referenced this pull request Feb 3, 2025
inikep pushed a commit to lukin-oleksiy/percona-server that referenced this pull request Feb 3, 2025
inikep pushed a commit to lukin-oleksiy/percona-server that referenced this pull request Feb 4, 2025