(8.0) PS-7806: Column compression breaks async replication on PS #5162

Merged: kamil-holubicki merged 1 commit into percona:release-8.0.35-27 from kamil-holubicki:PS-7806-8.0 on Dec 7, 2023.
Conversation
kamil-holubicki changed the title from "PS-7806: Column compression breaks async replication on PS" to "(8.0) PS-7806: Column compression breaks async replication on PS" on Nov 17, 2023.
https://jira.percona.com/browse/PS-7806
Work based on the original patch for 8.0 by Nitendra Bhosle.

Problem: When a statement against a partitioned table containing compressed BLOB columns is replicated, the replica stops with an error, e.g. for DELETE FROM t1 WHERE d2 = 0.00000;

Cause: Queries like the DELETE above are executed as follows:
1. An index scan is performed.
2. Every matching row is deleted.
3. For binlogging, the query is rewritten using all columns in the WHERE clause.
For partitioned tables, during an ordered scan we read (and keep) the next record from every partition, then order the cached records and return the first one in order (logic in Partition_helper::handle_ordered_index_scan()). However, all partitions share the common prebuilt->compression_heap, which is cleaned up before every row read. As a result, rows cached for earlier partitions are freed and overwritten by the next partition's row during the read loop in Partition_helper::handle_ordered_index_scan(). When the query is then binlogged, the cached BLOB pointer may be invalid, pointing into overwritten memory, so the rewritten query carries a wrong value for the BLOB column. The replica finds no such row and stops.

Solution: Implement a dedicated compression_heap for every partition, analogous to the already existing blob_heap.
kamil-holubicki force-pushed the PS-7806-8.0 branch from af4dae3 to 63a8e48 on November 17, 2023 at 15:26.
inikep force-pushed the release-8.0.35-27 branch 2 times, most recently from 2d4e004 to 8515e86 on November 23, 2023 at 12:57.
satya-bodapati requested changes on Nov 23, 2023:
@kamil-holubicki many commits came along, can you please check?
satya-bodapati approved these changes on Nov 28, 2023.
Commits referencing this pull request (each repeats the PS-7806 commit message above; entries marked "cherry-picked" note "(cherry picked from commit 6797697)"):
- oleksandr-kachan pushed to oleksandr-kachan/percona-server on Dec 12, 2023 (cherry-picked)
- oleksandr-kachan pushed on Dec 12, 2023 (cherry-picked)
- inikep pushed on Jan 16, 2024
- inikep pushed to inikep/percona-server on Jan 17, 2024
- oleksandr-kachan pushed to oleksandr-kachan/percona-server on Jan 17, 2024 (cherry-picked)
- inikep pushed to inikep/percona-server on Jan 17, 2024
- oleksandr-kachan pushed to oleksandr-kachan/percona-server on Jan 18, 2024 (cherry-picked)
- oleksandr-kachan pushed to oleksandr-kachan/percona-server on Jan 22, 2024 (cherry-picked)
- inikep pushed to inikep/percona-server on Jan 22, 2024
- oleksandr-kachan pushed to oleksandr-kachan/percona-server on Jan 22, 2024 (cherry-picked)
- inikep pushed on Jan 23, 2024
- oleksandr-kachan pushed to oleksandr-kachan/percona-server on Jan 23, 2024 (cherry-picked, twice)
- oleksandr-kachan pushed to oleksandr-kachan/percona-server on Jan 26, 2024 (cherry-picked)
- oleksandr-kachan pushed to oleksandr-kachan/percona-server on Jan 30, 2024 (cherry-picked)
- oleksandr-kachan pushed to oleksandr-kachan/percona-server on Apr 12, 2024 (cherry-picked, twice)
- dlenev pushed to dlenev/percona-server on Jul 30, Aug 21, Aug 28, and Aug 30, 2024 (cherry-picked)
- inikep pushed to inikep/percona-server on Sep 10, 2024 (cherry-picked)

The dlenev and inikep commits from Jul 30, 2024 onward additionally carry: "PS-9218: Merge MySQL 8.4.0 (fix terminology in replication tests) https://perconadev.atlassian.net/browse/PS-9218 mysql/mysql-server@44a77b5".
inikep
pushed a commit
to inikep/percona-server
that referenced
this pull request
Sep 11, 2024
… PS (percona#5162) https://jira.percona.com/browse/PS-7806 Work based on the original patch for 8.0 by Nitendra Bhosle. Problem: When the statement related to the partitioned table containing compressed BLOB columns is replicated, replica stops with error. e.g. DELETE FROM t1 WHERE d2 = 0.00000 ; Cause: Queries like mentioned delete are implemented in the following way: 1. Index scan is performed 2. For every matching row, the row is deleted 3. Query is rewritten, using all columns in WHERE clause. For partitioned tables, during the ordered scan, we read (and keep) the next record from every partition, and then do ordering of cached records, returning the first in order (logic in Partition_helper::handle_ordered_index_scan()). However, we use common prebuilt->compression_heap which is cleaned up before every row read. This causes that rows cached for particular partitions are freed and overwritten by next partition's row during rows read loop in Partition_helper::handle_ordered_index_scan(). Then the query is being binlogged, but blob pointer may be invalid, pointing to overwritten memory, so rewritten query contains wrong value for BLOB column. When received by replica, such row does not exists and replica stops. Solution: Implemented dedicated compression_heap for every partition, similarly to already existing blob_heap. (cherry picked from commit 6797697) ---------------------------------------------------------------------- PS-9218: Merge MySQL 8.4.0 (fix terminology in replication tests) https://perconadev.atlassian.net/browse/PS-9218 mysql/mysql-server@44a77b5
inikep
pushed a commit
to inikep/percona-server
that referenced
this pull request
Sep 12, 2024
… PS (percona#5162) https://jira.percona.com/browse/PS-7806 Work based on the original patch for 8.0 by Nitendra Bhosle. Problem: When the statement related to the partitioned table containing compressed BLOB columns is replicated, replica stops with error. e.g. DELETE FROM t1 WHERE d2 = 0.00000 ; Cause: Queries like mentioned delete are implemented in the following way: 1. Index scan is performed 2. For every matching row, the row is deleted 3. Query is rewritten, using all columns in WHERE clause. For partitioned tables, during the ordered scan, we read (and keep) the next record from every partition, and then do ordering of cached records, returning the first in order (logic in Partition_helper::handle_ordered_index_scan()). However, we use common prebuilt->compression_heap which is cleaned up before every row read. This causes that rows cached for particular partitions are freed and overwritten by next partition's row during rows read loop in Partition_helper::handle_ordered_index_scan(). Then the query is being binlogged, but blob pointer may be invalid, pointing to overwritten memory, so rewritten query contains wrong value for BLOB column. When received by replica, such row does not exists and replica stops. Solution: Implemented dedicated compression_heap for every partition, similarly to already existing blob_heap. (cherry picked from commit 6797697) ---------------------------------------------------------------------- PS-9218: Merge MySQL 8.4.0 (fix terminology in replication tests) https://perconadev.atlassian.net/browse/PS-9218 mysql/mysql-server@44a77b5
inikep
pushed a commit
to inikep/percona-server
that referenced
this pull request
Sep 17, 2024
… PS (percona#5162) https://jira.percona.com/browse/PS-7806 Work based on the original patch for 8.0 by Nitendra Bhosle. Problem: When the statement related to the partitioned table containing compressed BLOB columns is replicated, replica stops with error. e.g. DELETE FROM t1 WHERE d2 = 0.00000 ; Cause: Queries like mentioned delete are implemented in the following way: 1. Index scan is performed 2. For every matching row, the row is deleted 3. Query is rewritten, using all columns in WHERE clause. For partitioned tables, during the ordered scan, we read (and keep) the next record from every partition, and then do ordering of cached records, returning the first in order (logic in Partition_helper::handle_ordered_index_scan()). However, we use common prebuilt->compression_heap which is cleaned up before every row read. This causes that rows cached for particular partitions are freed and overwritten by next partition's row during rows read loop in Partition_helper::handle_ordered_index_scan(). Then the query is being binlogged, but blob pointer may be invalid, pointing to overwritten memory, so rewritten query contains wrong value for BLOB column. When received by replica, such row does not exists and replica stops. Solution: Implemented dedicated compression_heap for every partition, similarly to already existing blob_heap. (cherry picked from commit 6797697) ---------------------------------------------------------------------- PS-9218: Merge MySQL 8.4.0 (fix terminology in replication tests) https://perconadev.atlassian.net/browse/PS-9218 mysql/mysql-server@44a77b5
inikep
pushed a commit
to inikep/percona-server
that referenced
this pull request
Sep 17, 2024
… PS (percona#5162) https://jira.percona.com/browse/PS-7806 Work based on the original patch for 8.0 by Nitendra Bhosle. Problem: When the statement related to the partitioned table containing compressed BLOB columns is replicated, replica stops with error. e.g. DELETE FROM t1 WHERE d2 = 0.00000 ; Cause: Queries like mentioned delete are implemented in the following way: 1. Index scan is performed 2. For every matching row, the row is deleted 3. Query is rewritten, using all columns in WHERE clause. For partitioned tables, during the ordered scan, we read (and keep) the next record from every partition, and then do ordering of cached records, returning the first in order (logic in Partition_helper::handle_ordered_index_scan()). However, we use common prebuilt->compression_heap which is cleaned up before every row read. This causes that rows cached for particular partitions are freed and overwritten by next partition's row during rows read loop in Partition_helper::handle_ordered_index_scan(). Then the query is being binlogged, but blob pointer may be invalid, pointing to overwritten memory, so rewritten query contains wrong value for BLOB column. When received by replica, such row does not exists and replica stops. Solution: Implemented dedicated compression_heap for every partition, similarly to already existing blob_heap. (cherry picked from commit 6797697) ---------------------------------------------------------------------- PS-9218: Merge MySQL 8.4.0 (fix terminology in replication tests) https://perconadev.atlassian.net/browse/PS-9218 mysql/mysql-server@44a77b5
inikep
pushed a commit
that referenced
this pull request
Sep 23, 2024
… PS (#5162) https://jira.percona.com/browse/PS-7806 Work based on the original patch for 8.0 by Nitendra Bhosle. Problem: When the statement related to the partitioned table containing compressed BLOB columns is replicated, replica stops with error. e.g. DELETE FROM t1 WHERE d2 = 0.00000 ; Cause: Queries like mentioned delete are implemented in the following way: 1. Index scan is performed 2. For every matching row, the row is deleted 3. Query is rewritten, using all columns in WHERE clause. For partitioned tables, during the ordered scan, we read (and keep) the next record from every partition, and then do ordering of cached records, returning the first in order (logic in Partition_helper::handle_ordered_index_scan()). However, we use common prebuilt->compression_heap which is cleaned up before every row read. This causes that rows cached for particular partitions are freed and overwritten by next partition's row during rows read loop in Partition_helper::handle_ordered_index_scan(). Then the query is being binlogged, but blob pointer may be invalid, pointing to overwritten memory, so rewritten query contains wrong value for BLOB column. When received by replica, such row does not exists and replica stops. Solution: Implemented dedicated compression_heap for every partition, similarly to already existing blob_heap. (cherry picked from commit 6797697) ---------------------------------------------------------------------- PS-9218: Merge MySQL 8.4.0 (fix terminology in replication tests) https://perconadev.atlassian.net/browse/PS-9218 mysql/mysql-server@44a77b5
inikep
pushed a commit
that referenced
this pull request
Sep 25, 2024
… PS (#5162) https://jira.percona.com/browse/PS-7806 Work based on the original patch for 8.0 by Nitendra Bhosle. Problem: When the statement related to the partitioned table containing compressed BLOB columns is replicated, replica stops with error. e.g. DELETE FROM t1 WHERE d2 = 0.00000 ; Cause: Queries like mentioned delete are implemented in the following way: 1. Index scan is performed 2. For every matching row, the row is deleted 3. Query is rewritten, using all columns in WHERE clause. For partitioned tables, during the ordered scan, we read (and keep) the next record from every partition, and then do ordering of cached records, returning the first in order (logic in Partition_helper::handle_ordered_index_scan()). However, we use common prebuilt->compression_heap which is cleaned up before every row read. This causes that rows cached for particular partitions are freed and overwritten by next partition's row during rows read loop in Partition_helper::handle_ordered_index_scan(). Then the query is being binlogged, but blob pointer may be invalid, pointing to overwritten memory, so rewritten query contains wrong value for BLOB column. When received by replica, such row does not exists and replica stops. Solution: Implemented dedicated compression_heap for every partition, similarly to already existing blob_heap.
inikep
pushed a commit
to inikep/percona-server
that referenced
this pull request
Sep 25, 2024
… PS (percona#5162) https://jira.percona.com/browse/PS-7806 Work based on the original patch for 8.0 by Nitendra Bhosle. Problem: When the statement related to the partitioned table containing compressed BLOB columns is replicated, replica stops with error. e.g. DELETE FROM t1 WHERE d2 = 0.00000 ; Cause: Queries like mentioned delete are implemented in the following way: 1. Index scan is performed 2. For every matching row, the row is deleted 3. Query is rewritten, using all columns in WHERE clause. For partitioned tables, during the ordered scan, we read (and keep) the next record from every partition, and then do ordering of cached records, returning the first in order (logic in Partition_helper::handle_ordered_index_scan()). However, we use common prebuilt->compression_heap which is cleaned up before every row read. This causes that rows cached for particular partitions are freed and overwritten by next partition's row during rows read loop in Partition_helper::handle_ordered_index_scan(). Then the query is being binlogged, but blob pointer may be invalid, pointing to overwritten memory, so rewritten query contains wrong value for BLOB column. When received by replica, such row does not exists and replica stops. Solution: Implemented dedicated compression_heap for every partition, similarly to already existing blob_heap.
dlenev
pushed a commit
to dlenev/percona-server
that referenced
this pull request
Oct 1, 2024
… PS (percona#5162) https://jira.percona.com/browse/PS-7806 Work based on the original patch for 8.0 by Nitendra Bhosle. Problem: When the statement related to the partitioned table containing compressed BLOB columns is replicated, replica stops with error. e.g. DELETE FROM t1 WHERE d2 = 0.00000 ; Cause: Queries like mentioned delete are implemented in the following way: 1. Index scan is performed 2. For every matching row, the row is deleted 3. Query is rewritten, using all columns in WHERE clause. For partitioned tables, during the ordered scan, we read (and keep) the next record from every partition, and then do ordering of cached records, returning the first in order (logic in Partition_helper::handle_ordered_index_scan()). However, we use common prebuilt->compression_heap which is cleaned up before every row read. This causes that rows cached for particular partitions are freed and overwritten by next partition's row during rows read loop in Partition_helper::handle_ordered_index_scan(). Then the query is being binlogged, but blob pointer may be invalid, pointing to overwritten memory, so rewritten query contains wrong value for BLOB column. When received by replica, such row does not exists and replica stops. Solution: Implemented dedicated compression_heap for every partition, similarly to already existing blob_heap. (cherry picked from commit 6797697) ---------------------------------------------------------------------- PS-9218: Merge MySQL 8.4.0 (fix terminology in replication tests) https://perconadev.atlassian.net/browse/PS-9218 mysql/mysql-server@44a77b5
dlenev
pushed a commit
to dlenev/percona-server
that referenced
this pull request
Oct 17, 2024
… PS (percona#5162) https://jira.percona.com/browse/PS-7806 Work based on the original patch for 8.0 by Nitendra Bhosle. Problem: When the statement related to the partitioned table containing compressed BLOB columns is replicated, replica stops with error. e.g. DELETE FROM t1 WHERE d2 = 0.00000 ; Cause: Queries like mentioned delete are implemented in the following way: 1. Index scan is performed 2. For every matching row, the row is deleted 3. Query is rewritten, using all columns in WHERE clause. For partitioned tables, during the ordered scan, we read (and keep) the next record from every partition, and then do ordering of cached records, returning the first in order (logic in Partition_helper::handle_ordered_index_scan()). However, we use common prebuilt->compression_heap which is cleaned up before every row read. This causes that rows cached for particular partitions are freed and overwritten by next partition's row during rows read loop in Partition_helper::handle_ordered_index_scan(). Then the query is being binlogged, but blob pointer may be invalid, pointing to overwritten memory, so rewritten query contains wrong value for BLOB column. When received by replica, such row does not exists and replica stops. Solution: Implemented dedicated compression_heap for every partition, similarly to already existing blob_heap. (cherry picked from commit 6797697) ---------------------------------------------------------------------- PS-9218: Merge MySQL 8.4.0 (fix terminology in replication tests) https://perconadev.atlassian.net/browse/PS-9218 mysql/mysql-server@44a77b5
dlenev
pushed a commit
to dlenev/percona-server
that referenced
this pull request
Oct 17, 2024
… PS (percona#5162) https://jira.percona.com/browse/PS-7806 Work based on the original patch for 8.0 by Nitendra Bhosle. Problem: When the statement related to the partitioned table containing compressed BLOB columns is replicated, replica stops with error. e.g. DELETE FROM t1 WHERE d2 = 0.00000 ; Cause: Queries like mentioned delete are implemented in the following way: 1. Index scan is performed 2. For every matching row, the row is deleted 3. Query is rewritten, using all columns in WHERE clause. For partitioned tables, during the ordered scan, we read (and keep) the next record from every partition, and then do ordering of cached records, returning the first in order (logic in Partition_helper::handle_ordered_index_scan()). However, we use common prebuilt->compression_heap which is cleaned up before every row read. This causes that rows cached for particular partitions are freed and overwritten by next partition's row during rows read loop in Partition_helper::handle_ordered_index_scan(). Then the query is being binlogged, but blob pointer may be invalid, pointing to overwritten memory, so rewritten query contains wrong value for BLOB column. When received by replica, such row does not exists and replica stops. Solution: Implemented dedicated compression_heap for every partition, similarly to already existing blob_heap. (cherry picked from commit 6797697) ---------------------------------------------------------------------- PS-9218: Merge MySQL 8.4.0 (fix terminology in replication tests) https://perconadev.atlassian.net/browse/PS-9218 mysql/mysql-server@44a77b5
dlenev
pushed a commit
to dlenev/percona-server
that referenced
this pull request
Oct 22, 2024
… PS (percona#5162) https://jira.percona.com/browse/PS-7806 Work based on the original patch for 8.0 by Nitendra Bhosle. Problem: When the statement related to the partitioned table containing compressed BLOB columns is replicated, replica stops with error. e.g. DELETE FROM t1 WHERE d2 = 0.00000 ; Cause: Queries like mentioned delete are implemented in the following way: 1. Index scan is performed 2. For every matching row, the row is deleted 3. Query is rewritten, using all columns in WHERE clause. For partitioned tables, during the ordered scan, we read (and keep) the next record from every partition, and then do ordering of cached records, returning the first in order (logic in Partition_helper::handle_ordered_index_scan()). However, we use common prebuilt->compression_heap which is cleaned up before every row read. This causes that rows cached for particular partitions are freed and overwritten by next partition's row during rows read loop in Partition_helper::handle_ordered_index_scan(). Then the query is being binlogged, but blob pointer may be invalid, pointing to overwritten memory, so rewritten query contains wrong value for BLOB column. When received by replica, such row does not exists and replica stops. Solution: Implemented dedicated compression_heap for every partition, similarly to already existing blob_heap. (cherry picked from commit 6797697) ---------------------------------------------------------------------- PS-9218: Merge MySQL 8.4.0 (fix terminology in replication tests) https://perconadev.atlassian.net/browse/PS-9218 mysql/mysql-server@44a77b5
inikep
pushed a commit
to inikep/percona-server
that referenced
this pull request
Oct 28, 2024
… PS (percona#5162) https://jira.percona.com/browse/PS-7806 Work based on the original patch for 8.0 by Nitendra Bhosle. Problem: When the statement related to the partitioned table containing compressed BLOB columns is replicated, replica stops with error. e.g. DELETE FROM t1 WHERE d2 = 0.00000 ; Cause: Queries like mentioned delete are implemented in the following way: 1. Index scan is performed 2. For every matching row, the row is deleted 3. Query is rewritten, using all columns in WHERE clause. For partitioned tables, during the ordered scan, we read (and keep) the next record from every partition, and then do ordering of cached records, returning the first in order (logic in Partition_helper::handle_ordered_index_scan()). However, we use common prebuilt->compression_heap which is cleaned up before every row read. This causes that rows cached for particular partitions are freed and overwritten by next partition's row during rows read loop in Partition_helper::handle_ordered_index_scan(). Then the query is being binlogged, but blob pointer may be invalid, pointing to overwritten memory, so rewritten query contains wrong value for BLOB column. When received by replica, such row does not exists and replica stops. Solution: Implemented dedicated compression_heap for every partition, similarly to already existing blob_heap.
inikep
pushed a commit
that referenced
this pull request
Oct 30, 2024
… PS (#5162) https://jira.percona.com/browse/PS-7806 Work based on the original patch for 8.0 by Nitendra Bhosle. Problem: When the statement related to the partitioned table containing compressed BLOB columns is replicated, replica stops with error. e.g. DELETE FROM t1 WHERE d2 = 0.00000 ; Cause: Queries like mentioned delete are implemented in the following way: 1. Index scan is performed 2. For every matching row, the row is deleted 3. Query is rewritten, using all columns in WHERE clause. For partitioned tables, during the ordered scan, we read (and keep) the next record from every partition, and then do ordering of cached records, returning the first in order (logic in Partition_helper::handle_ordered_index_scan()). However, we use common prebuilt->compression_heap which is cleaned up before every row read. This causes that rows cached for particular partitions are freed and overwritten by next partition's row during rows read loop in Partition_helper::handle_ordered_index_scan(). Then the query is being binlogged, but blob pointer may be invalid, pointing to overwritten memory, so rewritten query contains wrong value for BLOB column. When received by replica, such row does not exists and replica stops. Solution: Implemented dedicated compression_heap for every partition, similarly to already existing blob_heap.
inikep
pushed a commit
to inikep/percona-server
that referenced
this pull request
Nov 11, 2024
… PS (percona#5162) https://jira.percona.com/browse/PS-7806 Work based on the original patch for 8.0 by Nitendra Bhosle. Problem: When the statement related to the partitioned table containing compressed BLOB columns is replicated, replica stops with error. e.g. DELETE FROM t1 WHERE d2 = 0.00000 ; Cause: Queries like mentioned delete are implemented in the following way: 1. Index scan is performed 2. For every matching row, the row is deleted 3. Query is rewritten, using all columns in WHERE clause. For partitioned tables, during the ordered scan, we read (and keep) the next record from every partition, and then do ordering of cached records, returning the first in order (logic in Partition_helper::handle_ordered_index_scan()). However, we use common prebuilt->compression_heap which is cleaned up before every row read. This causes that rows cached for particular partitions are freed and overwritten by next partition's row during rows read loop in Partition_helper::handle_ordered_index_scan(). Then the query is being binlogged, but blob pointer may be invalid, pointing to overwritten memory, so rewritten query contains wrong value for BLOB column. When received by replica, such row does not exists and replica stops. Solution: Implemented dedicated compression_heap for every partition, similarly to already existing blob_heap. (cherry picked from commit 6797697) ---------------------------------------------------------------------- PS-9218: Merge MySQL 8.4.0 (fix terminology in replication tests) https://perconadev.atlassian.net/browse/PS-9218 mysql/mysql-server@44a77b5
inikep
pushed a commit
to inikep/percona-server
that referenced
this pull request
Nov 14, 2024
… PS (percona#5162) https://jira.percona.com/browse/PS-7806 Work based on the original patch for 8.0 by Nitendra Bhosle. Problem: When the statement related to the partitioned table containing compressed BLOB columns is replicated, replica stops with error. e.g. DELETE FROM t1 WHERE d2 = 0.00000 ; Cause: Queries like mentioned delete are implemented in the following way: 1. Index scan is performed 2. For every matching row, the row is deleted 3. Query is rewritten, using all columns in WHERE clause. For partitioned tables, during the ordered scan, we read (and keep) the next record from every partition, and then do ordering of cached records, returning the first in order (logic in Partition_helper::handle_ordered_index_scan()). However, we use common prebuilt->compression_heap which is cleaned up before every row read. This causes that rows cached for particular partitions are freed and overwritten by next partition's row during rows read loop in Partition_helper::handle_ordered_index_scan(). Then the query is being binlogged, but blob pointer may be invalid, pointing to overwritten memory, so rewritten query contains wrong value for BLOB column. When received by replica, such row does not exists and replica stops. Solution: Implemented dedicated compression_heap for every partition, similarly to already existing blob_heap. (cherry picked from commit 6797697) ---------------------------------------------------------------------- PS-9218: Merge MySQL 8.4.0 (fix terminology in replication tests) https://perconadev.atlassian.net/browse/PS-9218 mysql/mysql-server@44a77b5
inikep
pushed a commit
to inikep/percona-server
that referenced
this pull request
Nov 14, 2024
… PS (percona#5162) https://jira.percona.com/browse/PS-7806 Work based on the original patch for 8.0 by Nitendra Bhosle. Problem: When the statement related to the partitioned table containing compressed BLOB columns is replicated, replica stops with error. e.g. DELETE FROM t1 WHERE d2 = 0.00000 ; Cause: Queries like mentioned delete are implemented in the following way: 1. Index scan is performed 2. For every matching row, the row is deleted 3. Query is rewritten, using all columns in WHERE clause. For partitioned tables, during the ordered scan, we read (and keep) the next record from every partition, and then do ordering of cached records, returning the first in order (logic in Partition_helper::handle_ordered_index_scan()). However, we use common prebuilt->compression_heap which is cleaned up before every row read. This causes that rows cached for particular partitions are freed and overwritten by next partition's row during rows read loop in Partition_helper::handle_ordered_index_scan(). Then the query is being binlogged, but blob pointer may be invalid, pointing to overwritten memory, so rewritten query contains wrong value for BLOB column. When received by replica, such row does not exists and replica stops. Solution: Implemented dedicated compression_heap for every partition, similarly to already existing blob_heap. (cherry picked from commit 6797697) ---------------------------------------------------------------------- PS-9218: Merge MySQL 8.4.0 (fix terminology in replication tests) https://perconadev.atlassian.net/browse/PS-9218 mysql/mysql-server@44a77b5
inikep
pushed a commit
to inikep/percona-server
that referenced
this pull request
Nov 18, 2024
… PS (percona#5162) https://jira.percona.com/browse/PS-7806 Work based on the original patch for 8.0 by Nitendra Bhosle. Problem: When the statement related to the partitioned table containing compressed BLOB columns is replicated, replica stops with error. e.g. DELETE FROM t1 WHERE d2 = 0.00000 ; Cause: Queries like mentioned delete are implemented in the following way: 1. Index scan is performed 2. For every matching row, the row is deleted 3. Query is rewritten, using all columns in WHERE clause. For partitioned tables, during the ordered scan, we read (and keep) the next record from every partition, and then do ordering of cached records, returning the first in order (logic in Partition_helper::handle_ordered_index_scan()). However, we use common prebuilt->compression_heap which is cleaned up before every row read. This causes that rows cached for particular partitions are freed and overwritten by next partition's row during rows read loop in Partition_helper::handle_ordered_index_scan(). Then the query is being binlogged, but blob pointer may be invalid, pointing to overwritten memory, so rewritten query contains wrong value for BLOB column. When received by replica, such row does not exists and replica stops. Solution: Implemented dedicated compression_heap for every partition, similarly to already existing blob_heap. (cherry picked from commit 6797697) ---------------------------------------------------------------------- PS-9218: Merge MySQL 8.4.0 (fix terminology in replication tests) https://perconadev.atlassian.net/browse/PS-9218 mysql/mysql-server@44a77b5
inikep
pushed a commit
to inikep/percona-server
that referenced
this pull request
Jan 23, 2025
https://jira.percona.com/browse/PS-7806
Work based on the original patch for 8.0 by Nitendra Bhosle.
Problem:
When a statement on a partitioned table containing compressed BLOB
columns is replicated, the replica stops with an error.
e.g. DELETE FROM t1 WHERE d2 = 0.00000;
Cause:
Queries like the DELETE above are implemented in the following way:
1. An index scan is performed.
2. Every matching row is deleted.
3. The query is rewritten, using all columns in the WHERE clause.
For partitioned tables, during an ordered scan we read (and keep) the
next record from every partition and then order the cached records,
returning the first in order
(logic in Partition_helper::handle_ordered_index_scan()).
However, a common prebuilt->compression_heap is used, and it is emptied
before every row read. As a result, the rows cached for the other
partitions are freed and overwritten by the next partition's row during
the row-read loop in Partition_helper::handle_ordered_index_scan().
When the query is then binlogged, the BLOB pointer may be invalid,
pointing at overwritten memory, so the rewritten query contains a wrong
value for the BLOB column.
The replica finds no such row and stops.
Solution:
Implemented a dedicated compression_heap for every partition, similar to
the already existing blob_heap.