
[BugFix] Fix query scheduler for pruned right local bucket shuffle join (backport #46097) #46282

Closed
wants to merge 1 commit into from

Conversation


@mergify mergify bot commented May 27, 2024

Why I'm doing:

Fix query scheduler for pruned right local bucket shuffle join. #46098 has an example.

What I'm doing:

For a local bucket shuffle right outer join, we add an empty scan range for each bucket that was pruned by a predicate and assign that bucket to an arbitrary BE.

  • The reason is that if we did not assign a BE to a pruned bucket of the left table, the right table would not send data to that bucket, and the right join or full join would return wrong results.

The code is as follows:

        if (isRightOrFullBucketShuffleFragment && colocatedAssignment.isAllScanNodesAssigned()) {
            int bucketNum = colocatedAssignment.bucketNum;

            for (int bucketSeq = 0; bucketSeq < bucketNum; ++bucketSeq) {
                if (!bucketSeqToWorkerId.containsKey(bucketSeq)) {
                    // The pruned bucket has no worker yet: assign it to an arbitrary BE. <-- here
                    long workerId = workerProvider.selectNextWorker();
                    bucketSeqToWorkerId.put(bucketSeq, workerId);
                }
                if (!bucketSeqToScanRange.containsKey(bucketSeq)) {
                    // Register an empty scan range for the pruned bucket.
                    bucketSeqToScanRange.put(bucketSeq, Maps.newHashMap());
                    bucketSeqToScanRange.get(bucketSeq).put(scanNode.getId().asInt(), Lists.newArrayList());
                }
            }
        }

This code assigns the pruned buckets to arbitrary BEs after processing the first OLAP scan node of a fragment.

However, we should do this after processing the last scan node, not the first, because only after all OLAP scan nodes have been processed can we determine whether a bucket has been pruned.
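The fix described above can be sketched as follows. This is a minimal, self-contained illustration, not the actual StarRocks code: the class, method, and variable names other than `bucketSeqToWorkerId` are hypothetical, and the worker provider is stubbed as a plain iterator. The key point is that the fallback assignment for unassigned (i.e. pruned-everywhere) buckets runs only after the last scan node has been processed.

```java
import java.util.HashMap;
import java.util.Iterator;
import java.util.List;
import java.util.Map;

// Hypothetical sketch: defer pruned-bucket assignment until after the
// LAST scan node of the fragment, since only then do we know which
// buckets were pruned on every scan node.
public class PrunedBucketAssignmentSketch {
    // Buckets still missing from the map after all scan nodes ran were
    // pruned everywhere; give each one an arbitrary fallback worker.
    static void assignPrunedBuckets(int bucketNum,
                                    Map<Integer, Long> bucketSeqToWorkerId,
                                    Iterator<Long> workerIds) {
        for (int bucketSeq = 0; bucketSeq < bucketNum; ++bucketSeq) {
            bucketSeqToWorkerId.computeIfAbsent(bucketSeq, k -> workerIds.next());
        }
    }

    public static void main(String[] args) {
        int bucketNum = 4;
        Map<Integer, Long> bucketSeqToWorkerId = new HashMap<>();
        List<String> scanNodes = List.of("scanA", "scanB"); // stand-ins for OLAP scan nodes
        Iterator<Long> fallbackWorkers = List.of(100L, 101L).iterator();

        for (int i = 0; i < scanNodes.size(); i++) {
            // Simulate per-scan-node assignment of non-pruned buckets:
            // scanA reads bucket 0, scanB reads bucket 2.
            if (i == 0) bucketSeqToWorkerId.put(0, 1L);
            if (i == 1) bucketSeqToWorkerId.put(2, 2L);

            // The fix: run the fallback only on the LAST scan node. Running
            // it on the first node would wrongly treat buckets 2 (read by a
            // later scan node) as pruned.
            boolean isLastScanNode = (i == scanNodes.size() - 1);
            if (isLastScanNode) {
                assignPrunedBuckets(bucketNum, bucketSeqToWorkerId, fallbackWorkers);
            }
        }
        // Buckets 1 and 3 were never assigned, so they get fallback workers.
        System.out.println(bucketSeqToWorkerId);
    }
}
```

With this ordering, every bucket ends up with a worker, so the right side of the join has a destination even for buckets the left table pruned.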

Fixes #46098.

What type of PR is this:

  • BugFix
  • Feature
  • Enhancement
  • Refactor
  • UT
  • Doc
  • Tool

Does this PR entail a change in behavior?

  • Yes, this PR will result in a change in behavior.
  • No, this PR will not result in a change in behavior.

If yes, please specify the type of change:

  • Interface/UI changes: syntax, type conversion, expression evaluation, display information
  • Parameter changes: default values, similar parameters but with different default values
  • Policy changes: use new policy to replace old one, functionality automatically enabled
  • Feature removed
  • Miscellaneous: upgrade & downgrade compatibility, etc.

Checklist:

  • I have added test cases for my bug fix or my new feature
  • This pr needs user documentation (for new or modified features or behaviors)
    • I have added documentation for my new feature or new function
  • This is a backport pr

Bugfix cherry-pick branch check:

  • I have checked the version labels which the pr will be auto-backported to the target branch
    • 3.3
    • 3.2
    • 3.1
    • 3.0
    • 2.5

This is an automatic backport of pull request #46097 done by [Mergify](https://mergify.com).

[BugFix] Fix query scheduler for pruned right local bucket shuffle join (#46097)

Signed-off-by: zihe.liu <[email protected]>
Signed-off-by: ZiheLiu <[email protected]>
(cherry picked from commit fd1f78b)

# Conflicts:
#	fe/fe-core/src/main/java/com/starrocks/qe/ColocatedBackendSelector.java
#	fe/fe-core/src/main/java/com/starrocks/qe/scheduler/dag/ExecutionFragment.java
#	fe/fe-core/src/test/java/com/starrocks/lake/qe/scheduler/DefaultSharedDataWorkerProviderTest.java
#	fe/fe-core/src/test/java/com/starrocks/qe/ColocatedBackendSelectorTest.java
@mergify mergify bot added the conflicts label May 27, 2024

mergify bot commented May 27, 2024

Cherry-pick of fd1f78b has failed:

On branch mergify/bp/branch-2.5/pr-46097
Your branch is up to date with 'origin/branch-2.5'.

You are currently cherry-picking commit fd1f78b0da.
  (fix conflicts and run "git cherry-pick --continue")
  (use "git cherry-pick --skip" to skip this patch)
  (use "git cherry-pick --abort" to cancel the cherry-pick operation)

Changes to be committed:
	new file:   test/sql/test_join/R/test_pruned_right_outer_local_bucket_shuffle_join
	new file:   test/sql/test_join/T/test_pruned_right_outer_local_bucket_shuffle_join

Unmerged paths:
  (use "git add/rm <file>..." as appropriate to mark resolution)
	deleted by us:   fe/fe-core/src/main/java/com/starrocks/qe/ColocatedBackendSelector.java
	deleted by us:   fe/fe-core/src/main/java/com/starrocks/qe/scheduler/dag/ExecutionFragment.java
	deleted by us:   fe/fe-core/src/test/java/com/starrocks/lake/qe/scheduler/DefaultSharedDataWorkerProviderTest.java
	deleted by us:   fe/fe-core/src/test/java/com/starrocks/qe/ColocatedBackendSelectorTest.java

To fix up this pull request, you can check it out locally. See documentation: https://docs.github.com/en/pull-requests/collaborating-with-pull-requests/reviewing-changes-in-pull-requests/checking-out-pull-requests-locally


mergify bot commented May 27, 2024

@mergify[bot]: Backport conflict, please resolve the conflict and resubmit the PR

auto-merge was automatically disabled May 27, 2024 02:31

Pull request was closed

@mergify mergify bot deleted the mergify/bp/branch-2.5/pr-46097 branch May 27, 2024 02:31