
[BUG] scale-in is always running after scale-out of Pulsar cluster #8723

Closed
tianyue86 opened this issue Dec 27, 2024 · 1 comment · Fixed by apecloud/kubeblocks-addons#1380
Labels: kind/bug (Something isn't working) · severity/major (Great chance user will encounter the same problem)
Milestone: Release 1.0.0

@tianyue86

Describe the env
Kubernetes: v1.31.1-aliyun.1
KubeBlocks: 1.0.0-beta.21
kbcli: 1.0.0-beta.8

To Reproduce
Steps to reproduce the behavior:

  1. Use the YAML below to create the Pulsar cluster:
    puls.yaml.txt
  2. Scale-out: succeeds
kbcli cluster scale-out pulsar-kcekhn --auto-approve --force=true --components bookies --replicas 5 --namespace default
OpsRequest pulsar-kcekhn-horizontalscaling-5drlj created successfully, you can view the progress:
	kbcli cluster describe-ops pulsar-kcekhn-horizontalscaling-5drlj -n default

kbcli cluster list-ops pulsar-kcekhn --status all  --namespace default     
NAME                                    NAMESPACE   TYPE                CLUSTER         COMPONENT   STATUS    PROGRESS   CREATED-TIME                 
pulsar-kcekhn-horizontalscaling-5drlj   default     HorizontalScaling   pulsar-kcekhn   bookies     Succeed   5/5        Dec 27,2024 16:59 UTC+0800
  3. Scale-in: stays in Running status
kbcli cluster scale-in pulsar-kcekhn --auto-approve --force=true --components bookies --replicas 3 --namespace default
OpsRequest pulsar-kcekhn-horizontalscaling-8mlhm created successfully, you can view the progress:
	kbcli cluster describe-ops pulsar-kcekhn-horizontalscaling-8mlhm -n default


kbcli cluster list-ops pulsar-kcekhn --status all  --namespace default
NAME                                    NAMESPACE   TYPE                CLUSTER         COMPONENT   STATUS    PROGRESS   CREATED-TIME                 
pulsar-kcekhn-horizontalscaling-5drlj   default     HorizontalScaling   pulsar-kcekhn   bookies     Succeed   5/5        Dec 27,2024 16:59 UTC+0800   
pulsar-kcekhn-horizontalscaling-8mlhm   default     HorizontalScaling   pulsar-kcekhn   bookies     Running   0/3        Dec 27,2024 17:09 UTC+0800

===> after more than 30 minutes, the ops status is still Running
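For reference, the scale-in issued above via kbcli corresponds roughly to an OpsRequest manifest like the following. This is a sketch only: the `apiVersion` and field names (e.g. `clusterName` vs. the older `clusterRef`) differ between KubeBlocks releases and should be verified against the installed CRD with `kubectl explain opsrequest.spec`.

```yaml
# Sketch of the scale-in OpsRequest (KubeBlocks 1.0-beta style API assumed)
apiVersion: operations.kubeblocks.io/v1alpha1
kind: OpsRequest
metadata:
  generateName: pulsar-kcekhn-horizontalscaling-
  namespace: default
spec:
  clusterName: pulsar-kcekhn
  type: HorizontalScaling
  force: true
  horizontalScaling:
    - componentName: bookies
      scaleIn:
        replicaChanges: 2   # 5 -> 3 replicas
```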
  4. Describe the ops:
Events:
  Type    Reason                    Age                From                    Message
  ----    ------                    ----               ----                    -------
  Normal  WaitForProgressing        10m                ops-request-controller  wait for the controller to process the OpsRequest: pulsar-kcekhn-horizontalscaling-8mlhm in Cluster: pulsar-kcekhn
  Normal  ValidateOpsRequestPassed  10m (x2 over 10m)  ops-request-controller  OpsRequest: pulsar-kcekhn-horizontalscaling-8mlhm is validated
  Normal  HorizontalScalingStarted  10m (x2 over 10m)  ops-request-controller  Start to horizontal scale replicas in Cluster: pulsar-kcekhn
  5. Logs for reference:
    report-cluster-pulsar-kcekhn-2024-12-27-17-45-12.zip.txt

Expected behavior
The scale-in OpsRequest completes, and the bookies component is reduced to 3 replicas.


@tianyue86 tianyue86 added the kind/bug Something isn't working label Dec 27, 2024
@tianyue86 tianyue86 added this to the Release 1.0.0 milestone Dec 27, 2024
@tianyue86 tianyue86 added the severity/major Great chance user will encounter the same problem label Dec 27, 2024
@shanshanying (Contributor) commented Dec 30, 2024
Scale-in appears to be blocked by the memberLeave action (check the kb-agent log).
[screenshot: kb-agent log]
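To check whether the memberLeave action is what blocks the scale-in, the kb-agent sidecar logs on the bookies pods can be inspected. This is a sketch: the container name `kb-agent` and the label selector are assumptions based on common KubeBlocks conventions and may differ in a given deployment.

```shell
# List bookies pods and grep each pod's kb-agent sidecar log for
# memberLeave entries (container name "kb-agent" is an assumption).
for pod in $(kubectl get pods -n default \
    -l apps.kubeblocks.io/component-name=bookies -o name); do
  echo "=== $pod ==="
  kubectl logs -n default "$pod" -c kb-agent --tail=200 | grep -i memberleave
done
```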
