feature: support reschedule disk with resource exhausted error in delay provisioning #890
Conversation
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: mowangdk. The full list of commands accepted by this bot can be found here; the pull request process is described here.
Approvers can indicate their approval by writing /approve in a comment.
Force-pushed from 6e3a796 to ee8b39d.
FYI, I have a private branch that uses topology to constrain the disk type; it is waiting for its dependencies to be merged. That way, we can avoid querying the APIServer for the selected node.
Once I open that as a PR, it will overwrite the changes here.
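For reference, a rough sketch of how topology could carry the disk-type constraint: the node service advertises each supported disk category as a topology segment in NodeGetInfo, so the scheduler filters nodes before CreateVolume ever runs and the controller no longer needs to fetch the node object from the APIServer. The label key, struct, and package names below are illustrative, not this repository's actual API.

```go
package driver

import (
	"context"

	csi "github.com/container-storage-interface/spec/lib/go/csi"
)

// nodeServer is a hypothetical stand-in for the plugin's node service.
type nodeServer struct {
	nodeID string
}

// NodeGetInfo advertises each disk category this node supports as a
// topology segment; the scheduler then only places pods (and volumes)
// on nodes whose segments match the requested disk type.
func (ns *nodeServer) NodeGetInfo(ctx context.Context, req *csi.NodeGetInfoRequest) (*csi.NodeGetInfoResponse, error) {
	return &csi.NodeGetInfoResponse{
		NodeId: ns.nodeID,
		AccessibleTopology: &csi.Topology{
			Segments: map[string]string{
				// Hypothetical label key; one entry per supported category.
				"topology.diskplugin.csi.alibabacloud.com/category.cloud_essd": "available",
			},
		},
	}, nil
}
```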
@@ -969,7 +969,7 @@ func getDiskType(diskVol *diskVolumeArgs) ([]string, []string, error) {
 	nodeInfo, err := client.CoreV1().Nodes().Get(context.Background(), diskVol.NodeSelected, metav1.GetOptions{})
 	if err != nil {
 		log.Log.Infof("getDiskType: failed to get node labels: %v", err)
-		goto cusDiskType
+		return nil, nil, status.Errorf(codes.ResourceExhausted, "CreateVolume:: get node info by name: %s failed with err: %v, start to reschedule", diskVol.NodeSelected, err)
How about returning ResourceExhausted only if we get a NotFound error, and returning an Internal error for anything else? That avoids a possible infinite rescheduling loop.
The retry interval follows the backoff algorithm, so it has little effect on the CSI driver. But it still has a side effect on the scheduler, so I'll modify it.
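A minimal sketch of that mapping, assuming the lookup error comes from client-go (the helper name and message format are hypothetical):

```go
package driver

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
)

// nodeGetError maps a failed node lookup to a gRPC status code along the
// lines suggested above: only a NotFound node triggers rescheduling via
// ResourceExhausted; any other failure surfaces as Internal, so the
// provisioner retries with backoff instead of bouncing between nodes.
func nodeGetError(nodeName string, err error) error {
	if apierrors.IsNotFound(err) {
		return status.Errorf(codes.ResourceExhausted,
			"CreateVolume: node %s not found, start to reschedule: %v", nodeName, err)
	}
	return status.Errorf(codes.Internal,
		"CreateVolume: failed to get node %s: %v", nodeName, err)
}
```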
 	provisionDiskTypes := []string{}
 	allTypes := deleteEmpty(strings.Split(diskVol.Type, ","))
 	if len(nodeSupportDiskType) != 0 {
 		provisionDiskTypes = intersect(nodeSupportDiskType, allTypes)
 		if len(provisionDiskTypes) == 0 {
 			log.Log.Errorf("CreateVolume:: node(%s) support type: [%v] is incompatible with provision disk type: [%s]", diskVol.NodeSelected, nodeSupportDiskType, allTypes)
-			return nil, nil, status.Errorf(codes.InvalidArgument, "CreateVolume:: node support type: [%v] is incompatible with provision disk type: [%s]", nodeSupportDiskType, allTypes)
+			return nil, nil, status.Errorf(codes.ResourceExhausted, "CreateVolume:: node support type: [%v] is incompatible with provision disk type: [%s]", nodeSupportDiskType, allTypes)
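For context on why ResourceExhausted is the code used here: with delayed binding (WaitForFirstConsumer), the CSI external-provisioner treats a final ResourceExhausted error from CreateVolume as a signal to reschedule, clearing the scheduler's selected-node annotation on the PVC so the pod and its volume can land on a different node. A simplified illustration of that reaction, not the provisioner's actual code:

```go
package driver

import (
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/status"
	corev1 "k8s.io/api/core/v1"
)

// annSelectedNode is the annotation the scheduler writes on a PVC under
// delayed binding to record which node the pod was placed on.
const annSelectedNode = "volume.kubernetes.io/selected-node"

// maybeReschedule sketches the provisioner-side reaction: on a
// ResourceExhausted provisioning error, drop the selected-node annotation
// so the scheduler picks a different node on its next pass. The caller
// would then update the PVC through the Kubernetes API.
func maybeReschedule(pvc *corev1.PersistentVolumeClaim, err error) bool {
	if status.Code(err) != codes.ResourceExhausted {
		return false
	}
	delete(pvc.Annotations, annSelectedNode)
	return true
}
```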
We can only reach this line if the CSI plugin was just installed and scheduling happened before UpdateNode() finished. Is that correct?
If so, we can eliminate this kind of race by using topology to constrain the disk type.
No, there are other scenarios. We'll leave it as it is for now.
It's okay. We'll go over your changes when you file your PR.
What type of PR is this?
/kind feature
What this PR does / why we need it:
Support rescheduling the disk with a ResourceExhausted error in delayed provisioning.
Which issue(s) this PR fixes:
None
Special notes for your reviewer:
None
Does this PR introduce a user-facing change?
Additional documentation e.g., KEPs (Kubernetes Enhancement Proposals), usage docs, etc.: