Some chan cleanups #864
Conversation
also changed limits to not be required (I am probably missing something but I can't figure out why they're required)
```diff
@@ -108,7 +108,7 @@ func withUserAgent(sess *session.Session) *session.Session {
 }
 
 // Create a node given the constraints.
-func (c *CloudProvider) Create(ctx context.Context, constraints *v1alpha5.Constraints, instanceTypes []cloudprovider.InstanceType, quantity int, callback func(*v1.Node) error) chan error {
+func (c *CloudProvider) Create(ctx context.Context, constraints *v1alpha5.Constraints, instanceTypes []cloudprovider.InstanceType, quantity int, callback func(*v1.Node) error) <-chan error {
```
The original purpose of making this a channel (as opposed to a callback) was to allow cloud providers to implement arbitrary batching behavior. Instead, we implemented batching in vendor-neutral binpacking code. I think we should sync with @bwagner5 on what the desired implementation strategy is.
I don't think this should necessarily block, but I wanted to bring it up since we're touching it.
I think we actually want to keep this channel and refactor the way it's used currently. Right now, we receive from it right after calling Create, which blocks per packing. Practically, it's not much overhead since the packing is batched now and we do have a concurrent work queue on schedules. But we could receive from the channels after we've called all the creates and actually utilize the concurrency this provides.
TL;DR: I think this is good :)
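To make that concrete, here is a minimal, self-contained sketch of the pattern (illustrative only; `startCreate` stands in for `CloudProvider.Create` and is not the actual Karpenter code): kick off every create first, then drain the returned error channels, so the underlying calls run concurrently instead of blocking one packing at a time.

```go
package main

import (
	"fmt"
	"time"
)

// startCreate stands in for a CloudProvider.Create call: it returns
// immediately with a receive-only channel, reports any error on it
// asynchronously, and closes the channel when it is done.
func startCreate(id int) <-chan error {
	errs := make(chan error, 1)
	go func() {
		defer close(errs)                  // the owner (writer) closes; readers only receive
		time.Sleep(100 * time.Millisecond) // simulate the cloud API call
		if id == 2 {
			errs <- fmt.Errorf("create %d failed", id)
		}
	}()
	return errs
}

func main() {
	// Kick off all creates first so they run concurrently...
	chans := make([]<-chan error, 0, 3)
	for i := 0; i < 3; i++ {
		chans = append(chans, startCreate(i))
	}
	// ...and only then drain the channels, collecting any errors.
	for _, errs := range chans {
		for err := range errs {
			fmt.Println("error:", err)
		}
	}
}
```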
```diff
@@ -45,6 +45,16 @@ type Launcher struct {
 	CloudProvider cloudprovider.CloudProvider
 }
 
+// Thread safe channel to pop off packed pod slices
+func queueFor(pods [][]*v1.Pod) <-chan []*v1.Pod {
```
What does this code do? Is it trying to avoid the latency of a for loop and start creating immediately? It's not actually clear how this works at all -- isn't the queue closed immediately?
The channel is loaded and then closed, since all items have already been placed on it. After it is closed, consumers can still dequeue pod batches; the channel just can't accept any more sends. So this is just a thread-safe queue so that the cloud provider can consume batches concurrently.
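A minimal, self-contained sketch of this pattern (it uses generic `[]string` batches instead of `[]*v1.Pod`, so it is illustrative only): the channel is buffered to the number of batches, loaded, and closed before anyone reads; closing only stops further sends, and the buffered items remain receivable by any number of consumers.

```go
package main

import (
	"fmt"
	"sync"
)

// queueFor loads every batch onto a buffered channel and closes it.
// Closing does not drop buffered items; it only prevents further sends,
// so consumers can keep dequeuing until the buffer is drained.
func queueFor(batches [][]string) <-chan []string {
	queue := make(chan []string, len(batches))
	defer close(queue)
	for _, batch := range batches {
		queue <- batch
	}
	return queue
}

func main() {
	queue := queueFor([][]string{{"pod-a", "pod-b"}, {"pod-c"}, {"pod-d", "pod-e"}})

	// Multiple consumers can pop batches off the closed channel concurrently;
	// each batch is delivered to exactly one of them.
	var wg sync.WaitGroup
	for worker := 0; worker < 2; worker++ {
		wg.Add(1)
		go func(id int) {
			defer wg.Done()
			for batch := range queue {
				fmt.Printf("consumer %d got batch %v\n", id, batch)
			}
		}(worker)
	}
	wg.Wait()
}
```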
```diff
@@ -42,7 +42,6 @@ func (s *ProvisionerSpec) validate(ctx context.Context) (errs *apis.FieldError)
 	return errs.Also(
 		s.validateTTLSecondsUntilExpired(),
 		s.validateTTLSecondsAfterEmpty(),
-		s.Limits.validateResourceLimits(),
```
🤩
lgtm
1. Issue, if available:
N/A
2. Description of changes:
Some minor cleanups to `chan` usage, per the Concurrency in Go book. The main thing is that it is good to close channels when possible, and it should be clear who owns a channel (usually the owner should be the only one able to write/close, and readers should only read), by using either `<-chan` or `chan<-` types; see the sketch after this description.
Also, I don't understand why provisioner limits are required, so reverted that. But I am probably missing something and someone will point out why :-)
3. Does this change impact docs?
By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.
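As a reference for the ownership convention mentioned in the description (a generic sketch, not Karpenter code): the producing goroutine is the only one allowed to send and close, and callers only ever see the receive-only side of the channel.

```go
package main

import "fmt"

// produce owns the channel: it is the only writer and is responsible for
// closing it. Inside this function the channel is seen as send-only (chan<-).
func produce(out chan<- int, n int) {
	defer close(out)
	for i := 0; i < n; i++ {
		out <- i
	}
}

// newCounter hides the bidirectional channel entirely: callers only ever get
// the receive-only (<-chan) side, so they cannot send or close by mistake.
func newCounter(n int) <-chan int {
	out := make(chan int)
	go produce(out, n)
	return out
}

func main() {
	for v := range newCounter(3) { // range exits once the owner closes
		fmt.Println(v)
	}
}
```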