Incorporate Build into Elafros. #36
Conversation
name = "everything", | ||
objects = [ | ||
":istio", # We depend on Istio. | ||
# TODO(mattmoor): Add the Build stuff here once we can import it properly. |
Is this not done?
No, this is for standing up the build elements, for which we need it vendored :(
@@ -17,6 +17,8 @@ limitations under the License.
package v1alpha1

import (
    buildv1alpha1 "github.com/google/elafros/pkg/apis/cloudbuild/v1alpha1"
Any reason not to just call this "build"?
}

// Checks whether the Revision knows whether the build is done.
func isBuildDone(u *v1alpha1.Revision) (done bool, failed bool) {
nit: (done, failed bool)
}

// For each of the revisions watching this build, mark their build phase as complete.
for k, _ := range s {
nit: "for k := range s"
},
}

// Ensure that the Revision status is updated.\
Trailing \
Done
(force-pushed from 2a4cf05 to 926d737)
This is exciting! I haven't reviewed everything yet - I'm up to revision controller, but I thought I'd submit my comments so far.
General question about the approach: why does the Revision controller have to track and wait for Build statuses?
What if, hypothetically, the Revision controller is able to predict or extract the target image uri associated with a Build resource. Then it could create a Deployment with that image url (even though the image doesn't exist), and the pods would run the wait loop for the image to appear. Would this accomplish the same thing as the Revision controller tracking Builds?
)

// +genclient
// +genclient:noStatus
I presume the noStatus is here because CRDs don't support status currently (though hopefully kubernetes/kubernetes#55168 will fix that). Can you add a comment explaining that and linking to the fix PR?
I'd rather not touch this file in this repo, since it's a pristine copy of the upstream. Once I can, I plan to vendor things, but wanted to unblock progress.
re: noStatus, I've just been carrying this over from the original sample.
LastTransitionTime metav1.Time `json:"lastTransitionTime,omitempty" description:"last time the condition transit from one status to another"`

// +optional
Reason string `json:"reason,omitempty" description:"one-word CamelCase reason for the condition's last transition"`
This seems like it could have its own type similar to BuildTemplateConditionType, so the list of possible Reason values is documented. I see the dev conventions doc shows it as a string though.
I think that would make sense if Reason were specific to a particular Type, but it seems strange to me to mix strings from different types in the same enum. I'm also not sure that folks should be treating this as a discrete enum, vs. a string with unbounded options.
I'd also rather not touch this file here, so keep it a pristine copy from the other repo (until we vendor).
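To make the trade-off concrete, here is a hypothetical sketch of the typed-Reason alternative; the BuildConditionReason name and the listed values are illustrative, not types from this PR.

```go
package v1alpha1

// Hypothetical sketch only: a defined string type documents the known Reason
// values, at the cost of implying a closed enum shared across condition types.
type BuildConditionReason string

const (
	ReasonBuilding  BuildConditionReason = "Building"
	ReasonCompleted BuildConditionReason = "Completed"
	ReasonFailed    BuildConditionReason = "Failed"
)

type BuildCondition struct {
	// ... other fields as in the diff above ...

	// Typed variant of the Reason field under discussion.
	// +optional
	Reason BuildConditionReason `json:"reason,omitempty"`
}
```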
)

// +genclient
// +genclient:noStatus
Same question as above re noStatus
type BuildSpec struct {
    Source *SourceSpec `json:"source,omitempty"`
    Steps []corev1.Container `json:"steps,omitempty"`
    // TODO(mattmoor): Remove Images
Can you expand on why Images should be removed?
Image is syntactic sugar. In the new world, users should just have push steps.

// BuildSpec is the spec for a Build resource
type BuildSpec struct {
    Source *SourceSpec `json:"source,omitempty"`
Why is this a pointer (also the members of SourceSpec)? I imagine most of the methods that deal with the enclosing objects are already using references at the top level (but I could be wrong). It's not that I'm opposed to pointers here, but their use seems a bit inconsistent (e.g. Template is not a pointer).
Serialization won't omit empty objects.
Template should probably be a pointer; I'm sending a CL to fix that, but it should be cosmetic here.
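To make the serialization point concrete, here is a small self-contained sketch (the type names are illustrative, not the PR's types): with omitempty, a nil pointer is dropped from the JSON entirely, while a struct value is always emitted, even when zero.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type SourceSpec struct {
	URL string `json:"url,omitempty"`
}

type withValue struct {
	Source SourceSpec `json:"source,omitempty"` // struct value: never considered "empty"
}

type withPointer struct {
	Source *SourceSpec `json:"source,omitempty"` // nil pointer: omitted entirely
}

func main() {
	v, _ := json.Marshal(withValue{})
	p, _ := json.Marshal(withPointer{})
	fmt.Println(string(v)) // {"source":{}}
	fmt.Println(string(p)) // {}
}
```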
})
// Let this trigger a reconciliation loop.
if _, err := c.updateStatus(u); err != nil {
    log.Printf("Error recording the BuildComplete=False condition: %s", err)
Should probably use glog.Errorf here.
Done
// mux guards modifications
mux sync.Mutex
// The collection of outstanding builds and the sets of revisions waiting for them.
builds map[key]set
It might be cleaner to wrap all the build tracking semantics into a single BuildTracker struct containing the map, mutex, key/set manipulation functions, and higher-level track/check functions.
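A rough sketch of what such a BuildTracker could look like; the key/set aliases mirror the fields quoted above, but the method names and exact semantics are assumptions (the buildtracker.go added later in this PR may differ).

```go
package revision

import "sync"

type key string
type set map[key]struct{}

// BuildTracker owns the build-tracking state: which Revisions are waiting on
// which outstanding Builds, guarded by a single mutex.
type BuildTracker struct {
	mux    sync.Mutex
	builds map[key]set
}

func NewBuildTracker() *BuildTracker {
	return &BuildTracker{builds: map[key]set{}}
}

// Track records that the Revision identified by rev is waiting on build.
func (bt *BuildTracker) Track(build, rev key) {
	bt.mux.Lock()
	defer bt.mux.Unlock()
	if bt.builds[build] == nil {
		bt.builds[build] = set{}
	}
	bt.builds[build][rev] = struct{}{}
}

// Untrack forgets rev for every build it was waiting on.
func (bt *BuildTracker) Untrack(rev key) {
	bt.mux.Lock()
	defer bt.mux.Unlock()
	for _, revs := range bt.builds {
		delete(revs, rev)
	}
}

// WaitingOn returns a copy of the set of Revisions waiting on build.
func (bt *BuildTracker) WaitingOn(build key) set {
	bt.mux.Lock()
	defer bt.mux.Unlock()
	out := set{}
	for r := range bt.builds[build] {
		out[r] = struct{}{}
	}
	return out
}
```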
go func(k key) {
    // Look up the revision to mark complete.
    namespace, name := splitKey(k)
    hr, err := c.lister.Revisions(namespace).Get(name)
This variable should not be called hr (I recommend rev).
for k, _ := range s {
    go func(k key) {
        // Look up the revision to mark complete.
        namespace, name := splitKey(k)
Elsewhere in the code we use cache.SplitMetaNamespaceKey for this.
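For reference, a minimal sketch of the suggestion, swapping the local splitKey helper for client-go's cache.SplitMetaNamespaceKey; the wrapper function and logging are illustrative only.

```go
package revision

import (
	"github.com/golang/glog"
	"k8s.io/client-go/tools/cache"
)

// markWaitingRevision is a hypothetical wrapper showing the suggested call:
// cache.SplitMetaNamespaceKey parses a "namespace/name" key into its parts
// and also reports malformed keys.
func markWaitingRevision(k string) {
	namespace, name, err := cache.SplitMetaNamespaceKey(k)
	if err != nil {
		glog.Errorf("Invalid resource key %q: %v", k, err)
		return
	}
	glog.Infof("Marking build complete for revision %s/%s", namespace, name)
}
```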
if err := c.markBuildComplete(hr, cond); err != nil {
    glog.Errorf("Error marking build completion for '%s/%s': %v", namespace, name, err)
}
}(k)
Based on my reading of https://github.com/kubernetes/client-go/blob/9f7db9794b9bd0183e59f73486b930e3733ff4c0/rest/config.go#L107, the default timeout for api calls is infinite. I don't think we're setting it (but I could be wrong).
Seems like this could leak goroutines if the maximum lifetime of a goroutine is longer than the 30 second resync period. We should set a lower timeout to avoid that.
This feels like something we should perhaps adopt globally? If not I may be misunderstanding your comment.
I agree we should probably adopt timeouts globally, depending on what the standard practice is.
To expand on my earlier vague notions: this is the only place (that I know of) where we start one or more goroutines without waiting for them to complete. If those goroutines block forever, they'll leak resources since they can't be stopped otherwise. Plus, the next time those resources are synced, we'll run this code again (since removing elements from builds requires these requests to succeed) and create more goroutines attempting to mark builds complete, and those will also leak. This could eventually lead to out of memory errors.
My understanding of the way this code works may be wrong, or I may be wrong about go's behavior, in which case please correct me!
Hmm, looking at this with fresh eyes, I see what you're seeing. This is clearly a gratuitous goroutine, which I added for .
I've simply removed it, thanks for catching.
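On the broader point about adopting timeouts globally, one hedged option (not something this PR does) is to cap every API call via the rest.Config used to build the clientsets; the function and the 20-second value below are purely illustrative.

```go
package controller

import (
	"time"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

// newClientset builds an in-cluster clientset whose requests give up after a
// bounded time instead of the default of no timeout.
func newClientset() (kubernetes.Interface, error) {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		return nil, err
	}
	cfg.Timeout = 20 * time.Second // illustrative value; zero means no timeout
	return kubernetes.NewForConfig(cfg)
}
```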
Just a couple of questions about optional fields. Plus GitHub is not pulling all the diffs atm.

Volumes []corev1.Volume `json:"volumes,omitempty"`

// Template, if specified,references a BuildTemplate resource to use to
BuildTemplate -> TemplateInstantiationSpec, or ?
No, BuildTemplate is the resource, and TemplateInstantiationSpec is the instantiation.
Think function vs. call.
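To illustrate the function-vs-call analogy, here is a hedged sketch of a Build spec instantiating a BuildTemplate by name; the import path is the de-mangled form of the one quoted in the diff above, and the Template pointer, Arguments slice, Value field, and "docker-build" template are all assumptions for illustration.

```go
package example

import buildv1alpha1 "github.com/google/elafros/pkg/apis/cloudbuild/v1alpha1"

// newBuildSpec sketches instantiating a hypothetical "docker-build"
// BuildTemplate ("the function") from a Build's spec ("the call").
func newBuildSpec() buildv1alpha1.BuildSpec {
	return buildv1alpha1.BuildSpec{
		Template: &buildv1alpha1.TemplateInstantiationSpec{
			Name: "docker-build",
			Arguments: []buildv1alpha1.ArgumentSpec{
				{Name: "IMAGE", Value: "gcr.io/example/app"},
			},
		},
	}
}
```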

type TemplateInstantiationSpec struct {
    // Name references the BuildTemplate resource to use.
    Name string `json:"name,omitempty"`
Can this really be empty?
No. I'm generally pretty aggressive with omitempty for serialization. We have validation logic that checks this.
I need to dig into this (and maybe this does not apply to CRDs), but I thought that if you tagged something as omitempty, validation would not complain if it was missing. Treat this as a vicious rumor at best until I can find something.
I'm on a bumpy bus and having a hard time typing, much less searching anything. Here's a pointer to something on the matter:
kubernetes/kubernetes#25460
That's my standard operating mode ;-)
If there is something like this, I'd love to know about it.
Well, I have found some random places that actually give hints about this, but frankly I'm not going to argue too much about this on hazy recollection :) However, if this field can never be empty, then what is the benefit of having omitempty there? To me, when I see omitempty it implies that it could be empty (hence the question).
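As a concrete (hypothetical) illustration of the split being described: omitempty only governs serialization, while required-ness lives in separate validation logic, along these lines (the helper name is made up and the real validation in this PR may differ).

```go
package v1alpha1

import "fmt"

// validateTemplate sketches the kind of check that keeps Name effectively
// required even though the field carries omitempty for serialization.
func validateTemplate(t *TemplateInstantiationSpec) error {
	if t == nil {
		return nil // no template requested at all
	}
	if t.Name == "" {
		return fmt.Errorf("template.name must be set when a template is specified")
	}
	return nil
}
```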
// ArgumentSpec defines the actual values to use to populate a template's
// parameters.
type ArgumentSpec struct {
    Name string `json:"name,omitempty"`
Same here: can these be empty?
Same answer. Is there overarching guidance on the use of omitempty we should be aware of?
Again, when I see omitempty, to me it implies that it can be empty.

// Additional information based on the Builder executing this build.
Cluster *ClusterSpec `json:"cluster,omitempty"`
Google *GoogleSpec `json:"google,omitempty"`
Typically, if some sub-field of these is optional, my understanding has been that it should be a pointer rather than an empty struct.
(force-pushed from 926d737 to c271f40)
c.addBuildEvent(new)
}

// reconcileOnceBuilt handles enqueued messages that have an image.
There's a lot of build specific stuff in the controller now. Could we pull it into a build.go file in this directory?
Per @grantr's feedback elsewhere, I've created a buildtracker.go.
(force-pushed from c271f40 to 0d2ac5a)
Just a couple of clarifications that I think will make the flow easier to understand later on.
}
return nil
} else {
// The Build's complete, so stop tracking it, and keep going.
What does "keep going" mean here? I'm trying to figure out what happens in this case. My expectation is that none of the other k8s resources are created, but it's not obvious to me from here.
I missed this, pushing a clarification now.
func (c *RevisionControllerImpl) markBuildComplete(u *v1alpha1.Revision, bc *buildv1alpha1.BuildCondition) error {
    switch bc.Type {
    case buildv1alpha1.BuildComplete:
        setCondition(u, "BuildFailed", nil)
Reading this, I assumed that setting something to nil means it gets 'removed' from the conditions if it already existed there, but this was not obvious to me. Perhaps add a comment to the setCondition method (since, if I read the code correctly, it actually removes a condition if it existed)? Just a thought.
Yeah, I'm also thinking the right place for some of these methods is in foo_types.go or something in the same package (where they are methods on the Foo[Status]). WDYT?
I think that's fine, and I think it's actually fine to do in a followup (refactoring). Just think it will make it easier to come back to this code or to a newcomer.
@@ -640,6 +759,19 @@ func (c *RevisionControllerImpl) removeFinalizers(u *v1alpha1.Revision, ns strin
return nil
}

func setCondition(u *v1alpha1.Revision, t string, new *v1alpha1.RevisionCondition) {
re: previous comment, maybe add a comment that if new is nil any existing condition is removed? Or I misunderstand what this does.
You're right, I'll add a removeCondition (or a method, depending on what you think of that).
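A hedged sketch of the resulting pair, written as it might sit next to setCondition in the revision controller (v1alpha1 is the Revision API package that file already imports, and the Status.Conditions slice layout is an assumption): setCondition replaces any existing condition of the same Type, and removeCondition drops it, which is what passing nil effectively did.

```go
// setCondition replaces any existing condition with the same Type; passing a
// nil condition reduces to removeCondition.
func setCondition(u *v1alpha1.Revision, t string, new *v1alpha1.RevisionCondition) {
	removeCondition(u, t)
	if new != nil {
		u.Status.Conditions = append(u.Status.Conditions, *new)
	}
}

// removeCondition drops any condition with the given Type from the Revision.
func removeCondition(u *v1alpha1.Revision, t string) {
	conds := []v1alpha1.RevisionCondition{}
	for _, c := range u.Status.Conditions {
		if string(c.Type) != t {
			conds = append(conds, c)
		}
	}
	u.Status.Conditions = conds
}
```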
I can't comment on the
(force-pushed from d5d1c8b to 9ec03fd)
Ok, I think that most of the changes to the
(force-pushed from 9ec03fd to 8a81daa)
(force-pushed from 8a81daa to c9c2e0d)
I opened #44 to track refactoring pure-data methods like
// The Build's complete, so stop tracking it.
c.buildtracker.Untrack(rev)
if failed {
    return nil
I feel like there should be a state on the revision that indicates to the user that the build has failed and no further work will be done on this resource. I think it happens elsewhere, but not entirely sure.
Yeah, that's in fact what this is keying off of. isBuildDone returns failed = true when it sees the BuildFailed condition, and then it stops tracking the build.
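For readers tracing the flow, a hedged reconstruction of what isBuildDone keys off of (the actual body in this PR may differ; the condition field types are assumptions):

```go
// isBuildDone scans the build-related conditions mirrored onto the Revision.
// done reports that the build finished either way; failed reports that it
// finished unsuccessfully.
func isBuildDone(u *v1alpha1.Revision) (done, failed bool) {
	for _, c := range u.Status.Conditions {
		if string(c.Status) != "True" {
			continue
		}
		switch string(c.Type) {
		case "BuildComplete":
			return true, false
		case "BuildFailed":
			return true, true
		}
	}
	return false, false
}
```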
@grantr I think I addressed / responded to your comments, but do you have anything else?
@mattmoor I clarified my goroutine question, and added a few more comments to later files.
I'd still like to better understand the reasoning behind this approach (see my question in #36 (review)) but that doesn't have to block this PR. We can talk about it in person if that's easier.
@@ -168,49 +168,298 @@ func TestCreateHRCreatesStuff(t *testing.T) {
return hooks.HookComplete
})

// Look for the nginx configmap.
Was this intentionally removed or is it a merge casualty?
This was a casualty of how I moved changes from the other repo. Thanks for catching.
@@ -258,6 +261,8 @@ func (c *Controller) syncHandler(key string) error {

return err
}
// Don't modify the informer's copy.
rt = rt.DeepCopy()
Thanks for catching these!
Ack
pkg/controller/testing/hooks_test.go (Outdated)
@@ -27,7 +27,7 @@ import (
"k8s.io/client-go/kubernetes/fake"
)

-func Example_hooks() {
+func TestHooks(t *testing.T) {
This is named Example_ so it shows up in generated documentation. It gets run like a normal test, and the output is verified against the magic comments at the end. https://blog.golang.org/examples
Interesting. TIL :)
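A tiny, generic illustration of the convention (unrelated to the real hooks_test.go): go test runs the example like a test and compares its stdout against the trailing Output comment, and godoc renders it as documentation.

```go
package hooks_test

import "fmt"

// Example_hooks passes only if the printed output matches the Output comment.
func Example_hooks() {
	fmt.Println("hook fired")
	// Output: hook fired
}
```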
h := NewHooks()
f := fake.NewSimpleClientset()

updates := 0
This seems like a useful pattern! An improvement for a future PR might be to add this capability to the Hooks struct.
Yeah, it took me a while to get it to work, and I added this because I didn't think it did :)
Turns out it does and my bug was elsewhere, but I wanted to make sure it didn't unknowingly regress.
General question about the approach: why does the Revision controller have to track and wait for Build statuses?
What if, hypothetically, the Revision controller is able to predict or extract the target image uri associated with a Build resource. Then it could create a Deployment with that image url (even though the image doesn't exist), and the pods would run the wait loop for the image to appear. Would this accomplish the same thing as the Revision controller tracking Builds?
On some level, it could work that way. However, if the build fails, it'd be good for the Revision to know and reflect that its associated build failed and give up (vs. just never becoming ready).
This is especially true when the "wait loop" involves a DDoS of the registry that's supposed to host the image. Also, what if the user rebuilds but doesn't change the tag (this has other interesting implications)?
We can be a bit more tactical about getting some resources ready in parallel with the build, but the deployment should wait for the image to be ready.
Frankly, if the kubelet hits an image-pull-back-off (built in to avoid registry DDoS), precreating the deployment could actually slow down the deployment process.
Should be RFAL. LMK if I missed anything else.
Thanks, this makes sense!
This copies over the Build type definitions and generates clients for them.
This incorporates BuildSpec into RevisionTemplate and BuildName into Revision.
A RevisionTemplate with a BuildSpec will stamp out a Build during reconciliation, and feed the resulting BuildName to the Revision.
The Revision will reflect the Build's progress in its Status, and stand up resources upon completion.
Fixes: #7