control: Add ResolveImageConfig() #472
Conversation
This is consistent with the gateway and frontend APIs. The benefit of a server-side lookup is that the cache outlives a single client invocation.

Signed-off-by: Ian Campbell <[email protected]>
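For illustration, here is a minimal sketch (assumed names and signatures, not this PR's actual code) of the kind of client-side call such a control API method would enable, with the registry lookup and its cache living in the daemon:

```go
package example

import (
	"context"
	"fmt"
)

// controlClient is a stand-in for the buildkit control client with the
// proposed server-side lookup; the method signature here is modelled on the
// gateway's ResolveImageConfig but is an assumption for this sketch.
type controlClient interface {
	ResolveImageConfig(ctx context.Context, ref string) (digest string, config []byte, err error)
}

// printDigest shows the intended usage from a one-shot build tool: the
// daemon performs the registry lookup, so its cache survives across
// separate client invocations.
func printDigest(ctx context.Context, c controlClient, ref string) error {
	dgst, _, err := c.ResolveImageConfig(ctx, ref)
	if err != nil {
		return fmt.Errorf("resolving %s: %w", ref, err)
	}
	fmt.Printf("%s => %s\n", ref, dgst)
	return nil
}
```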
I agree. The problem with the suggested solution is that it does not work on private images.

If this is the only benefit, it could also be achieved with a client-side cache.
That's what we have today; the problem is that the cache only lives as long as the client invocation, so two consecutive back-to-back runs do not benefit from it. I'm talking about a "one shot" build-tool style of client rather than e.g. the docker engine, where the client persists over multiple builds. I suppose the client could build a persistent cache on disk somewhere; perhaps helpers in the client library could be provided for that? Is the issue with private images not the same for the …? Does the underlying …?
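For the persistent client-side cache idea above, a rough sketch of what such a helper might look like (entirely illustrative; no such helper exists in the client library):

```go
package example

import (
	"crypto/sha256"
	"encoding/hex"
	"os"
	"path/filepath"
)

// diskCache is an illustrative on-disk cache for resolved image configs,
// keyed by image reference, so that a one-shot client can reuse results
// across invocations. Layout and naming are made up for this sketch.
type diskCache struct {
	dir string
}

// path derives a stable file name for a reference.
func (c diskCache) path(ref string) string {
	sum := sha256.Sum256([]byte(ref))
	return filepath.Join(c.dir, hex.EncodeToString(sum[:]))
}

// Get returns a previously stored config, or ok=false on a cache miss.
func (c diskCache) Get(ref string) (config []byte, ok bool) {
	b, err := os.ReadFile(c.path(ref))
	if err != nil {
		return nil, false
	}
	return b, true
}

// Put stores a resolved config so the next invocation can reuse it.
func (c diskCache) Put(ref string, config []byte) error {
	if err := os.MkdirAll(c.dir, 0o755); err != nil {
		return err
	}
	return os.WriteFile(c.path(ref), config, 0o644)
}
```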
No, in the gateway client, private images work automatically through the build session.
If you are thinking about the …
OK. Perhaps the answer is to similarly attach a session to the …
Does "clean" here imply "unauthenticated" or something else? The daemon side equivalent is the one in
was doing, although maybe I've followed an |
Yes. That is something I meant by getting access to the …
Yes, unauthenticated. I meant doing a pull with the default containerd client functions directly. Another point is resolving local images, which the "clean" approach can't do but this probably should.
I think that would be awesome! I've been trying to keep my client code in a state where flipping to using a frontend wouldn't be too much effort, though so far quite incompletely. The main sticking point I've noticed there is …
@tonistiigi I had originally been assuming you were talking about a client-side wrapper for the existing … I'd like to take a look at this but don't want to go down the wrong path, since they are rather different implementation-wise.
We could do it in two ways. The first would be to add methods like … to the control API. It should be possible to implement …

The second option would be to actually add the full …

I'm not fully convinced which method is better, so suggestions are welcome. A benefit of the second one could be that we get version support for free from the gateway API.
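To make the two options concrete, a rough Go sketch (the interface and method names are placeholders, not actual buildkit definitions):

```go
package example

import "context"

// Option 1 (sketch): lift individual lookups such as ResolveImageConfig
// directly onto the control API, one helper RPC at a time.
type controlAPI interface {
	ResolveImageConfig(ctx context.Context, ref string) (digest string, config []byte, err error)
}

// Option 2 (sketch): expose the whole gateway interface behind the control
// endpoint, so a regular client drives a build the way a frontend does and
// inherits the gateway API's versioning. Only one method is shown here.
type fullGateway interface {
	ResolveImageConfig(ctx context.Context, ref string) (digest string, config []byte, err error)
	// ...Solve and the rest of the gateway methods would also be exposed.
}
```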
That's the trickiest bit, I think. It looks like it should be possible for all the state about a …

Given that, at the Go level I think I would most naturally expect to be able to do something like … to get a … It seems like it would be easiest if the …

I was also considering whether it would be possible to dynamically add new gRPC servers on the server side, corresponding to the creation of a new "buildID", and have a client to that be returned (kind of like the inverse of the FSSync mechanism). That seems more complex than either of what you proposed, though.
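Roughly the sort of thing meant here, with made-up names (NewBuild, the returned handle, and the cleanup func are all hypothetical):

```go
package example

import (
	"context"
	"errors"
)

// gatewayHandle stands in for a gateway-like client whose calls are scoped
// to one in-flight build on the server.
type gatewayHandle interface {
	ResolveImageConfig(ctx context.Context, ref string) (digest string, config []byte, err error)
}

// buildClient stands in for the buildkit control client in this sketch.
type buildClient struct{}

// NewBuild is a made-up method: start a build on the daemon and get back a
// handle (plus a release func) tied to that build's ID, so its state
// outlives a single RPC but not the build itself.
func (c *buildClient) NewBuild(ctx context.Context) (gatewayHandle, func() error, error) {
	return nil, nil, errors.New("sketch only: a real implementation would call the control API")
}
```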
Another way to accomplish this would be to associate a timeout with the …
I'm not sure I understand this. Is this the second option? If yes, I don't understand the complications.
Problems with these solutions are that they are hard to document, as they work on a hijacked stream. If it is possible to avoid that, I'd just expose the gRPC services directly. E.g. adding a buildID to the context of the gateway methods seems much cleaner than this.
By "adding buildID to the context" I mean adding it to the grpc metadata https://godoc.org/google.golang.org/grpc/metadata that is transported through the context. Not that the user itself needs to pass it along when making the gateway queries. |
It was based on a misunderstanding of the second option (I thought you meant …). I'll have a go at hacking up something based on gRPC contexts over the next few days.
Played with this a bit.

In order to satisfy/implement the gateway variant of … Then the existing controlapi … I've not actually tried this yet, since I've mostly been playing with your other alternative.

There are some RPC name clashes (the most obvious being …). At first I had been trying to make the … I'll try the …
There is a …
The latter. The receiver would validate the presence of the buildid in the metadata before continuing.
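A sketch of that receiver-side check as a unary server interceptor (the key name and the interceptor approach are assumptions here, not the branch's actual code):

```go
package example

import (
	"context"

	"google.golang.org/grpc"
	"google.golang.org/grpc/codes"
	"google.golang.org/grpc/metadata"
	"google.golang.org/grpc/status"
)

// requireBuildID rejects gateway calls that arrive without a buildid in the
// incoming gRPC metadata, so only requests scoped to a known build proceed.
func requireBuildID(ctx context.Context, req interface{}, info *grpc.UnaryServerInfo, handler grpc.UnaryHandler) (interface{}, error) {
	md, ok := metadata.FromIncomingContext(ctx)
	if !ok || len(md.Get("buildid")) == 0 {
		return nil, status.Error(codes.InvalidArgument, "no buildid in request metadata")
	}
	return handler(ctx, req)
}
```

It could be registered via grpc.UnaryInterceptor when constructing the server, if this approach were taken.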
@tonistiigi The head commit of my ijc/client-gateway branch is a first cut of your second proposal. Setup and use only, no teardown nor many of the other bits you'd need (no filesync, no opts), but WDYT, does the basic shape pass the initial sniff test?
@ijc Basically yes, but for the client public API I'd expect something like:
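Something along these lines, as a rough guess at the shape (the names and exact signature are assumptions, not the actual API):

```go
package example

import "context"

// result and gatewayConn stand in for the gateway package's Result type and
// client interface in this sketch.
type result struct{}

type gatewayConn interface {
	ResolveImageConfig(ctx context.Context, ref string) (digest string, config []byte, err error)
}

// buildFunc is the user-provided callback that drives the build through the
// per-build gateway connection and hands back a result.
type buildFunc func(ctx context.Context, gw gatewayConn) (*result, error)

// Build runs fn against a per-build gateway; when fn returns, the result is
// exported and its refs released automatically by the client library.
func Build(ctx context.Context, fn buildFunc) (*result, error) {
	// A real implementation would start the build, connect the gateway,
	// call fn, export the returned result, and release references.
	return nil, nil
}
```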
The returned result would be exported and refs automatically released. We may want to leave options to add something like …
I see:

```proto
service LLBBridge {
  // ...
  rpc Return(ReturnRequest) returns (ReturnResponse);
}

message Result {
  oneof result {
    string ref = 1;
    RefMap refs = 2;
  }
  map<string, bytes> metadata = 10;
}

// ...

message ReturnRequest {
  Result result = 1;
  google.rpc.Status error = 2;
}

message ReturnResponse {
}
```

But I don't see where the exporter stuff is in that, nor can I see anything like that in the backend code. In particular, the only place I see …
The … We could either change the …
So far there is no "main solver" in my code, so there is nothing at the moment which would act on any export requests. At the point where I've called the user-provided … Should I be calling …? I've pushed my current state to https://github.com/ijc/buildkit/tree/client-gateway (it's several commits, though, and very hacky).
Ah, I think I need to reuse the tail of …
I got something working and pushed it to the branch. Ugly/WIP as all hell though, will clean up next week.
Obsoleted by #533