Upgrade to V2 API #37

Closed
drewbailey opened this issue Jun 29, 2020 · 9 comments

Comments

@drewbailey (Contributor)

The Podman varlink API is no longer supported.

The V2 HTTP API is where all future effort and bugfixes will be implemented.

We need to decide whether to consume their swagger spec and generate a swagger client, or create our own API client. I'm inclined to do the latter because the generated swagger client doesn't provide a lot of control and requires us to navigate through some interesting generated code.

Issues that are fixed in the V2 API:
containers/podman#4560
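
For reference, talking to the V2 HTTP API directly only needs an http.Client dialed over the podman unix socket. A minimal sketch follows; the socket path is the default rootful location and the Docker-compatible _ping route is used as the simplest liveness check (both are assumptions about the local setup):

	// Minimal sketch: ping the podman V2 API over its unix socket.
	package main

	import (
		"context"
		"fmt"
		"io/ioutil"
		"net"
		"net/http"
	)

	func main() {
		const sock = "/run/podman/podman.sock" // default rootful path, adjust as needed

		client := &http.Client{
			Transport: &http.Transport{
				// Route every request through the unix socket instead of TCP.
				DialContext: func(ctx context.Context, _, _ string) (net.Conn, error) {
					return (&net.Dialer{}).DialContext(ctx, "unix", sock)
				},
			},
		}

		// The host part of the URL is ignored; only the path matters.
		resp, err := client.Get("http://unix/_ping")
		if err != nil {
			panic(err)
		}
		defer resp.Body.Close()

		body, _ := ioutil.ReadAll(resp.Body)
		fmt.Printf("%s: %s\n", resp.Status, body)
	}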

@drewbailey (Contributor, Author)

I think using an auto-generated client based off a swagger spec is out; the generated code won't suffice and isn't usable in a production setting.

Rolling our own API client is going to be time consuming. I think we should look into using https://godoc.org/github.com/containers/libpod/pkg/bindings, which is what podman uses for its remote-client interactions. From a quick look it seems to support most of the API, but it has a few small usability quirks.

The connection to the client is held in the context and must be retrieved and error-checked on each invocation:

	// every bindings call must first unpack the client from the context
	conn, err := bindings.GetClient(ctx)
	if err != nil {
		return nil, err
	}
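
For completeness, the connection itself is created once with bindings.NewConnection, which returns the context that every later call has to unpack again. A sketch, assuming the libpod import path as of mid-2020 and the default socket URI:

	import (
		"context"

		"github.com/containers/libpod/pkg/bindings"
	)

	func connect() (context.Context, error) {
		// NewConnection dials the socket and stashes the client in the
		// returned context; every bindings call then takes that context.
		return bindings.NewConnection(context.Background(),
			"unix:///run/podman/podman.sock")
	}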

@towe75 (Collaborator) commented Jul 10, 2020

pkg/bindings does not live in a separate Go module, so we might drag in the complete podman project as a dependency.
Aside from that, it's worth a try.

@drewbailey (Contributor, Author)

Yeah, that part is really unfortunate. I think ultimately making our own API client may be better; maybe others in the community would benefit as well. Maybe containers/podman#6866 will get addressed and they will create a module.

@towe75 (Collaborator) commented Jul 11, 2020

Most of the work will surely be handcrafting all the payload types. But maybe we can borrow the needed types from https://github.com/containers/podman/blob/e38001f123a04f905a37bf038b2c983ebe350996/pkg/specgen/specgen.go?
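
For illustration, a borrowed-and-trimmed copy might keep only the attributes we actually set. The JSON tags below mirror the upstream specgen fields, but the exact subset is a guess:

	// SpecGenerator is a hand-copied subset of podman's
	// specgen.SpecGenerator; fields we never populate are dropped, and
	// unknown fields in responses are ignored by encoding/json anyway.
	type SpecGenerator struct {
		Name    string            `json:"name,omitempty"`
		Image   string            `json:"image,omitempty"`
		Command []string          `json:"command,omitempty"`
		Env     map[string]string `json:"env,omitempty"`
		WorkDir string            `json:"work_dir,omitempty"`
	}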

@towe75 (Collaborator) commented Jul 12, 2020

@drewbailey: I spent some time on this problem and learned a few lessons:

  • You are totally right about the swagger codegen approach; it's not feasible.
  • Pulling in (go-get'ing) or raw-copying some podman structures quickly turns into a nightmare.
  • Having a dependency on the full podman project smells.
  • Implementing a few simple endpoints like "stats", "start", or "stop" was a no-brainer (see the sketch after this list).
  • We use a range of endpoints, but we usually leverage only a small number of features/attributes, so we could stick to partially implemented types.
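
A sketch of how small such an endpoint can be, assuming a handcrafted client; the c.Post helper and the versioned route prefix are both hypothetical:

	import (
		"context"
		"fmt"
		"net/http"
	)

	// API is the hypothetical handcrafted client; its Post helper is
	// assumed to issue requests over the podman unix socket.
	func (c *API) ContainerStop(ctx context.Context, name string, timeout int) error {
		path := fmt.Sprintf("/v1.0.0/libpod/containers/%s/stop?t=%d", name, timeout)
		res, err := c.Post(ctx, path, nil)
		if err != nil {
			return err
		}
		defer res.Body.Close()

		// podman answers 204 No Content on a successful stop
		if res.StatusCode != http.StatusNoContent {
			return fmt.Errorf("cannot stop container %s, status %d", name, res.StatusCode)
		}
		return nil
	}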

All in all, I suggest adding an embedded, handcrafted client package tailored to our needs instead of going all-in on a generic podman client.

If you agree, I would start a branch and a WIP PR with a tick-list for each endpoint. Then I will refactor a few calls from client.go into the new package to get started. During the transition phase the plugin could simply use both varlink and API v2, so that we always have working unit tests.

One question before I start: the varlink-based client.go has a retry mechanism in case of network failures. Do you think we should have this for the new API as well?

@drewbailey (Contributor, Author)

Hey @towe75, thanks for spending some time on this. I think our own client is our best bet as well.

One thing that could be useful, at least to copy from, is the generated swagger models/structs. I pushed a branch here if you want to take a look: https://github.com/hashicorp/nomad-driver-podman/tree/experiment/swagger-structs

A retry mechanism sounds good to me. The docker driver checks against a set of known transient errors and retries for certain functions; something like that, where we only retry on known issues instead of retrying all failures, sounds ideal.
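
A minimal sketch of that idea; the transient-error strings and helper names are invented for illustration:

	import (
		"strings"
		"time"
	)

	// isTransient reports whether an error belongs to a known set of
	// recoverable failures; only these are worth retrying.
	func isTransient(err error) bool {
		for _, s := range []string{"connection refused", "i/o timeout", "EOF"} {
			if strings.Contains(err.Error(), s) {
				return true
			}
		}
		return false
	}

	// withRetry retries fn a few times, but only on known transient
	// errors, so that real API failures surface immediately.
	func withRetry(attempts int, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil || !isTransient(err) {
				return err
			}
			time.Sleep(time.Duration(i+1) * 500 * time.Millisecond)
		}
		return err
	}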

@towe75 mentioned this issue Jul 20, 2020
@drewbailey (Contributor, Author)

Resolved by #51.

@computator

Perhaps a new binary release should be made since this is a rather major change?

@drewbailey (Contributor, Author)

@computator we are planning on cutting a release soon; we want to at least get #70 fixed before doing so.
