
[v4.5] backport my fixes from the past few weeks #18663

Merged
merged 11 commits into containers:v4.5 from v4.5-backports
May 24, 2023

Conversation

Luap99
Member

@Luap99 Luap99 commented May 23, 2023

see commits

Does this PR introduce a user-facing change?

Allow a comma-separated list of DNS servers in `podman network create --dns` and `podman network update --dns-add/--dns-drop`.
Fixed a problem where podman machine connections would try to connect to the IPv6 localhost address (`::1`).
The compat API now correctly accepts a tag in the `images/create?fromSrc` endpoint.
Fixed a bug in the compat container create endpoint which could result in a "duplicate mount destination" error when the volume path was not "clean", e.g. included a trailing slash.
podman machine rm with the qemu backend now correctly removes the machine connection after the confirmation prompt, not before.
podman save on Windows can now write to stdout by default.
The remote API exec inspect call now correctly displays updated information, e.g. when the exec process has died.
Correctly tear down the network stack again when an error happens during setup.
podman-remote logs now correctly displays errors reported by the server.
podman stop will now correctly stop containers created with --restart=always in all cases.

Luap99 added 7 commits May 23, 2023 16:43
Commit 1ab833f improved the situation but it is still not enough.
If you run short-lived containers with --restart=always, podman is
basically permanently restarting them. The only way to stop this is
podman stop. However, podman stop does not do anything when the
container is already in a non-running state. While this makes sense, we
should still mark the container as explicitly stopped by the user.

Together with the change in shouldRestart(), which now checks for
StoppedByUser, this makes sure the cleanup process is not going to start
it back up again.
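
As a rough illustration, here is a minimal Go sketch of that interaction
(hypothetical types and helper names, not podman's actual code):
```
package main

import "fmt"

// ContainerState is a hypothetical stand-in for the state podman tracks.
type ContainerState struct {
	Running       bool
	RestartPolicy string
	StoppedByUser bool // set whenever the user explicitly ran "podman stop"
}

// stop marks the container as explicitly stopped even if it is not
// currently running, so a racing cleanup cannot bring it back up.
func stop(c *ContainerState) {
	c.StoppedByUser = true
	c.Running = false
}

// shouldRestart mirrors the fixed check: never restart a container that
// the user stopped, regardless of the restart policy.
func shouldRestart(c *ContainerState) bool {
	if c.StoppedByUser {
		return false
	}
	return c.RestartPolicy == "always"
}

func main() {
	c := &ContainerState{RestartPolicy: "always"}
	stop(c)
	fmt.Println("restart after explicit stop?", shouldRestart(c)) // false
}
```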

A simple reproducer is:
```
podman run --restart=always --name test -d alpine true
podman stop test
```
then check if the container is still running. The behavior is very
flaky; it took me about 20 podman stop tries before I finally hit the
correct window where it stayed stopped permanently.
With this patch it worked on the first try.

Fixes containers#18259

[NO NEW TESTS NEEDED] This is super flaky and hard to test reliably
in CI. My ginkgo v2 work seems to trigger this in the play kube tests, so
that should catch at least some regressions. Also this may be something
that should be tested at podman test days by users (containers#17912).

Signed-off-by: Paul Holzinger <[email protected]>
If the server responds with an error we must report it correctly back to
the user.

Signed-off-by: Paul Holzinger <[email protected]>
The remote API will wait 300s by default before conmon calls the
cleanup. In the meantime, when you inspect an exec session started with
ExecStart() (so not attached) that has already exited, we do not know that. If
a caller inspects it, they think it is still running. To prevent this we
should sync the session based on the exec pid and update the state
accordingly.
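
Roughly, the fix boils down to probing the exec PID and updating the
recorded state before reporting it. A Linux-only Go sketch with
hypothetical names (not podman's actual code):
```
package main

import (
	"errors"
	"fmt"
	"syscall"
)

// ExecSession is a hypothetical stand-in for an exec session record.
type ExecSession struct {
	PID   int
	State string // "running", "stopped", ...
}

// syncExecSession checks whether the exec process is still alive and, if
// not, updates the recorded state so inspect reports current information.
// Signal 0 only tests for the existence of the process.
func syncExecSession(s *ExecSession) {
	if s.State != "running" {
		return
	}
	if err := syscall.Kill(s.PID, 0); errors.Is(err, syscall.ESRCH) {
		s.State = "stopped"
	}
}

func main() {
	s := &ExecSession{PID: 999999, State: "running"} // very likely a stale PID
	syncExecSession(s)
	fmt.Println(s.State)
}
```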

For a reproducer see the test in this commit or the issue.

Fixes containers#18424

Signed-off-by: Paul Holzinger <[email protected]>
By default podman save tries to write to /dev/stdout; this file does not
exist on Windows and cannot be opened. Instead we should just use fd 1
in that case.
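
A small Go sketch of the approach (hypothetical helper, not the actual
podman code): treat /dev/stdout specially on Windows and hand back the
process's own fd 1 (os.Stdout) instead of trying to open the path.
```
package main

import (
	"fmt"
	"io"
	"os"
	"runtime"
)

// openSaveTarget returns a writer for the save destination. /dev/stdout
// does not exist as a file on Windows, so fall back to fd 1 directly.
func openSaveTarget(path string) (io.Writer, error) {
	if path == "/dev/stdout" && runtime.GOOS == "windows" {
		return os.Stdout, nil
	}
	return os.Create(path)
}

func main() {
	w, err := openSaveTarget("/dev/stdout")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Fprintln(w, "archive bytes would be streamed here")
}
```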

[NO NEW TESTS NEEDED]

Fixes containers#18147

Signed-off-by: Paul Holzinger <[email protected]>
The connection remove call must be done inside the function that is
returned, so that we wait until the user has confirmed it.
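
The pattern, sketched in Go with hypothetical names (not the actual
machine backend code): the removal work, including dropping the system
connection, lives inside the returned closure, which only runs after the
user has confirmed.
```
package main

import "fmt"

// removeConnection is a hypothetical stand-in for dropping a machine's
// system connection from the configuration.
func removeConnection(name string) { fmt.Println("removed connection for", name) }

// prepareRemove returns the confirmation message plus a function doing the
// actual work. The connection removal lives inside the returned function,
// so it only happens after the caller showed the message and got a "yes".
func prepareRemove(name string) (string, func()) {
	msg := fmt.Sprintf("The following will be deleted: machine %q. Continue?", name)
	return msg, func() {
		removeConnection(name) // previously this ran before the confirmation
		fmt.Println("removed machine files for", name)
	}
}

func main() {
	msg, doRemove := prepareRemove("podman-machine-default")
	fmt.Println(msg)
	// ... only after the user answers yes ...
	doRemove()
}
```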

Fixes containers#18330

Signed-off-by: Paul Holzinger <[email protected]>
The logic which checks for duplicated volumes here did not work
correctly because it used filepath.Clean(). However, the writes to the
volDestinations map did not, so the strings no longer matched when you
included a final slash, for example.

So we can either call Clean() on all paths or on none. I decided to call it
on none, because that is what the rest of the code does right now; only the
check did it.
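
A small self-contained sketch of the mismatch (hypothetical data, not the
actual handler code):
```
package main

import (
	"fmt"
	"path/filepath"
)

func main() {
	// Destinations already recorded for this container, stored exactly as
	// the user wrote them (no Clean()), e.g. with a trailing slash.
	volDestinations := map[string]bool{"/data/": true}

	dest := "/data/"

	// Broken check: the lookup cleaned the path while the stored key was
	// not cleaned, so "/data" never matched "/data/" and the destination
	// was handled twice, later surfacing as "duplicate mount destination".
	fmt.Println("seen (cleaned lookup):   ", volDestinations[filepath.Clean(dest)]) // false

	// Fixed check: treat both sides the same. Since the writes do not
	// clean the path, the lookup must not either.
	fmt.Println("seen (consistent lookup):", volDestinations[dest]) // true
}
```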

Fixes containers#18454

Signed-off-by: Paul Holzinger <[email protected]>
Accept a tag in the compat API endpoint. For the fromImage param we
already parse it, but for fromSrc we did not.
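
A hedged sketch of the idea using only the Go standard library; the
parameter names follow the Docker-compatible /images/create endpoint, while
the helper itself is hypothetical (the real handler lives in podman's
compat API code):
```
package main

import (
	"fmt"
	"net/url"
)

// importRef builds the name to tag an imported (fromSrc) image with,
// honoring the "tag" query parameter just like fromImage pulls already do.
func importRef(query url.Values) string {
	name := query.Get("repo")
	if name == "" {
		return "" // untagged import
	}
	if tag := query.Get("tag"); tag != "" {
		return name + ":" + tag
	}
	return name + ":latest"
}

func main() {
	q, _ := url.ParseQuery("fromSrc=-&repo=example.com/app&tag=v1")
	fmt.Println(importRef(q)) // example.com/app:v1
}
```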

Fixes containers#18597

Signed-off-by: Paul Holzinger <[email protected]>
@github-actions github-actions bot added the kind/api-change Change to remote API; merits scrutiny label May 23, 2023
@openshift-ci openshift-ci bot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label May 23, 2023
Luap99 added 4 commits May 23, 2023 17:20
gvproxy listens on 127.0.0.1 [1]; using localhost as the hostname can result in
the client trying to connect to the IPv6 localhost (`::1`). This will
fail as shown in the issue. This switches the hostname in the system
connection to 127.0.0.1 to fix the problem.
I switched the qemu, hyperV and WSL backends. I haven't touched the
applehv code because it uses two different IPs and I am not sure what
the correct thing is there. I leave this to Brent to figure out.

[NO NEW TESTS NEEDED]

[1] https://github.com/containers/gvisor-tap-vsock/blob/main/cmd/gvproxy/main.go#L197-L199

Fixes containers#16470

Signed-off-by: Paul Holzinger <[email protected]>
The examples show that --dns-add 8.8.8.8,1.1.1.1 is valid, but it fails.
Fix this by using StringSliceVar, which splits at commas.
Added tests to ensure it is working.
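
For reference, pflag's StringSliceVar splits each value at commas, which is
what makes the documented syntax work; a minimal sketch (the flag set name
and defaults here are illustrative):
```
package main

import (
	"fmt"

	"github.com/spf13/pflag"
)

func main() {
	var dnsAdd []string
	flags := pflag.NewFlagSet("network update", pflag.ExitOnError)

	// StringSliceVar splits each value at commas, so
	// --dns-add 8.8.8.8,1.1.1.1 yields ["8.8.8.8", "1.1.1.1"];
	// a plain string flag would keep it as one opaque value.
	flags.StringSliceVar(&dnsAdd, "dns-add", nil, "add DNS servers")

	_ = flags.Parse([]string{"--dns-add", "8.8.8.8,1.1.1.1"})
	fmt.Println(dnsAdd) // [8.8.8.8 1.1.1.1]
}
```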

Fixes containers#18632

Signed-off-by: Paul Holzinger <[email protected]>
The wrong error was logged.

Signed-off-by: Paul Holzinger <[email protected]>
Make sure to tear down the netns again on errors. This is needed when a
later call fails and we have not already stored the netns in the
container state.
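
The usual Go shape for this kind of conditional cleanup, sketched with
hypothetical function names (not podman's actual network code):
```
package main

import (
	"errors"
	"fmt"
)

func setupNetNS() (string, error) { return "/run/netns/demo", nil }

func teardownNetNS(path string) { fmt.Println("tore down", path) }

func configureNetwork(path string) error {
	return errors.New("later setup step failed")
}

// startNetworking sets up the network stack; if any step fails after the
// netns was created (but before it is stored in the container state), the
// netns is torn down again instead of being leaked.
func startNetworking() (err error) {
	nsPath, err := setupNetNS()
	if err != nil {
		return err
	}
	defer func() {
		if err != nil {
			teardownNetNS(nsPath)
		}
	}()
	return configureNetwork(nsPath)
}

func main() {
	fmt.Println("error:", startNetworking())
}
```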

[NO NEW TESTS NEEDED] My ginkgo-v2 PR will catch problems like this once
merged.

Fixes containers#18205

Signed-off-by: Paul Holzinger <[email protected]>
Member

@vrothberg vrothberg left a comment

LGTM
@containers/podman-maintainers PTAL

Member

@giuseppe giuseppe left a comment

/lgtm

@openshift-ci openshift-ci bot added the lgtm Indicates that a PR is ready to be merged. label May 24, 2023
@openshift-ci
Contributor

openshift-ci bot commented May 24, 2023

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: giuseppe, Luap99, vrothberg

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:
  • OWNERS [Luap99,giuseppe,vrothberg]

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-merge-robot openshift-merge-robot merged commit bcc68fc into containers:v4.5 May 24, 2023
@Luap99 Luap99 deleted the v4.5-backports branch May 24, 2023 14:40
@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Aug 23, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Aug 23, 2023
Labels
  • approved: Indicates a PR has been approved by an approver from all required OWNERS files.
  • kind/api-change: Change to remote API; merits scrutiny
  • lgtm: Indicates that a PR is ready to be merged.
  • locked - please file new issue/PR: Assist humans wanting to comment on an old issue or PR with locked comments.
  • release-note