Add podman-env command to allow access to the VM container runtime #961

Closed
4 tasks done
gbraad opened this issue Jan 27, 2020 · 34 comments · Fixed by #1001

Comments
@gbraad
Contributor

gbraad commented Jan 27, 2020

As described in #874, it is possible to re-use the VM to run containers with podman. This task will add the needed commands to crc, such as:

  • add podman-env
    • this sets up the environment inside the VM
    • sets env vars to allow access to the runtime (see the sketch below)
  • embed the podman-remote binary
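
A rough sketch of the intended usage (hypothetical output; the variable names follow what podman-remote already accepts, the exact output of the subcommand is still to be decided):

$ crc podman-env
export PODMAN_USER=core
export PODMAN_HOST=192.168.130.11
export PODMAN_IDENTITY_FILE=$HOME/.crc/machines/crc/id_rsa
export PODMAN_IGNORE_HOSTS=1
# Run this command to configure your shell:
# eval $(crc podman-env)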
@zeenix
Contributor

zeenix commented Jan 29, 2020

* [ ]  this sets up the environment inside the VM

I would think that part will already be done by SNC.

* embed the `podman-remote` binary

Same as the oc binary, I guess?

@gbraad
Contributor Author

gbraad commented Jan 29, 2020

I would think that part will already be done by SNC.

I cannot answer this, so I'm asking for feedback. Nothing needed from us externally to activate it? No check of an enabled service or starting a systemd unit? Just socket-activated?

@zeenix
Contributor

zeenix commented Jan 30, 2020

Nothing needed from us externally to activate it? No check of an enabled service or starting a systemd unit? Just socket-activated?

As described here by @praveenkumar, in the VM, we need to:

  1. Install the libvarlink-util package
  2. sudo systemctl start io.podman.socket

The first one needs to be done only once, and we can do that as part of SNC (I think we discussed that f2f already). For the second one, I still need to check how to enable it by default on startup, but I'm sure it's possible.
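
Roughly, the manual equivalent inside the VM (using `enable --now` so the socket persists across boots is my assumption; the package itself would be baked into the image by SNC):

$ rpm -q libvarlink-util                         # 1. check the package is present
$ sudo systemctl enable --now io.podman.socket   # 2. activate the socket now and on boot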

@zeenix
Contributor

zeenix commented Jan 30, 2020

Actually, it seems @praveenkumar already handled:

* [ ]  this sets up the environment inside the VM

crc-org/snc@78aa9f8. Marking that part as done.

@zeenix
Contributor

zeenix commented Jan 30, 2020

Just to be certain, I tested against the latest 4.3.0 bundle and podman-remote worked out of the box with a freshly created/launched CRC VM:

$ PODMAN_USER=core PODMAN_HOST=192.168.130.11 PODMAN_IDENTITY_FILE=/home/zeenix/.crc/machines/crc/id_rsa PODMAN_IGNORE_HOSTS=1 podman-remote info
Error: error getting info: unexpected EOF
$ PODMAN_USER=core PODMAN_HOST=192.168.130.11 PODMAN_IDENTITY_FILE=/home/zeenix/.crc/machines/crc/id_rsa PODMAN_IGNORE_HOSTS=1 podman-remote info
client:
  Connection: ssh -p 22 -T -i /home/zeenix/.crc/machines/crc/id_rsa -q -o StrictHostKeyChecking=no
    -o UserKnownHostsFile=/dev/null [email protected] -- varlink -A \'podman --log-level=error
    varlink \\\$VARLINK_ADDRESS\' bridge
  Connection Type: BridgeConnection
  OS Arch: linux/amd64
  Podman Version: 1.7.0
  RemoteAPI Version: 1
host:
  arch: amd64
  buildah_version: 1.12.0-dev
  cpus: 4
  distribution:
    distribution: '"rhcos"'
    version: "4.3"
  eventlogger: journald
  hostname: crc-zxxcq-master-0
  kernel: 4.18.0-147.3.1.el8_1.x86_64
  mem_free: 131379200
  mem_total: 7964925952
  os: linux
  swap_free: 0
  swap_total: 0
  uptime: 2h 34m 54.95s (Approximately 0.08 days)
insecure registries:
  registries: null
registries:
  registries: null
store:
  containers: 0
  graph_driver_name: overlay
  graph_driver_options: |-
    map[overlay.mount_program:map[Executable:/usr/bin/fuse-overlayfs Package:fuse-overlayfs-0.4.1-1.module+el8.1.0+4081+b29780af.x86_64 Version:fuse-overlayfs: version 0.4.1
    FUSE library version 3.2.1
    using FUSE kernel interface version 7.26]]
  graph_root: /var/home/core/.local/share/containers/storage
  graph_status:
    backing_filesystem: xfs
    native_overlay_diff: "false"
    supports_d_type: "true"
  images: 0
  run_root: /run/user/1000

Note however that it failed the first time with an EOF error. I've seen that before too, and forgot about it soon after since the subsequent commands worked. It seems like it always fails on the first attempt. My guess is that socket activation doesn't kick in fast enough or something.
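
If it is indeed a socket-activation race, a crude client-side workaround would be a single retry (purely illustrative; crc doesn't do this):

$ podman-remote info || (sleep 1; podman-remote info)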

@zeenix
Contributor

zeenix commented Feb 10, 2020

For binaries, we'll need to use an OS-specific method:

One tiny issue is the binary name difference between Linux and others, podman-remote vs podman. Dan Walsh said they were looking into solving this inconsistency, but nothing has happened yet in that regard.
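
For illustration, the kind of per-OS selection this forces on us (a sketch in shell; crc itself would do the equivalent in Go when picking which binary to embed/extract):

case "$(uname -s)" in
  Linux) PODMAN_BIN=podman-remote ;;  # Linux distros ship the full podman, so the client keeps its own name
  *)     PODMAN_BIN=podman ;;         # macOS/Windows only get the remote client, named plain podman
esac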

@praveenkumar
Member

@zeenix I think we should embed the podman-remote binary with CRC ourselves and extract it to ~/.crc/bin, like we do for the oc bits. How to get the bits for the latest master is listed here: https://github.com/containers/libpod#library-and-tool-for-running-oci-based-containers-in-pods, but I would like a link to the bits on the release side (https://github.com/containers/libpod/releases/tag/v1.8.0), so I can get podman-remote v1.8.0 or v1.6.4, whichever version we have in our VM. Maybe @mheon or @rhatdan might help us to provide those URLs.

@gbraad
Contributor Author

gbraad commented Feb 10, 2020 via email

@cfergeau
Contributor

If it's available in rhel/fedora, I'd prefer we use that version, same as what we do for virsh for example.

@rhatdan

rhatdan commented Feb 10, 2020

Podman-remote, for Linux, should probably be provided by the distribution.

@gbraad
Contributor Author

gbraad commented Feb 10, 2020

Podman-remote, for Linux, should probably be provided by the distribution.

Can we guarantee the version provided by the OS/distro is usable? Besides, according to our schedule and planning we target macOS (and later Windows); at the moment Linux is not considered, as it is easier for those users to set up a local environment anyway. We will add support, but at a later stage: while podman-env can be provided cross-platform, the other targeted OSes will not provide the full setup code.

@rhatdan

rhatdan commented Feb 10, 2020

Well, one issue would be people using an older CRC; that would be avoided if the user was able to install a newer version on the Mac or Windows box.

But getting the tools onto Windows, and potentially Mac, has proven difficult.

The Podman team should hopefully guarantee that podman-remote stays compatible, but we are fairly young with this support.

Providing a known-good version for Mac and Windows from the VM sounds like a good idea to me, but it should not prevent the user from using a newer version of podman built for the host.

@gbraad
Contributor Author

gbraad commented Feb 11, 2020

Providing a known-good version for Mac and Windows from the VM sounds like a good idea to me, but it should not prevent the user from using a newer version of podman built for the host.

this is why we extract/refer to the podman-remote binary in ~/.crc/bin and use podman-env to add it to the user's PATH. This way we only temporarily enable the binary for the terminal session.
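
Roughly (illustrative paths):

$ crc podman-env
export PATH="/home/user/.crc/bin:$PATH"
# plus the PODMAN_* variables shown earlier
$ eval $(crc podman-env)
$ podman-remote ps   # resolved from ~/.crc/bin, for this session only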

@gbraad
Contributor Author

gbraad commented Feb 25, 2020

@dustymabe @dgilbery In reference to #453 and coreos/fedora-coreos-tracker#231, please have a look at this issue. We have since included the components for podman-remote in the RHCOS images.

@afbjorklund

For minikube we are using sudo and the varlink bridge, to avoid having to allow root access over ssh and having to keep a podman socket up.

But boot2podman still allows both methods, and both of them work...

@praveenkumar
Member

For minikube we are using sudo and the varlink bridge, to avoid having to allow root access over ssh and having to keep a podman socket up.

We are also not allowing direct root access, but are using the core user to connect to the socket atm.

zeenix added a commit to zeenix/crc that referenced this issue Feb 25, 2020
Add subcommand to setup environment variables to use `podman-remote` with
the CRC VM.

Fixes crc-org#961.
zeenix added a commit to zeenix/crc that referenced this issue Feb 25, 2020
zeenix added a commit to zeenix/crc that referenced this issue Feb 27, 2020
zeenix added a commit to zeenix/crc that referenced this issue Feb 27, 2020
zeenix added a commit to zeenix/crc that referenced this issue Mar 2, 2020
Add subcommand to setup environment variables to use `podman-remote` with
the CRC VM.

Fixes crc-org#961.
@rhatdan

rhatdan commented Mar 4, 2020

@jwhonce PTAL

@jwhonce

jwhonce commented Mar 4, 2020

@gbraad I looked at your code and downloaded the msi vs zip files, and found that the podman.exe binaries don't match. I'll get with the guys who set this up and see what has changed. Sorry for the inconvenience.

@gbraad
Contributor Author

gbraad commented Mar 5, 2020

@jwhonce Thanks. It would be great if this could be resolved soon: we will have a release shortly and would like to include all of this. We have a code freeze this Friday, and the last image/embed builds will happen on Monday.

Also, why is the latest release lower than what is currently used in the RHCOS images?

@gbraad
Contributor Author

gbraad commented Mar 5, 2020

I have been looking into an alternative strategy by using the *.msi, doing an administrative extract with msiexec, roughly like this (the exact path here is illustrative):
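
msiexec /a C:\Users\gbraad\Downloads\podman.msi TARGETDIR=C:\Temp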

While this works, there are however some issues with this approach:

  • the /a archive.msi specifier needs an absolute path.
  • it will pop up an InstallShield installer dialog; this can be suppressed with /qn (no UI), but then there is no feedback on whether the extraction succeeded, so additional logic would be needed
  • there is no way to filter which files to extract from the archive
  • extracted files end up in a path like C:\Temp\PFiles\RedHat\Podman, which means we need to pick the needed binary out of a temporary location
  • it relies on an external binary to perform the actual extraction, which might be denied access; see the previous point about additional logic to verify and relocate the result.

While possible, we would like to avoid this ...

@gbraad
Contributor Author

gbraad commented Mar 5, 2020

The *.msi comes with a podman.bat file which seems to take care of the config in AppData. We would like to avoid this, so I tried using podman-remote-client.exe directly (with the idea of renaming it to podman.exe?):

PS C:\Users\gbraad\.crc\bin> ssh -i $env:PODMAN_IDENTITY_FILE $env:PODMAN_USER@$env:PODMAN_HOST
Red Hat Enterprise Linux CoreOS 43.81.202001142154.0
  Part of OpenShift 4.3, RHCOS is a Kubernetes native operating system
  managed by the Machine Config Operator (`clusteroperator/machine-config`).

WARNING: Direct SSH access to machines is not recommended; instead,
make configuration changes via `machineconfig` objects:
  https://docs.openshift.com/container-platform/4.3/architecture/architecture-rhcos.html

---
Last login: Thu Mar  5 09:06:05 2020 from 172.30.64.1
[core@crc-w6th5-master-0 ~]$ exit
logout
Connection to 172.30.75.148 closed.
PS C:\Users\gbraad\.crc\bin> .\podman-remote-windows.exe --log-level debug version
>>
Client:
Version:            1.6.3-dev
RemoteAPI Version:  1
Go Version:         go1.12.10
Git Commit:         6c6e78374f5be949d11a8608080c96e2d22ca872
Built:              Wed Oct 30 03:12:23 2019
OS/Arch:            windows/amd64

Service:
time="2020-03-05T17:12:47+08:00" level=debug msg="unable to load configuration file at C:\\Users\\gbraad\\AppData\\podman\\podman-remote.conf"
time="2020-03-05T17:12:47+08:00" level=debug msg="creating a varlink bridge based on user input"
time="2020-03-05T09:12:47Z" level=debug msg="Using conmon: \"/usr/bin/conmon\""
time="2020-03-05T09:12:47Z" level=debug msg="Initializing boltdb state at /var/home/core/.local/share/containers/storage/libpod/bolt_state.db"
time="2020-03-05T09:12:47Z" level=debug msg="Using graph driver overlay"
time="2020-03-05T09:12:47Z" level=debug msg="Using graph root /var/home/core/.local/share/containers/storage"
time="2020-03-05T09:12:47Z" level=debug msg="Using run root /run/user/1000"
time="2020-03-05T09:12:47Z" level=debug msg="Using static dir /var/home/core/.local/share/containers/storage/libpod"
time="2020-03-05T09:12:47Z" level=debug msg="Using tmp dir /run/user/1000/libpod/tmp"
time="2020-03-05T09:12:47Z" level=debug msg="Using volume path /var/home/core/.local/share/containers/storage/volumes"
time="2020-03-05T09:12:47Z" level=debug msg="Set libpod namespace to \"\""
time="2020-03-05T09:12:47Z" level=debug msg="Not configuring container store"
time="2020-03-05T09:12:47Z" level=debug msg="Initializing event backend journald"
time="2020-03-05T09:12:47Z" level=debug msg="using runtime \"/usr/bin/runc\""
time="2020-03-05T17:12:47+08:00" level=error msg="Unable to obtain server version information: unexpected EOF"

I still see the same error?

The environment variables are picked up correctly, as you can see from the following snippet of the --help output:

      --identity-file string        identity-file (default "C:\\Users\\gbraad\\.crc\\machines\\crc\\id_rsa")
      --remote-host string          remote host (default "172.30.75.148")
      --username string             username on the remote host (default "core")

@gbraad
Contributor Author

gbraad commented Mar 5, 2020

@Edward5hen ^^^ WDYT? Is there a regression in the Windows client or am I missing something? This same setup works for macOS and Linux.

@gbraad
Contributor Author

gbraad commented Mar 6, 2020

Against a Fedora 30 VM:

PS> hvc ssh fedora-vm
$ sudo -i
# systemctl enable --now io.podman.socket
# exit
PS C:\Users\gbraad\.crc\bin> .\podman-remote-windows.exe --identity-file C:\Users\gbraad\.ssh\id_rsa --remote-host 172.30.72.141 --username gbraad --log-level debug ps
time="2020-03-06T16:27:29+08:00" level=debug msg="unable to load configuration file at C:\\Users\\gbraad\\AppData\\podman\\podman-remote.conf"
time="2020-03-06T16:27:29+08:00" level=debug msg="creating a varlink bridge based on user input"
time="2020-03-06T08:27:32Z" level=debug msg="using conmon: \"/usr/bin/conmon\""
time="2020-03-06T08:27:32Z" level=debug msg="Initializing boltdb state at /home/gbraad/.local/share/containers/storage/libpod/bolt_state.db"
time="2020-03-06T08:27:32Z" level=debug msg="Using graph driver overlay"
time="2020-03-06T08:27:32Z" level=debug msg="Using graph root /home/gbraad/.local/share/containers/storage"
time="2020-03-06T08:27:32Z" level=debug msg="Using run root /tmp/1000"
time="2020-03-06T08:27:32Z" level=debug msg="Using static dir /home/gbraad/.local/share/containers/storage/libpod"
time="2020-03-06T08:27:32Z" level=debug msg="Using tmp dir /run/user/1000/libpod/tmp"
time="2020-03-06T08:27:32Z" level=debug msg="Using volume path /home/gbraad/.local/share/containers/storage/volumes"
time="2020-03-06T08:27:32Z" level=debug msg="Set libpod namespace to \"\""
time="2020-03-06T08:27:32Z" level=debug msg="[graphdriver] trying provided driver \"overlay\""
time="2020-03-06T08:27:32Z" level=debug msg="overlay: mount_program=/usr/bin/fuse-overlayfs"
time="2020-03-06T08:27:32Z" level=debug msg="backingFs=xfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false"
time="2020-03-06T08:27:32Z" level=debug msg="Initializing event backend journald"
time="2020-03-06T08:27:32Z" level=debug msg="using runtime \"/usr/bin/runc\""
time="2020-03-06T08:27:32Z" level=info msg="running as rootless"
time="2020-03-06T08:27:32Z" level=debug msg="Using varlink socket: unix:/tmp/varlink-gu7wOk/socket"
time="2020-03-06T08:27:32Z" level=debug msg="using conmon: \"/usr/bin/conmon\""
time="2020-03-06T08:27:32Z" level=debug msg="Initializing boltdb state at /home/gbraad/.local/share/containers/storage/libpod/bolt_state.db"
time="2020-03-06T08:27:32Z" level=debug msg="Using graph driver overlay"
time="2020-03-06T08:27:32Z" level=debug msg="Using graph root /home/gbraad/.local/share/containers/storage"
time="2020-03-06T08:27:32Z" level=debug msg="Using run root /tmp/1000"
time="2020-03-06T08:27:32Z" level=debug msg="Using static dir /home/gbraad/.local/share/containers/storage/libpod"
time="2020-03-06T08:27:32Z" level=debug msg="Using tmp dir /run/user/1000/libpod/tmp"
time="2020-03-06T08:27:32Z" level=debug msg="Using volume path /home/gbraad/.local/share/containers/storage/volumes"
time="2020-03-06T08:27:32Z" level=debug msg="Set libpod namespace to \"\""
time="2020-03-06T08:27:32Z" level=debug msg="Initializing event backend journald"
time="2020-03-06T08:27:32Z" level=debug msg="using runtime \"/usr/bin/runc\""
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
PS C:\Users\gbraad\.crc\bin>

so the executable is correct ... I will verify again, as this could have been an issue with the socket activation?

@gbraad
Contributor Author

gbraad commented Mar 6, 2020

Still the same results:

PS C:\Users\gbraad\.crc\bin> .\podman-remote-windows.exe --identity-file $env:PODMAN_IDENTITY_FILE --username $env:PODMAN_USER --remote-host $env:PODMAN_HOST ps
Error: unexpected EOF

Full re-run:

INFO Creating CodeReady Containers VM for OpenShift 4.3.1...
INFO Verifying validity of the cluster certificates ...
INFO Will run as admin: add dns server address to interface vEthernet (Default Switch)
INFO Check internal and public DNS query ...
INFO Check DNS query from host ...
INFO Copying kubeconfig file to instance dir ...
INFO Adding user's pull secret ...
INFO Updating cluster ID ...
INFO Starting OpenShift cluster ... [waiting 3m]
INFO
INFO To access the cluster, first set up your environment by following 'crc oc-env' instructions
INFO Then you can access it by running 'oc login -u developer -p developer https://api.crc.testing:6443'
INFO To login as an admin, run 'oc login -u kubeadmin -p dfEGQ-ISy4g-S4vri-xxfKK https://api.crc.testing:6443'
INFO
INFO You can now run 'crc console' and use these credentials to access the OpenShift web console
Started the OpenShift cluster
WARN The cluster might report a degraded or error state. This is expected since several operators have been disabled to lower the resource usage. For more information, please consult the documentation
PS C:\Users\gbraad\.crc\bin> hvc ip crc
172.30.66.52
PS C:\Users\gbraad\.crc\bin> $env:PODMAN_HOST="172.30.66.52"
PS C:\Users\gbraad\.crc\bin> ssh -i $env:PODMAN_IDENTITY_FILE $env:PODMAN_USER@$env:PODMAN_HOST
The authenticity of host '172.30.66.52 (172.30.66.52)' can't be established.
ECDSA key fingerprint is SHA256:d1lg5hpNdFWWrSjMSkqmMCRiw4cCrlMP3QOkN6jATYM.
Are you sure you want to continue connecting (yes/no)? yes
Warning: Permanently added '172.30.66.52' (ECDSA) to the list of known hosts.
Red Hat Enterprise Linux CoreOS 43.81.202002032142.0
  Part of OpenShift 4.3, RHCOS is a Kubernetes native operating system
  managed by the Machine Config Operator (`clusteroperator/machine-config`).

WARNING: Direct SSH access to machines is not recommended; instead,
make configuration changes via `machineconfig` objects:
  https://docs.openshift.com/container-platform/4.3/architecture/architecture-rhcos.html

---
[core@crc-jccc5-master-0 ~]$ podman ps
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
[core@crc-jccc5-master-0 ~]$ exit
logout
Connection to 172.30.66.52 closed.
PS C:\Users\gbraad\.crc\bin> .\podman-remote-windows.exe ps
Error: unexpected EOF
PS C:\Users\gbraad\.crc\bin> .\podman-remote-windows.exe --identity-file $env:PODMAN_IDENTITY_FILE --username $env:PODMAN_USER --remote-host $env:PODMAN_HOST ps
Error: unexpected EOF
PS C:\Users\gbraad\.crc\bin>

In the VM, systemctl shows:

io.podman.socket                                                                                                                                   loaded active listening Podman Remote API Socket

In the VM we have:

[core@crc-jccc5-master-0 ~]$ rpm -qa | grep varlink
libvarlink-18-3.el8.x86_64
libvarlink-util-18-3.el8.x86_64

@anjannath
Member

On Windows I get the same behavior as mentioned in #961 (comment).

But it works on macOS:

╭─anjan@dhcp35-62 ~/github.com/code-ready/crc ‹podman*›
╰─$ eval $(crc podman-env)
╭─anjan@dhcp35-62 ~/github.com/code-ready/crc ‹podman*›
╰─$ podman ps
CONTAINER ID  IMAGE  COMMAND  CREATED  STATUS  PORTS  NAMES
╭─anjan@dhcp35-62 ~/github.com/code-ready/crc ‹podman*›
╰─$ podman info
client:
  Connection: ssh -p 22 -T -i /Users/anjan/.crc/machines/crc/id_rsa -q -o StrictHostKeyChecking=no
    -o UserKnownHostsFile=/dev/null [email protected] -- varlink -A \'podman --log-level=error
    varlink \\\$VARLINK_ADDRESS\' bridge
  Connection Type: BridgeConnection
  OS Arch: darwin/amd64
  Podman Version: 1.6.3-dev
  RemoteAPI Version: 1
host:
  arch: amd64
  buildah_version: 1.12.0-dev
  cpus: 4
  distribution:
    distribution: '"rhcos"'
    version: "4.3"
  eventlogger: journald
  hostname: crc-w6th5-master-0
  kernel: 4.18.0-147.3.1.el8_1.x86_64
  mem_free: 252727296
  mem_total: 8359866368
  os: linux
  swap_free: 0
  swap_total: 0
  uptime: 10m 31.81s
insecure registries:
  registries: null
registries:
  registries: null
store:
  containers: 1
  graph_driver_name: overlay
  graph_driver_options: |-
    map[overlay.mount_program:map[Executable:/usr/bin/fuse-overlayfs Package:fuse-overlayfs-0.4.1-1.module+el8.1.0+4081+b29780af.x86_64 Version:fuse-overlayfs: version 0.4.1
    FUSE library version 3.2.1
    using FUSE kernel interface version 7.26]]
  graph_root: /var/home/core/.local/share/containers/storage
  graph_status:
    backing_filesystem: xfs
    native_overlay_diff: "false"
    supports_d_type: "true"
  images: 1
  run_root: /run/user/1000

╭─anjan@dhcp35-62 ~/github.com/code-ready/crc ‹podman*›
╰─$ podman version
Client:
Version:            1.6.3-dev
RemoteAPI Version:  1
Go Version:         go1.12.10
Git Commit:         6c6e78374f5be949d11a8608080c96e2d22ca872
Built:              Wed Oct 30 00:38:11 2019
OS/Arch:            darwin/amd64

Service:
Version:            1.6.4
RemoteAPI Version:  1
Go Version:         go1.13.4
OS/Arch:            linux/amd64
╭─anjan@dhcp35-62 ~/github.com/code-ready/crc ‹podman*›
╰─$ podman run hello-world

Hello from Docker!
This message shows that your installation appears to be working correctly.

To generate this message, Docker took the following steps:
 1. The Docker client contacted the Docker daemon.
 2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
    (amd64)
 3. The Docker daemon created a new container from that image which runs the
    executable that produces the output you are currently reading.
 4. The Docker daemon streamed that output to the Docker client, which sent it
    to your terminal.

To try something more ambitious, you can run an Ubuntu container with:
 $ docker run -it ubuntu bash

Share images, automate workflows, and more with a free Docker ID:
 https://hub.docker.com/

For more examples and ideas, visit:
 https://docs.docker.com/get-started/

gbraad added a commit to gbraad-redhat/crc that referenced this issue Mar 6, 2020
gbraad added a commit to gbraad-redhat/crc that referenced this issue Mar 6, 2020
gbraad pushed a commit that referenced this issue Mar 6, 2020
Add subcommand to setup environment variables to use `podman-remote` with
the CRC VM.

Fixes #961.
gbraad pushed a commit that referenced this issue Mar 6, 2020
gbraad added a commit to gbraad-redhat/crc that referenced this issue Mar 6, 2020
gbraad added a commit to gbraad-redhat/crc that referenced this issue Mar 6, 2020
gbraad added a commit to gbraad-redhat/crc that referenced this issue Mar 6, 2020
@gbraad
Contributor Author

gbraad commented Mar 10, 2020

@jwhonce @rhatdan I created #1083 to follow up on the archive and containers/podman#5440 to handle the Windows support.

@rhatdan

rhatdan commented Mar 10, 2020

@gbraad Great thanks.

@baude

baude commented Mar 11, 2020

How do I reproduce what you are doing?

@baude

baude commented Mar 11, 2020

I think I know what might be wrong, but I will need to confirm with you live so you can test on your systems.

@gbraad
Contributor Author

gbraad commented Mar 12, 2020

I will also create an issue to discuss the difference between the msi and zip, as pointed out by @jwhonce in #961 (comment).

@gbraad
Contributor Author

gbraad commented Mar 12, 2020

Here's the issue about the client archive on Windows: containers/podman#5477 ^^^
