Ship libvarlink-utils #231

Closed
praveenkumar opened this issue Jul 26, 2019 · 14 comments · Fixed by coreos/fedora-coreos-config#154

Comments

@praveenkumar

We don't have libvarlink-utils as part of the default ostree; any plan to add it? This is required to make podman remotely accessible, as pointed out in the https://podman.io/blogs/2019/01/16/podman-varlink.html post.

@lucab
Contributor

lucab commented Jul 26, 2019

That package seems to only contain /usr/bin/varlink (plus bash & vim helpers). Overall it is a small debugging utility which we could either ship directly or use via toolbox (or similar containers).

What I am more concerned about is the use case with its documented flow.
It is meant to bridge between a unix-domain socket and remote IPC over the on-host sshd. While this is fine for the general use case, I think we want to recommend against that for FCOS, for several reasons:

  • the host sshd is meant to be kept for extraordinary admin tasks, not as a general authenticated transport for user applications
  • the setup requires additional ssh config, user, and ACL tweaks on the host itself. However, strictly speaking, only the socket and group setup seems to be required to access the podman IPC
  • I believe that the bridging part could be moved to its own container, with just an additional socket bindmount

In particular, I'd suggest investigating the route of the last point, as it has the potential benefit of moving all authN and authZ outside of host scope. Moreover, it would also move network access policing to the overlay network.

@dustymabe
Member

I think we want to recommend against that for FCOS, for several reasons

I'm not so sure...

Connecting to machines via remote applications is pretty popular. For example, connecting via the docker API to a remote machine to pull logs, debug a container using a GUI application, etc. In the case of openshift/kube there are great tools for doing this already. In the case of standalone docker/podman, I'd say we'd need to enable connecting from other machines so that operations like this could be easier for users. Docker has the remote API built in; podman has decided to go the way of varlink to achieve a similar goal (using SSH for auth/transport). Should we not support that out of the box?

@lucab
Contributor

lucab commented Aug 6, 2019

Should we not support that out of the box?

Yes we should, by investigating and encouraging people to properly run their authenticated-transport bridge in a container. And that's because, unlike a general purpose distribution, we are explicitly aiming at a "minimal, [...] container-focused operating system".
From that point of view, it is perfectly ok to perform such bridging (SSH, socat, tcpwrap, or whatever the user prefers) in a container, without abusing on-host facilities meant for administration or shipping all possible bridging tools as part of the OS.

For completeness, this isn't a special case where we want to bridge a local-IPC mechanism to a remote transport. In a totally unrelated field, see for example this containerized bridge for Prometheus metrics.

@rhatdan

rhatdan commented Aug 13, 2019

@baude @jwhonce @ashley-cui PTAL
Does podman varlink need the varlink executable?

@ashley-cui

@rhatdan I vaguely recall having to install some sort of varlink before the remote could work. Not 100% sure though.

@baude
Contributor

baude commented Aug 13, 2019

@rhatdan the system being "remoted-to" requires the varlink executable to be present for the purposes of the varlink bridge. So, yes.

@rhatdan

rhatdan commented Aug 13, 2019

@dustymabe @praveenkumar @lucab This means that, since we want to use fedora-coreos as our boot2podman VM, we need to have the varlink executable present to make podman varlink work.

@lucab
Contributor

lucab commented Aug 14, 2019

As per my earlier comments, from a technical point of view libvarlink-util is NOT required to be shipped by the OS on each host.
And the anti-pattern at play here is trying to "push to the host" application logic and configuration that would be better owned by the app owner and placed in a container (i.e. not shipped by the OS on the host).

As it looks like my previous comments above were not explicit enough (sorry for that), let me walk through a fully working example with socat and docker.
Please do note that these are just the quickest tools for me to reach; application owners should tweak them to their needs (e.g. sshd and podman, with authN and authZ, and some custom flow for credential management).

Let's start from a simple container image (you will maintain this and tweak it to your needs):

FROM fedora:30
RUN dnf -y install libvarlink-util podman socat
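
To make the next step concrete, you would build and tag this image first (bridge-demo is just an illustrative placeholder for the <YOUR_IMAGE> name used below):

[host]# docker build -t bridge-demo .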

Let's run it with the unix-socket bind-mounted (you will own, run, document and tweak this to your needs):

[host]# docker run -p 7777:7777/tcp -ti --privileged  --rm -v /run/podman/:/run/podman <YOUR_IMAGE>

Let's now bridge the unix-socket over a TCP transport (you will customize this, add proper authN, and manage credentials lifecycle according to your needs):

[container]# socat TCP-LISTEN:7777,fork EXEC:"/usr/bin/varlink bridge --connect 'unix:/run/podman/io.podman'"

Now, from a remote container/host, you can reach the remote podman-varlink over the exposed port (you will customize this, document client environment and manage credentials according to your needs):

[user-container]$ varlink --bridge "socat - TCP:192.168.100.100:7777 " info
Vendor: Atomic
Product: podman
Version: 1.4.4
URL: https://github.com/containers/libpod
Interfaces:
  org.varlink.service
  io.podman

I personally tried this on the latest FCOS testing preview, 30.20190801.0. The only strict requirement here is enabling and starting io.podman.socket, which can be done via Ignition (but I tested it manually).
All the other details (transport type, transport encryption, credentials provisioning and rotation, client requirements) can be decoupled from the OS and directly owned by the application provider.
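
For the manual route, enabling that prerequisite boils down to a single systemctl invocation on the host (a minimal sketch, assuming the io.podman.socket unit shipped with podman is present):

[host]# systemctl enable --now io.podman.socket
[host]# systemctl status io.podman.socket    # verify the socket unit is active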

@cgwalters
Member

We had an online chat about this...my personal vote is to continue discussion here.

For CRC, since you guys already build+snapshot RHCOS including containers, you could easily add another "varlink proxy container" of the form above, right?

@dustymabe
Member

dustymabe commented Aug 21, 2019

Hey @praveenkumar. As Colin mentioned, a few of us had a long chat about this yesterday.
While we don't want to ship remote IPC/bridging software for every application we do feel that
including libvarlink-utils makes sense in this case because it is remote IPC for podman,
which is offered by the host and a critical part of Fedora CoreOS.

While we will include the rpm, we won't go as far as to enable remote IPC by default
(especially not unauthenticated). We'd like to get the rpm included and then discuss with
you how you specifically plan to set it up and then also publish a guide for ways to set it
up with authentication/authorization so that others can use a similar pattern.

I'll get the RPM added and then open a new issue to discuss the patterns for how to use it
where we'll collaborate with your team.

@rhatdan

rhatdan commented Aug 21, 2019

Podman uses ssh for its remote protocol, with varlink on the host. So enabling ssh is all you need to do to enable remote podman, as long as libvarlink-util is present.

Podman itself never listens on a network socket and by design relies on sshd for all remote access.
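
Putting that together with the bridge command shown earlier, the ssh-based flow from a client would look roughly like this (a sketch, not a tested recipe: core@192.168.100.100 is an illustrative user/host, and it assumes io.podman.socket is active, libvarlink-util is installed on the host, and the ssh user can access /run/podman/io.podman):

[user]$ varlink --bridge "ssh core@192.168.100.100 -- varlink bridge --connect unix:/run/podman/io.podman" info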

@praveenkumar
Author

I'll get the RPM added and then open a new issue to discuss the patterns for how to use it
where we'll collaborate with your team.

Sounds great, thank you.

@afbjorklund

As discussed in the other issue, you might want to consider adding the varlink resolver socket and service as well, so that clients don't have to hard-code socket paths (such as /run/podman/io.podman).

Then again, said clients would still have to hard-code the /run/org.varlink.resolver path in their libraries, so maybe it doesn't matter if you are only running a single varlink service anyway...
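
For context, the hard-coded-path style looks roughly like this with the varlink CLI (an illustrative sketch: io.podman.GetVersion is one of podman's varlink methods, and the call assumes io.podman.socket is active on the host):

[host]$ sudo varlink call unix:/run/podman/io.podman/io.podman.GetVersion '{}'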

@gbraad

gbraad commented Feb 25, 2020

Note: We have added libvarlink-util to our RHCOS images: crc-org/snc@78aa9f8. The changes in crc-org/crc#1001 set up the environment using a command for CRC. This is not done by default (it is on-demand).
