Ship libvarlink-utils #231
That package seems to only contain the varlink command-line executable. What I am more concerned about is the use case with its documented flow.
In particular, I'd suggest investigating the route of the last point, as it has the potential benefit of moving all authN and authZ outside of host scope. Moreover, it would also move network access policing to the overlay network.
I'm not so sure. Connecting to machines via remote applications is pretty popular: for example, connecting via the Docker API to a remote machine to pull logs, debug a container using a GUI application, and so on. In the case of OpenShift/Kube there are great tools for doing this already. In the case of standalone docker/podman, I'd say we need to enable connecting from other machines so that operations like this are easier on users. Docker has the remote API built in; podman has decided to go the way of varlink to achieve a similar goal (using SSH for auth/transport). Should we not support that out of the box?
Yes we should, by investigating and encouraging people to properly run their authenticated-transport bridge in a container. And that's because, unlike a general-purpose distribution, we are explicitly aiming at a "minimal, [...] container-focused operating system". For completeness, this isn't a special case where we want to bridge a local-IPC mechanism to a remote transport. In a totally unrelated field, see for example this containerized bridge for Prometheus metrics.
@baude @jwhonce @ashley-cui PTAL
@rhatdan I vaguely recall having to install some sort of varlink package before the remote could work. Not 100% sure though.
@rhatdan the system being "remoted-to" requires the varlink executable to be present for the purposes of the varlink bridge. So, yes.
@dustymabe @praveenkumar @lucab This means that, since we want to use Fedora CoreOS as our boot2podman VM, we need to have the varlink executable present for podman's varlink support to work.
As per my earlier comments, this is doable from a technical point of view. As it looks like my previous comments above were not explicit enough (sorry for that), let me walk through a fully working example with socat and docker. Let's start from a simple container image (you will maintain this and tweak it to your needs):
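A minimal sketch of such an image, assuming a socat-based bridge; the base image, file name, and tag are illustrative placeholders, not taken from the original comment:

```shell
# Hypothetical Containerfile for a minimal socat-based bridge image.
cat > Containerfile <<'EOF'
FROM registry.fedoraproject.org/fedora:31
RUN dnf install -y socat && dnf clean all
ENTRYPOINT ["socat"]
EOF

# Build it locally; the tag is an arbitrary placeholder.
docker build -t localhost/varlink-bridge -f Containerfile .
```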
Let's run it with the unix-socket bind-mounted (you will own, run, document and tweak this to your needs):
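One hedged way to check this step, assuming the podman varlink socket lives at /run/podman/io.podman (the path used in podman's varlink documentation; adjust to your setup):

```shell
# Sanity-check that the bind-mounted varlink unix socket is visible inside
# the bridge container; all paths and names here are illustrative.
docker run --rm \
  -v /run/podman/io.podman:/run/podman/io.podman \
  --entrypoint /bin/sh \
  localhost/varlink-bridge \
  -c 'test -S /run/podman/io.podman && echo "socket is available"'
```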
Let's now bridge the unix-socket over a TCP transport (you will customize this, add proper authN, and manage credentials lifecycle according to your needs):
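A sketch of the bridging step with socat. The port, container name, and plain-TCP listener are assumptions; a real deployment should replace the plain listener with an authenticated one (e.g. socat's OPENSSL-LISTEN with client certificates):

```shell
# Expose the unix socket over TCP; "fork" spawns one relay per client
# connection. Do not expose a plain listener beyond localhost.
docker run -d --name varlink-tcp-bridge \
  -v /run/podman/io.podman:/run/podman/io.podman \
  -p 8080:8080 \
  localhost/varlink-bridge \
  TCP-LISTEN:8080,reuseaddr,fork UNIX-CONNECT:/run/podman/io.podman
```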
Now, from a remote container/host, you can reach the remote podman-varlink over the exposed port (you will customize this, document client environment and manage credentials according to your needs):
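For example, by re-materializing the remote endpoint as a local unix socket and querying it with the varlink CLI. The hostname, port, and local socket path are placeholders; GetVersion is the method shown in the podman varlink blog post:

```shell
# Relay a local unix socket to the remote TCP bridge.
socat UNIX-LISTEN:/tmp/io.podman,fork TCP:fcos-host.example.com:8080 &

# Query the remote podman varlink service through the local socket.
varlink call unix:/tmp/io.podman/io.podman.GetVersion
```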
I personally tried this on the latest FCOS testing preview.
We had an online chat about this... my personal vote is to continue the discussion here. For CRC, since you already build and snapshot RHCOS including containers, you could easily add another "varlink proxy container" of the form above, right?
Hey @praveenkumar. As Colin mentioned, a few of us had a long chat about this yesterday. While we will include the RPM, we won't go as far as to enable remote IPC by default. I'll get the RPM added and then open a new issue to discuss the patterns for how to use it.
Podman uses SSH for its remote protocol, with varlink running on the host. So enabling SSH is all you need to do to enable remote podman, as long as libvarlink-util is installed. Podman itself never listens on a network socket and by design relies on sshd for all remote access.
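Roughly, the SSH-based flow that podman-remote automates can be sketched as follows. The host and user are placeholders, and this is an assumption about the mechanism rather than podman's exact invocation: the varlink CLI's bridge mode relays its stdio to a service, so sshd supplies the authenticated transport:

```shell
# Hypothetical manual equivalent of podman-remote's transport: run a varlink
# bridge on the remote host over an SSH connection. The -A/--activate option
# of the varlink tool starts the given service command and connects to it.
ssh -T core@fcos-host.example.com -- \
  varlink -A 'podman varlink $VARLINK_ADDRESS' bridge
```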
Sounds great, thank you.
As discussed in the other issue, you might want to consider adding the varlink resolver socket and service as well, so that clients don't have to hard-code service socket paths (such as the podman one). Then again, said clients would still have to hard-code the resolver's own socket path.
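For illustration, a socket-activation unit for the resolver might look like this sketch; the unit name and the /run/org.varlink.resolver path follow varlink's conventions but are assumptions here, not something specified in this thread:

```shell
# Hypothetical systemd socket unit for the varlink resolver.
cat > /etc/systemd/system/org.varlink.resolver.socket <<'EOF'
[Unit]
Description=Varlink resolver socket

[Socket]
ListenStream=/run/org.varlink.resolver

[Install]
WantedBy=sockets.target
EOF
```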
Note: we have added libvarlink-util to our RHCOS images: crc-org/snc@78aa9f8. The changes in crc-org/crc#1001 set up the environment using a command for CRC. This is not enabled by default (it is on demand).
We don't have libvarlink-utils as part of the default ostree; any plan to add it? This is required to make podman remotely accessible, as pointed out in the https://podman.io/blogs/2019/01/16/podman-varlink.html post.