Spike: Expose podman service outside the VM #874
Comments
This task gets priority over other work that was assigned. I'll contact you later about this ...
Can you detail some of the findings so we can create an actual task to work on? What is the result of the spike?
@gbraad IIRC I never managed to actually do anything before I left for PTO.
Can you, in that case, at least detail what you believe needs to be done, and discuss with @praveenkumar his previous effort regarding this.
From today till Monday @zeenix will spend time on identifying the missing pieces and a basic method to start the image. Additional packages might need to be installed; if so, we can make this part of the client we will deliver. Note, we are looking into basic functionality here; improvements are part of future work (this is just a spike to identify needed work and options).
The only thing necessary is to enable socket activation for this service: /usr/lib/systemd/system/io.podman.service. Then allow access from a remote podman over SSH connections.
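For reference, systemd socket activation pairs a `.socket` unit with the service, so the daemon only starts on the first connection. A minimal sketch (the unit content below is illustrative, not copied from the actual podman package):

```ini
# io.podman.socket -- illustrative sketch of a socket-activation unit
[Unit]
Description=Podman remote API socket

[Socket]
# systemd listens on this socket and starts io.podman.service
# on the first incoming connection
ListenStream=/run/podman/io.podman
SocketMode=0660

[Install]
WantedBy=sockets.target
```

Enabling would then be `systemctl enable --now io.podman.socket`, after which any client connecting to the socket triggers the service.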
You should be able to install podman on a Mac via brew and have it talk to the podman on the server.
**Preliminary findings**

As promised, I looked into this. I mainly/first followed this guide to set up podman with varlink in the CRC VM. First thing I found was that, while …. After the setup, I was able to communicate from the host with the service:

```console
$ python -m varlink.cli --bridge "ssh 192.168.130.11" call io.podman.ListContainers
{
  "containers": [
    {
      "command": [
        "/sbin/init"
      ],
      "containerrunning": true,
      "createdat": "2020-01-17T11:08:03Z",
      "id": "3d9d9e87f5069075c28e437f4829c8e212b5b252d0af3406a1104f4bb25e3116",
      "image": "quay.io/crcont/dnsmasq:latest",
      "imageid": "851bb0e5bf751cba2d649612a47651890a86eafe629308e1b3273c16b71b047e",
      "labels": {
        "org.label-schema.build-date": "20190305",
        "org.label-schema.license": "GPLv2",
        "org.label-schema.name": "CentOS Base Image",
        "org.label-schema.schema-version": "1.0",
        "org.label-schema.vendor": "CentOS"
      },
      "mounts": [
      # ...
```
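For context on what the bridge is doing under the hood: varlink exchanges NUL-terminated JSON messages, and `--bridge "ssh …"` runs the given command and speaks the protocol over its stdin/stdout. A minimal sketch of the message framing (the function names are mine, not from any podman code):

```python
import json

def encode_call(method, parameters=None):
    # Frame a varlink call: one JSON object terminated by a NUL byte.
    msg = {"method": method}
    if parameters is not None:
        msg["parameters"] = parameters
    return json.dumps(msg).encode() + b"\0"

def decode_message(buf):
    # Split the first NUL-terminated JSON message off a byte buffer,
    # returning the decoded message and the unconsumed remainder.
    raw, _, rest = buf.partition(b"\0")
    return json.loads(raw), rest

call = encode_call("io.podman.GetInfo")
reply, remainder = decode_message(b'{"parameters": {"containers": []}}\0')
```

The `ssh` in the bridge string is what makes this work remotely: the same byte stream is simply tunneled to the varlink endpoint inside the VM.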
```console
$ python -m varlink.cli --bridge "ssh 192.168.130.11" call io.podman.GetInfo
{
  "info": {
    "host": {
      "arch": "amd64",
      "buildah_version": "1.12.0-dev",
      "cpus": 4,
      "distribution": {
        "distribution": "\"rhcos\"",
        "version": "4.3"
      },
      "eventlogger": "journald",
      "hostname": "crc-m2n9t-master-0",
      "kernel": "4.18.0-147.3.1.el8_1.x86_64",
      "mem_free": 254738432,
      "mem_total": 7966154752,
      "os": "linux",
      "swap_free": 0,
      "swap_total": 0,
      "uptime": "1h 9m 48.38s (Approximately 0.04 days)"
    },
    "insecure_registries": null,
    "podman": {
      "compiler": "gc",
      "git_commit": "",
      "go_version": "go1.13.4",
      "podman_version": "1.6.4"
    },
    "registries": null,
    "store": {
      "containers": 134,
      "graph_driver_name": "overlay",
      "graph_driver_options": "map[]",
      "graph_root": "/var/lib/containers/storage",
      "graph_status": {
        "backing_filesystem": "xfs",
        "native_overlay_diff": "true",
        "supports_d_type": "true"
      },
      "images": 65,
      "run_root": "/var/run/containers/storage"
    }
  }
}
```

Some commands don't seem to work for some reason:

```console
$ python -m varlink.cli --bridge "ssh 192.168.130.11" call io.podman.Ping {}
{'parameters': {'method': 'Ping'}, 'error': 'org.varlink.service.MethodNotFound'}
$ python -m varlink.cli --bridge "ssh 192.168.130.11" call io.podman.ListImages
Connection closed
```
Directly against the socket, however, the same call works:

```console
$ /tmp/varlink call unix:/run/podman/io.podman/io.podman.ListImages|head
{
  "images": [
    {
      "containers": 0,
      "created": "2020-01-07T23:20:01Z",
      "digest": "sha256:3bada34ebed01542891c576954844afa164f087b6d8081e23f6f1724600b1f2e",
      "id": "076c2c01b0e2d22e31c9ba50b07765773a9cc211b060003cba473afe64d65f89",
      "isParent": false,
      "labels": {
        "com.coreos.ostree-commit": "2497f5d4993087b8c879e0e4faab0bfba6bc0cac131af350d0654b34a7dfcfd9",
```

Talking of things not working, I didn't manage to get `podman-remote` to work over SSH:

```console
$ /bin/podman-remote container list --remote-host 192.168.130.11
Cannot execute command-line and remote command.
Error: unexpected EOF
```

but it works fine through TCP port forwarding as described here:

```console
$ PODMAN_VARLINK_ADDRESS="tcp:127.0.0.1:1234" /bin/podman-remote container list
CONTAINER ID  IMAGE                          COMMAND     CREATED      STATUS          PORTS               NAMES
3d9d9e87f506  quay.io/crcont/dnsmasq:latest  /sbin/init  2 hours ago  Up 2 hours ago  0.0.0.0:53->53/udp  dnsmasq
```
```console
$ PODMAN_VARLINK_ADDRESS="tcp:127.0.0.1:1234" /bin/podman-remote top 3d9d9e87f506
USER   PID   PPID   %CPU    ELAPSED              TTY   TIME   COMMAND
root   1     0      0.000   2h21m50.509231061s   ?     0s     /sbin/init
root   18    1      0.047   2h21m50.509749422s   ?     4s     /usr/lib/systemd/systemd-journald
root   30    1      0.000   2h21m50.509987637s   ?     0s     /usr/lib/systemd/systemd-udevd
dbus   125   1      0.000   2h21m50.510185375s   ?     0s     /usr/bin/dbus-daemon --system --address=systemd: --nofork --nopidfile --systemd-activation
root   233   1      0.012   2h21m49.510382194s   ?     1s     /usr/sbin/dnsmasq -k
root   234   1      0.000   2h21m49.510591832s   ?     0s     /usr/lib/systemd/systemd-logind
root   235   1      0.000   2h21m49.510794531s   ?     0s     /sbin/agetty --noclear tty1 linux
```
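For anyone wanting to reproduce the TCP setup: one way to get a local `tcp:127.0.0.1:1234` endpoint is an SSH tunnel forwarding the local port to the varlink unix socket inside the VM (OpenSSH supports unix-socket forward targets since 6.7). A sketch, assuming the socket path used earlier in this thread; the `core` username is my assumption:

```python
import subprocess

VM_IP = "192.168.130.11"                 # CRC VM address used in this thread
REMOTE_SOCKET = "/run/podman/io.podman"  # varlink socket inside the VM
LOCAL_ENDPOINT = "127.0.0.1:1234"        # matches PODMAN_VARLINK_ADDRESS above

def forward_cmd(user="core"):
    # Build the ssh invocation that forwards the local TCP port to the
    # remote unix socket; -N means "do not run a remote command".
    return ["ssh", "-N", "-L", f"{LOCAL_ENDPOINT}:{REMOTE_SOCKET}",
            f"{user}@{VM_IP}"]

# To actually hold the tunnel open in the background:
# tunnel = subprocess.Popen(forward_cmd())
```

With the tunnel running, `PODMAN_VARLINK_ADDRESS="tcp:127.0.0.1:1234"` points `podman-remote` at the forwarded port.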
Last Friday @praveenkumar and I spoke about the possible strategies to expose this to the user as part of the user story. We identified the following 3 possibilities:
At the moment we decided to go for option 2, and to work towards the situation described in option 1 over time. @code-ready/crc-devel PTAL
I think we can conclude the spike. Thanks. Let's discuss this on Monday and decide about the follow-up tasks.
I like option #2, as well. It would be nice if all of the services were socket activated via systemd. |
I also like option 2; as @rhatdan says, we don't need to run the service needlessly but can instead have it socket-activated.
However, we need to make sure RHEL8 with podman and RHCOS + podman work identically. Let's take a baseline with RHEL8 to make sure that what we see is not a 'known issue'. Note, we use a userspace library/package of a slightly different version; our RHCOS is fixed/pinned to the OpenShift release. Do note that option 2 involves a larger memory footprint and slower startup time, as OpenShift consumes these resources and time.
I think we need to keep it clear that we're talking about the "crc as only a podman installer" case here (i.e. users downloading/installing CRC just for trying out podman). In the case of "podman as a side-feature of CRC", resource usage is not much of an issue, as creating/running an OpenShift cluster is the primary goal of CRC.
So with 4.3 I can see podman 1.6.4, and libvarlink is installed, but we also need to install libvarlink-utils, which is not part of default RHCOS and needs to be installed as part of disk creation, like we are doing for Hyper-V. This package shouldn't differ much from the RHEL-8 side. Below is what I followed to be able to make a connection with remote podman on macOS Catalina.
podman-remote config on the Mac but
Changes to include the …
@zeenix any feedback on the snc changes that include needed varlink packages? |
@gbraad the bundle @praveenkumar created against it works for me and Praveen. On Fedora, we both get the same error on
Good to hear. So the findings are consistent. Do they however also occur when using |
Just checked, and it seems we can use environment variables instead of a configuration file:
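As an illustration of why env vars are handy here: an override reduces to a plain environment lookup with a fallback. Only `PODMAN_VARLINK_ADDRESS` appears earlier in this thread; the default value in this sketch is my assumption:

```python
import os

def varlink_address(default="unix:/run/podman/io.podman"):
    # The environment variable wins; otherwise fall back to the default.
    # The default path here is an assumed in-VM socket, for illustration.
    return os.environ.get("PODMAN_VARLINK_ADDRESS", default)

os.environ["PODMAN_VARLINK_ADDRESS"] = "tcp:127.0.0.1:1234"
assert varlink_address() == "tcp:127.0.0.1:1234"
```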
I bumped into a few hurdles installing a RHEL8 VM to test, but I can do that now if needed.
This is easier for a command like |
Do a quick test to see whether those earlier reported commands that failed also fail (or not) on this VM. If so, this is something we have to escalate. (This helps us decide which action to take, as it might well be that the podman version in the RHCOS image isn't tested for all use cases; for the installer it only runs the initial etcd/cluster deployment.)
Closing as the spike has been concluded. Added #961 for follow-up.
Tested against RHEL8. It requires extra steps, as even podman is not installed by default and you have to enable a subscription etc. before you can install anything. Once set up, I was able to recreate the same experience, except that SSH auth was password-based (I failed to quickly enable key-based auth and I didn't want to spend more time than I already did).
Just for the record, I did all my testing against the 4.3 bundle.
Side note: any users that want to explore option 3 (a separate VM) on their own can use podman-machine for that. A totally separate env, for better and worse (mostly used as an alternative to running a local podman). Similar functionality has been available in minikube via docker-machine for a long time, and some users use it. Recent versions of minikube have now added a matching
The use case here is to enable the use of podman CLI on the host to manage containers inside the CRC VM.