Alternative to sudo, for remote podman #6809
Also enquiring about the usual approach to run local podman; currently using passwordless sudo to do it.
@afbjorklund we talked about this just last week. Probably still looking for a solution here. What are your thoughts, or what do you favor?
I think root logins might be disabled by default, so I would need to enable them.

And then copy the authorized keys during boot, from the user to the root account:

```
mkdir /root/.ssh
chmod 700 /root/.ssh
cp /home/docker/.ssh/authorized_keys /root/.ssh/
chmod 600 /root/.ssh/authorized_keys
```

I'm not sure what the implications of chown'ing the socket would be. For dockerd I think it just uses the default settings, which uses the "docker" group.
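The boot-time key copy above could be wrapped in a small script, called from an init script or a systemd oneshot unit. A sketch (the function name and parameters are illustrative, not from the thread):

```shell
# Sketch: copy a user's authorized_keys to another account at boot,
# so that (e.g.) root accepts the same ssh key as the "docker" user.
copy_authorized_keys() {
  src_home="$1"    # e.g. /home/docker
  dst_home="$2"    # e.g. /root
  mkdir -p "$dst_home/.ssh"
  chmod 700 "$dst_home/.ssh"
  cp "$src_home/.ssh/authorized_keys" "$dst_home/.ssh/authorized_keys"
  chmod 600 "$dst_home/.ssh/authorized_keys"
}

# usage: copy_authorized_keys /home/docker /root
```

This keeps the ownership and permission handling in one place, instead of repeating the four commands in every image build.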
Would you edit the user (or group) in the systemd unit somewhere, perhaps?

```
[Socket]
ListenStream=%t/podman/podman.sock
SocketMode=0660
SocketUser=docker
SocketGroup=docker
```

https://www.freedesktop.org/software/systemd/man/systemd.socket.html
For minikube/machine, there is a user called "docker" who is part of group "docker". It's also a member of the group "wheel", which enables it to use sudo via /etc/sudoers. The group might also be called "sudo" (in Ubuntu), but it works the same way.

Currently we are using this, to run podman through passwordless sudo. It would probably have been easier to add a "podman" root group, but it seemed undesired?
Currently, the new podman-remote does not work against this setup. The previous versions (2.0.0) would give an error after trying to contact a local socket, rather than use the varlink...

So one either has to use the old podman-remote (1.9.3, since 1.8.x hangs), or we need to add support for this new socket. Unfortunately it is not included by default - or at least it is missing from Ubuntu 20.04.
I am fine with creating a podman group and adding write access to the socket, but not by default. This would have to be configured in containers.conf, with strong words about how dangerous this is. Setting up podman group access to the root-running podman is equivalent to giving password-less sudo access to root, and potentially worse. Is there something we could do as an alternative with systemd? I.e. I turn on the systemd podman.sock socket for a particular user, and it sets up permissions for just this user.
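One possible shape for that per-user idea, sketched as a systemd drop-in (the drop-in path is standard systemd convention; the user name "alice" is a placeholder, and this is not something podman sets up by itself):

```
# /etc/systemd/system/podman.socket.d/override.conf
[Socket]
SocketMode=0600
SocketUser=alice
SocketGroup=alice
```

Followed by `systemctl daemon-reload` and a restart of podman.socket, this would hand the root socket to exactly one user, with the same caveats about it being root-equivalent.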
It seems to work, if starting the socket manually and changing the owner (also on the directory, not only the socket).

But one has to add the --remote (even to podman-remote), and one has to add the path and the secure param.

And it seems like $CONTAINER_HOST stopped working, so one has to use --url "$CONTAINER_HOST" for it to work.
The non-working one is the Ubuntu package; when building from source, both are OK. That is, the binary does have the build flags and the APIv2 services do get installed...

Ubuntu:

Source:

@lsm5 : that bug in the user io.podman.service seems to also be there in user/podman.service, was it reported somewhere? https://github.com/containers/libpod/blob/v2.0.1/contrib/systemd/user/podman.service#L16
Now that podman-machine doesn't work anymore, here is how to do the set up with vagrant.

Vagrantfile:

```ruby
# -*- mode: ruby -*-
# vi: set ft=ruby :
Vagrant.configure("2") do |config|
  config.vm.box = "fedora/32-cloud-base"
  config.vm.provider "virtualbox" do |vb|
    vb.memory = "1024"
  end
  config.vm.provision "shell", inline: <<-SHELL
    yum install -y podman
    groupadd -f -r podman
    #systemctl edit podman.socket
    mkdir -p /etc/systemd/system/podman.socket.d
    cat >/etc/systemd/system/podman.socket.d/override.conf <<EOF
[Socket]
SocketMode=0660
SocketUser=root
SocketGroup=podman
EOF
    systemctl daemon-reload
    echo "d /run/podman 0770 root podman" > /etc/tmpfiles.d/podman.conf
    sudo systemd-tmpfiles --create
    systemctl enable podman.socket
    systemctl start podman.socket
    usermod -aG podman $SUDO_USER
  SHELL
end
```

This installs podman, and adds a "podman" system group with socket access (like docker). Then one can use it over ssh. Important variables from ssh_config:

Then we set the connection variables, which enables us to access it remotely (unfortunately --remote is still broken and --url is required):

```
$ podman --remote version
Version:      2.0.2
API Version:  1
Go Version:   go1.14.2
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64

$ podman-remote version
Error: Get "http://d/v1.0.0/libpod/_ping": dial unix ///run/user/1000/podman/podman.sock: connect: no such file or directory

$ podman-remote --url "$CONTAINER_HOST" --identity "$CONTAINER_SSHKEY" version
Client:
Version:      2.0.2
API Version:  1
Go Version:   go1.14.2
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64

Server:
Version:      2.0.2
API Version:  0
Go Version:   go1.14.3
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64
```

The vagrant .box is about the same size as the fedora .iso (250% boot2podman).
For linux users it is also possible to use the libvirt/kvm box instead of virtualbox. See https://vagrantcloud.com/search and https://alt.fedoraproject.org/cloud/

Full example here: https://boot2podman.github.io/2020/07/22/machine-replacement.html

It looks like the "tmpfiles.d" was the missing piece, when it came to changing the group...
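The ssh_config variables mentioned above (HostName, User, Port, IdentityFile) could also be turned into CONTAINER_HOST/CONTAINER_SSHKEY automatically. A sketch, where the awk parsing and the socket path are my assumptions, not part of the original write-up:

```shell
# Sketch: turn `vagrant ssh-config` output into podman connection variables.
# Assumes the root socket lives at /run/podman/podman.sock (as set up above).
ssh_config_to_env() {
  awk '
    /HostName/           { host = $2 }
    /^[[:space:]]*User / { user = $2 }
    /^[[:space:]]*Port / { port = $2 }
    /IdentityFile/       { key  = $2 }
    END {
      printf "export CONTAINER_HOST=ssh://%s@%s:%s/run/podman/podman.sock\n", user, host, port
      printf "export CONTAINER_SSHKEY=%s\n", key
    }'
}

# usage: eval "$(vagrant ssh-config | ssh_config_to_env)"
```

This avoids copying the host, port, and key path by hand every time the VM is recreated.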
A friendly reminder that this issue had no activity for 30 days.

@ashley-cui PTAL
For what it is worth, the systemd units are still missing in podman 2.0.6~1 as well. It only has the varlink units:

Gives error:

It does include varlink, though.

You mean they are not shipped within an RPM?

@lsm5 PTAL

@afbjorklund

Thank you, works now. This was Ubuntu 20.04.
We now have two working solutions: either connect as root@ and use the default, or change the group and use user@.

@ashley-cui :

```
$ minikube podman-env
export PODMAN_VARLINK_BRIDGE="/usr/bin/ssh -F /dev/null -o ConnectionAttempts=3
-o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet
-o PasswordAuthentication=no -o ServerAliveInterval=60 -o
StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null [email protected] -o
IdentitiesOnly=yes -i /home/anders/.minikube/machines/minikube/id_rsa -p 34749
-- sudo varlink -A \'podman varlink \\\$VARLINK_ADDRESS\' bridge"
export CONTAINER_HOST=ssh://[email protected]:34749/run/podman/podman.sock
export CONTAINER_SSHKEY=/home/anders/.minikube/machines/minikube/id_rsa
export MINIKUBE_ACTIVE_PODMAN="minikube"
# To point your shell to minikube's podman service, run:
# eval $(minikube -p minikube podman-env)
```

So you can close the ticket... But I think we will wait for 2.1.
Note that in the case of minikube we are using the sudo varlink bridge (i.e. running podman as root). There are also some missing pieces in the minikube OS for running rootless containers (mainly because it wasn't needed):

```
docker@minikube:~$ podman pull busybox
Trying to pull docker.io/library/busybox...
Getting image source signatures
Copying blob df8698476c65 done
Copying config 6858809bf6 done
Writing manifest to image destination
Storing signatures
ERRO[0004] Error while applying layer: ApplyLayer exit status 1 stdout:  stderr: there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
ApplyLayer exit status 1 stdout:  stderr: there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
Trying to pull quay.io/busybox...
error parsing HTTP 404 response body: invalid character '<' looking for beginning of value: "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 3.2 Final//EN\">\n<title>404 Not Found</title>\n<h1>Not Found</h1>\n<p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.</p>\n"
Error: unable to pull busybox: 2 errors occurred:
	* Error committing the finished image: error adding layer with blob "sha256:df8698476c65c2ee7ca0e9dbc2b1c8b1c91bce555819a9aaab724ac64241ba67": ApplyLayer exit status 1 stdout:  stderr: there might not be enough IDs available in the namespace (requested 65534:65534 for /home): lchown /home: invalid argument
	* Error initializing source docker://quay.io/busybox:latest: Error reading manifest latest in quay.io/busybox: error parsing HTTP 404 response body: invalid character '<' looking for beginning of value: "<!DOCTYPE HTML PUBLIC \"-//W3C//DTD HTML 3.2 Final//EN\">\n<title>404 Not Found</title>\n<h1>Not Found</h1>\n<p>The requested URL was not found on the server. If you entered the URL manually please check your spelling and try again.</p>\n"
```

But we do allow the user to run containers with podman, mostly to avoid them having to run two VMs (1 podman, 1 k8s).
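As an aside, the "not enough IDs available in the namespace" error above is the usual symptom of missing subordinate ID ranges for the rootless user. The common fix (the user name and range here are illustrative, not from the thread) is entries like:

```
# /etc/subuid and /etc/subgid
docker:100000:65536
```

After adding them, `podman system migrate` makes rootless podman pick up the new mapping.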
@afbjorklund looks like you've found working solutions, so I'm going to close this issue. Re-open if there's more to be done here.

@ashley-cui : yes, the only thing remaining is to actually do it (code). Should have podman support back for the next major release.
When transitioning from the sudo varlink bridge to the new REST API, is there an alternative to logging in as root with ssh?

Like adding a "podman" root-equivalent group, or starting the "podman.sock" socket as some other privileged user* perhaps.

* I think CoreOS is doing this (for the `core` user)?

Running rootless isn't the question here, it's about root. Basically wondering what to use for the CONTAINER_HOST.

Here is how they are using DOCKER_HOST (i.e. for docker):
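For comparison (a sketch, not the elided original): docker accepts the same ssh transport via DOCKER_HOST, so the remote setup looks like this, with a hypothetical host name:

```shell
# Hypothetical user/host for illustration; docker (18.09+) understands
# the ssh:// scheme in DOCKER_HOST and tunnels the API over ssh.
export DOCKER_HOST=ssh://docker@example.invalid
# docker version   # would now talk to the remote daemon over ssh
```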