
[nix] Cleanup nix derivation for static builds #6402

Merged

Conversation

@openshift-ci-robot
Collaborator

Hi @hswong3i. Thanks for your PR.

I'm waiting for a containers member to verify that this patch is reasonable to test. If it is, they should reply with /ok-to-test on its own line. Until that is done, I will not automatically test new commits in this PR, but the usual testing commands by org members will still work. Regular contributors should join the org to skip this step.

Once the patch is verified, the new status will be reflected by the ok-to-test label.

I understand the commands that are listed here.

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@openshift-ci-robot openshift-ci-robot added the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label May 27, 2020
@hswong3i hswong3i force-pushed the master-linux-amd64 branch 5 times, most recently from 6df651c to 8aa554c Compare May 27, 2020 15:52
@hswong3i hswong3i changed the title Cleanup nix derivation for static builds [master] Cleanup nix derivation for static builds May 27, 2020
@hswong3i hswong3i force-pushed the master-linux-amd64 branch from 8aa554c to ae906a7 Compare May 27, 2020 16:03
@rhatdan
Member

rhatdan commented May 27, 2020

@saschagrunert PTAL
Since we don't really know much about .nix static builds, I will rely on you guys to get this working.

Review comments on nix/default.nix (resolved; some outdated)
@hswong3i hswong3i force-pushed the master-linux-amd64 branch from ae906a7 to 2612f38 Compare May 28, 2020 00:08
@hswong3i
Collaborator Author

hswong3i commented May 28, 2020

@saschagrunert if I run make nixpkgs to update nix/nixpkgs.json, e.g. as below (OK, I copied that from crun/conmon):

diff --git a/nix/nixpkgs.json b/nix/nixpkgs.json
index fbc774373..84df2d61e 100644
--- a/nix/nixpkgs.json
+++ b/nix/nixpkgs.json
@@ -1,8 +1,9 @@
 {
   "url": "https://github.com/nixos/nixpkgs",
-  "rev": "a08d4f605bca62c282ce9955d5ddf7d824e89809",
-  "date": "2020-03-20T10:10:15+01:00",
-  "sha256": "1bniq08dlmrmrz4aga1cj0d7rqbaq9xapm5ar15wdv2c6431z2m8",
+  "rev": "1b5925f2189dc9b4ebf7168252bf89a94b7405ba",
+  "date": "2020-05-27T15:03:28+02:00",
+  "path": "/nix/store/qdsrj7hw9wzzng9l2kfbsyi9ynprrn6p-nixpkgs",
+  "sha256": "0q9plknr294k4bjfqvgvp5vglfby5yn64k6ml0gqwi0dwf0qi6fv",
   "fetchSubmodules": false,
   "deepClone": false,
   "leaveDotGit": false

After nix build -f nix/ the resulting binary always comes out as v1.9.3...

$ ./result/bin/podman --version
podman version 1.9.3

Could that be due to the upstream nixpkgs template being updated? I manually traced the changes to podman/default.nix (https://github.com/NixOS/nixpkgs/commits/master/pkgs/applications/virtualization/podman/default.nix) but have no idea...
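For anyone tracing this later: the version the static build produces follows the revision pinned in nix/nixpkgs.json, not the checked-out branch, since the build fetches the podman expression from that exact nixpkgs snapshot. A minimal sketch of inspecting the pin (using an inline copy of the file for illustration; the sed pattern assumes the JSON layout shown in the diff above):

```shell
# Write an illustrative copy of nix/nixpkgs.json (values taken from the diff above).
cat > /tmp/nixpkgs.json <<'EOF'
{
  "url": "https://github.com/nixos/nixpkgs",
  "rev": "1b5925f2189dc9b4ebf7168252bf89a94b7405ba",
  "sha256": "0q9plknr294k4bjfqvgvp5vglfby5yn64k6ml0gqwi0dwf0qi6fv"
}
EOF
# Extract the pinned revision; nix build fetches this exact nixpkgs commit,
# so the podman derivation (and its version) comes from that snapshot.
rev=$(sed -n 's/.*"rev": "\([0-9a-f]*\)".*/\1/p' /tmp/nixpkgs.json)
echo "pinned nixpkgs revision: $rev"
```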

@saschagrunert
Member

@saschagrunert if I run make nixpkgs to update nix/nixpkgs.json, e.g. as below (OK, I copied that from crun/conmon):

If we update the nixpkgs here then we also have to ensure that the remote container image (for CI purposes) on quay.io is up-to-date. See
https://github.com/containers/libpod/blob/adca437d03bc74edcf3ef9b60ea55360157f893c/Makefile#L227-L232

@hswong3i
Collaborator Author

If we update the nixpkgs here then we also have to ensure that the remote container image (for CI purposes) on quay.io is up-to-date. See

https://github.com/containers/libpod/blob/adca437d03bc74edcf3ef9b60ea55360157f893c/Makefile#L227-L232

That pulls in too many dependencies and looks like overkill for this PR; let's try that again later ;-)

Member

@saschagrunert saschagrunert left a comment


Ah, I think we still have to update the image to make the CI happy :)

@hswong3i
Collaborator Author

Ah, I think we still have to update the image to make the CI happy :)

Oh dear @saschagrunert, please share some hints O_O||

@saschagrunert
Member

@hswong3i can you set NIX_IMAGE ?= quay.io/podman/nix-podman:1.1.0 and run make nix-image to verify it works?

https://github.com/containers/libpod/blob/adca437d03bc74edcf3ef9b60ea55360157f893c/Makefile#L227-L232

Can someone invite me to https://quay.io/organization/podman? I think @TomSweeneyRedHat helped me the last time pushing the image to the right location. 😇
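As an aside on the NIX_IMAGE ?= syntax: ?= only assigns a default, so the value can be overridden on the make command line. A throwaway demonstration of that mechanic (not the real libpod Makefile):

```shell
# Create a minimal Makefile mirroring the NIX_IMAGE ?= pattern from the libpod Makefile.
printf 'NIX_IMAGE ?= quay.io/podman/nix-podman:1.0.0\nshow:\n\t@echo $(NIX_IMAGE)\n' > /tmp/Makefile.demo
# Without an override, the ?= default wins; a command-line assignment beats it.
default=$(make -sf /tmp/Makefile.demo show)
override=$(make -sf /tmp/Makefile.demo show NIX_IMAGE=quay.io/podman/nix-podman:1.1.0)
echo "default:  $default"
echo "override: $override"
```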

@hswong3i hswong3i force-pushed the master-linux-amd64 branch from 2612f38 to 4954f15 Compare May 28, 2020 09:15
@hswong3i
Collaborator Author

@hswong3i can you set NIX_IMAGE ?= quay.io/podman/nix-podman:1.1.0 and run make nix-image to verify it works?

https://github.com/containers/libpod/blob/adca437d03bc74edcf3ef9b60ea55360157f893c/Makefile#L227-L232

Can someone invite me to https://quay.io/organization/podman? I think @TomSweeneyRedHat helped me the last time pushing the image to the right location. 😇

OK, make nix-image was successful:

checking for references to /tmp/nix-build-podman-static.drv-0/ in /nix/store/61qm573j28xriipwrhpy1wc6g4zwrq2c-podman-static-man...
/nix/store/xzj9mvy3c2h1p8sy0cb66100qy477jvf-podman-static-bin
Removing intermediate container 895f98181f35
 ---> c03a360a150c
Step 6/7 : WORKDIR /
 ---> Running in b263d2989d13
Removing intermediate container b263d2989d13
 ---> b2d4f64185d5
Step 7/7 : RUN rm -rf work
 ---> Running in 0955495ae273
Removing intermediate container 0955495ae273
 ---> 55ef9814af04
Successfully built 55ef9814af04
Successfully tagged quay.io/podman/nix-podman:1.1.0

hswong3i@hswong3i-XPS-13-7390:~/Documents/alvistack/_fork/libpod$ docker image list quay.io/podman/nix-podman
REPOSITORY                  TAG                 IMAGE ID            CREATED             SIZE
quay.io/podman/nix-podman   1.1.0               55ef9814af04        19 seconds ago      2.38GB

@hswong3i hswong3i force-pushed the master-linux-amd64 branch 2 times, most recently from 7b3c01d to 67c87f1 Compare May 30, 2020 11:40
@hswong3i hswong3i force-pushed the master-linux-amd64 branch from 8197ea6 to 7f116a6 Compare July 17, 2020 11:18
Signed-off-by: Wong Hoi Sing Edison <[email protected]>
@hswong3i hswong3i force-pushed the master-linux-amd64 branch from 7f116a6 to f53812a Compare July 18, 2020 01:03
@rhatdan
Member

rhatdan commented Jul 18, 2020

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Jul 18, 2020
@openshift-merge-robot openshift-merge-robot merged commit d087ade into containers:master Jul 18, 2020
@rubensa

rubensa commented Sep 23, 2020

Now that crun/conmon/skopeo/buildah/podman are statically built (and available in the https://github.com/alvistack/ repositories)...

Is there any guide on how to "manually install" a full static-binary podman distribution?
Something like:

  • Placing (or linking) the downloaded binaries in the right directory (/usr/bin, /usr/local/bin, $HOME/.local/bin, ...)
  • Setting up the configuration files (containers.conf, storage.conf, registries.conf, auth.conf, ...) and placing them in the right directory (/etc/containers, $HOME/.config/containers, ...)

My objective here is to manually install and configure rootless podman the same way I already install and configure docker, keeping the process as non-invasive for the system as possible.

@rhatdan
Member

rhatdan commented Sep 23, 2020

For rootless podman you should just need to install the executable in your home directory, but you will need other programs like fuse-overlayfs and crun or runc installed.
Finally, you would probably want a registries.conf in your home dir, under ~/.config/containers/registries.conf.

I have never tried this, but I believe it will work. Try it and document what you find.

@rubensa

rubensa commented Sep 23, 2020

@rhatdan First of all, thank you for your comment.

I've created the ~/.config/containers/registries.conf file (with only the docker.io registry):

[registries.search]
registries = ['docker.io']

[registries.insecure]
registries = []

[registries.block]
registries = []

I already have runc installed on my Ubuntu 20.04.1 (but I would like to use the statically compiled crun available at https://github.com/alvistack/):

$ runc --version
runc version 1.0.0-rc10
commit: dc9208a3303feef5b3839f4323d9beb36df0a9dd
spec: 1.0.1-dev

But more things are needed (and it looks like they can't simply be placed in ~/.local/bin)...

./podman-v2.1.0-linux-amd64 run hello-world
Error: could not find a working conmon binary (configured options: [/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon]): invalid argument
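One alternative to symlinking conmon into a system directory would be pointing podman at the binary from containers.conf. This is only a sketch based on containers.conf(5): the [engine] conmon_path key and the path below are assumptions, not something tried in this thread.

```toml
# ~/.config/containers/containers.conf (sketch; path is illustrative)
[engine]
conmon_path = ["/home/user/.local/bin/conmon"]
```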

Then I tried with

sudo ln -s $PWD/conmon-v2.0.21-linux-amd64 /usr/local/bin/conmon

But now, if I run it again

./podman-v2.1.0-linux-amd64 run hello-world

The process stalls (it never ends) and nothing happens... but I can see that two podman processes are running...

$ ps -af
UID          PID    PPID  C STIME TTY          TIME CMD
root        1007    1005  0 07:16 tty1     00:00:01 /usr/lib/xorg/Xorg vt1 -displayfd 3 -auth /run/user/125/gdm/Xauthority -background none -noreset -keeptty -verbose 3
gdm         1733    1005  0 07:16 tty1     00:00:00 /usr/libexec/gnome-session-binary --systemd --autostart /usr/share/gdm/greeter/autostart
root        3501    3499  6 07:16 tty2     00:37:03 /usr/lib/xorg/Xorg vt2 -displayfd 3 -auth /run/user/1000/gdm/Xauthority -background none -noreset -keeptty -verbose 3
rubensa     3856    3499  0 07:16 tty2     00:00:00 /usr/libexec/gnome-session-binary --systemd --systemd --session=ubuntu
rubensa     5429    4940  0 07:16 pts/0    00:00:00 /bin/bash -l
rubensa     5432    4896  0 07:16 pts/0    00:00:00 /bin/bash -l
rubensa   215334  160869  0 17:23 pts/0    00:00:00 ./podman-v2.1.0-linux-amd64 run hello-world
rubensa   215347  215334  0 17:23 pts/0    00:00:00 ./podman-v2.1.0-linux-amd64 run hello-world
rubensa   217138  210646  0 17:29 pts/1    00:00:00 ps -af

And nothing is created under ~/.config/containers (it looks like no image is downloaded at all).

@saschagrunert
Member

I think a static bundle like the one we provide in CRI-O would make sense. Just untar, make install, and you're good to go.

@rubensa

rubensa commented Sep 23, 2020

With the previous commands I noticed that bolt_state.db was created under ~/.local/share/containers/storage/libpod.

To try a bit more, I downloaded the static binaries for fuse-overlayfs-x86_64-1.1.2. Can I create a storage.conf file under ~/.config/containers/ that specifies mount_program = "/path/to/fuse-overlayfs-x86_64-1.1.2" and expect it to be used?

[storage]
  driver = "overlay"
[storage.options]
  mount_program = "/path/to/fuse-overlayfs-x86_64-1.1.2"
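For reference, storage.conf generally expects mount_program to be an absolute path to the helper binary. A sketch of the usual rootless overlay setup (the path below is illustrative):

```toml
[storage]
driver = "overlay"

[storage.options]
mount_program = "/usr/local/bin/fuse-overlayfs"
```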

@rubensa

rubensa commented Sep 23, 2020

Umm... it looks like it is used... but now I have to figure out how to configure it...

$ ./podman-v2.1.0-linux-amd64 run hello-world
ERRO[0000] User-selected graph driver "overlay" overwritten by graph driver "vfs" from database - delete libpod local files to resolve 
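One way past this error (my suggestion, not from the thread): the "vfs" choice is remembered in the libpod database, so the rootless storage state has to be cleared before the overlay driver can take effect. A sketch that only prints the paths; uncomment the rm line to actually reset (destructive: it removes all rootless images and containers). Paths are the rootless defaults shown in the debug logs later in this thread.

```shell
# Default rootless storage and run-root locations.
storage="${XDG_DATA_HOME:-$HOME/.local/share}/containers/storage"
runroot="/run/user/$(id -u)/containers"
echo "would remove: $storage"
echo "would remove: $runroot"
# rm -rf "$storage" "$runroot"   # uncomment to really reset rootless state
```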

@rubensa

rubensa commented Sep 23, 2020

Tried with a new containers.conf under ~/.config/containers/ with

runtime = "/path/to/crun-0.14.1-linux-amd64"

but I get the same error message as before...

If I remove the following lines from storage.conf (keeping only the storage.options section)

[storage]
  driver = "overlay"

the process (hello-world) gets stuck again and nothing happens.

@rhatdan
Member

rhatdan commented Sep 23, 2020

Don't put anything in ~/.local/share/containers. Podman will create this content on first run.

Podman searches for fuse-overlayfs and, if it finds it, will set up overlayfs. If it is in the user's executable path it should get picked up.
In github.com/containers/storage:
./utils.go: if path, err := exec.LookPath("fuse-overlayfs"); err == nil {

Then Podman will take care of everything else.
You could pre-create the storage.conf.
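The exec.LookPath call quoted above is just a $PATH lookup. A shell analogue of the same check (self-contained: it stages a stub fuse-overlayfs in a temporary directory so the resolution is deterministic):

```shell
# Stage a stub binary so the lookup succeeds deterministically.
bindir=$(mktemp -d)
printf '#!/bin/sh\n' > "$bindir/fuse-overlayfs"
chmod +x "$bindir/fuse-overlayfs"
PATH="$bindir:$PATH"
# Mirrors exec.LookPath("fuse-overlayfs"): search $PATH for an executable.
found=$(command -v fuse-overlayfs)
echo "fuse-overlayfs resolved to: $found"
```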

@rhatdan
Member

rhatdan commented Sep 23, 2020

containers.conf pointing at crun would work also.

@rubensa

rubensa commented Sep 24, 2020

@rhatdan Thanks for the info.
I didn't create anything under ~/.local/share/containers. As you say, the folder was created with a storage/libpod/bolt_state.db file in it.

I tried again like this:

$ sudo ln -s /software/podman-v2.1.0-linux-amd64 /usr/local/bin/podman
$ sudo ln -s /software/fuse-overlayfs-x86_64-1.1.2 /usr/local/bin/fuse-overlayfs
$ sudo ln -s /software/crun-0.14.1-linux-amd64 /usr/local/bin/crun
$ sudo ln -s /software/conmon-v2.0.21-linux-amd64 /usr/local/bin/conmon

With ~/.config/containers/containers.conf

runtime = "crun"
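A note that may matter here: containers.conf is sectioned TOML, and podman 2.x reads runtime from the [engine] table, so a bare top-level runtime key may be silently ignored. A sketch based on containers.conf(5), not something verified in this thread:

```toml
# ~/.config/containers/containers.conf (sketch)
[engine]
runtime = "crun"
```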

~/.config/containers/registries.conf

[registries.search]
registries = ['docker.io']

[registries.insecure]
registries = []

[registries.block]
registries = []

~/.config/containers/storage.conf

[storage.options]
  mount_program = "fuse-overlayfs"

Running

$ podman run hello-world

The cursor keeps blinking and nothing happens...

Is there any way to debug what is happening in the stalled process?

  11841 pts/1    Ss     0:00 bash
  12179 pts/1    Sl+    0:00 podman run hello-world
  12191 pts/1    S+     0:00 podman run hello-world

@rubensa

rubensa commented Sep 24, 2020

Found the flag to debug...

$ podman run --log-level=debug hello-world
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called run.PersistentPreRunE(podman run --log-level=debug hello-world) 
DEBU[0000] Reading configuration file "/home/rubensa/.config/containers/containers.conf" 
DEBU[0000] Merged system config "/home/rubensa/.config/containers/containers.conf": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.22.0 Annotations:[] CgroupNS:host Cgroups:enabled DefaultCapabilities:[CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] DefaultSysctls:[] DefaultUlimits:[] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableLabeling:false Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:false Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:slirp4netns NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{CgroupCheck:false CgroupManager:cgroupfs ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/user/1000/libpod/tmp/events/events.log EventsLogger:file HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand:/pause InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NoPivotRoot:false NumLocks:2048 OCIRuntime:runc OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] 
runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/home/rubensa/.config/containers/policy.json SDNotify:false StateType:3 StaticDir:/home/rubensa/.local/share/containers/storage/libpod StopTimeout:10 TmpDir:/run/user/1000/libpod/tmp VolumePath:/home/rubensa/.local/share/containers/storage/volumes} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/home/rubensa/.config/cni/net.d}} 
DEBU[0000] Using conmon: "/usr/local/bin/conmon"        
DEBU[0000] Initializing boltdb state at /home/rubensa/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver                           
DEBU[0000] Using graph root /home/rubensa/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000/containers     
DEBU[0000] Using static dir /home/rubensa/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /home/rubensa/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] Not configuring container store              
DEBU[0000] Initializing event backend file              
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/local/bin/crun"          
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument 

Need to check the error...

@rubensa

rubensa commented Sep 24, 2020

Tried with

sudo ln -s /software/slirp4netns-x86_64-1.1.4 /usr/local/bin/slirp4netns

but the same problem persists:

$ podman run --log-level=debug --net=slirp4netns hello-world
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called run.PersistentPreRunE(podman run --log-level=debug --net=slirp4netns hello-world) 
DEBU[0000] Reading configuration file "/home/rubensa/.config/containers/containers.conf" 
DEBU[0000] Merged system config "/home/rubensa/.config/containers/containers.conf": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.22.0 Annotations:[] CgroupNS:host Cgroups:enabled DefaultCapabilities:[CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] DefaultSysctls:[] DefaultUlimits:[] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableLabeling:false Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:false Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:slirp4netns NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{CgroupCheck:false CgroupManager:cgroupfs ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/user/1000/libpod/tmp/events/events.log EventsLogger:file HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand:/pause InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NoPivotRoot:false NumLocks:2048 OCIRuntime:runc OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] 
runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/home/rubensa/.config/containers/policy.json SDNotify:false StateType:3 StaticDir:/home/rubensa/.local/share/containers/storage/libpod StopTimeout:10 TmpDir:/run/user/1000/libpod/tmp VolumePath:/home/rubensa/.local/share/containers/storage/volumes} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/home/rubensa/.config/cni/net.d}} 
DEBU[0000] Using conmon: "/usr/local/bin/conmon"        
DEBU[0000] Initializing boltdb state at /home/rubensa/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver                           
DEBU[0000] Using graph root /home/rubensa/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000/containers     
DEBU[0000] Using static dir /home/rubensa/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /home/rubensa/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] Not configuring container store              
DEBU[0000] Initializing event backend file              
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/local/bin/crun"          
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument 

For the record: my ~/.config/containers is a symlink, but I don't think that can cause any problem.

@rubensa

rubensa commented Sep 24, 2020

Tried creating ~/.config/containers/policy.json

{
    "default": [
        {
            "type": "insecureAcceptAnything"
        }
    ],
    "transports":
        {
            "docker-daemon":
                {
                    "": [{"type":"insecureAcceptAnything"}]
                }
        }
}

Same result.

Tried creating ~/.config/cni/net.d/87-podman-bridge.conf

{
  "cniVersion": "0.4.0",
  "name": "podman",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni-podman0",
      "isGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "routes": [{ "dst": "0.0.0.0/0" }],
        "ranges": [
          [
            {
              "subnet": "10.88.0.0/16",
              "gateway": "10.88.0.1"
            }
          ]
        ]
      }
    },
    {
      "type": "portmap",
      "capabilities": {
        "portMappings": true
      }
    },
    {
      "type": "firewall"
    },
    {
      "type": "tuning"
    }
  ]
}

Same result.

Tried with (although it looks like cni-plugins are not used in rootless mode; see #2174 (comment)):

$ tar xvfz cni-plugins-linux-amd64-v0.8.7.tgz

in the folder ~/.config/cni, and setting ~/.config/containers/containers.conf to

runtime = "crun"
cni_config_dir = "/home/rubensa/.config/cni/net.d"
cni_plugin_dir = "/home/rubensa/.config/cni"

Same result

INFO[0000] podman filtering at log level debug          
DEBU[0000] Called run.PersistentPreRunE(podman run --log-level=debug hello-world) 
DEBU[0000] Reading configuration file "/home/rubensa/.config/containers/containers.conf" 
DEBU[0000] Merged system config "/home/rubensa/.config/containers/containers.conf": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.22.0 Annotations:[] CgroupNS:host Cgroups:enabled DefaultCapabilities:[CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] DefaultSysctls:[] DefaultUlimits:[] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableLabeling:false Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:false Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:slirp4netns NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{CgroupCheck:false CgroupManager:cgroupfs ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/user/1000/libpod/tmp/events/events.log EventsLogger:file HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand:/pause InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NoPivotRoot:false NumLocks:2048 OCIRuntime:runc OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] 
runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/home/rubensa/.config/containers/policy.json SDNotify:false StateType:3 StaticDir:/home/rubensa/.local/share/containers/storage/libpod StopTimeout:10 TmpDir:/run/user/1000/libpod/tmp VolumePath:/home/rubensa/.local/share/containers/storage/volumes} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/home/rubensa/.config/cni/net.d}} 
DEBU[0000] Using conmon: "/usr/local/bin/conmon"        
DEBU[0000] Initializing boltdb state at /home/rubensa/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver                           
DEBU[0000] Using graph root /home/rubensa/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000/containers     
DEBU[0000] Using static dir /home/rubensa/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /home/rubensa/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] Not configuring container store              
DEBU[0000] Initializing event backend file              
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/local/bin/crun"          
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument 
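For what it's worth, in containers.conf the CNI directories normally live under the [network] table rather than as top-level keys. A sketch using the key names from containers.conf(5) (treat them as assumptions for this podman version):

```toml
[engine]
runtime = "crun"

[network]
network_config_dir = "/home/rubensa/.config/cni/net.d"
cni_plugin_dirs = ["/home/rubensa/.config/cni"]
```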


@rubensa

rubensa commented Sep 24, 2020

Re-checking the docs, it looks like driver = "overlay" is required for rootless, so I changed storage.conf (again) to:

[storage]
driver = "overlay"
[storage.options]
mount_program = "fuse-overlayfs"

Now it looks like one more step runs (but then, again, it stalls):

$ podman run --log-level=debug hello-world
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called run.PersistentPreRunE(podman run --log-level=debug hello-world) 
DEBU[0000] Reading configuration file "/home/rubensa/.config/containers/containers.conf" 
DEBU[0000] Merged system config "/home/rubensa/.config/containers/containers.conf": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.22.0 Annotations:[] CgroupNS:host Cgroups:enabled DefaultCapabilities:[CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] DefaultSysctls:[] DefaultUlimits:[] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableLabeling:false Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:false Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:slirp4netns NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{CgroupCheck:false CgroupManager:cgroupfs ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/user/1000/libpod/tmp/events/events.log EventsLogger:file HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand:/pause InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NoPivotRoot:false NumLocks:2048 OCIRuntime:runc OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] 
runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/home/rubensa/.config/containers/policy.json SDNotify:false StateType:3 StaticDir:/home/rubensa/.local/share/containers/storage/libpod StopTimeout:10 TmpDir:/run/user/1000/libpod/tmp VolumePath:/home/rubensa/.local/share/containers/storage/volumes} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/home/rubensa/.config/cni/net.d}} 
DEBU[0000] Using conmon: "/usr/local/bin/conmon"        
DEBU[0000] Initializing boltdb state at /home/rubensa/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/rubensa/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/1000/containers     
DEBU[0000] Using static dir /home/rubensa/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/1000/libpod/tmp      
DEBU[0000] Using volume path /home/rubensa/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] Not configuring container store              
DEBU[0000] Initializing event backend file              
DEBU[0000] using runtime "/usr/local/bin/crun"          
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] using runtime "/usr/bin/runc"                

@rubensa

rubensa commented Sep 24, 2020

Umm... one more note: only the order of the messages changed, from:

DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/local/bin/crun"          
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument 

to:

DEBU[0000] using runtime "/usr/local/bin/crun"          
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument 
DEBU[0000] using runtime "/usr/bin/runc"                

@rubensa

rubensa commented Oct 1, 2020

Looking at Rootless containers with Podman: The basics, it seems that I only need:

But this is not enough, as podman itself needs extra utilities...

As suggested by @rhatdan, these extras are needed:

But it looks like this is also needed:

Do I need any of skopeo, cri-o, or buildah to run podman (I think not, but I'm not sure)?

But this is not enough, as podman needs some configuration:

  • ~/.config/containers/policy.json
{
    "default": [
        {
            "type": "insecureAcceptAnything"
        }
    ],
    "transports":
        {
            "docker-daemon":
                {
                    "": [{"type":"insecureAcceptAnything"}]
                }
        }
}
  • ~/.config/containers/registries.conf
[registries.search]
registries = ['docker.io']

[registries.insecure]
registries = []

[registries.block]
registries = []
  • ~/.config/containers/storage.conf
[storage]
driver = "overlay"
[storage.options]
mount_program = "fuse-overlayfs"
  • ~/.config/containers/containers.conf
# the "runtime" key belongs under the [engine] table
[engine]
runtime = "crun"
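The four files above can be laid down with a short script. This is a minimal sketch, assuming podman's default per-user config directory (CONF_DIR is overridable); it writes only the two short files, storage.conf and containers.conf.

```shell
# Sketch: create the rootless config directory and write the two
# short files from the list above. CONF_DIR defaults to podman's
# per-user location; override it for testing.
CONF_DIR="${CONF_DIR:-$HOME/.config/containers}"
mkdir -p "$CONF_DIR"

cat > "$CONF_DIR/storage.conf" <<'EOF'
[storage]
driver = "overlay"

[storage.options]
mount_program = "fuse-overlayfs"
EOF

cat > "$CONF_DIR/containers.conf" <<'EOF'
[engine]
runtime = "crun"
EOF

echo "wrote configs to $CONF_DIR"
```

After writing these, `podman info --debug` should show the merged values, as in the log at the top of this thread.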

But, unfortunately, after all this I can't get rootless podman working using static binaries... :(

PS: I'm on Ubuntu 20.04.1; I checked the /etc/subuid and /etc/subgid configuration and /proc/sys/user/max_user_namespaces, and they look OK.
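The subuid/subgid check mentioned above can be scripted: rootless podman needs a user:start:count line with a sufficiently large count for its default user-namespace mapping. A sketch follows; it runs against a sample file at a hypothetical path (/tmp/subuid.sample) rather than the real /etc/subuid, and "rubensa" is just the username from the logs above.

```shell
# Succeed only if the given file has a user:start:count entry for the
# user with count >= 65536 (the range rootless podman maps by default).
check_subids() {
  user="$1"; file="$2"
  awk -F: -v u="$user" '$1 == u && $3 >= 65536 { found = 1 } END { exit !found }' "$file"
}

# Demo against a sample file (use /etc/subuid and /etc/subgid for real):
printf 'rubensa:100000:65536\n' > /tmp/subuid.sample
if check_subids rubensa /tmp/subuid.sample; then
  echo "subuid range OK"
else
  echo "no usable subuid range"
fi
```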

Any ideas on this?

@rhatdan
Member

rhatdan commented Oct 1, 2020

If I execute the following on Fedora, this is what I see:

$ rpm -q podman --requires
/bin/sh
config(podman) = 2:2.1.1-2.fc33
conmon >= 2:2.0.16-1
containernetworking-plugins >= 0.8.6-1
containers-common >= 1.1.1-9
iptables
libassuan.so.0()(64bit)
libc.so.6()(64bit)
libc.so.6(GLIBC_2.14)(64bit)
libc.so.6(GLIBC_2.2.5)(64bit)
libc.so.6(GLIBC_2.32)(64bit)
libc.so.6(GLIBC_2.4)(64bit)
libdl.so.2()(64bit)
libdl.so.2(GLIBC_2.2.5)(64bit)
libgpg-error.so.0()(64bit)
libgpgme.so.11()(64bit)
libgpgme.so.11(GPGME_1.0)(64bit)
libgpgme.so.11(GPGME_1.1)(64bit)
libpthread.so.0()(64bit)
libpthread.so.0(GLIBC_2.12)(64bit)
libpthread.so.0(GLIBC_2.2.5)(64bit)
libpthread.so.0(GLIBC_2.3.2)(64bit)
librt.so.1()(64bit)
librt.so.1(GLIBC_2.2.5)(64bit)
libseccomp.so.2()(64bit)
nftables
oci-runtime
rpmlib(CompressedFileNames) <= 3.0.4-1
rpmlib(FileDigests) <= 4.6.0-1
rpmlib(PayloadFilesHavePrefix) <= 4.0-1
rpmlib(PayloadIsZstd) <= 5.4.18-1
rtld(GNU_HASH)

Out of these you will definitely need conmon, and potentially some of the configuration files specified in containers-common.

But to run rootless we will also need the --recommends packages:

$ rpm -q podman --recommends
catatonit
container-selinux
crun >= 0.14-2
fuse-overlayfs >= 0.3-8
podman-plugins = 2:2.1.1-2.fc33
runc
slirp4netns >= 0.3.0-2

Out of these, you really just need fuse-overlayfs, slirp4netns, and crun (or runc)
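A quick way to see which of these helpers are already installed is a loop over `command -v`; a sketch, with the binary names taken from the minimal set listed above.

```shell
# Report which of podman's runtime helpers are on PATH: conmon plus
# the rootless helpers (fuse-overlayfs, slirp4netns, crun or runc).
have() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "found $1"
  else
    echo "missing $1"
  fi
}

for bin in conmon fuse-overlayfs slirp4netns crun runc; do
  have "$bin"
done
```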

@rhatdan
Member

rhatdan commented Oct 1, 2020

@QiWang19 @ashley-cui Might be a good blog to write. What does podman need to run successfully.

@QiWang19
Contributor

QiWang19 commented Oct 1, 2020

@QiWang19 @ashley-cui Might be a good blog to write. What does podman need to run successfully.

Yes, sounds good, let's draft one.

@QiWang19
Contributor

@rubensa you may need some of the required dependencies from the installation notes, https://podman.io/getting-started/installation#build-and-run-dependencies. On my side, sudo apt install uidmap made it work. I was using Ubuntu 20.04.
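The uidmap package provides newuidmap and newgidmap, which rootless podman invokes to set up its user namespace; besides being installed, they normally need the setuid bit (or equivalent file capabilities). A small check, sketched here as a helper function:

```shell
# For each tool, report whether it is on PATH and setuid. Missing or
# non-setuid newuidmap/newgidmap is a common rootless-podman failure.
check_tool() {
  path=$(command -v "$1" 2>/dev/null) || { echo "missing $1"; return; }
  if [ -u "$path" ]; then
    echo "$1 OK"
  else
    echo "$1 present but not setuid"
  fi
}

check_tool newuidmap
check_tool newgidmap
```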

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 24, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 24, 2023
Labels
approved Indicates a PR has been approved by an approver from all required OWNERS files. lgtm Indicates that a PR is ready to be merged. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test.
Successfully merging this pull request may close these issues.

Static Binary for Github Release?
9 participants