
podman build continues to use runc, even if podman run is configured to use crun #8893

Closed
srcshelton opened this issue Jan 5, 2021 · 17 comments · Fixed by containers/buildah#2926
Labels: kind/bug (Categorizes issue or PR as related to a bug.) · locked - please file new issue/PR

@srcshelton (Contributor)

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

My podman installations are set to use crun, chiefly for improved cgroups2 support. This being the case, there seems little reason to have both crun and runc installed at the same time.

However, on a fresh install with only podman and crun, attempting a podman build fails at the first RUN directive (after several container-level directives such as ADD and COPY have succeeded) with an error stating that the command runc cannot be found.

/etc/containers/containers.conf specifies runtime = "crun"... is there another setting (or another tool which needs to be separately configured) for build operations?

If so, and if there is no configuration for the additional tool in any of the existing /etc/containers/*.conf files, then should podman pass through its configured runtime setting to any child tools it launches?

(e.g. should podman automatically set BUILDAH_RUNTIME to be equal to the value of the runtime configuration option in /etc/containers/containers.conf if this option is set?)

N.B. I've realised that this host is defaulting to cgroupsv1 so I need to fix that... but it feels as if this issue is still worth highlighting?

Output of podman version:

Version:      2.2.1
API Version:  2.1.0
Go Version:   go1.15.5
Git Commit:   a0d478edea7f775b7ce32f8eb1a01e75374486cb
Built:        Thu Dec 31 13:48:19 2020
OS/Arch:      linux/arm

Output of podman info --debug:

host:
  arch: arm
  buildahVersion: 1.18.0
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: Unknown
    path: /usr/bin/conmon
    version: 'conmon version 2.0.22, commit: 9c34a8663b85e479e0c083801e89a2b2835228ed'
  cpus: 4
  distribution:
    distribution: gentoo
    version: unknown
  eventLogger: file
  hostname: turnpike
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 5.4.79-v8+
  linkmode: dynamic
  memFree: 3475898368
  memTotal: 8194351104
  ociRuntime:
    name: crun
    package: Unknown
    path: /usr/bin/crun
    version: |-
      crun version 0.16
      commit: eb0145e5ad4d8207e84a327248af76663d4e50dd
      spec: 1.0.0
      +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  rootless: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 0
  swapTotal: 0
  uptime: 125h 36m 54.79s (Approximately 5.21 days)
registries:
  localhost:5000:
    Blocked: false
    Insecure: true
    Location: localhost:5000
    MirrorByDigestOnly: false
    Mirrors: []
    Prefix: localhost:5000
  search:
  - docker.io
  - docker.pkg.github.com
  - quay.io
  - public.ecr.aws
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 1
    paused: 0
    running: 1
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.ignore_chown_errors: "false"
  graphRoot: /storage/containers/podman/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 334
  runRoot: /storage/containers/podman/run
  volumePath: /storage/containers/podman/volumes
version:
  APIVersion: 2.1.0
  Built: 1609422499
  BuiltTime: Thu Dec 31 13:48:19 2020
  GitCommit: a0d478edea7f775b7ce32f8eb1a01e75374486cb
  GoVersion: go1.15.5
  OsArch: linux/arm
  Version: 2.2.1

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide?

Yes

@openshift-ci-robot openshift-ci-robot added the kind/bug Categorizes issue or PR as related to a bug. label Jan 5, 2021
@rhatdan (Member) commented Jan 5, 2021

That would be a serious bug.

@rhatdan (Member) commented Jan 5, 2021

Strange, I can't reproduce. I removed runc from my system and a build with a RUN command worked fine.

@rhatdan (Member) commented Jan 5, 2021

Do you have an example Containerfile where this is happening?

@rhatdan (Member) commented Jan 5, 2021

Do you have BUILDAH_RUNTIME set on your system?

@rhatdan (Member) commented Jan 5, 2021

This fails for me, as expected since the underlying code will not ignore the environment variables.

BUILDAH_RUNTIME=/usr/bin/runc podman build /tmp
STEP 1: FROM registry.access.redhat.com/ubi7/ubi-init
STEP 2: run echo hello 
--> Using cache afd37b9a2fa666ff828f71171769b7fb22a3fe3b587bdc08357e23ebd1176da6
--> afd37b9a2fa
afd37b9a2fa666ff828f71171769b7fb22a3fe3b587bdc08357e23ebd1176da6
sh-5.0# BUILDAH_RUNTIME=/usr/bin/runc podman build --no-cache /tmp
STEP 1: FROM registry.access.redhat.com/ubi7/ubi-init
STEP 2: run echo hello 
error running container: error creating container for [/bin/sh -c echo hello]: : fork/exec /usr/bin/runc: no such file or directory
Error: error building at STEP "RUN echo hello": error while running runtime: exit status 1

@srcshelton (Contributor, Author)

What I've done here is to check out https://github.com/srcshelton/docker-gentoo-build on a Raspberry Pi running Gentoo (32bit userland, 64bit kernel), and run gentoo-init.docker.

I also then had the same issue on an x86 system (x32 userland, 64bit kernel, also Gentoo).

At first I thought it was a wrong-architecture problem, then I noticed that the error was in regards to runc rather than crun.

As soon as the current build completes on either system, I'll attempt to reproduce and take logs.

@srcshelton (Contributor, Author)

Do you have BUILDAH_RUNTIME set on your system?

Nope - I only found out about BUILDAH_RUNTIME while writing up this issue, when checking whether buildah was expected to be configured separately!

@srcshelton (Contributor, Author)

This fails for me, as expected since the underlying code will not ignore the environment variables.

error running container: error creating container for [/bin/sh -c echo hello]: : fork/exec /usr/bin/runc: no such file or directory
Error: error building at STEP "RUN echo hello": error while running runtime: exit status 1

... although I don't think the error I saw included fork/exec - but it was solved by installing runc. I'll gather more data ASAP and report back.

@srcshelton (Contributor, Author)

Actually, tell a lie - the non-ARM system that also had this problem effectively isn't a multiarch case: it's an x32 userland on a 64bit kernel, but running a 64bit build of podman. Which appears to make it much less likely that it's a wrong-architecture issue in the first place...

@srcshelton (Contributor, Author) commented Jan 6, 2021

Here we go:

c98d7004d4aca4e21149480fae37d1d871de19a62817c84f675eea0829f5e986
STEP 1: FROM gentoo/stage3:latest AS stage3
--> c98d7004d4a
STEP 2: FROM gentoo-env:latest
STEP 3: ARG env_name
--> 1dcad63e18f
STEP 4: ARG env_id
--> 5033e838454
STEP 5: ARG stage3_image
--> 4125ca9d2e2
STEP 6: ARG stage3_id
--> e1ff6822b3c
STEP 7: LABEL envrionment_from="${env_name}:${env_id}"
--> 4411c7d003c
STEP 8: LABEL stage3_from="${stage3_image}:${stage3_id}"
--> af99f225b67
STEP 9: COPY --from=stage3 / /
--> 8090b0549c5
STEP 10: RUN test ! -e /var/db/repos || rm -r /var/db/repos
error running container: error creating container for [/bin/sh -c test ! -e /var/db/repos || rm -r /var/db/repos]: : exec: "runc": executable file not found in $PATH
Error: error building at STEP "RUN test ! -e /var/db/repos || rm -r /var/db/repos": error while running runtime: exit status 1

... this machine was working with both runc and crun installed, but produces this output once runc is removed (despite being configured to use crun).

@eriksjolund (Contributor)

I see something similar for podman that was built from the master branch a few days ago.
The following example shows that /usr/bin/runc is used for podman build but /home/erik.sjolund/bin/crun is used for podman run.

Summary

About the software

I'm trying to run a podman that is installed in my home directory on a CentOS 8.2 computer. There is also a system podman installed that I guess might interfere. Unfortunately I don't have root permissions so system changes are not easy to make.

To install Podman in my home directory, I downloaded a podman that was built from the master branch a few days ago
(filename: centos8-podman183f443a585a3659d807ee413e5b708d37a72924-conmon7bc96c75e9a8f01d243990736cbc07e06d964766-containernetworking-plugins-versionv0.9.0-go1.15.3.tar)

from the GitHub action

https://github.com/eriksjolund/build-podman/actions/runs/483013600

The binaries crun and slirp4netns were downloaded from GitHub. (Probably crun was from a GitHub Actions build, but I don't remember right now.)

I created the file files.tar that contains command outputs from the experiment.
(The file can be downloaded in ZIP format: files.tar.zip)

Creating the file files.tar

[erik.sjolund@vm ~]$ mkdir /tmp/command_outputs/
[erik.sjolund@vm ~]$ cd test-app/
[erik.sjolund@vm test-app]$ podman --log-level debug build -t test-app . > /tmp/command_outputs/podman-build-outputs.txt 2>&1
[erik.sjolund@vm test-app]$ podman --log-level debug run --rm -ti docker.io/library/alpine:latest echo hello > /tmp/command_outputs/podman-run-outputs.txt 2>&1
[erik.sjolund@vm test-app]$ which podman
~/podman/bin/podman
[erik.sjolund@vm test-app]$ podman version
WARN[0000] Found default OCIruntime /home/erik.sjolund/bin/crun path which is missing from [engine.runtimes] in containers.conf 
WARN[0000] Found default OCIruntime /home/erik.sjolund/bin/crun path which is missing from [engine.runtimes] in containers.conf 
WARN[0000] Found default OCIruntime /home/erik.sjolund/bin/crun path which is missing from [engine.runtimes] in containers.conf 
WARN[0000] Found default OCIruntime /home/erik.sjolund/bin/crun path which is missing from [engine.runtimes] in containers.conf 
WARN[0000] Found default OCIruntime /home/erik.sjolund/bin/crun path which is missing from [engine.runtimes] in containers.conf 
WARN[0000] Found default OCIruntime /home/erik.sjolund/bin/crun path which is missing from [engine.runtimes] in containers.conf 
Version:      3.0.0-dev
API Version:  3.0.0
Go Version:   go1.15.3
Git Commit:   183f443a585a3659d807ee413e5b708d37a72924
Built:        Wed Jan 13 15:49:08 2021
OS/Arch:      linux/amd64
[erik.sjolund@vm test-app]$ rpm -qf /usr/bin/podman
podman-1.6.4-10.module_el8.2.0+305+5e198a41.x86_64
[erik.sjolund@vm test-app]$ rpm -q buildah
package buildah is not installed
[erik.sjolund@vm test-app]$ rpm -qa | grep containers
containers-common-0.1.40-11.module_el8.2.0+377+92552693.x86_64
[erik.sjolund@vm test-app]$ rpm -ql containers-common-0.1.40-11.module_el8.2.0+377+92552693.x86_64
/etc/containers
/etc/containers/certs.d
/etc/containers/oci
/etc/containers/oci/hooks.d
/etc/containers/policy.json
/etc/containers/registries.conf
/etc/containers/registries.d
/etc/containers/registries.d/default.yaml
/etc/containers/storage.conf
/usr/share/containers
/usr/share/containers/mounts.conf
/usr/share/containers/seccomp.json
/usr/share/man/man5/containers-certs.d.5.gz
/usr/share/man/man5/containers-mounts.conf.5.gz
/usr/share/man/man5/containers-policy.json.5.gz
/usr/share/man/man5/containers-registries.conf.5.gz
/usr/share/man/man5/containers-registries.d.5.gz
/usr/share/man/man5/containers-signature.5.gz
/usr/share/man/man5/containers-storage.conf.5.gz
/usr/share/man/man5/containers-transports.5.gz
/usr/share/rhel/secrets
/usr/share/rhel/secrets/etc-pki-entitlement
/usr/share/rhel/secrets/redhat.repo
/usr/share/rhel/secrets/rhsm
/var/lib/containers/sigstore
[erik.sjolund@vm test-app]$ podman info --debug | grep -v hostname: > /tmp/command_outputs/podman-info--debug.txt
WARN[0000] Found default OCIruntime /home/erik.sjolund/bin/crun path which is missing from [engine.runtimes] in containers.conf 
WARN[0000] Found default OCIruntime /home/erik.sjolund/bin/crun path which is missing from [engine.runtimes] in containers.conf 
WARN[0000] Found default OCIruntime /home/erik.sjolund/bin/crun path which is missing from [engine.runtimes] in containers.conf 
WARN[0000] Found default OCIruntime /home/erik.sjolund/bin/crun path which is missing from [engine.runtimes] in containers.conf 
WARN[0000] Found default OCIruntime /home/erik.sjolund/bin/crun path which is missing from [engine.runtimes] in containers.conf 
WARN[0000] Found default OCIruntime /home/erik.sjolund/bin/crun path which is missing from [engine.runtimes] in containers.conf 
[erik.sjolund@vm test-app]$ tar cf /tmp/files.tar /etc/containers/ ~/.config/containers/ ~/test-app /tmp/command_outputs
tar: Removing leading `/' from member names
tar: Removing leading `/' from hard link targets

Analyzing the file files.tar

[esjolund@laptop ~]$ mkdir /tmp/tmpdir
[esjolund@laptop ~]$ cd /tmp/tmpdir
[esjolund@laptop tmpdir]$ 
[esjolund@laptop tmpdir]$ 
[esjolund@laptop tmpdir]$ tar xf ../files.tar 
[esjolund@laptop tmpdir]$ grep "running conmon"  ./tmp/command_outputs/podman-run-outputs.txt 
time="2021-01-17T16:22:05+01:00" level=debug msg="running conmon: /home/erik.sjolund/podman/bin/conmon" args="[--api-version 1 -c 86ff3a6b67e97e4810afb9b57b349fcdf66cdfa4ceecf00333955c29a9d77675 -u 86ff3a6b67e97e4810afb9b57b349fcdf66cdfa4ceecf00333955c29a9d77675 -r /home/erik.sjolund/bin/crun -b /home/erik.sjolund/.local/share/containers/storage/vfs-containers/86ff3a6b67e97e4810afb9b57b349fcdf66cdfa4ceecf00333955c29a9d77675/userdata -p /run/user/1626/containers/vfs-containers/86ff3a6b67e97e4810afb9b57b349fcdf66cdfa4ceecf00333955c29a9d77675/userdata/pidfile -n kind_jemison --exit-dir /run/user/1626/libpod/tmp/exits --socket-dir-path /run/user/1626/libpod/tmp/socket -l k8s-file:/home/erik.sjolund/.local/share/containers/storage/vfs-containers/86ff3a6b67e97e4810afb9b57b349fcdf66cdfa4ceecf00333955c29a9d77675/userdata/ctr.log --log-level debug --syslog -t --conmon-pidfile /run/user/1626/containers/vfs-containers/86ff3a6b67e97e4810afb9b57b349fcdf66cdfa4ceecf00333955c29a9d77675/userdata/conmon.pid --exit-command /home/erik.sjolund/podman/bin/podman --exit-command-arg --root --exit-command-arg /home/erik.sjolund/.local/share/containers/storage --exit-command-arg --runroot --exit-command-arg /run/user/1626/containers --exit-command-arg --log-level --exit-command-arg debug --exit-command-arg --cgroup-manager --exit-command-arg cgroupfs --exit-command-arg --tmpdir --exit-command-arg /run/user/1626/libpod/tmp --exit-command-arg --runtime --exit-command-arg crun --exit-command-arg --storage-driver --exit-command-arg vfs --exit-command-arg --events-backend --exit-command-arg journald --exit-command-arg --syslog --exit-command-arg container --exit-command-arg cleanup --exit-command-arg --rm --exit-command-arg 86ff3a6b67e97e4810afb9b57b349fcdf66cdfa4ceecf00333955c29a9d77675]"
[esjolund@laptop tmpdir]$ 
[esjolund@laptop tmpdir]$ grep -- --bundle ./tmp/command_outputs/podman-build-outputs.txt 
time="2021-01-17T16:21:25+01:00" level=debug msg="Running [\"/usr/bin/runc\" \"create\" \"--bundle\" \"/var/tmp/buildah778373008\" \"--pid-file\" \"/var/tmp/buildah778373008/pid\" \"--no-new-keyring\" \"buildah-buildah778373008\"]"
[esjolund@laptop tmpdir]$ 

Oops, I forgot to include

  • /usr/share/containers/libpod.conf
  • /usr/share/containers/seccomp.json
  • /usr/share/containers/mounts.conf

They should be the system default for CentOS 8.2. Anyway, here they are:

[erik.sjolund@vm ~]$ zip /tmp/system-conf.zip /usr/share/containers/libpod.conf  /usr/share/containers/seccomp.json /usr/share/containers/mounts.conf 
  adding: usr/share/containers/libpod.conf (deflated 61%)
  adding: usr/share/containers/seccomp.json (deflated 82%)
  adding: usr/share/containers/mounts.conf (deflated 14%)
[erik.sjolund@vm ~]$ 

system-conf.zip

@rhatdan (Member) commented Jan 20, 2021

The default runtime is found using this function:

func (c *EngineConfig) findRuntime() string {
	// Search for crun first followed by runc and kata
	for _, name := range []string{"crun", "runc", "kata"} {
		for _, v := range c.OCIRuntimes[name] {
			if _, err := os.Stat(v); err == nil {
				return name
			}
		}
		if path, err := exec.LookPath(name); err == nil {
			logrus.Warningf("Found default OCIruntime %s path which is missing from [engine.runtimes] in containers.conf", path)
			return name
		}
	}
	return ""
}

It should look for crun first, followed by runc and kata.

If it finds one at its configured paths or via $PATH, it will use it before falling back to the next OCI runtime.

This code is the same in podman and buildah, at least on the main branch and in podman 3.0-rc1 and buildah 1.19.2.

@srcshelton (Contributor, Author)

You'll notice that, unlike @eriksjolund's output, the logs from my use-case don't include the notice about entries missing from containers.conf (since I do have both crun and runc configured so I can swap between them).

Hmm - is that actually the problem? My containers.conf contains:

runtime = "crun"
[engine.runtimes]
runc = [
  "/usr/bin/runc"
]
crun = [
  "/usr/bin/crun"
]

... and I'm seeing exec: "runc": executable file not found in $PATH when crun is set as the runtime (as above) but runc isn't installed... is the presence of an [engine.runtimes] entry overriding the runtime configuration value?

@eriksjolund (Contributor)

Regarding the function findRuntime(), when I run podman --log-level debug build -t test-app . I see this output

time="2021-01-17T16:21:22+01:00" level=warning msg="Found default OCIruntime /home/erik.sjolund/bin/crun path which is missing from [engine.runtimes] in containers.conf"

Okay, the function returns crun in my case.
But later (during the very same podman-build execution) I see

time="2021-01-17T16:21:25+01:00" level=debug msg="Running [\"/usr/bin/runc\" \"create\" \"--bundle\" \"/var/tmp/buildah778373008\" \"--pid-file\" \"/var/tmp/buildah778373008/pid\" \"--no-new-keyring\" \"buildah-buildah778373008\"]"

A new experiment

I tried once more but this time I set the environment variable BUILDAH_RUNTIME

BUILDAH_RUNTIME=/home/erik.sjolund/bin/crun podman --log-level debug build -t test-app .

This time crun was used. I see the output

DEBU[0000] Running ["/home/erik.sjolund/bin/crun" "create" "--bundle" "/var/tmp/buildah603788197" "--pid-file" "/var/tmp/buildah603788197/pid" "--no-new-keyring" "buildah-buildah603788197"] 
DEBU[0000] Running ["/home/erik.sjolund/bin/crun" "start" "buildah-buildah603788197"] 

rhatdan added a commit to rhatdan/buildah that referenced this issue Jan 21, 2021
Currently we have a weird situation where the user sets the default
runtime in his containers.conf for podman but Buildah is still falling
back to use crun because it was hard coded as the default for Buildah.

We are changing the default to "crun" but this should ONLY be used
when the containers.conf load fails, which should never happen.

I would like to remove this default, but that would theoretically break
the API promise of Buildah.

This should fix containers/podman#8893

Signed-off-by: Daniel J Walsh <[email protected]>
@rhatdan (Member) commented Jan 21, 2021

OK, I think I found the issue in Buildah.

containers/buildah#2926 Should fix this once it is merged.

@rhatdan rhatdan self-assigned this Jan 21, 2021
rhatdan added a commit to rhatdan/buildah that referenced this issue Jan 22, 2021
Currently we have a weird situation where the user sets the default
runtime in his containers.conf for podman but Buildah is still falling
back to use runc because it was hard coded as the default for Buildah.

I would like to remove this default, but that would theoretically break
the API promise of Buildah.

This should fix containers/podman#8893

Signed-off-by: Daniel J Walsh <[email protected]>
@Asgoret commented Feb 19, 2021

@rhatdan still here)

UPD#1: Setting BUILDAH_RUNTIME helped. Reinstalling podman and installing runc didn't help.
Versions:
OS: Ubuntu 20.04 LTS
Kernel: 5.4.0-65-generic
Podman: 3.0.0

Version:      3.0.0
API Version:  3.0.0
Go Version:   go1.15.2
Built:        Thu Jan  1 03:00:00 1970
OS/Arch:      linux/amd64

UPD#2: Podman v3 is not ready for non-RH OS :) Old issues come back (e.g. #1792)

@rhatdan (Member) commented Feb 19, 2021

The Ubuntu issues on Kubic are being worked out; they should be fixed now or soon. 3.0.1 is on its way as well.

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 22, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 22, 2023