
All docker-compose volumes fail with: "Error response from daemon: fill out specgen: /data: duplicate mount destination" #11822

Closed
nyonson opened this issue Oct 1, 2021 · 29 comments · Fixed by #13540
Labels
kind/bug Categorizes issue or PR as related to a bug. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@nyonson

nyonson commented Oct 1, 2021

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Any docker-compose file that declares a volume fails.

Steps to reproduce the issue:

  1. Declare a volume in docker-compose.yaml

  2. Run docker-compose up

Describe the results you received:

Error response from daemon: fill out specgen: /data: duplicate mount destination (the /data path varies depending on the volume mount destinations declared in the compose file).

Describe the results you expected:

Volume created and containers set up.

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Version:      3.4.0
API Version:  3.4.0
Go Version:   go1.17.1
Git Commit:   6e8de00bb224f9931d7402648f0177e7357ed079
Built:        Fri Oct  1 11:14:18 2021
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.23.1
  cgroupControllers: []
  cgroupManager: systemd
  cgroupVersion: v2
  conmon:
    package: /usr/bin/conmon is owned by conmon 1:2.0.30-1
    path: /usr/bin/conmon
    version: 'conmon version 2.0.30, commit: 2792c16f4436f1887a7070d9ad99d9c29742f38a'
  cpus: 8
  distribution:
    distribution: arch
    version: unknown
  eventLogger: journald
  hostname: mercury2
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65537
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65537
  kernel: 5.14.8-arch1-1
  linkmode: dynamic
  logDriver: journald
  memFree: 39771189248
  memTotal: 41912086528
  ociRuntime:
    name: crun
    package: /usr/bin/crun is owned by crun 1.1-1
    path: /usr/bin/crun
    version: |-
      crun version 1.1
      commit: 5b341a145c4f515f96f55e3e7760d1c79ec3cf1f
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /etc/containers/seccomp.json
    selinuxEnabled: false
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: /usr/bin/slirp4netns is owned by slirp4netns 1.1.12-1
    version: |-
      slirp4netns version 1.1.12
      commit: 7a104a101aa3278a2152351a082a6df71f57c9a3
      libslirp: 4.6.1
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 0
  swapTotal: 0
  uptime: 16m 19.2s
plugins:
  log:
  - k8s-file
  - none
  - journald
  network:
  - bridge
  - macvlan
  volume:
  - local
registries:
  search:
  - docker.io
store:
  configFile: /home/njohnson/.config/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/njohnson/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 5
  runRoot: /run/user/1000/containers
  volumePath: /home/njohnson/.local/share/containers/storage/volumes
version:
  APIVersion: 3.4.0
  Built: 1633083258
  BuiltTime: Fri Oct  1 11:14:18 2021
  GitCommit: 6e8de00bb224f9931d7402648f0177e7357ed079
  GoVersion: go1.17.1
  OsArch: linux/amd64
  Version: 3.4.0

Package info (e.g. output of rpm -q podman or apt list podman):

Name            : podman
Version         : 3.4.0-1
Description     : Tool and library for running OCI-based containers in pods
Architecture    : x86_64
URL             : https://github.com/containers/podman
Licenses        : Apache
Groups          : None
Provides        : None
Depends On      : cni-plugins  conmon  containers-common  crun  fuse-overlayfs  iptables  libdevmapper.so=1.02-64  libgpgme.so=11-64  libseccomp.so=2-64  slirp4netns
Optional Deps   : apparmor: for AppArmor support
                  btrfs-progs: support btrfs backend devices
                  catatonit: --init flag support
                  podman-docker: for Docker-compatible CLI [installed]
Required By     : podman-docker
Optional For    : None
Conflicts With  : None
Replaces        : None
Installed Size  : 72.66 MiB
Packager        : David Runge <[email protected]>
Build Date      : Fri 01 Oct 2021 11:14:18 AM BST
Install Date    : Fri 01 Oct 2021 12:05:52 PM BST
Install Reason  : Explicitly installed
Install Script  : No
Validated By    : Signature

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

Yes

Additional environment details (AWS, VirtualBox, physical, etc.):

@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label Oct 1, 2021
@Luap99
Member

Luap99 commented Oct 1, 2021

Can you please share your docker-compose.yml file?

@nyonson
Author

nyonson commented Oct 1, 2021

It's reproducible on the demo docker compose files for me: https://github.com/docker/awesome-compose/blob/master/gitea-postgres/docker-compose.yaml

@Luap99
Member

Luap99 commented Oct 1, 2021

This compose file works for me; can you share more details?

@nyonson
Author

nyonson commented Oct 1, 2021

This started recently (within the last month), but it was not due to the recent upgrade to 3.4.0; it was also affecting me on 3.3.1.

I am running podman as a user service with systemctl:

> systemctl --user status podman.socket
● podman.socket - Podman API Socket
     Loaded: loaded (/usr/lib/systemd/user/podman.socket; enabled; vendor preset: enabled)
     Active: active (listening) since Fri 2021-10-01 12:06:59 BST; 2h 42min ago
   Triggers: ● podman.service
       Docs: man:podman-system-service(1)
     Listen: /run/user/1000/podman/podman.sock (Stream)
     CGroup: /user.slice/user-1000.slice/[email protected]/app.slice/podman.socket

And have the DOCKER_HOST environment variable set:

> echo $DOCKER_HOST
unix:///run/user/1000/podman/podman.sock

I don't have any configuration under ~/.config/containers, so I believe I am running a pretty standard setup.

@mheon
Member

mheon commented Oct 1, 2021

Please provide all containers.conf files - /usr/share/containers/containers.conf, /etc/containers/containers.conf, ~/.config/containers/containers.conf - the most logical explanation is an extra volume coming from the config in one of those.

@nyonson
Author

nyonson commented Oct 1, 2021

/usr/share/containers/containers.conf

> cat /usr/share/containers/containers.conf
# The containers configuration file specifies all of the available configuration
# command-line options/flags for container engine tools like Podman & Buildah,
# but in a TOML format that can be easily modified and versioned.

# Please refer to containers.conf(5) for details of all configuration options.
# Not all container engines implement all of the options.
# All of the options have hard coded defaults and these options will override
# the built in defaults. Users can then override these options via the command
# line. Container engines will read containers.conf files in up to three
# locations in the following order:
#  1. /usr/share/containers/containers.conf
#  2. /etc/containers/containers.conf
#  3. $HOME/.config/containers/containers.conf (Rootless containers ONLY)
#  Items specified in the latter containers.conf, if they exist, override the
# previous containers.conf settings, or the default settings.

[containers]

# List of annotation. Specified as
# "key = value"
# If it is empty or commented out, no annotations will be added
#
#annotations = []

# Used to change the name of the default AppArmor profile of container engine.
#
#apparmor_profile = "container-default"

# Default way to to create a cgroup namespace for the container
# Options are:
# `private` Create private Cgroup Namespace for the container.
# `host`    Share host Cgroup Namespace with the container.
#
#cgroupns = "private"

# Control container cgroup configuration
# Determines  whether  the  container will create CGroups.
# Options are:
# `enabled`   Enable cgroup support within container
# `disabled`  Disable cgroup support, will inherit cgroups from parent
# `no-conmon` Do not create a cgroup dedicated to conmon.
#
#cgroups = "enabled"

# List of default capabilities for containers. If it is empty or commented out,
# the default capabilities defined in the container engine will be added.
#
default_capabilities = [
  "CHOWN",
  "DAC_OVERRIDE",
  "FOWNER",
  "FSETID",
  "KILL",
  "NET_BIND_SERVICE",
  "SETFCAP",
  "SETGID",
  "SETPCAP",
  "SETUID",
  "SYS_CHROOT"
]

# A list of sysctls to be set in containers by default,
# specified as "name=value",
# for example:"net.ipv4.ping_group_range=0 0".
#
default_sysctls = [
  "net.ipv4.ping_group_range=0 0",
]

# A list of ulimits to be set in containers by default, specified as
# "<ulimit name>=<soft limit>:<hard limit>", for example:
# "nofile=1024:2048"
# See setrlimit(2) for a list of resource names.
# Any limit not specified here will be inherited from the process launching the
# container engine.
# Ulimits has limits for non privileged container engines.
#
#default_ulimits = [
#  "nofile=1280:2560",
#]

# List of devices. Specified as
# "<device-on-host>:<device-on-container>:<permissions>", for example:
# "/dev/sdc:/dev/xvdc:rwm".
# If it is empty or commented out, only the default devices will be used
#
#devices = []

# List of default DNS options to be added to /etc/resolv.conf inside of the container.
#
#dns_options = []

# List of default DNS search domains to be added to /etc/resolv.conf inside of the container.
#
#dns_searches = []

# Set default DNS servers.
# This option can be used to override the DNS configuration passed to the
# container. The special value "none" can be specified to disable creation of
# /etc/resolv.conf in the container.
# The /etc/resolv.conf file in the image will be used without changes.
#
#dns_servers = []

# Environment variable list for the conmon process; used for passing necessary
# environment variables to conmon or the runtime.
#
#env = [
#  "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
#  "TERM=xterm",
#]

# Pass all host environment variables into the container.
#
#env_host = false

# Default proxy environment variables passed into the container.
# The environment variables passed in include:
# http_proxy, https_proxy, ftp_proxy, no_proxy, and the upper case versions of
# these. This option is needed when host system uses a proxy but container
# should not use proxy. Proxy environment variables specified for the container
# in any other way will override the values passed from the host.
#
#http_proxy = true

# Run an init inside the container that forwards signals and reaps processes.
#
#init = false

# Container init binary, if init=true, this is the init binary to be used for containers.
#
#init_path = "/usr/libexec/podman/catatonit"

# Default way to to create an IPC namespace (POSIX SysV IPC) for the container
# Options are:
# `private` Create private IPC Namespace for the container.
# `host`    Share host IPC Namespace with the container.
#
#ipcns = "private"

# keyring tells the container engine whether to create
# a kernel keyring for use within the container.
#
#keyring = true

# label tells the container engine whether to use container separation using
# MAC(SELinux) labeling or not.
# The label flag is ignored on label disabled systems.
#
#label = true

# Logging driver for the container. Available options: k8s-file and journald.
#
#log_driver = "k8s-file"

# Maximum size allowed for the container log file. Negative numbers indicate
# that no size limit is imposed. If positive, it must be >= 8192 to match or
# exceed conmon's read buffer. The file is truncated and re-opened so the
# limit is never exceeded.
#
#log_size_max = -1

# Specifies default format tag for container log messages.
# This is useful for creating a specific tag for container log messages.
# Containers logs default to truncated container ID as a tag.
#
#log_tag = ""

# Default way to to create a Network namespace for the container
# Options are:
# `private` Create private Network Namespace for the container.
# `host`    Share host Network Namespace with the container.
# `none`    Containers do not use the network
#
#netns = "private"

# Create /etc/hosts for the container.  By default, container engine manage
# /etc/hosts, automatically adding  the container's  own  IP  address.
#
#no_hosts = false

# Default way to to create a PID namespace for the container
# Options are:
# `private` Create private PID Namespace for the container.
# `host`    Share host PID Namespace with the container.
#
#pidns = "private"

# Maximum number of processes allowed in a container.
#
#pids_limit = 2048

# Copy the content from the underlying image into the newly created volume
# when the container is created instead of when it is started. If false,
# the container engine will not copy the content until the container is started.
# Setting it to true may have negative performance implications.
#
#prepare_volume_on_create = false

# Indicates the networking to be used for rootless containers
#
#rootless_networking = "slirp4netns"

# Path to the seccomp.json profile which is used as the default seccomp profile
# for the runtime.
#
#seccomp_profile = "/usr/share/containers/seccomp.json"

# Size of /dev/shm. Specified as <number><unit>.
# Unit is optional, values:
# b (bytes), k (kilobytes), m (megabytes), or g (gigabytes).
# If the unit is omitted, the system uses bytes.
#
#shm_size = "65536k"

# Set timezone in container. Takes IANA timezones as well as "local",
# which sets the timezone in the container to match the host machine.
#
#tz = ""

# Set umask inside the container
#
#umask = "0022"

# Default way to to create a User namespace for the container
# Options are:
# `auto`        Create unique User Namespace for the container.
# `host`    Share host User Namespace with the container.
#
#userns = "host"

# Number of UIDs to allocate for the automatic container creation.
# UIDs are allocated from the "container" UIDs listed in
# /etc/subuid & /etc/subgid
#
#userns_size = 65536

# Default way to to create a UTS namespace for the container
# Options are:
# `private`        Create private UTS Namespace for the container.
# `host`    Share host UTS Namespace with the container.
#
#utsns = "private"

# List of volumes. Specified as
# "<directory-on-host>:<directory-in-container>:<options>", for example:
# "/db:/var/lib/db:ro".
# If it is empty or commented out, no volumes will be added
#
#volumes = []

# The network table contains settings pertaining to the management of
# CNI plugins.

[secrets]
#driver = "file"

[secrets.opts]
#root = "/example/directory"

[network]

# Path to directory where CNI plugin binaries are located.
#
#cni_plugin_dirs = [
#  "/usr/local/libexec/cni",
#  "/usr/libexec/cni",
#  "/usr/local/lib/cni",
#  "/usr/lib/cni",
#  "/opt/cni/bin",
#]

# The network name of the default CNI network to attach pods to.
#
#default_network = "podman"

# The default subnet for the default CNI network given in default_network.
# If a network with that name does not exist, a new network using that name and
# this subnet will be created.
# Must be a valid IPv4 CIDR prefix.
#
#default_subnet = "10.88.0.0/16"

# Path to the directory where CNI configuration files are located.
#
#network_config_dir = "/etc/cni/net.d/"

[engine]
# Index to the active service
#
#active_service = production

# Cgroup management implementation used for the runtime.
# Valid options "systemd" or "cgroupfs"
#
#cgroup_manager = "systemd"

# Environment variables to pass into conmon
#
#conmon_env_vars = [
#  "PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin"
#]

# Paths to look for the conmon container manager binary
#
#conmon_path = [
#  "/usr/libexec/podman/conmon",
#  "/usr/local/libexec/podman/conmon",
#  "/usr/local/lib/podman/conmon",
#  "/usr/bin/conmon",
#  "/usr/sbin/conmon",
#  "/usr/local/bin/conmon",
#  "/usr/local/sbin/conmon"
#]

# Specify the keys sequence used to detach a container.
# Format is a single character [a-Z] or a comma separated sequence of
# `ctrl-<value>`, where `<value>` is one of:
# `a-z`, `@`, `^`, `[`, `\`, `]`, `^` or `_`
#
#detach_keys = "ctrl-p,ctrl-q"

# Determines whether engine will reserve ports on the host when they are
# forwarded to containers. When enabled, when ports are forwarded to containers,
# ports are held open by as long as the container is running, ensuring that
# they cannot be reused by other programs on the host. However, this can cause
# significant memory usage if a container has many ports forwarded to it.
# Disabling this can save memory.
#
#enable_port_reservation = true

# Environment variables to be used when running the container engine (e.g., Podman, Buildah).
# For example "http_proxy=internal.proxy.company.com".
# Note these environment variables will not be used within the container.
# Set the env section under [containers] table, if you want to set environment variables for the container.
#
#env = []

# Selects which logging mechanism to use for container engine events.
# Valid values are `journald`, `file` and `none`.
#
#events_logger = "journald"

# A is a list of directories which are used to search for helper binaries.
#
#helper_binaries_dir = [
#  "/usr/local/libexec/podman",
#  "/usr/local/lib/podman",
#  "/usr/libexec/podman",
#  "/usr/lib/podman",
#]

# Path to OCI hooks directories for automatically executed hooks.
#
#hooks_dir = [
#  "/usr/share/containers/oci/hooks.d",
#]

# Manifest Type (oci, v2s2, or v2s1) to use when pulling, pushing, building
# container images. By default image pulled and pushed match the format of the
# source image. Building/committing defaults to OCI.
#
#image_default_format = ""

# Default transport method for pulling and pushing for images
#
#image_default_transport = "docker://"

# Maximum number of image layers to be copied (pulled/pushed) simultaneously.
# Not setting this field, or setting it to zero, will fall back to containers/image defaults.
#
#image_parallel_copies = 0

# Default command to run the infra container
#
#infra_command = "/pause"

# Infra (pause) container image name for pod infra containers.  When running a
# pod, we start a `pause` process in a container to hold open the namespaces
# associated with the  pod.  This container does nothing other then sleep,
# reserving the pods resources for the lifetime of the pod.
#
#infra_image = "k8s.gcr.io/pause:3.4.1"

# Specify the locking mechanism to use; valid values are "shm" and "file".
# Change the default only if you are sure of what you are doing, in general
# "file" is useful only on platforms where cgo is not available for using the
# faster "shm" lock type.  You may need to run "podman system renumber" after
# you change the lock type.
#
#lock_type** = "shm"

# Indicates if Podman is running inside a VM via Podman Machine.
# Podman uses this value to do extra setup around networking from the
# container inside the VM to to host.
#
#machine_enabled = false

# MultiImageArchive - if true, the container engine allows for storing archives
# (e.g., of the docker-archive transport) with multiple images.  By default,
# Podman creates single-image archives.
#
#multi_image_archive = "false"

# Default engine namespace
# If engine is joined to a namespace, it will see only containers and pods
# that were created in the same namespace, and will create new containers and
# pods in that namespace.
# The default namespace is "", which corresponds to no namespace. When no
# namespace is set, all containers and pods are visible.
#
#namespace = ""

# Path to the slirp4netns binary
#
#network_cmd_path = ""

# Default options to pass to the slirp4netns binary.
# For example "allow_host_loopback=true"
#
#network_cmd_options = ["enable_ipv6=true",]

# Whether to use chroot instead of pivot_root in the runtime
#
#no_pivot_root = false

# Number of locks available for containers and pods.
# If this is changed, a lock renumber must be performed (e.g. with the
# 'podman system renumber' command).
#
#num_locks = 2048

# Whether to pull new image before running a container
#
#pull_policy = "missing"

# Indicates whether the application should be running in remote mode. This flag modifies the
# --remote option on container engines. Setting the flag to true will default
# `podman --remote=true` for access to the remote Podman service.
#
#remote = false

# Default OCI runtime
#
#runtime = "crun"

# List of the OCI runtimes that support --format=json.  When json is supported
# engine will use it for reporting nicer errors.
#
#runtime_supports_json = ["crun", "runc", "kata", "runsc", "krun"]

# List of the OCI runtimes that supports running containers with KVM Separation.
#
#runtime_supports_kvm = ["kata", "krun"]

# List of the OCI runtimes that supports running containers without cgroups.
#
#runtime_supports_nocgroups = ["crun", "krun"]

# Default location for storing temporary container image content.  Can be overridden with the TMPDIR environment
# variable.  If you specify "storage", then the location of the
# container/storage tmp directory will be used.
# image_copy_tmp_dir="/var/tmp"

# Number of seconds to wait without a connection
# before the `podman system service` times out and exits
#
#service_timeout = 5

# Directory for persistent engine files (database, etc)
# By default, this will be configured relative to where the containers/storage
# stores containers
# Uncomment to change location from this default
#
#static_dir = "/var/lib/containers/storage/libpod"

# Number of seconds to wait for container to exit before sending kill signal.
#
#stop_timeout = 10

# map of service destinations
#
#[service_destinations]
#  [service_destinations.production]
#     URI to access the Podman service
#     Examples:
#       rootless "unix://run/user/$UID/podman/podman.sock" (Default)
#       rootfull "unix://run/podman/podman.sock (Default)
#       remote rootless ssh://engineering.lab.company.com/run/user/1000/podman/podman.sock
#       remote rootfull ssh://[email protected]:22/run/podman/podman.sock
#
#    uri = "ssh://[email protected]/run/user/1001/podman/podman.sock"
#    Path to file containing ssh identity key
#    identity = "~/.ssh/id_rsa"

# Directory for temporary files. Must be tmpfs (wiped after reboot)
#
#tmp_dir = "/run/libpod"

# Directory for libpod named volumes.
# By default, this will be configured relative to where containers/storage
# stores containers.
# Uncomment to change location from this default.
#
#volume_path = "/var/lib/containers/storage/volumes"

# Paths to look for a valid OCI runtime (crun, runc, kata, runsc, krun, etc)
[engine.runtimes]
#crun = [
#  "/usr/bin/crun",
#  "/usr/sbin/crun",
#  "/usr/local/bin/crun",
#  "/usr/local/sbin/crun",
#  "/sbin/crun",
#  "/bin/crun",
#  "/run/current-system/sw/bin/crun",
#]

#kata = [
#  "/usr/bin/kata-runtime",
#  "/usr/sbin/kata-runtime",
#  "/usr/local/bin/kata-runtime",
#  "/usr/local/sbin/kata-runtime",
#  "/sbin/kata-runtime",
#  "/bin/kata-runtime",
#  "/usr/bin/kata-qemu",
#  "/usr/bin/kata-fc",
#]

#runc = [
#  "/usr/bin/runc",
#  "/usr/sbin/runc",
#  "/usr/local/bin/runc",
#  "/usr/local/sbin/runc",
#  "/sbin/runc",
#  "/bin/runc",
#  "/usr/lib/cri-o-runc/sbin/runc",
#]

#runsc = [
#  "/usr/bin/runsc",
#  "/usr/sbin/runsc",
#  "/usr/local/bin/runsc",
#  "/usr/local/sbin/runsc",
#  "/bin/runsc",
#  "/sbin/runsc",
#  "/run/current-system/sw/bin/runsc",
#]

#krun = [
#  "/usr/bin/krun",
#  "/usr/local/bin/krun",
#]

[engine.volume_plugins]
#testplugin = "/run/podman/plugins/test.sock"

[machine]
# Number of CPU's a machine is created with.
#
#cpus=1

# The size of the disk in GB created when init-ing a podman-machine VM.
#
#disk_size=10

# The image used when creating a podman-machine VM.
#
#image = "testing"

# Memory in MB a machine is created with.
#
#memory=2048

# The [machine] table MUST be the last entry in this file.
# (Unless another table is added)
# TOML does not provide a way to end a table other than a further table being
# defined, so every key hereafter will be part of [machine] and not the
# main config.

/etc/containers/containers.conf

> cat /etc/containers/containers.conf
(Output identical to /usr/share/containers/containers.conf above, except that the [machine] table header is commented out; see the diff at the end of this comment.)

~/.config/containers/containers.conf

> cat ~/.config/containers/containers.conf
cat: /home/njohnson/.config/containers/containers.conf: No such file or directory

And for what it's worth:

> diff /usr/share/containers/containers.conf /etc/containers/containers.conf
558c558
< [machine]
---
> #[machine]

@nyonson
Author

nyonson commented Oct 1, 2021

Looks like docker-compose was very recently (Sept 28th) upgraded to version 2.0.0 for Arch: https://github.com/archlinux/svntogit-community/commits/packages/docker-compose/trunk

Downgrading back to version 1.29.2 fixes things for me.

pacman -U /var/cache/pacman/pkg/docker-compose-1.29.2-1-any.pkg.tar.zst

Is version 2 of docker compose not compatible?

@mheon
Member

mheon commented Oct 1, 2021

We're still investigating compose 2.0 compatibility - safest to stick with 1.x for now. I know that building with Compose 2.0 is also broken right now.

(On the plus side, Compose 2.0 also offers some exciting opportunities to integrate with Podman more closely, so we're also investigating there)

@nyonson
Author

nyonson commented Oct 1, 2021

Ah, got it. I think I ran into the build issue on 2.0 as well.

Is there an issue I can subscribe to in order to follow along?

@github-actions

github-actions bot commented Nov 1, 2021

A friendly reminder that this issue had no activity for 30 days.

@github-actions

github-actions bot commented Dec 2, 2021

A friendly reminder that this issue had no activity for 30 days.

@github-actions

github-actions bot commented Jan 2, 2022

A friendly reminder that this issue had no activity for 30 days.

@github-actions

A friendly reminder that this issue had no activity for 30 days.

@selurvedu

@github-actions[bot] a friendly reminder that this issue is still relevant.

@kriansa

kriansa commented Feb 21, 2022

@mheon is there any development on this issue?

I just tested with Podman 4.0 and Compose 2.2.3 and it is still a no-go for mounting volumes on containers whose image has a VOLUME declaration.

Do we have any workaround at this moment?

@mheon
Member

mheon commented Feb 22, 2022

No, largely because there's a more critical problem blocking Compose v2.0 - it requires support for the Buildkit API to be present to build images, which we presently do not have. We're going to be looking at this over the coming months to try and address it, but we don't feel like fixing minor bugs like this makes much sense until we've got core features (building images) ready and working.

@kriansa

kriansa commented Feb 22, 2022

@mheon Thanks for the follow-up. I don't want to diverge from the original topic, but is there still an issue with building images? I remember one when Compose was just released but it seems to be fixed at this point - as far as I have tested, building images is working well at least for my use cases so far.

@mheon
Member

mheon commented Feb 22, 2022

@kriansa Really? Interesting - if we can confirm that Buildkit is no longer required that'd be a massive relief.

@L0g4n Official answer is going to be podman generate kube and podman play kube, but we are committed to making sure that podman-compose and Docker Compose work with Podman; we just may require folks to stay on older versions for a bit when they do major rewrites that require new Docker functionality we didn't previously implement. Fortunately, there's a finite amount of Docker functionality, and every time this happens we get closer to parity.

@kriansa

kriansa commented Feb 22, 2022

@mheon Yes. Right now I'm using podman 4.0.0 and compose 2.2.3 (latest) and I can build my images just fine using docker-compose build, but I can't tell if that's using buildkit or not (is there an easy way to check that?)

The main pain point for ditching docker completely from my computer is now the volumes issue. It can be reproduced by this docker-compose.yml:

---
volumes:
  db_data:
services:
  database:
    image: docker.io/library/mariadb:10.6.5
    volumes:
      - db_data:/var/lib/mysql

Then run docker-compose up and it will fail with the same message as reported in the title (/var/lib/mysql: duplicate mount destination).
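
For reference, a minimal Go sketch (not Podman's actual code; the names are invented) of the kind of destination-uniqueness check behind this error: the engine walks the requested mounts and rejects the spec as soon as two of them claim the same in-container path.

package main

import "fmt"

// mount is a simplified mount request: where it comes from and where it
// lands inside the container.
type mount struct {
	source string // named volume or host path
	dest   string // path inside the container
}

// checkDuplicateDest fails if two mounts target the same destination,
// mirroring the "duplicate mount destination" rejection seen above.
func checkDuplicateDest(mounts []mount) error {
	seen := make(map[string]bool)
	for _, m := range mounts {
		if seen[m.dest] {
			return fmt.Errorf("%s: duplicate mount destination", m.dest)
		}
		seen[m.dest] = true
	}
	return nil
}

func main() {
	// Compose v2 effectively asks for /var/lib/mysql twice for this service,
	// so the check fails.
	mounts := []mount{
		{source: "db_data", dest: "/var/lib/mysql"},
		{source: "db_data", dest: "/var/lib/mysql"},
	}
	if err := checkDuplicateDest(mounts); err != nil {
		fmt.Println("Error response:", err)
	}
}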

@SaladinAyyub

SaladinAyyub commented Mar 10, 2022

@kriansa I have the exact same issue.
I want to host nakama - https://github.com/heroiclabs/nakama/blob/master/docker-compose.yml
But volumes are not supported with the latest docker-compose. I hope this gets supported soon.

@mheon
Member

mheon commented Mar 16, 2022

I can now confirm that BuildKit no longer seems to be a requirement for Compose v2.0, which unblocks the rest of this work. Beginning work on fixing this volume issue now. Seems like Compose is passing the /data mount in two separate places, for unclear reasons; deduplicating it should be sufficient to fix. Hope to get a patch out tomorrow.

mheon added a commit to mheon/libpod that referenced this issue Mar 17, 2022
Docker Compose v2.0 passes mount specifications in two different
places: Volumes (just the destination) and Mounts (full info
provided - source, destination, etc). This was causing Podman to
refuse to create containers, as the destination was used twice.
Deduplicate between Mounts and Volumes, preferring volumes, to
resolve this.

Fixes containers#11822

Signed-off-by: Matthew Heon <[email protected]>
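
A minimal sketch of the deduplication idea the commit message describes (illustrative only, not the actual Podman change; the type and function names are invented). Compose v2 supplies the same destination once as a bare volume entry and once as a full mount spec, so the duplicate mount entry is dropped before the container spec is generated:

package main

import "fmt"

// mountSpec is a simplified stand-in for a full mount entry
// (source, destination, options, ...).
type mountSpec struct {
	Source string // e.g. the named volume "db_data"
	Dest   string // e.g. "/data" inside the container
}

// dedupe removes mount entries whose destination is already listed in the
// volumes set, so each in-container path is requested only once. Following
// the commit message above, the volume entry is the one that is kept.
func dedupe(volumes map[string]struct{}, mounts []mountSpec) []mountSpec {
	kept := make([]mountSpec, 0, len(mounts))
	for _, m := range mounts {
		if _, dup := volumes[m.Dest]; dup {
			continue // already covered by a volume entry
		}
		kept = append(kept, m)
	}
	return kept
}

func main() {
	volumes := map[string]struct{}{"/data": {}}
	mounts := []mountSpec{{Source: "db_data", Dest: "/data"}}
	fmt.Printf("%+v\n", dedupe(volumes, mounts)) // prints [] because the duplicate /data mount was dropped
}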
@katywings

Sadly 🙈 I still get this error with podman 4.1.0 and docker-compose 2.5.0 on macOS Catalina 10.15.7 with this simple docker-compose.yaml: https://github.com/nocodb/nocodb/blob/master/docker-compose/pg/docker-compose.yml

Exporting DOCKER_BUILDKIT=0 in ~/.bashrc or calling docker-compose like DOCKER_BUILDKIT=0 docker-compose up doesn't help.

(Screenshot of the error output attached, 2022-05-07 20:25.)

@mheon
Member

mheon commented May 7, 2022

I'll take a look on Monday.

@mheon
Member

mheon commented May 7, 2022

Also, can you open a fresh issue for this? The entire issue template would be useful to help debug.

@katywings

katywings commented May 7, 2022

@mheon Thanks for the quick response! I actually was setting up the fresh issue when I noticed the following:

Running podman info --debug showed me that I am actually still on 4.0.3, even though podman -v shows: podman version 4.1.0 😱

As far as I can tell, podman machine just doesn't yet have a fedora-coreos image with podman 4.1 😅.
I will wait a couple of days until fedora-coreos ships with podman 4.1, then I will try again 😉.

P.S. About that podman -v output: I created a feature request to include the server version #14149

@Schnuecks

The error still exists with podman 4.1.1 and Docker Compose version 2.7.0.

This occurred when I tried to install mailcow-dockerized.

@mheon
Member

mheon commented Jul 31, 2022

Please open a fresh issue and fill out the full issue template; this should already be fixed.

@giuliohome

A very similar case (Error response from daemon: fill out specgen: /data: duplicate mount destination) on Fedora Linux (where docker is aliased to podman) with Redis (docker.io/redis/redis-stack-server:latest).

Solved by upgrading to Docker Compose version v2.15.1.

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Aug 29, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Aug 29, 2023