
Error: the --rm option conflicts with --restart, when the restartPolicy is not "" and "no" #591

Closed
QuentinFAIDIDE opened this issue May 25, 2023 · 2 comments
Labels
bug Something isn't working

Comments

@QuentinFAIDIDE

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Unable to apply the following task in a playbook:


    - name: Start the pod
      containers.podman.podman_container:
        name: postgresql-sonarqube
        image: "{{ postgres_container_image }}:{{ postgres_container_image_tag }}"
        restart_policy: always
        env:
          POSTGRES_USER: "{{ postgres_user }}"
          POSTGRES_PASSWORD: "{{ postgres_password }}"
          POSTGRES_DB: "{{ postgres_user }}"
        volumes:
          - "/home/{{ remote_server_user }}/postgres:/var/lib/postgresql/data:Z,U"
        generate_systemd:
          path: "/home/{{ remote_server_user }}/.config/systemd/user"
          restart_policy: "always"
          new: true
        pod: "ci-machine"
        log_driver: "journald"
        log_opt:
          tag: "postgresql-sonarqube"
      tags: postgresql

It gives me the following error:

TASK [Start the pod] ******************************************************************************************************************************************************
fatal: [192.168.33.6]: FAILED! => {"changed": false, "msg": "Can't run container postgresql-sonarqube", "stderr": "Error: the --rm option conflicts with --restart, when the restartPolicy is not \"\" and \"no\"\n", "stderr_lines": ["Error: the --rm option conflicts with --restart, when the restartPolicy is not \"\" and \"no\""], "stdout": "", "stdout_lines": []}

Steps to reproduce the issue:

  1. Use RockyLinux8 image for vagrant

  2. Apply a playbook with the aforementioned task.

  3. See it fail.

Describe the results you received:


TASK [Start the pod] ******************************************************************************************************************************************************
fatal: [192.168.33.6]: FAILED! => {"changed": false, "msg": "Can't run container postgresql-sonarqube", "stderr": "Error: the --rm option conflicts with --restart, when the restartPolicy is not \"\" and \"no\"\n", "stderr_lines": ["Error: the --rm option conflicts with --restart, when the restartPolicy is not \"\" and \"no\""], "stdout": "", "stdout_lines": []}

Describe the results you expected:

TASK [Start the pod] ******************************************************************************************************************************************************
changed: [192.168.33.6]

Additional information you deem important (e.g. issue happens only occasionally):
Downgrading to version 1.9.4 fixes the issue, as well as another one that failed to get the diff of the journalctl log level, which used to happen on subsequent runs unless I downgraded. I downgrade the following way:

mkdir -p ~/.ansible/collections/ansible_collections/containers
git clone https://github.com/containers/ansible-podman-collections.git ~/.ansible/collections/ansible_collections/containers/podman
cd ~/.ansible/collections/ansible_collections/containers/podman
git checkout tags/1.9.4
cd -

Checking out 1.10.1 doesn't fix the issue, though.

Version of the containers.podman collection: 1.10.1

Output of ansible --version:

ansible [core 2.14.6]
  config file = None
  configured module search path = ['/root/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/local/lib/python3.10/dist-packages/ansible
  ansible collection location = /root/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/local/bin/ansible
  python version = 3.10.6 (main, Mar 10 2023, 10:55:28) [GCC 11.3.0] (/usr/bin/python3)
  jinja version = 3.1.2
  libyaml = True

Output of podman version:

podman version 4.4.1

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.29.0
  cgroupControllers: []
  cgroupManager: cgroupfs
  cgroupVersion: v1
  conmon:
    package: conmon-2.1.6-1.module+el8.8.0+1265+fa25dd7a.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.1.6, commit: a88a21e8953a6243d5f369f61a342bcaf0630aa1'
  cpuUtilization:
    idlePercent: 99.01
    systemPercent: 0.6
    userPercent: 0.39
  cpus: 2
  distribution:
    distribution: '"rocky"'
    version: "8.7"
  eventLogger: file
  hostname: localhost.localdomain
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 1000
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 4.18.0-425.13.1.el8_7.x86_64
  linkmode: dynamic
  logDriver: k8s-file
  memFree: 255770624
  memTotal: 1900556288
  networkBackend: cni
  ociRuntime:
    name: runc
    package: runc-1.1.4-1.module+el8.8.0+1265+fa25dd7a.x86_64
    path: /usr/bin/runc
    version: |-
      runc version 1.1.4
      spec: 1.0.2-dev
      go: go1.19.4
      libseccomp: 2.5.2
  os: linux
  remoteSocket:
    exists: true
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_SYS_CHROOT,CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID  
    rootless: true
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.2.0-2.module+el8.8.0+1265+fa25dd7a.x86_64
    version: |-
      slirp4netns version 1.2.0
      commit: 656041d45cfca7a4176f6b7eed9e4fe6c11e8383
      libslirp: 4.4.0
      SLIRP_CONFIG_VERSION_MAX: 3
      libseccomp: 2.5.2
  swapFree: 2203054080
  swapTotal: 2203054080
  uptime: 2h 25m 56.00s (Approximately 0.08 days)
plugins:
  authorization: null
  log:
  - k8s-file
  - none
  - passthrough
  - journald
  network:
  - bridge
  - macvlan
  - ipvlan
  volume:
  - local
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /home/vagrant/.config/containers/storage.conf
  containerStore:
    number: 2
    paused: 0
    running: 2
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/vagrant/.local/share/containers/storage
  graphRootAllocated: 66482892800
  graphRootUsed: 2884349952
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 3
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/vagrant/.local/share/containers/storage/volumes
version:
  APIVersion: 4.4.1
  Built: 1684272165
  BuiltTime: Tue May 16 21:22:45 2023
  GitCommit: ""
  GoVersion: go1.19.4
  Os: linux
  OsArch: linux/amd64
  Version: 4.4.1

Package info (e.g. output of rpm -q podman or apt list podman):

podman-4.4.1-12.module+el8.8.0+1265+fa25dd7a.x86_64

Playbook you run with ansible (e.g. content of playbook.yaml):

---
- name: make sure postgres is deployed
  hosts: ciservers
  vars_files:
    - vars/variables.yml
  tasks:

    - name: Create data volume directory if it does not exist
      script: scripts/create_postgres_folder.sh
      tags: postgresql

    - name: Ensure postgresql folder exists
      ansible.builtin.file:
        path: "/home/{{ remote_server_user }}/{{ postgres_backup_folder }}"
        state: directory
      tags: postgresql

    - name: Templates the backup script
      ansible.builtin.template:
        src: backup.sh.j2
        dest: /home/{{ remote_server_user }}/backup.sh
        mode: u=rwx,g=rx,o=r
      tags: postgresql

    - name: Start the pod
      containers.podman.podman_container:
        name: postgresql-sonarqube
        image: "{{ postgres_container_image }}:{{ postgres_container_image_tag }}"
        restart_policy: always
        env:
          POSTGRES_USER: "{{ postgres_user }}"
          POSTGRES_PASSWORD: "{{ postgres_password }}"
          POSTGRES_DB: "{{ postgres_user }}"
        volumes:
          - "/home/{{ remote_server_user }}/postgres:/var/lib/postgresql/data:Z,U"
        generate_systemd:
          path: "/home/{{ remote_server_user }}/.config/systemd/user"
          restart_policy: "always"
          new: true
        pod: "ci-machine"
        log_driver: "journald"
        log_opt:
          tag: "postgresql-sonarqube"
      tags: postgresql

    - name: Enable systemd user service
      ansible.builtin.shell: |
        systemctl --user daemon-reload
        systemctl --user enable container-postgresql-sonarqube 
        systemctl --user restart container-postgresql-sonarqube
      tags: postgresql

    - name: Backup script
      ansible.builtin.cron:
        name: "Backup the postgresql database and gitlab config"
        job: "/home/{{ remote_server_user }}/backup.sh"
        hour: "4"
        minute: "0"
      tags: postgresql

Command line and output of ansible run with high verbosity
Note that I had to rerun the command after having already successfully started the postgres-sonarqube container (I downgraded back and forth to submit the issue here), so in the following logs it fails at the next container instead (with the exact same error).

TASK [Create data volume directory if it does not exist] ******************************************************************************************************************
task path: /mnt/c/Users/USERNAME/Documents/Dev/tooling-vm/deploy_sonarqube.yml:7
<192.168.33.6> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.33.6> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/root/vagrant_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="vagrant"' -o ConnectTimeout=10 -o 'ControlPath="/root/.ansible/cp/e657355820"' 192.168.33.6 '/bin/sh -c '"'"'echo ~vagrant && sleep 0'"'"''
<192.168.33.6> (0, b'/home/vagrant\n', b'')
<192.168.33.6> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.33.6> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/root/vagrant_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="vagrant"' -o ConnectTimeout=10 -o 'ControlPath="/root/.ansible/cp/e657355820"' 192.168.33.6 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/vagrant/.ansible/tmp `"&& mkdir "` echo /home/vagrant/.ansible/tmp/ansible-tmp-1685015470.1560116-3618-40988895391900 `" && echo ansible-tmp-1685015470.1560116-3618-40988895391900="` echo /home/vagrant/.ansible/tmp/ansible-tmp-1685015470.1560116-3618-40988895391900 `" ) && sleep 0'"'"''
<192.168.33.6> (0, b'ansible-tmp-1685015470.1560116-3618-40988895391900=/home/vagrant/.ansible/tmp/ansible-tmp-1685015470.1560116-3618-40988895391900\n', b'')
<192.168.33.6> PUT /mnt/c/Users/USERNAME/Documents/Dev/tooling-vm/scripts/create_sonarqube_folder.sh TO /home/vagrant/.ansible/tmp/ansible-tmp-1685015470.1560116-3618-40988895391900/create_sonarqube_folder.sh
<192.168.33.6> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/root/vagrant_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="vagrant"' -o ConnectTimeout=10 -o 'ControlPath="/root/.ansible/cp/e657355820"' '[192.168.33.6]'
<192.168.33.6> (0, b'sftp> put /mnt/c/Users/USERNAME/Documents/Dev/tooling-vm/scripts/create_sonarqube_folder.sh /home/vagrant/.ansible/tmp/ansible-tmp-1685015470.1560116-3618-40988895391900/create_sonarqube_folder.sh\n', b'')
<192.168.33.6> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.33.6> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/root/vagrant_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="vagrant"' -o ConnectTimeout=10 -o 'ControlPath="/root/.ansible/cp/e657355820"' 192.168.33.6 '/bin/sh -c '"'"'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1685015470.1560116-3618-40988895391900/ /home/vagrant/.ansible/tmp/ansible-tmp-1685015470.1560116-3618-40988895391900/create_sonarqube_folder.sh && sleep 0'"'"''
<192.168.33.6> (0, b'', b'')
<192.168.33.6> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.33.6> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/root/vagrant_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="vagrant"' -o ConnectTimeout=10 -o 'ControlPath="/root/.ansible/cp/e657355820"' -tt 192.168.33.6 '/bin/sh -c '"'"' /home/vagrant/.ansible/tmp/ansible-tmp-1685015470.1560116-3618-40988895391900/create_sonarqube_folder.sh && sleep 0'"'"''
<192.168.33.6> (0, b'Sonarqube folder already exists\r\n', b'Shared connection to 192.168.33.6 closed.\r\n')
<192.168.33.6> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.33.6> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/root/vagrant_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="vagrant"' -o ConnectTimeout=10 -o 'ControlPath="/root/.ansible/cp/e657355820"' 192.168.33.6 '/bin/sh -c '"'"'rm -f -r /home/vagrant/.ansible/tmp/ansible-tmp-1685015470.1560116-3618-40988895391900/ > /dev/null 2>&1 && sleep 0'"'"''
<192.168.33.6> (0, b'', b'')
changed: [192.168.33.6] => {
    "changed": true,
    "rc": 0,
    "stderr": "Shared connection to 192.168.33.6 closed.\r\n",
    "stderr_lines": [
        "Shared connection to 192.168.33.6 closed."
    ],
    "stdout": "Sonarqube folder already exists\r\n",
    "stdout_lines": [
        "Sonarqube folder already exists"
    ]
}
Read vars_file 'vars/variables.yml'

TASK [Start the pod] ******************************************************************************************************************************************************
task path: /mnt/c/Users/USERNAME/Documents/Dev/tooling-vm/deploy_sonarqube.yml:11
<192.168.33.6> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.33.6> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/root/vagrant_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="vagrant"' -o ConnectTimeout=10 -o 'ControlPath="/root/.ansible/cp/e657355820"' 192.168.33.6 '/bin/sh -c '"'"'echo ~vagrant && sleep 0'"'"''
<192.168.33.6> (0, b'/home/vagrant\n', b'')
<192.168.33.6> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.33.6> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/root/vagrant_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="vagrant"' -o ConnectTimeout=10 -o 'ControlPath="/root/.ansible/cp/e657355820"' 192.168.33.6 '/bin/sh -c '"'"'( umask 77 && mkdir -p "` echo /home/vagrant/.ansible/tmp `"&& mkdir "` echo /home/vagrant/.ansible/tmp/ansible-tmp-1685015470.7213252-3627-14211151300950 `" && echo ansible-tmp-1685015470.7213252-3627-14211151300950="` echo /home/vagrant/.ansible/tmp/ansible-tmp-1685015470.7213252-3627-14211151300950 `" ) && sleep 0'"'"''
<192.168.33.6> (0, b'ansible-tmp-1685015470.7213252-3627-14211151300950=/home/vagrant/.ansible/tmp/ansible-tmp-1685015470.7213252-3627-14211151300950\n', b'')
Using module file /root/.ansible/collections/ansible_collections/containers/podman/plugins/modules/podman_container.py
<192.168.33.6> PUT /root/.ansible/tmp/ansible-local-34569o2ghz6e/tmpajbip89r TO /home/vagrant/.ansible/tmp/ansible-tmp-1685015470.7213252-3627-14211151300950/AnsiballZ_podman_container.py
<192.168.33.6> SSH: EXEC sftp -b - -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/root/vagrant_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="vagrant"' -o ConnectTimeout=10 -o 'ControlPath="/root/.ansible/cp/e657355820"' '[192.168.33.6]'
<192.168.33.6> (0, b'sftp> put /root/.ansible/tmp/ansible-local-34569o2ghz6e/tmpajbip89r /home/vagrant/.ansible/tmp/ansible-tmp-1685015470.7213252-3627-14211151300950/AnsiballZ_podman_container.py\n', b'')
<192.168.33.6> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.33.6> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/root/vagrant_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="vagrant"' -o ConnectTimeout=10 -o 'ControlPath="/root/.ansible/cp/e657355820"' 192.168.33.6 '/bin/sh -c '"'"'chmod u+x /home/vagrant/.ansible/tmp/ansible-tmp-1685015470.7213252-3627-14211151300950/ /home/vagrant/.ansible/tmp/ansible-tmp-1685015470.7213252-3627-14211151300950/AnsiballZ_podman_container.py && sleep 0'"'"''
<192.168.33.6> (0, b'', b'')
<192.168.33.6> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.33.6> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/root/vagrant_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="vagrant"' -o ConnectTimeout=10 -o 'ControlPath="/root/.ansible/cp/e657355820"' -tt 192.168.33.6 '/bin/sh -c '"'"'/usr/libexec/platform-python /home/vagrant/.ansible/tmp/ansible-tmp-1685015470.7213252-3627-14211151300950/AnsiballZ_podman_container.py && sleep 0'"'"''
<192.168.33.6> (1, b'\r\n{"stdout": "", "stderr": "Error: the --rm option conflicts with --restart, when the restartPolicy is not \\"\\" and \\"no\\"\\n", "failed": true, 
"msg": "Can\'t run container sonarqube", "invocation": {"module_args": {"name": "sonarqube", "image": "docker.io/library/sonarqube:9.8.0-community", "restart_policy": "always", "env": {"SONAR_JDBC_USERNAME": "sonar", "SONAR_JDBC_PASSWORD": "sonar", "SONAR_JDBC_URL": "jdbc:postgresql://localhost:5432/sonar", "SONAR_LOG_ROLLINGPOLICY": "size:10MB", "SONAR_LOG_MAXFILES": "5"}, "volumes": ["/home/vagrant/sonarqube/data:/opt/sonarqube/data:Z", "/home/vagrant/sonarqube/extensions:/opt/sonarqube/extensions:Z", "/home/vagrant/certs:/opt/certs:ro,z"], "log_driver": "journald", "log_opt": {"tag": "sonarqube", "max_size": null, "path": null}, "generate_systemd": {"path": "/home/vagrant/.config/systemd/user", "restart_policy": "always", "new": true}, "pod": "ci-machine", "volume": ["/home/vagrant/sonarqube/data:/opt/sonarqube/data:Z", "/home/vagrant/sonarqube/extensions:/opt/sonarqube/extensions:Z", "/home/vagrant/certs:/opt/certs:ro,z"], "executable": "podman", "state": "started", "detach": true, "debug": false, "force_restart": false, "image_strict": false, "recreate": false, "annotation": null, "authfile": null, "blkio_weight": null, "blkio_weight_device": null, "cap_add": null, "cap_drop": null, "cgroup_parent": null, "cgroupns": null, "cgroups": null, "cidfile": null, "cmd_args": null, "conmon_pidfile": null, "command": null, "cpu_period": null, "cpu_rt_period": null, "cpu_rt_runtime": null, "cpu_shares": null, "cpus": null, "cpuset_cpus": null, "cpuset_mems": null, "detach_keys": null, "device": null, "device_read_bps": null, "device_read_iops": null, "device_write_bps": null, "device_write_iops": null, "dns": null, "dns_option": null, "dns_search": null, "entrypoint": null, "env_file": 
null, "env_host": null, "etc_hosts": null, "expose": null, "gidmap": null, "group_add": null, "healthcheck": null, "healthcheck_interval": null, "healthcheck_retries": null, "healthcheck_start_period": null, "healthcheck_timeout": null, "hostname": null, "http_proxy": null, "image_volume": null, "init": null, "init_path": null, "interactive": null, "ip": null, "ipc": null, "kernel_memory": null, "label": null, "label_file": null, "log_level": null, "mac_address": null, "memory": null, "memory_reservation": null, "memory_swap": null, "memory_swappiness": null, "mount": null, "network": null, "network_aliases": null, "no_hosts": null, "oom_kill_disable": null, "oom_score_adj": 
null, "pid": null, "pids_limit": null, "privileged": null, "publish": null, "publish_all": null, "read_only": null, "read_only_tmpfs": null, "requires": null, "rm": true, 
"rootfs": null, "secrets": null, "sdnotify": null, "security_opt": null, "shm_size": null, "sig_proxy": null, "stop_signal": null, "stop_timeout": null, "subgidname": null, "subuidname": null, "sysctl": null, "systemd": null, "timezone": null, "tmpfs": null, "tty": null, "uidmap": null, "ulimit": null, "user": null, "userns": null, "uts": null, "volumes_from": null, "workdir": null}}}\r\n', b'Shared connection to 192.168.33.6 closed.\r\n')
<192.168.33.6> Failed to connect to the host via ssh: Shared connection to 192.168.33.6 closed.
<192.168.33.6> ESTABLISH SSH CONNECTION FOR USER: vagrant
<192.168.33.6> SSH: EXEC ssh -C -o ControlMaster=auto -o ControlPersist=60s -o 'IdentityFile="/root/vagrant_key"' -o KbdInteractiveAuthentication=no -o PreferredAuthentications=gssapi-with-mic,gssapi-keyex,hostbased,publickey -o PasswordAuthentication=no -o 'User="vagrant"' -o ConnectTimeout=10 -o 'ControlPath="/root/.ansible/cp/e657355820"' 192.168.33.6 '/bin/sh -c '"'"'rm -f -r /home/vagrant/.ansible/tmp/ansible-tmp-1685015470.7213252-3627-14211151300950/ > /dev/null 2>&1 && sleep 0'"'"''
<192.168.33.6> (0, b'', b'')
fatal: [192.168.33.6]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "annotation": null,
            "authfile": null,
            "blkio_weight": null,
            "blkio_weight_device": null,
            "cap_add": null,
            "cap_drop": null,
            "cgroup_parent": null,
            "cgroupns": null,
            "cgroups": null,
            "cidfile": null,
            "cmd_args": null,
            "command": null,
            "conmon_pidfile": null,
            "cpu_period": null,
            "cpu_rt_period": null,
            "cpu_rt_runtime": null,
            "cpu_shares": null,
            "cpus": null,
            "cpuset_cpus": null,
            "cpuset_mems": null,
            "debug": false,
            "detach": true,
            "detach_keys": null,
            "device": null,
            "device_read_bps": null,
            "device_read_iops": null,
            "device_write_bps": null,
            "device_write_iops": null,
            "dns": null,
            "dns_option": null,
            "dns_search": null,
            "entrypoint": null,
            "env": {
                "SONAR_JDBC_PASSWORD": "sonar",
                "SONAR_JDBC_URL": "jdbc:postgresql://localhost:5432/sonar",
                "SONAR_JDBC_USERNAME": "sonar",
                "SONAR_LOG_MAXFILES": "5",
                "SONAR_LOG_ROLLINGPOLICY": "size:10MB"
            },
            "env_file": null,
            "env_host": null,
            "etc_hosts": null,
            "executable": "podman",
            "expose": null,
            "force_restart": false,
            "generate_systemd": {
                "new": true,
                "path": "/home/vagrant/.config/systemd/user",
                "restart_policy": "always"
            },
            "gidmap": null,
            "group_add": null,
            "healthcheck": null,
            "healthcheck_interval": null,
            "healthcheck_retries": null,
            "healthcheck_start_period": null,
            "healthcheck_timeout": null,
            "hostname": null,
            "http_proxy": null,
            "image": "docker.io/library/sonarqube:9.8.0-community",
            "image_strict": false,
            "image_volume": null,
            "init": null,
            "init_path": null,
            "interactive": null,
            "ip": null,
            "ipc": null,
            "kernel_memory": null,
            "label": null,
            "label_file": null,
            "log_driver": "journald",
            "log_level": null,
            "log_opt": {
                "max_size": null,
                "path": null,
                "tag": "sonarqube"
            },
            "mac_address": null,
            "memory": null,
            "memory_reservation": null,
            "memory_swap": null,
            "memory_swappiness": null,
            "mount": null,
            "name": "sonarqube",
            "network": null,
            "network_aliases": null,
            "no_hosts": null,
            "oom_kill_disable": null,
            "oom_score_adj": null,
            "pid": null,
            "pids_limit": null,
            "pod": "ci-machine",
            "privileged": null,
            "publish": null,
            "publish_all": null,
            "read_only": null,
            "read_only_tmpfs": null,
            "recreate": false,
            "requires": null,
            "restart_policy": "always",
            "rm": true,
            "rootfs": null,
            "sdnotify": null,
            "secrets": null,
            "security_opt": null,
            "shm_size": null,
            "sig_proxy": null,
            "state": "started",
            "stop_signal": null,
            "stop_timeout": null,
            "subgidname": null,
            "subuidname": null,
            "sysctl": null,
            "systemd": null,
            "timezone": null,
            "tmpfs": null,
            "tty": null,
            "uidmap": null,
            "ulimit": null,
            "user": null,
            "userns": null,
            "uts": null,
            "volume": [
                "/home/vagrant/sonarqube/data:/opt/sonarqube/data:Z",
                "/home/vagrant/sonarqube/extensions:/opt/sonarqube/extensions:Z",
                "/home/vagrant/certs:/opt/certs:ro,z"
            ],
            "volumes": [
                "/home/vagrant/sonarqube/data:/opt/sonarqube/data:Z",
                "/home/vagrant/sonarqube/extensions:/opt/sonarqube/extensions:Z",
                "/home/vagrant/certs:/opt/certs:ro,z"
            ],
            "volumes_from": null,
            "workdir": null
        }
    },
    "msg": "Can't run container sonarqube",
    "stderr": "Error: the --rm option conflicts with --restart, when the restartPolicy is not \"\" and \"no\"\n",
    "stderr_lines": [
        "Error: the --rm option conflicts with --restart, when the restartPolicy is not \"\" and \"no\""
    ],
    "stdout": "",
    "stdout_lines": []
}

PLAY RECAP ****************************************************************************************************************************************************************
192.168.33.6               : ok=15   changed=7    unreachable=0    failed=1    skipped=1    rescued=0    ignored=0

Additional environment details (AWS, VirtualBox, physical, etc.):
I'm booting up a Vagrant machine from Windows, and I use WSL to run Ansible, from which I use the vagrant-generated SSH key.

@sshnaidm sshnaidm added the bug Something isn't working label May 30, 2023
@sshnaidm
Member

The problem is the duplication of restart_policy in both the container args and the systemd options. If you plan to manage containers with systemd, set restart_policy only there.
The systemd flag new implies --rm, and:

--rm is only allowed with on-failure as a restart policy.

as it seems from issue containers/podman#11438.
If you remove restart_policy: always from the container args and leave it only in generate_systemd, it will work fine.
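For illustration, the reporter's task with that change applied would look like this (a sketch based on the suggestion above, using the same variables as the original playbook; restart_policy is dropped from the container-level args and kept only under generate_systemd):

```yaml
    - name: Start the pod
      containers.podman.podman_container:
        name: postgresql-sonarqube
        image: "{{ postgres_container_image }}:{{ postgres_container_image_tag }}"
        # No container-level restart_policy here: the generated systemd unit
        # handles restarts, and a container-level policy conflicts with the
        # --rm implied by "new: true" below.
        env:
          POSTGRES_USER: "{{ postgres_user }}"
          POSTGRES_PASSWORD: "{{ postgres_password }}"
          POSTGRES_DB: "{{ postgres_user }}"
        volumes:
          - "/home/{{ remote_server_user }}/postgres:/var/lib/postgresql/data:Z,U"
        generate_systemd:
          path: "/home/{{ remote_server_user }}/.config/systemd/user"
          restart_policy: "always"
          new: true
        pod: "ci-machine"
        log_driver: "journald"
        log_opt:
          tag: "postgresql-sonarqube"
      tags: postgresql
```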

@QuentinFAIDIDE
Author

Awesome! Thank you for your help.
