
Would like ability to add additional args to podman build #69

Closed
jharmison-redhat opened this issue Jul 1, 2020 · 4 comments · Fixed by #83
Labels
enhancement New feature or request

Comments

@jharmison-redhat

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind feature

Description

Would like the ability to provide arbitrary args to the podman build invocation through the command constructor, or support for more of the args that podman accepts. In my particular use case, I need to provide ulimit args for RUN statements in a Dockerfile, which podman build supports natively.

Steps to reproduce the issue:

  1. Include a RUN statement in your Dockerfile that requires a ulimit other than the default of 1024.

  2. Use podman build /path --ulimit=nofile=4096:4096 -t imagename

  3. Attempt to find a way to do the same with containers.podman.podman_image.

Describe the results you received:

podman build: Works as expected
containers.podman.podman_image: The podman build args are constructed incrementally from a fixed set of options, with no ability to specify either ulimit or arbitrary build args.

Describe the results you expected:

One of the following:

  • containers.podman.podman_image provides a ulimit arg, either as a suboption to build or in some other useful way.
  • containers.podman.podman_image provides a way to pass arbitrary args on invocation of podman build, allowing me to use the --ulimit arg.

Additional information you deem important (e.g. issue happens only occasionally):

I can implement this, but I would like to know what the project's preferred implementation would be.

Output of ansible --version:

$ ansible --version
ansible 2.9.10
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/james/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/james/.local/lib/python3.8/site-packages/ansible
  executable location = /home/james/.local/bin/ansible
  python version = 3.8.3 (default, May 29 2020, 00:00:00) [GCC 10.1.1 20200507 (Red Hat 10.1.1-1)]

Output of podman version:

$ podman version
Version:            1.9.3
RemoteAPI Version:  1
Go Version:         go1.14.2
OS/Arch:            linux/amd64

Output of podman info --debug:

$ podman info --debug
debug:
  compiler: gc
  gitCommit: ""
  goVersion: go1.14.2
  podmanVersion: 1.9.3
host:
  arch: amd64
  buildahVersion: 1.14.9
  cgroupVersion: v2
  conmon:
    package: conmon-2.0.18-1.fc32.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.18, commit: 6e8799f576f11f902cd8a8d8b45b2b2caf636a85'
  cpus: 32
  distribution:
    distribution: fedora
    version: "32"
  eventLogger: file
  hostname: ws.jharmison.com
  idMappings:
    gidmap:
    - container_id: 0
      host_id: 752000001
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
    uidmap:
    - container_id: 0
      host_id: 752000001
      size: 1
    - container_id: 1
      host_id: 100000
      size: 65536
  kernel: 5.6.19-300.fc32.x86_64
  memFree: 18120613888
  memTotal: 67414728704
  ociRuntime:
    name: crun
    package: crun-0.13-2.fc32.x86_64
    path: /usr/bin/crun
    version: |-
      crun version 0.13
      commit: e79e4de4ac16da0ce48777afb72c6241de870525
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +YAJL
  os: linux
  rootless: true
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns-1.1.1-1.fc32.x86_64
    version: |-
      slirp4netns version 1.1.1
      commit: bbf27c5acd4356edb97fa639b4e15e0cd56a39d5
      libslirp: 4.2.0
      SLIRP_CONFIG_VERSION_MAX: 2
  swapFree: 34359209984
  swapTotal: 34359734272
  uptime: 25h 45m 22.25s (Approximately 1.04 days)
registries:
  search:
  - registry.fedoraproject.org
  - registry.redhat.io
  - registry.access.redhat.com
  - registry.centos.org
  - docker.io
store:
  configFile: /home/james/.config/containers/storage.conf
  containerStore:
    number: 3
    paused: 0
    running: 1
    stopped: 2
  graphDriverName: overlay
  graphOptions:
    overlay.mount_program:
      Executable: /usr/bin/fuse-overlayfs
      Package: fuse-overlayfs-1.1.1-1.fc32.x86_64
      Version: |-
        fusermount3 version: 3.9.1
        fuse-overlayfs: version 1.1.0
        FUSE library version 3.9.1
        using FUSE kernel interface version 7.31
  graphRoot: /home/james/.local/share/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "false"
  imageStore:
    number: 298
  runRoot: /run/user/752000001/containers
  volumePath: /home/james/.local/share/containers/storage/volumes

Package info (e.g. output of rpm -q podman or apt list podman):

$ rpm -q podman
podman-1.9.3-1.fc32.x86_64

Playbook you run with ansible (e.g. content of playbook.yaml):

---
- hosts: localhost
  tasks:
    - containers.podman.podman_image:
        name: thing
        path: '.'
        build:
          format: docker

Command line and output of ansible run with high verbosity:

ansible-playbook 2.9.10
  config file = /etc/ansible/ansible.cfg
  configured module search path = ['/home/james/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /home/james/.local/lib/python3.8/site-packages/ansible
  executable location = /home/james/.local/bin/ansible-playbook
  python version = 3.8.3 (default, May 29 2020, 00:00:00) [GCC 10.1.1 20200507 (Red Hat 10.1.1-1)]
Using /etc/ansible/ansible.cfg as config file
host_list declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
script declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
auto declined parsing /etc/ansible/hosts as it did not pass its verify_file() method
Parsed /etc/ansible/hosts inventory source with ini plugin
[WARNING]: provided hosts list is empty, only localhost is available. Note that the implicit localhost does not match 'all'

PLAYBOOK: playbook.yml **********************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
1 plays in playbook.yml

PLAY [localhost] ****************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************

TASK [Gathering Facts] **********************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
task path: /home/james/Projects/ansible-for-devops/example-ulimit/playbook.yml:2
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: james
<127.0.0.1> EXEC /bin/sh -c 'echo ~james && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/james/.ansible/tmp `"&& mkdir /home/james/.ansible/tmp/ansible-tmp-1593620553.5421648-251491-127085850747893 && echo ansible-tmp-1593620553.5421648-251491-127085850747893="` echo /home/james/.ansible/tmp/ansible-tmp-1593620553.5421648-251491-127085850747893 `" ) && sleep 0'
Using module file /home/james/.local/lib/python3.8/site-packages/ansible/modules/system/setup.py
<127.0.0.1> PUT /home/james/.ansible/tmp/ansible-local-251486cjccp_hj/tmposlz8_w1 TO /home/james/.ansible/tmp/ansible-tmp-1593620553.5421648-251491-127085850747893/AnsiballZ_setup.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/james/.ansible/tmp/ansible-tmp-1593620553.5421648-251491-127085850747893/ /home/james/.ansible/tmp/ansible-tmp-1593620553.5421648-251491-127085850747893/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /home/james/.ansible/tmp/ansible-tmp-1593620553.5421648-251491-127085850747893/AnsiballZ_setup.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/james/.ansible/tmp/ansible-tmp-1593620553.5421648-251491-127085850747893/ > /dev/null 2>&1 && sleep 0'
ok: [localhost]
META: ran handlers

TASK [containers.podman.podman_image] *******************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
task path: /home/james/Projects/ansible-for-devops/example-ulimit/playbook.yml:4
<127.0.0.1> ESTABLISH LOCAL CONNECTION FOR USER: james
<127.0.0.1> EXEC /bin/sh -c 'echo ~james && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '( umask 77 && mkdir -p "` echo /home/james/.ansible/tmp `"&& mkdir /home/james/.ansible/tmp/ansible-tmp-1593620554.172542-251582-97793637764110 && echo ansible-tmp-1593620554.172542-251582-97793637764110="` echo /home/james/.ansible/tmp/ansible-tmp-1593620554.172542-251582-97793637764110 `" ) && sleep 0'
Using module file /home/james/.ansible/collections/ansible_collections/containers/podman/plugins/modules/podman_image.py
<127.0.0.1> PUT /home/james/.ansible/tmp/ansible-local-251486cjccp_hj/tmpn56_r5p0 TO /home/james/.ansible/tmp/ansible-tmp-1593620554.172542-251582-97793637764110/AnsiballZ_podman_image.py
<127.0.0.1> EXEC /bin/sh -c 'chmod u+x /home/james/.ansible/tmp/ansible-tmp-1593620554.172542-251582-97793637764110/ /home/james/.ansible/tmp/ansible-tmp-1593620554.172542-251582-97793637764110/AnsiballZ_podman_image.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c '/usr/bin/python3 /home/james/.ansible/tmp/ansible-tmp-1593620554.172542-251582-97793637764110/AnsiballZ_podman_image.py && sleep 0'
<127.0.0.1> EXEC /bin/sh -c 'rm -f -r /home/james/.ansible/tmp/ansible-tmp-1593620554.172542-251582-97793637764110/ > /dev/null 2>&1 && sleep 0'
fatal: [localhost]: FAILED! => {
    "changed": false,
    "invocation": {
        "module_args": {
            "auth_file": null,
            "build": {
                "annotation": null,
                "cache": true,
                "force_rm": null,
                "format": "docker",
                "rm": true,
                "volume": null
            },
            "ca_cert_dir": null,
            "executable": "podman",
            "force": false,
            "name": "thing",
            "password": null,
            "path": ".",
            "pull": true,
            "push": false,
            "push_args": {
                "compress": null,
                "dest": null,
                "format": null,
                "remove_signatures": null,
                "sign_by": null,
                "transport": null
            },
            "state": "present",
            "tag": "latest",
            "username": null,
            "validate_certs": true
        }
    },
    "msg": "Failed to build image thing:latest:  Error: UNABLE TO DO THING WITH LOW ULIMIT\nError: error building at STEP \"RUN /app/install-script\": error while running runtime: exit status 1\n"
}

PLAY RECAP **********************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************************
localhost                  : ok=1    changed=0    unreachable=0    failed=1    skipped=0    rescued=0    ignored=0   

Additional environment details (AWS, VirtualBox, physical, etc.):

Physical runs on my host. This is a small example to demonstrate the particulars of my use case, with my actual implementation being a complex playbook with many roles, a custom library, etc.

Dockerfile contents:

FROM registry.redhat.io/ubi8/ubi-init
COPY install-script /app/install-script
RUN /app/install-script
CMD ["/bin/bash", "-l"]

install-script contents:

#!/bin/bash

if [ $(ulimit -H -n) -lt 4096 ]; then
    echo "Error: UNABLE TO DO THING WITH LOW ULIMIT" >&2
    exit 1
fi

echo 'echo "I did the thing."' >> /root/.bashrc
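The gate in install-script can be exercised outside a container as well; a minimal sketch (the check_ulimit wrapper is added here purely for illustration):

```shell
# Reproduce install-script's gate: fail when the hard
# open-files limit is below 4096.
check_ulimit() {
    if [ "$(ulimit -H -n)" -lt 4096 ]; then
        echo "Error: UNABLE TO DO THING WITH LOW ULIMIT" >&2
        return 1
    fi
    return 0
}

# podman build's default environment gives RUN steps nofile=1024,
# so the check fails there; simulate that in a subshell (lowering
# a limit is always permitted, and the parent shell is unaffected).
( ulimit -n 1024; check_ulimit ) && echo "passed" || echo "failed"
```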

Output of podman build without and with the --ulimit arg:

$ podman build . -t thing
STEP 1: FROM registry.redhat.io/ubi8/ubi-init
STEP 2: COPY install-script /app/install-script
--> 05990234291
STEP 3: RUN /app/install-script
Error: UNABLE TO DO THING WITH LOW ULIMIT
Error: error building at STEP "RUN /app/install-script": error while running runtime: exit status 1
$ podman build . -t thing --ulimit=nofile=4096:4096
STEP 1: FROM registry.redhat.io/ubi8/ubi-init
STEP 2: COPY install-script /app/install-script
--> Using cache 0599023429196fc3ceb7c209070de0ff7c5beafd33fea0def632afd41098fe21
STEP 3: RUN /app/install-script
--> 441ed24adae
STEP 4: CMD ["/bin/bash", "-l"]
STEP 5: COMMIT thing
--> 306b5b50c8c
306b5b50c8c32c87e62664290eaf1a6be2205c1611dfc8b8b5854ebc74c3759a
$ podman run -it --rm thing
I did the thing.
[root@bd60e2d7101c /]# exit
logout
@jharmison-redhat
Author

I would like to note that while implementing this, I noticed that the podman_image test suite has no mechanism to test several of the other suboptions provided to build. I am unsure whether you would like a test built for my implementation; an example is visible at jharmison-redhat@7d73fe0.

The following playbook executed correctly with that installed:

---
- hosts: localhost
  tasks:
    - containers.podman.podman_image:
        name: thing
        path: '.'
        build:
          format: docker
          extra_args: --ulimit=nofile=4096:4096

@sshnaidm
Member

sshnaidm commented Jul 2, 2020

@jharmison-redhat thanks, podman_image currently has only very basic tests, but I think it's worth starting by at least testing your change. So feel free to add a test for your extra_args option; we can add other tests later.
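A test exercising the proposed option might look like the following sketch; the task names, the image name, and the asserted condition are illustrative assumptions, and the extra_args shape follows the example playbook earlier in this thread:

```yaml
# Hypothetical integration-test tasks for an extra_args build option.
# The image name "ulimit-test" and the assert condition are illustrative.
- name: Build an image that needs a raised nofile limit during RUN
  containers.podman.podman_image:
    name: ulimit-test
    path: "{{ playbook_dir }}"
    build:
      format: docker
      extra_args: --ulimit=nofile=4096:4096
  register: build_result

- name: Verify the build reported a change
  assert:
    that:
      - build_result is changed
```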

@sshnaidm sshnaidm added the enhancement New feature or request label Jul 2, 2020
@sshnaidm
Member

@jharmison-redhat I included this change; I needed it for the cgroups args of podman build in #83

sshnaidm added a commit that referenced this issue Jul 16, 2020
Fix idempotency issues in podman_container
Add creating workdir, buildah issue: containers/buildah#2475
Fix #68
Fix #69
Should help to #80 as well, but will be handled separately.
@jharmison-redhat
Author

@jharmison-redhat I included this change; I needed it for the cgroups args of podman build in #83

Was on PTO last week and pretty busy juggling the week prior, glad you were able to get this in. Thanks!
