Support Azure DevOps resource containers for builds #11265

Closed
jamjon3 opened this issue Aug 18, 2021 · 5 comments · Fixed by #11280

jamjon3 commented Aug 18, 2021

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description
Azure DevOps on self-hosted agents (RHEL8/Podman) cannot load containers defined in 'resources'.

It fails on this command in the "Initialize containers" step:

/usr/bin/docker info -f "{{range .Plugins.Network}}{{println .}}{{end}}"

This fails because podman info has no .Plugins.Network field. Running docker info against real Docker produces a Plugins section that is missing from podman info:

 Plugins:
  Volume: local
  Network: bridge host ipvlan macvlan null overlay
  Log: awslogs fluentd gcplogs gelf journald json-file local logentries splunk syslog

I'm using the 'podman-docker' package so that podman looks like 'docker' to the Azure DevOps agent.
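
Not part of the original report: a self-contained Go sketch of the failure mechanism, assuming only that podman's info type lacks a Plugins field. text/template aborts as soon as it cannot resolve .Plugins on the value being rendered, which matches the error in the agent log further down.

package main

import (
    "os"
    "text/template"
)

// dockerInfo loosely mirrors the shape that real Docker reports.
type dockerInfo struct {
    Plugins struct {
        Network []string
    }
}

// podmanInfo stands in for podman's *define.Info, which had no Plugins field.
type podmanInfo struct {
    Host struct{ Arch string }
}

func main() {
    tmpl := template.Must(template.New("info").Parse(
        `{{range .Plugins.Network}}{{println .}}{{end}}`))

    // Against a Docker-shaped value the template prints one driver per line.
    d := dockerInfo{}
    d.Plugins.Network = []string{"bridge", "host", "macvlan"}
    _ = tmpl.Execute(os.Stdout, d)

    // Against a value without Plugins it fails:
    // can't evaluate field Plugins in type main.podmanInfo
    if err := tmpl.Execute(os.Stdout, podmanInfo{}); err != nil {
        println(err.Error())
    }
}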

Steps to reproduce the issue:

  1. Add a 'resources' section like the following to azure-pipelines.yml:
resources:
  containers:
  - container: win_rm
    image: nexus01.pd.pods.com:5443/pods-llc/swe/containers/base/devops_winrm_container:latest
    endpoint: pods_nexus_registry
  2. Add a 'jobs' section with a reference to the agent pool and the container:
    jobs:
      - job: BuildInsideContainer
        displayName: building inside a container
        pool: mypool
        container: win_rm

  3. Run the pipeline (include some 'steps' of course). The "Initialize containers" step should pull the container and set up the 'steps' to run inside the specified container.

Describe the results you received:

Starting: Initialize containers
/usr/bin/docker version --format '{{.Server.APIVersion}}'
''3.2.3'
Docker daemon API version: ''3.2.3'
/usr/bin/docker version --format '{{.Client.APIVersion}}'
'3.2.3'
Docker client API version: '3.2.3'
/usr/bin/docker ps --all --quiet --no-trunc --filter "label=dc4b27"
/usr/bin/docker network prune --force --filter "label=dc4b27"
/usr/bin/docker login --username "***" --password-stdin ***
Login Succeeded!
/usr/bin/docker pull nexus01.pd.pods.com:5443/pods-llc/swe/containers/base/devops_winrm_container:latest
Trying to pull nexus01.pd.pods.com:5443/pods-llc/swe/containers/base/devops_winrm_container:latest...
Getting image source signatures
Copying blob sha256:d0badf5ab1aefb2806b494241481be6171425991987d6023c90074ea9404d6d8
Copying blob sha256:0a0b8f5ff20da9ce383904f041b750faee36dc6a258bc0100e7fcaa5d01b5101
Copying blob sha256:f9aac8178ace131a12b7f8e848ea3a2bb6b65eb6841946cf048588d161df2ff2
Copying blob sha256:fc725350b2637af7c79163bea9e3df54d78712803a01c97fb45333abd34807c0
Copying blob sha256:d251a2e2e8a37b8c79ad94dec69eaa86aeca6498ed8f3979e41005c309ddfa9a
Copying blob sha256:1b474f8e669eca50e71598ac473acae7d517247f94cee83b928c03bd53dc2ee0
Copying blob sha256:b77f066bf58c59e5edf0518c85b448a4c6b343b8b4e74c4fee6055a7942b01dd
Copying blob sha256:f443915232178bf37943f98091f60d27660b8fc7d29b31d741b92215bb87f930
Copying blob sha256:22a76ff78b8cd21fce32f3fa9e01428c7e94b795d0259564f5fa6eac6c02f163
Copying config sha256:606eda11e92bdc5b0e33f144415e9ff38c38aa12b0215c06fbeb91db85302a3c
Writing manifest to image destination
Storing signatures
606eda11e92bdc5b0e33f144415e9ff38c38aa12b0215c06fbeb91db85302a3c
/usr/bin/docker logout ***
Removed login credentials for nexus01.pd.pods.com:5443
/usr/bin/docker info -f "{{range .Plugins.Network}}{{println .}}{{end}}"
Error: template: info:1:16: executing "info" at <.Plugins.Network>: can't evaluate field Plugins in type *define.Info
##[error]Exit code 125 returned from process: file name '/usr/bin/docker', arguments 'info -f "{{range .Plugins.Network}}{{println .}}{{end}}"'.
Finishing: Initialize containers

Describe the results you expected:

With regular docker, docker info works fine:

Login Succeeded
/bin/docker pull nexus01.pd.pods.com:5443/pods-llc/swe/containers/base/devops_winrm_container:latest
latest: Pulling from pods-llc/swe/containers/base/devops_winrm_container
Digest: sha256:f11bfc11854f07c6f23b10de23fdb483c4b09e42fc7d522006ee8ea5e64cd210
Status: Image is up to date for nexus01.pd.pods.com:5443/pods-llc/swe/containers/base/devops_winrm_container:latest
nexus01.pd.pods.com:5443/pods-llc/swe/containers/base/devops_winrm_container:latest
/bin/docker logout ***
Removing login credentials for nexus01.pd.pods.com:5443
/bin/docker info -f "{{range .Plugins.Network}}{{println .}}{{end}}"
bridge
host
ipvlan
macvlan
null
overlay
/bin/docker network create --label 9f0d2f vsts_network_bf2a782037e240bbb012f1d5decbb5a6
7eaf4da58f61ff6a995593ed64f1e2b7d349eff90afed21bf86f74f5ba2b18dd
/bin/docker inspect --format="{{index .Config.Labels \"com.azure.dev.pipelines.agent.handler.node.path\"}}" nexus01.pd.pods.com:5443/pods-llc/swe/containers/base/devops_winrm_container:latest
/bin/docker create --name win_rm_nexus01pdpodscom5443podsllcswecontainersbasedevops_winrm_containerrelease130_f6c662 --label 9f0d2f --network vsts_network_bf2a782037e240bbb012f1d5decbb5a6  -v "/var/run/docker.sock":"/var/run/docker.sock" -v "/usr/local/share/vsts-agent/61":"/__a/61" -v "/usr/local/share/vsts-agent/_temp":"/__a/_temp" -v "/usr/local/share/vsts-agent/_tasks":"/__a/_tasks" -v "/usr/local/share/vsts-agent/_tool":"/__t" -v "/usr/local/share/vsts-agent/externals":"/__a/externals":ro -v "/usr/local/share/vsts-agent/.taskkey":"/__a/.taskkey" nexus01.pd.pods.com:5443/pods-llc/swe/containers/base/devops_winrm_container:latest "/__a/externals/node/bin/node" -e "setInterval(function(){}, 24 * 60 * 60 * 1000);"
546fef1066d5156b1c7777129bfa3380704661f8a07fbf66e2aa74f99e5c6e32
/bin/docker start 546fef1066d5156b1c7777129bfa3380704661f8a07fbf66e2aa74f99e5c6e32
546fef1066d5156b1c7777129bfa3380704661f8a07fbf66e2aa74f99e5c6e32
/bin/docker ps --all --filter id=546fef1066d5156b1c7777129bfa3380704661f8a07fbf66e2aa74f99e5c6e32 --filter status=running --no-trunc --format "{{.ID}} {{.Status}}"
546fef1066d5156b1c7777129bfa3380704661f8a07fbf66e2aa74f99e5c6e32 Up Less than a second
/bin/docker exec  546fef1066d5156b1c7777129bfa3380704661f8a07fbf66e2aa74f99e5c6e32 sh -c "command -v bash"
/usr/bin/bash
whoami 
DevOps1
id -u DevOps1
1000
Try to create a user with UID '1000' inside the container.
/bin/docker exec  546fef1066d5156b1c7777129bfa3380704661f8a07fbf66e2aa74f99e5c6e32 bash -c "getent passwd 1000 | cut -d: -f1 "
/bin/docker exec  546fef1066d5156b1c7777129bfa3380704661f8a07fbf66e2aa74f99e5c6e32 useradd -m -u 1000 DevOps1_azpcontainer
Grant user 'DevOps1_azpcontainer' SUDO privilege and allow it run any command without authentication.
/bin/docker exec  546fef1066d5156b1c7777129bfa3380704661f8a07fbf66e2aa74f99e5c6e32 groupadd azure_pipelines_sudo
/bin/docker exec  546fef1066d5156b1c7777129bfa3380704661f8a07fbf66e2aa74f99e5c6e32 usermod -a -G azure_pipelines_sudo DevOps1_azpcontainer
/bin/docker exec  546fef1066d5156b1c7777129bfa3380704661f8a07fbf66e2aa74f99e5c6e32 su -c "echo '%azure_pipelines_sudo ALL=(ALL:ALL) NOPASSWD:ALL' >> /etc/sudoers"
Allow user 'DevOps1_azpcontainer' run any docker command without SUDO.
stat -c %g /var/run/docker.sock
992
/bin/docker exec  546fef1066d5156b1c7777129bfa3380704661f8a07fbf66e2aa74f99e5c6e32 bash -c "cat /etc/group"
/bin/docker exec  546fef1066d5156b1c7777129bfa3380704661f8a07fbf66e2aa74f99e5c6e32 groupadd -g 992 azure_pipelines_docker
/bin/docker exec  546fef1066d5156b1c7777129bfa3380704661f8a07fbf66e2aa74f99e5c6e32 usermod -a -G azure_pipelines_docker DevOps1_azpcontainer
Finishing: Initialize containers

Additional information you deem important (e.g. issue happens only occasionally):

Output of podman version:

Version:      3.2.3
API Version:  3.2.3
Go Version:   go1.15.7
Built:        Tue Jul 27 07:29:39 2021
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.21.3
  cgroupControllers:
  - cpuset
  - cpu
  - cpuacct
  - blkio
  - memory
  - devices
  - freezer
  - net_cls
  - perf_event
  - net_prio
  - hugetlb
  - pids
  - rdma
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.29-1.module+el8.4.0+11822+6cc1e7d7.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.29, commit: ae467a0c8001179d4d0adf4ada381108a893d7ec'
  cpus: 8
  distribution:
    distribution: '"rhel"'
    version: "8.4"
  eventLogger: file
  hostname: tpapdvlcibld06
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-305.12.1.el8_4.x86_64
  linkmode: dynamic
  memFree: 15081103360
  memTotal: 16600363008
  ociRuntime:
    name: runc
    package: runc-1.0.0-74.rc95.module+el8.4.0+11822+6cc1e7d7.x86_64
    path: /usr/bin/runc
    version: |-
      runc version spec: 1.0.2-dev
      go: go1.15.13
      libseccomp: 2.5.1
  os: linux
  remoteSocket:
    exists: true
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 34355539968
  swapTotal: 34355539968
  uptime: 4h 39m 4.18s (Approximately 0.17 days)
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 0
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.2.3
  Built: 1627370979
  BuiltTime: Tue Jul 27 07:29:39 2021
  GitCommit: ""
  GoVersion: go1.15.7
  OsArch: linux/amd64
  Version: 3.2.3

Package info (e.g. output of rpm -q podman or apt list podman):

podman-3.2.3-0.10.module+el8.4.0+11989+6676f7ad.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

No and yes: I'm using the distribution package for RHEL8, but I have checked the Podman Troubleshooting Guide.

Additional environment details (AWS, VirtualBox, physical, etc.):

VMWare virtual machine

@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label Aug 18, 2021

rhatdan commented Aug 19, 2021

@Luap99 PTAL


Luap99 commented Aug 19, 2021

Adding this field should be simple, but I am wondering how to query the supported drivers.
For log we support journald, k8s-file, and none.
Network should be bridge and macvlan, but I think we should add this to the new network interface at some point.
@mheon I assume it is also possible to get a list of configured volume plugins?
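
(Not from the thread: a minimal Go sketch of a docker-compatible plugins block filled with the driver names listed above. The struct and JSON keys are assumptions, not podman's actual define package.)

package main

import (
    "encoding/json"
    "fmt"
)

// Plugins roughly mirrors Docker's "Plugins" info section; the exact type
// and JSON keys podman ended up using are assumptions here.
type Plugins struct {
    Volume  []string `json:"volume"`
    Network []string `json:"network"`
    Log     []string `json:"log"`
}

func main() {
    p := Plugins{
        Volume:  []string{"local"},                        // default volume driver
        Network: []string{"bridge", "macvlan"},            // network drivers named above
        Log:     []string{"k8s-file", "none", "journald"}, // podman's log drivers
    }
    out, _ := json.MarshalIndent(p, "", "  ")
    fmt.Println(string(out))
    // With a field like this present, the agent's template
    // {{range .Plugins.Network}}{{println .}}{{end}} has something to iterate.
}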


flouthoc commented Aug 19, 2021

@Luap99 volume plugins are masked under the driver field AFAIK. Users configure plugins manually; it usually defaults to local everywhere.


mheon commented Aug 19, 2021

@Luap99 They're just stored in containers.conf - https://github.com/containers/common/blob/main/pkg/config/config.go#L396

The map is the plugin name (what you want) to the plugin socket path, so iterating through it and taking all the names should be sufficient.
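
(A hedged sketch of the iteration described above: collecting volume-plugin names from a containers.conf-style map of plugin name to socket path. The map shape comes from the linked config; the helper itself is illustrative, not podman's code.)

package main

import (
    "fmt"
    "sort"
)

// volumePluginNames turns a plugin-name -> socket-path map into the list of
// names to report; "local" is always available even with no plugins configured.
func volumePluginNames(volumePlugins map[string]string) []string {
    names := []string{"local"}
    for name := range volumePlugins {
        names = append(names, name)
    }
    sort.Strings(names) // deterministic order for the info output
    return names
}

func main() {
    // Hypothetical plugin entry, as it might appear in containers.conf.
    cfg := map[string]string{"my-nfs-plugin": "/run/plugins/my-nfs.sock"}
    fmt.Println(volumePluginNames(cfg)) // [local my-nfs-plugin]
}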

Luap99 added a commit to Luap99/libpod that referenced this issue Aug 19, 2021
For docker compat include information about available volume, log and
network drivers which should be listed under the plugins key.

Fixes containers#11265

Signed-off-by: Paul Holzinger <[email protected]>

jamjon3 commented Aug 19, 2021

Thank you all. I'll watch the PR discussion and stay out of the way, but I'm still active and following the conversation.

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 21, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023