podman pull ignores a TMPDIR override in containers.conf #12296

Closed
hakonhall opened this issue Nov 15, 2021 · 5 comments · Fixed by containers/common#828 or #12303
Labels
kind/bug Categorizes issue or PR as related to a bug. locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments.

Comments

@hakonhall

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Steps to reproduce the issue:

  1. Set /etc/containers/containers.conf to
[engine]
env = ["TMPDIR=/tmp"]
  2. Pull an image, e.g. podman pull rockylinux/rockylinux:8, and press Ctrl-Z to suspend the download so you can see where podman stages the image.

Describe the results you received:

Inspecting /tmp and /var/tmp, only /var/tmp has a new directory entry with a name like storage759289511 and a timestamp close to now. In other words, podman pulled the image to /var/tmp.

Describe the results you expected:

Expected podman pull to download to /tmp.

Pulling with an environment variable TMPDIR=/tmp works. Setting the env option of the engine table in containers.conf(5) is supposed to work in an equivalent way: "Environment variables to be used when running the container engine (e.g., Podman, Buildah)".
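
For context, the layer download is staged in a "big files" temporary directory that ends up being /var/tmp when TMPDIR is not visible to the podman process (which matches the storage759289511 directory observed above). A minimal, stdlib-only sketch of that selection logic, illustrative only and not the actual containers/image code:

package main

import (
	"fmt"
	"os"
)

// bigFilesTempDir sketches the observed behavior: honor $TMPDIR when it is
// exported, otherwise fall back to /var/tmp. Illustrative only.
func bigFilesTempDir() string {
	if dir := os.Getenv("TMPDIR"); dir != "" {
		return dir
	}
	return "/var/tmp"
}

func main() {
	fmt.Println(bigFilesTempDir()) // prints /var/tmp unless TMPDIR is exported
}

If containers.conf never exports TMPDIR into podman's own environment, a lookup like this sees an empty value and the pull lands in /var/tmp.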

Additional information you deem important (e.g. issue happens only occasionally):

Rootless podman works in the same way: Any TMPDIR in ~/.config/containers/containers.conf is ignored.

Output of podman version:

Version:      3.2.3
API Version:  3.2.3
Go Version:   go1.15.7
Built:        Thu Jul 29 17:02:43 2021
OS/Arch:      linux/amd64

Output of podman info --debug:

host:
  arch: amd64
  buildahVersion: 1.21.3
  cgroupControllers:
  - cpuset
  - cpu
  - cpuacct
  - blkio
  - memory
  - devices
  - freezer
  - net_cls
  - perf_event
  - net_prio
  - hugetlb
  - pids
  - rdma
  cgroupManager: systemd
  cgroupVersion: v1
  conmon:
    package: conmon-2.0.29-1.module+el8.4.0+11822+6cc1e7d7.x86_64
    path: /usr/bin/conmon
    version: 'conmon version 2.0.29, commit: ae467a0c8001179d4d0adf4ada381108a893d7ec'
  cpus: 12
  distribution:
    distribution: '"rhel"'
    version: "8.4"
  eventLogger: file
  hostname: 5fz23d3
  idMappings:
    gidmap: null
    uidmap: null
  kernel: 4.18.0-305.19.1.el8_4.x86_64
  linkmode: dynamic
  memFree: 6132224000
  memTotal: 33244651520
  ociRuntime:
    name: runc
    package: runc-1.0.0-74.rc95.module+el8.4.0+11822+6cc1e7d7.x86_64
    path: /usr/bin/runc
    version: |-
      runc version spec: 1.0.2-dev
      go: go1.15.13
      libseccomp: 2.5.1
  os: linux
  remoteSocket:
    path: /run/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_NET_RAW,CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: false
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: true
  serviceIsRemote: false
  slirp4netns:
    executable: ""
    package: ""
    version: ""
  swapFree: 50377781248
  swapTotal: 50377781248
  uptime: 5h 3m 53.37s (Approximately 0.21 days)
registries:
  search:
  - registry.access.redhat.com
  - registry.redhat.io
  - docker.io
store:
  configFile: /etc/containers/storage.conf
  containerStore:
    number: 0
    paused: 0
    running: 0
    stopped: 0
  graphDriverName: overlay
  graphOptions:
    overlay.mountopt: nodev,metacopy=on
  graphRoot: /var/lib/containers/storage
  graphStatus:
    Backing Filesystem: xfs
    Native Overlay Diff: "false"
    Supports d_type: "true"
    Using metacopy: "true"
  imageStore:
    number: 0
  runRoot: /run/containers/storage
  volumePath: /var/lib/containers/storage/volumes
version:
  APIVersion: 3.2.3
  Built: 1627570963
  BuiltTime: Thu Jul 29 17:02:43 2021
  GitCommit: ""
  GoVersion: go1.15.7
  OsArch: linux/amd64
  Version: 3.2.3

Package info (e.g. output of rpm -q podman or apt list podman):

podman-3.2.3-0.11.module+el8.4.0+12050+ef972f71.x86_64

Have you tested with the latest version of Podman and have you checked the Podman Troubleshooting Guide? (https://github.com/containers/podman/blob/master/troubleshooting.md)

No and yes

Additional environment details (AWS, VirtualBox, physical, etc.):

@openshift-ci openshift-ci bot added the kind/bug Categorizes issue or PR as related to a bug. label Nov 15, 2021
@hakonhall
Author

#8725 allowed specifying TMPDIR in containers.conf and was merged just 6 months prior to 3.2.3. But perhaps it never worked for pulls?

@rhatdan
Member

rhatdan commented Nov 15, 2021

	// Copy each Engine.Env entry from containers.conf into the process
	// environment, unless the variable is already defined there.
	for _, env := range cfg.Engine.Env {
		splitEnv := strings.SplitN(env, "=", 2)
		if len(splitEnv) != 2 {
			return fmt.Errorf("invalid environment variable for engine %s, valid configuration is KEY=value pair", env)
		}
		// skip if the env is already defined
		if _, ok := os.LookupEnv(splitEnv[0]); ok {
			logrus.Debugf("environment variable %s is already defined, skip the settings from containers.conf", splitEnv[0])
			continue
		}
		if err := os.Setenv(splitEnv[0], splitEnv[1]); err != nil {
			return err
		}
	}

From the code, it looks like it will not be set if TMPDIR is already set in the environment.

unset TMPDIR; podman ...

should set the environment based on containers.conf.

It should work for podman pulls in podman 3.4, and probably earlier. How did you check?
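
For illustration, a minimal, self-contained sketch of the same "skip if already defined" behavior (not the containers/common code itself; the helper name applyEngineEnv is made up):

package main

import (
	"fmt"
	"os"
	"strings"
)

// applyEngineEnv copies KEY=value pairs into the process environment,
// but never overrides a variable the caller has already set.
func applyEngineEnv(engineEnv []string) error {
	for _, env := range engineEnv {
		kv := strings.SplitN(env, "=", 2)
		if len(kv) != 2 {
			return fmt.Errorf("invalid environment variable %q, expected KEY=value", env)
		}
		if _, ok := os.LookupEnv(kv[0]); ok {
			continue // the existing environment wins over containers.conf
		}
		if err := os.Setenv(kv[0], kv[1]); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	// With TMPDIR unset in the caller's environment, the containers.conf value takes effect.
	if err := applyEngineEnv([]string{"TMPDIR=/tmp"}); err != nil {
		panic(err)
	}
	fmt.Println(os.Getenv("TMPDIR"))
}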

@rhatdan
Member

rhatdan commented Nov 15, 2021

Could you try the same test in podman 3.4? I fixed something in this area back in June.

commit 7864108
Author: Daniel J Walsh [email protected]
Date: Fri Jun 18 17:27:39 2021 -0400

    fix systemcontext to use correct TMPDIR
    
    Users are complaining about read/only /var/tmp failing
    even if TMPDIR=/tmp is set.
    
    This PR Fixes: https://github.com/containers/podman/issues/10698
    
    [NO TESTS NEEDED] No way to test this.
    
    Signed-off-by: Daniel J Walsh <[email protected]>
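
That commit's idea is to make the system context used for pulls honor TMPDIR. A rough sketch of the pattern, assuming the containers/image/v5 types.SystemContext API; the tmpDir helper below is hypothetical, not podman's actual code:

package main

import (
	"fmt"
	"os"

	"github.com/containers/image/v5/types"
)

// tmpDir is a hypothetical helper: prefer $TMPDIR, fall back to /var/tmp.
func tmpDir() string {
	if dir := os.Getenv("TMPDIR"); dir != "" {
		return dir
	}
	return "/var/tmp"
}

func main() {
	// BigFilesTemporaryDir tells containers/image where to stage large blobs
	// such as image layers during a pull.
	sys := &types.SystemContext{
		BigFilesTemporaryDir: tmpDir(),
	}
	fmt.Println(sys.BigFilesTemporaryDir)
}

For this to help, TMPDIR from containers.conf has to be in the environment before the system context is built, which is why the containers/common fix referenced below sets Engine.Env very early in the setup process.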

rhatdan added a commit to rhatdan/common that referenced this issue Nov 15, 2021
The Engine.Env needs to be set very early in the setup process
to make sure no one attempts to use the environment.

Fixes: containers/podman#12296

Signed-off-by: Daniel J Walsh <[email protected]>
rhatdan added a commit to rhatdan/podman that referenced this issue Nov 15, 2021
Fixes: containers#12296

[NO NEW TESTS NEEDED] because there is no easy way to test this.
Tests are in containers/common.

Signed-off-by: Daniel J Walsh <[email protected]>
@hakonhall
Author

From the code, it looks like it will not be set if TMPDIR is already set in the environment. unset TMPDIR; podman ...

should set the environment based on containers.conf.

I verified TMPDIR was not set in the environment before invoking podman: TMPDIR did not appear in /proc/<podman-pid>/environ, and sudo bash -c 'echo $TMPDIR' printed nothing.

Thanks for the quick fix. I take it you have reproduced the issue and verified the fix.

@rhatdan
Member

rhatdan commented Nov 16, 2021

Yup, I pretty much followed your procedure. Thanks for finding it.

@github-actions github-actions bot added the locked - please file new issue/PR Assist humans wanting to comment on an old issue or PR with locked comments. label Sep 21, 2023
@github-actions github-actions bot locked as resolved and limited conversation to collaborators Sep 21, 2023