Doesn't work if user is managed by Active Directory (contains @ "at" sign) #585

Open
fallendusk opened this issue Oct 18, 2020 · 13 comments
Labels
1. Bug Something isn't working

Comments

@fallendusk

Describe the bug
Toolbox fails to initialize when run as a domain/enterprise user. The /run/user/809201000/toolbox/container-initialized file is never created. I manually added this user to /etc/subuid and /etc/subgid. Toolbox works on this VM with a non-enterprise login. I can manually run podman to launch the fedora-toolbox container, and it also works, sans all the toolbox magic.
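
For reference, the entries follow the usual name:start:count format of subuid(5) and subgid(5); the offset and count below are illustrative, not my exact values:

[email protected]:100000:65536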

Steps how to reproduce the behaviour
Try to run toolbox enter as an enterprise user.

Expected behaviour
Toolbox should work as it does with a normal Linux user.

Actual behaviour

$ toolbox enter -vv
DEBU Running as real user ID 809201000            
DEBU Resolved absolute path to the executable as /usr/bin/toolbox 
DEBU Running on a cgroups v2 host                 
DEBU Checking if /etc/subgid and /etc/subuid have entries for user greg 
DEBU TOOLBOX_PATH is /usr/bin/toolbox             
DEBU Toolbox config directory is /home/[email protected]/.config/toolbox 
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called version.PersistentPreRunE(podman --log-level debug version --format json) 
DEBU[0000] Reading configuration file "/usr/share/containers/containers.conf" 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.22.0 Annotations:[] CgroupNS:private Cgroups:enabled DefaultCapabilities:[CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] DefaultSysctls:[] DefaultUlimits:[] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableLabeling:true Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:false Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:slirp4netns NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{CgroupCheck:false CgroupManager:systemd ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/user/809201000/libpod/tmp/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand:/pause InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NoPivotRoot:false NumLocks:2048 OCIRuntime:crun OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/home/[email protected]/.local/share/containers/storage/libpod StopTimeout:10 TmpDir:/run/user/809201000/libpod/tmp VolumePath:/home/[email protected]/.local/share/containers/storage/volumes} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/var/home/[email protected]/.config/cni/net.d}} 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/[email protected]/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/[email protected]/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/809201000/containers 
DEBU[0000] Using static dir /home/[email protected]/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/809201000/libpod/tmp 
DEBU[0000] Using volume path /home/[email protected]/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false 
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument 
INFO[0000] Setting parallel job count to 7              
DEBU[0000] Called version.PersistentPostRunE(podman --log-level debug version --format json) 
DEBU Current Podman version is 2.1.1              
DEBU Old Podman version is 2.1.1                  
DEBU Migration not needed: Podman version 2.1.1 is unchanged 
DEBU Resolving container and image names          
DEBU Container: ''                                
DEBU Image: ''                                    
DEBU Release: ''                                  
DEBU Resolved container and image names           
DEBU Container: 'fedora-toolbox-32'               
DEBU Image: 'fedora-toolbox:32'                   
DEBU Release: '32'                                
DEBU Checking if container fedora-toolbox-32 exists 
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called exists.PersistentPreRunE(podman --log-level debug container exists fedora-toolbox-32) 
DEBU[0000] Reading configuration file "/usr/share/containers/containers.conf" 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.22.0 Annotations:[] CgroupNS:private Cgroups:enabled DefaultCapabilities:[CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] DefaultSysctls:[] DefaultUlimits:[] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableLabeling:true Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:false Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:slirp4netns NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{CgroupCheck:false CgroupManager:systemd ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/user/809201000/libpod/tmp/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand:/pause InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NoPivotRoot:false NumLocks:2048 OCIRuntime:crun OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/home/[email protected]/.local/share/containers/storage/libpod StopTimeout:10 TmpDir:/run/user/809201000/libpod/tmp VolumePath:/home/[email protected]/.local/share/containers/storage/volumes} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/var/home/[email protected]/.config/cni/net.d}} 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/[email protected]/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/[email protected]/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/809201000/containers 
DEBU[0000] Using static dir /home/[email protected]/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/809201000/libpod/tmp 
DEBU[0000] Using volume path /home/[email protected]/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false 
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument 
INFO[0000] Setting parallel job count to 7              
DEBU[0000] Called exists.PersistentPostRunE(podman --log-level debug container exists fedora-toolbox-32) 
DEBU Calling org.freedesktop.Flatpak.SessionHelper.RequestSession 
DEBU Starting container fedora-toolbox-32         
DEBU Inspecting entry point of container fedora-toolbox-32 
INFO[0000] podman filtering at log level debug          
DEBU[0000] Called inspect.PersistentPreRunE(podman --log-level debug inspect --format json --type container fedora-toolbox-32) 
DEBU[0000] Reading configuration file "/usr/share/containers/containers.conf" 
DEBU[0000] Merged system config "/usr/share/containers/containers.conf": &{Containers:{Devices:[] Volumes:[] ApparmorProfile:containers-default-0.22.0 Annotations:[] CgroupNS:private Cgroups:enabled DefaultCapabilities:[CAP_AUDIT_WRITE CAP_CHOWN CAP_DAC_OVERRIDE CAP_FOWNER CAP_FSETID CAP_KILL CAP_MKNOD CAP_NET_BIND_SERVICE CAP_NET_RAW CAP_SETFCAP CAP_SETGID CAP_SETPCAP CAP_SETUID CAP_SYS_CHROOT] DefaultSysctls:[] DefaultUlimits:[] DefaultMountsFile: DNSServers:[] DNSOptions:[] DNSSearches:[] EnableLabeling:true Env:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin TERM=xterm] EnvHost:false HTTPProxy:false Init:false InitPath: IPCNS:private LogDriver:k8s-file LogSizeMax:-1 NetNS:slirp4netns NoHosts:false PidsLimit:2048 PidNS:private SeccompProfile:/usr/share/containers/seccomp.json ShmSize:65536k TZ: Umask:0022 UTSNS:private UserNS:host UserNSSize:65536} Engine:{CgroupCheck:false CgroupManager:systemd ConmonEnvVars:[PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin] ConmonPath:[/usr/libexec/podman/conmon /usr/local/libexec/podman/conmon /usr/local/lib/podman/conmon /usr/bin/conmon /usr/sbin/conmon /usr/local/bin/conmon /usr/local/sbin/conmon /run/current-system/sw/bin/conmon] DetachKeys:ctrl-p,ctrl-q EnablePortReservation:true Env:[] EventsLogFilePath:/run/user/809201000/libpod/tmp/events/events.log EventsLogger:journald HooksDir:[/usr/share/containers/oci/hooks.d] ImageDefaultTransport:docker:// InfraCommand:/pause InfraImage:k8s.gcr.io/pause:3.2 InitPath:/usr/libexec/podman/catatonit LockType:shm MultiImageArchive:false Namespace: NetworkCmdPath: NoPivotRoot:false NumLocks:2048 OCIRuntime:crun OCIRuntimes:map[crun:[/usr/bin/crun /usr/sbin/crun /usr/local/bin/crun /usr/local/sbin/crun /sbin/crun /bin/crun /run/current-system/sw/bin/crun] kata:[/usr/bin/kata-runtime /usr/sbin/kata-runtime /usr/local/bin/kata-runtime /usr/local/sbin/kata-runtime /sbin/kata-runtime /bin/kata-runtime /usr/bin/kata-qemu /usr/bin/kata-fc] runc:[/usr/bin/runc /usr/sbin/runc /usr/local/bin/runc /usr/local/sbin/runc /sbin/runc /bin/runc /usr/lib/cri-o-runc/sbin/runc /run/current-system/sw/bin/runc]] PullPolicy:missing Remote:false RemoteURI: RemoteIdentity: ActiveService: ServiceDestinations:map[] RuntimePath:[] RuntimeSupportsJSON:[crun runc] RuntimeSupportsNoCgroups:[crun] RuntimeSupportsKVM:[kata kata-runtime kata-qemu kata-fc] SetOptions:{StorageConfigRunRootSet:false StorageConfigGraphRootSet:false StorageConfigGraphDriverNameSet:false StaticDirSet:false VolumePathSet:false TmpDirSet:false} SignaturePolicyPath:/etc/containers/policy.json SDNotify:false StateType:3 StaticDir:/home/[email protected]/.local/share/containers/storage/libpod StopTimeout:10 TmpDir:/run/user/809201000/libpod/tmp VolumePath:/home/[email protected]/.local/share/containers/storage/volumes} Network:{CNIPluginDirs:[/usr/libexec/cni /usr/lib/cni /usr/local/lib/cni /opt/cni/bin] DefaultNetwork:podman NetworkConfigDir:/var/home/[email protected]/.config/cni/net.d}} 
DEBU[0000] Using conmon: "/usr/bin/conmon"              
DEBU[0000] Initializing boltdb state at /home/[email protected]/.local/share/containers/storage/libpod/bolt_state.db 
DEBU[0000] Using graph driver overlay                   
DEBU[0000] Using graph root /home/[email protected]/.local/share/containers/storage 
DEBU[0000] Using run root /run/user/809201000/containers 
DEBU[0000] Using static dir /home/[email protected]/.local/share/containers/storage/libpod 
DEBU[0000] Using tmp dir /run/user/809201000/libpod/tmp 
DEBU[0000] Using volume path /home/[email protected]/.local/share/containers/storage/volumes 
DEBU[0000] Set libpod namespace to ""                   
DEBU[0000] [graphdriver] trying provided driver "overlay" 
DEBU[0000] overlay: mount_program=/usr/bin/fuse-overlayfs 
DEBU[0000] backingFs=extfs, projectQuotaSupported=false, useNativeDiff=false, usingMetacopy=false 
DEBU[0000] Initializing event backend journald          
DEBU[0000] using runtime "/usr/bin/runc"                
DEBU[0000] using runtime "/usr/bin/crun"                
WARN[0000] Error initializing configured OCI runtime kata: no valid executable found for OCI runtime kata: invalid argument 
INFO[0000] Setting parallel job count to 7              
DEBU[0000] Called inspect.PersistentPostRunE(podman --log-level debug inspect --format json --type container fedora-toolbox-32) 
DEBU Entry point PID is a float64                 
DEBU Entry point of container fedora-toolbox-32 is toolbox (PID=19809) 
DEBU Waiting for container fedora-toolbox-32 to finish initializing 
DEBU Checking if initialization stamp /run/user/809201000/toolbox/container-initialized-19809 exists 
Error: failed to initialize container fedora-toolbox-32


Output of toolbox --version (v0.0.90+)
toolbox version 0.0.96

Toolbox package info (rpm -q toolbox)
toolbox-0.0.96-1.fc32.x86_64

Output of podman version

Version:      2.1.1
API Version:  2.0.0
Go Version:   go1.14.9
Built:        Wed Sep 30 15:31:11 2020
OS/Arch:      linux/amd64

Podman package info (rpm -q podman)
podman-2.1.1-7.fc32.x86_64

Info about your OS
Silverblue 32 using sssd for AD integration

@fallendusk fallendusk added the 1. Bug Something isn't working label Oct 18, 2020
@debarshiray debarshiray changed the title Can't enter toolbox with domain user Doesn't work if user is managed by Active Directory (contains @ "at" sign) Nov 29, 2022
@debarshiray
Member

debarshiray commented Nov 29, 2022

From #1022 the actual error seems to be:

$ podman start --attach <container>
...
passwd: Libuser error at line: 210 - name contains invalid char `@'.
Error: failed to remove password for user [email protected]: failed to invoke passwd(1)

For others facing this issue, it would be good to know what you get from:

$ podman start --attach <container>

@yrro

yrro commented Nov 29, 2022

There's a --badname option to useradd which might be of use, although it doesn't actually work for me.

root@19916ebd66b9:/# useradd --badnames [email protected]
useradd: invalid user name '[email protected]'

@woolsgrs

woolsgrs commented Jul 7, 2023

Hit the same issue. Could we resolve this in the usermod/useradd call and skip passwd -d for the user?

usermodArgs := []string{
        "--append",
        "--groups", sudoGroup,
        "--home", targetUserHome,
        "--shell", targetUserShell,
        "--uid", fmt.Sprint(targetUserUid),
        "--password", "''",
        targetUser,
}
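
For illustration, a minimal, self-contained sketch of that approach (the variable values are hypothetical, and this is not the actual Toolbx entry point). Note that exec.Command runs no shell, so an empty password would be passed as "" rather than the shell-quoted '':

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	// Hypothetical values standing in for Toolbx's real variables.
	sudoGroup := "wheel"
	targetUser := "[email protected]"
	targetUserHome := "/home/[email protected]"
	targetUserShell := "/bin/bash"
	targetUserUid := 809201000

	usermodArgs := []string{
		"--append",
		"--groups", sudoGroup,
		"--home", targetUserHome,
		"--shell", targetUserShell,
		"--uid", fmt.Sprint(targetUserUid),
		"--password", "",
		targetUser,
	}

	// One usermod(8) invocation; passwd --delete is skipped because the
	// empty --password value already unsets the password.
	cmd := exec.Command("usermod", usermodArgs...)
	cmd.Stdout = os.Stdout
	cmd.Stderr = os.Stderr
	if err := cmd.Run(); err != nil {
		fmt.Fprintf(os.Stderr, "failed to modify user %s: %v\n", targetUser, err)
		os.Exit(1)
	}
}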

@yrro

yrro commented Jul 7, 2023

Hate to suggest it, but maybe relying on a working useradd inside the container images of every version of every distribution the user wants to work with is the wrong approach. Perhaps toolbox should just go and modify /etc/passwd directly...
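
To make the idea concrete, a rough sketch of direct editing (illustrative only; real code would lock the file or write-and-rename atomically):

package main

import (
	"os"
	"strings"
)

// setPasswdEntry replaces (or appends) the /etc/passwd line for a user.
// Sketch only: no locking, no atomic rename, no error recovery.
func setPasswdEntry(user, entry string) error {
	data, err := os.ReadFile("/etc/passwd")
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	replaced := false
	for i, line := range lines {
		if strings.HasPrefix(line, user+":") {
			lines[i] = entry
			replaced = true
		}
	}
	if !replaced {
		lines = append(lines, entry)
	}
	return os.WriteFile("/etc/passwd", []byte(strings.Join(lines, "\n")+"\n"), 0644)
}

func main() {
	// Illustrative entry for the AD user from this issue.
	_ = setPasswdEntry("[email protected]",
		"[email protected]:x:809201000:809201000::/home/[email protected]:/bin/bash")
}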

@debarshiray
Member

Hit the same issue. Could we resolve this in the usermod/useradd call and skip passwd -d for the user?

usermodArgs := []string{
        "--append",
        "--groups", sudoGroup,
        "--home", targetUserHome,
        "--shell", targetUserShell,
        "--uid", fmt.Sprint(targetUserUid),
        "--password", "''",
        targetUser,
}

Good to know. If we can do everything with useradd(8) and usermod(8), without having to use passwd(1), then that's one less dependency that we need to rely on, which is always preferable.

Could you please show me what the container's /etc/shadow looks like with this change? I don't have an Active Directory set-up at hand, so I am a bit blind here.

@debarshiray
Member

Hate to suggest it, but maybe relying on a working useradd inside the container images of every version of every distribution the user wants to work with is the wrong approach. Perhaps toolbox should just go and modify /etc/passwd directly...

The thing is that we already require a somewhat modern and functional Shadow (i.e., at least 4.9) for enterprise FreeIPA set-ups. Among all the operating systems that Toolbx claims to support (i.e., Arch Linux, Fedora, RHEL and Ubuntu), it's only a problem for Ubuntu, because a new enough Shadow is only available from Ubuntu 22.10 onwards.

So, I wouldn't worry too much about it.

It looks like usermod(8) has had the --password option since version 4.0.14 from 2007, which should be old enough, but I don't know if there have been any significant improvements in functionality in recent times.
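
For reference, the replacement for passwd --delete would be an invocation along these lines (illustrative), setting an empty encrypted password instead of deleting the entry:

# usermod --password '' [email protected]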

@debarshiray
Member

Thanks for all the detective work and patience, @woolsgrs & @yrro

I will be gone for two weeks - first vacation, then GUADEC. Let's see if we can get this done once I am back.

debarshiray added a commit to debarshiray/toolbox that referenced this issue Aug 15, 2023
These tests assume that the group and user information on the host
operating system can be provided by different plugins for the GNU Name
Service Switch (or NSS) functionality of the GNU C Library, e.g., on
enterprise FreeIPA set-ups.  However, it's expected that everything
inside the Toolbx container will be provided by /etc/group, /etc/passwd,
/etc/shadow, etc.

While /etc/group and /etc/passwd can be read by any user, /etc/shadow
can only be read by root.  However, it's awkward to use sudo(8) in the
test cases involving /etc/shadow, because they ensure that root and
$USER don't need passwords to authenticate inside the container, and
sudo(8) itself depends on that.  If sudo(8) is used, the test suite can
behave unexpectedly if Toolbx didn't set up the container correctly;
e.g., it can get blocked waiting for a password.

Hence, 'podman unshare' is used instead to enter the container's initial
user namespace, where $USER from the host appears as root.  This is
sufficient because the test cases only need to read /etc/shadow inside
the Toolbx container.

Note that 'run --keep-empty-lines' counts the trailing newline on the
last line as a separate line.

containers#585
@abbra

abbra commented Aug 16, 2023

We did discuss this with @debarshiray today. I strongly advise against doing user modification operations like usermod or userdel for this purpose. Instead, rely on the fact that nss_systemd is present in all those contemporary images and provide a varlink interface that would expose the host's user entry. This would work for any account.

The 32-character limit is due to the utmp structure being that limited. Other software limited itself based on this fact. Linux is in a somewhat better state, though; FreeBSD has this limited to 16 characters.
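
For reference, the host user record that such an interface would expose can already be inspected with systemd's userdb tooling, e.g.:

$ userdbctl user [email protected]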

debarshiray added a commit to debarshiray/toolbox that referenced this issue Aug 22, 2023
It's one less invocation of an external command, which is good because
spawning a new process is generally expensive.

One positive side-effect of this is that on some Active Directory
set-ups, the entry point no longer fails with:
  Error: failed to remove password for user [email protected]: failed
      to invoke passwd(1)

... because of:
  # passwd --delete [email protected]
  passwd: Libuser error at line: 210 - name contains invalid char `@'.

This is purely an accident, and isn't meant to be an intentional change to
support Active Directory.  Tools like useradd(8) and usermod(8) from
Shadow aren't meant to work with Active Directory users, and, hence, it
can still break in other ways.  For that, one option is to expose $USER
from the host operating system to the Toolbx container through a Varlink
interface that can be used by nss-systemd inside the container.

containers#585
@debarshiray
Member

debarshiray commented Aug 22, 2023

We did discuss this with @debarshiray today. I strongly advise against doing user modification operations like usermod or userdel for this purpose. Instead, rely on the fact that nss_systemd is present in all those contemporary images and provide a varlink interface that would expose the host's user entry. This would work for any account.

Yes, let's try to expose $USER from the host operating system to the Toolbx container through a Varlink interface that can be used by nss-systemd inside the container.

However, the road to getting there is messy because of reasons. :)

Currently, we are stuck using usermod(8) because a few years ago, Podman 2.0.5 started adding an entry to /etc/passwd for containers created with podman create --userns keep-id (or podman run --userns keep-id). In recent times, one can use podman run --passwd=false --userns keep-id to prevent Podman from adding the entry.

However, the --passwd flag only exists for podman run, not podman create, which is what Toolbx uses. I have some rough changes to add it to podman create that I need to clean up and submit.

Even when we can use podman create --passwd=false --userns keep-id, it will only be effective for new containers created with a new enough Podman. Pre-existing containers won't have --passwd=false. They will still have the entry in /etc/passwd and require usermod(8).
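
In other words, the eventual invocation would be something like this (illustrative; the real Toolbx command line carries many more flags):

$ podman create --passwd=false --userns keep-id --name fedora-toolbox-32 ...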

So, we will still need to maintain the usermod(8) code as a fallback for some time.

As far as that fallback code is concerned, I do like the idea of replacing the passwd --delete <user> call with usermod --password ... because, if nothing else, it's one less invocation of an external command. I submitted #1349 for this specific part.

debarshiray added a commit to debarshiray/toolbox that referenced this issue Aug 22, 2023
It's one less invocation of an external command, which is good because
spawning a new process is generally expensive.

One positive side-effect of this is that on some Active Directory
set-ups, the entry point no longer fails with:
  Error: failed to remove password for user [email protected]: failed
      to invoke passwd(1)

... because of:
  # passwd --delete [email protected]
  passwd: Libuser error at line: 210 - name contains invalid char `@'.

This is purely an accident, and isn't meant to be an intentional change to
support Active Directory.  Tools like useradd(8) and usermod(8) from
Shadow aren't meant to work with Active Directory users, and, hence, it
can still break in other ways.  For that, one option is to expose $USER
from the host operating system to the Toolbx container through a Varlink
interface that can be used by nss-systemd inside the container.

Based on an idea from Si.

containers#585
@debarshiray
Member

debarshiray commented Aug 24, 2023

@fallendusk @yrro @woolsgrs Does #1349 work around this problem for you, while we work on the proper solution that @abbra laid out?

@yrro

yrro commented Aug 25, 2023

@fallendusk @yrro @woolsgrs Does #1349 work around this problem for you, while we work on the proper solution that @abbra laid out?

This works, thanks!

Could you please show me what the container's /etc/shadow looks like with this change? I don't have an Active Directory set-up at hand, so I am a bit blind here.

FYI, with your PR there is no entry for my user in /etc/shadow at all. /etc/passwd looks normal:

[email protected]::9360235:9360235:Sam Morris:/home/example.com/yrro:/bin/bash

@woolsgrs

woolsgrs commented Sep 1, 2023

Ditto what I found in my testing: it just omits the shadow entry.

@debarshiray
Member

Yes, that sounds correct.

If you look at the 'user: $USER in shadow(5)' test added in #1355, before the changes in #1349 were made, you'll see that /etc/shadow isn't meant to have an entry for $USER.

The root cause of this lies in what podman(1) does to the files in /etc when you create a container with podman create --userns keep-id ...: it only adds entries to /etc/passwd and /etc/group, but doesn't touch /etc/shadow at all. That behaviour was introduced by a Podman commit, and was later fine-tuned to put an * instead of an x as the password in passwd(5).

So, even when we were using passwd --delete <user>, there was nothing in /etc/shadow to remove. That continues today with usermod --password ''.
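
So a keep-id entry in the container's /etc/passwd ends up looking roughly like this (values illustrative):

[email protected]:*:809201000:809201000::/home/[email protected]:/bin/bash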

Thanks for testing it out, @yrro and @woolsgrs ! Much appreciated.
