
Use Case: Run As User


TLDR;

Reasons... mostly that containers writing files to mounted volumes as root is annoying. The solution is more interesting.

Most of the issues my coworkers and I have run into are related to the /run-as-user script used as the container entrypoint. Essentially, this script lets the process it executes run as a specified UID and GID. This was important in the beginning because all of our development work was done in a Linux VM (RedHat) using Docker v1.3 with fig, and as an organization we were just learning what Docker was and how it worked.

Because of the fig dependency, the way the operations team had us using Docker, and the fact that this was an added burden on engineering teams who now had to support and work with these container things even though their primary job was writing software, everything always executed as root (UID=0). That made it frustrating and difficult to normalize around containerized build tools in our development environment (an effort to make dev environments match our CI process), because all files were written to the mounted volume as root, causing random file-permission issues in other development workflows.

For example, executing a composer install or similar would cause all vendor/ files to be written as root, which is frustrating when you have to edit a vendor file for testing, delete the vendor directory, or remove the dependency manager's lock file: you continuously found yourself rerunning your last command with sudo.

Solution

TLDR;

A user account was created for commands to run as, and that user is modified at runtime to use specified UID and GID values.

Clearly the preferred behavior is to write files out as the current user, so my solution was to create a user in the image to perform that role for me and a script to bootstrap that user. A user and group named dev were added to the image.
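
A rough sketch of that image layer (hypothetical; the real Dockerfile differs) might look like:

```dockerfile
# Hypothetical Dockerfile fragment: bake in the dev user/group that
# run-as-user later remaps. The UID/GID 1000 here are placeholders;
# they get rewritten at container start.
RUN groupadd --gid 1000 dev \
 && useradd --uid 1000 --gid dev --create-home dev
```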

When the dev user is created it's assigned a UID and GID fixed in the image, unrelated to the host system, and because all of our docker daemons ran as UID 0 we ran all docker and fig (and later, docker-compose) commands with sudo. So a baked-in user alone wasn't sufficient to write files with unproblematic ownership: the files were often written with a UID/GID that didn't exist on the host, which had most of the same issues as writing them out as root.

To deal with that, the run-as-user script was added. It reads the UID and GID values of the host directory that was volume mounted into the container (generally at the /src mountpoint) and then modifies the dev user with those values via usermod and groupmod. That script was set as the container ENTRYPOINT, so any docker run arguments are passed to the script, resulting in a container command that looks something like

/run-as-user npm install
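
As a minimal sketch of the idea (not the actual script; it assumes the dev user above, a mount at /src, and sudo available in the image):

```sh
#!/bin/sh
# Hypothetical run-as-user sketch: remap the baked-in dev user to the
# owner of the mounted /src directory, then run the given command as dev.

# UID/GID of the host directory mounted at /src.
TARGET_UID=$(stat -c '%u' /src)
TARGET_GID=$(stat -c '%g' /src)

# Rewrite the dev user/group to match the host owner of /src.
groupmod --gid "$TARGET_GID" dev
usermod --uid "$TARGET_UID" --gid "$TARGET_GID" dev

# Execute the requested command as dev so files land on the mounted
# volume with host-friendly ownership.
exec sudo -u dev -- "$@"
```

With `ENTRYPOINT ["/run-as-user"]` in the Dockerfile, whatever arguments are given to docker run become the command the script executes.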

Public key authentication

TLDR;

For build tools to access private repositories, your SSH keys need to be added to the container and the user running commands must use matching UID and GID values.

Many of our build tools also needed to pull code from private repositories, and we had standardized on public key authentication for access. For that to work, SSH calls need to know your username; otherwise everything runs as the user dev. The easiest solution is to add your username for the resource being accessed to an ~/.ssh/config file and mount that file into the container at runtime. This had a side effect, though: when the directory mounted at /src was not owned by the same user as the SSH keys, SSH would refuse to use them.
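
For example, the mounted config might contain an entry like this (hostname and username are hypothetical placeholders):

```
# ~/.ssh/config, mounted into the container at runtime
Host git.example.com
    User your-username
    IdentityFile ~/.ssh/id_rsa
```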

So, a new behavior was added to the run-as-user script: if the ~/.ssh directory existed at /home/dev/.ssh and it was not owned by root, the script would execute commands as the owner of that directory instead of the owner of /src. That behavior is slightly odd in some cases and can be surprising, so it's not ideal, but it works in most cases.
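
In script terms, the added behavior amounts to something like this (again a sketch, not the actual implementation):

```sh
# If SSH keys were mounted for the dev user and aren't owned by root,
# prefer their owner over the owner of /src when remapping dev.
if [ -d /home/dev/.ssh ] && [ "$(stat -c '%u' /home/dev/.ssh)" != "0" ]; then
    TARGET_UID=$(stat -c '%u' /home/dev/.ssh)
    TARGET_GID=$(stat -c '%g' /home/dev/.ssh)
fi
```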

Docker for Mac public key issues

TLDR;

Docker for Mac introduced a situation in which the script fails to run commands as the image's dev user and executes them as root instead. In that situation, file output permissions were a non-issue but SSH key access was, so your SSH keys need to be added to the root user account as well.

There is one additional issue that came about with Docker for Mac and trying to transition to local development. In that environment, containers appear externally to run as the current OS user, but internally the hypervisor translates those UID and GID values to 0. So when the container evaluates the UID and GID of a directory (either /src or /home/dev/.ssh), if those values match the OS user running docker, the container receives 0 and modifies the dev user accordingly. Then, when sudo evaluates that user's UID and GID and looks that user up in the users file, it finds root instead of dev and decides to run as root.

Because of that, for the scripts to work in an OS X environment, the SSH keys need to be added to root's home directory. Because we don't all work in the same environment (some using Linux VMs, some running Linux locally, some running on Macs locally, etc.), the solution has become to dual-mount those files to both /home/dev/.ssh and /root/.ssh so our build tools work the same way in everyone's dev environment.
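
Put together, a build command in that setup looks something like this (the image name and command are placeholders):

```sh
# Dual-mount the SSH keys so commands work whether the script ends up
# running as dev (Linux hosts) or as root (Docker for Mac).
docker run --rm \
    -v "$(pwd)":/src \
    -v "$HOME/.ssh":/home/dev/.ssh:ro \
    -v "$HOME/.ssh":/root/.ssh:ro \
    example/build-image /run-as-user composer install
```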
