
Support SELinux enabled systems #1333

Open
langdon opened this issue Aug 30, 2019 · 40 comments
Labels
containers Issue in vscode-remote containers feature-request Request for new features or functionality upstream Issue identified as 'upstream' component related (exists outside of VS Code Remote)

Comments

@langdon

langdon commented Aug 30, 2019

Environment

  • VSCode Version: 1.37.1
  • Local OS Version: Fedora 30
  • Remote OS Version: Fedora 30, Debian 10 (really the Python 3 default container)
  • Remote Extension/Connection Type: Docker

Steps to Reproduce:

  1. use default open-folder in container
  2. choose python 3 container
  3. docker exec into the generated container
  4. ls -l /workspaces/name-of-your-project
  5. permission denied

You can see the same problem in the normal GUI interface, but it is less obvious what is going on. You also have the same issue if you use a custom container and (probably) any other container.

Basically, as far as I can tell, the bind mount of the user's devel dir into /workspaces is not using the z or Z flag that lets it work well with SELinux. I think this will be a particular problem because you can't set that flag at all using the new --mount option (see "Differences between --mount and --volume" on https://docs.docker.com/engine/reference/commandline/service_create/#add-bind-mounts-or-volumes).

There is a workaround, noted here for anyone running into this issue, though it is probably not something the tool should do automatically: you can chcon your devel directory to make it modifiable by Docker. For example, run chcon -Rt svirt_sandbox_file_t /full/path/to/your/code, then reattach your devel dir to the container (probably using a rebuild).
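For anyone trying that workaround, here is a minimal shell sketch; the project path is hypothetical, and the type name comes from the comment above:

# Check the current SELinux label on the project directory (path is illustrative)
ls -dZ /home/user/projects/myapp
# typically shows something like unconfined_u:object_r:user_home_t:s0

# Relabel recursively so container processes may access the files
chcon -Rt svirt_sandbox_file_t /home/user/projects/myapp

# Verify the new type, then rebuild/reopen the dev container
ls -dZ /home/user/projects/myapp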

@chrmarti
Contributor

chrmarti commented Sep 5, 2019

You could set "workspaceMount" to null (or the empty string) and use "runArgs" to do the mount using --volume in the devcontainer.json.
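For illustration, the container launch would then include the SELinux relabel option on the mount, roughly equivalent to this docker invocation (paths and image are hypothetical, not from the comment):

# Rough equivalent of clearing "workspaceMount" and passing the volume via "runArgs"
docker run --rm -it \
  --volume /home/user/projects/myapp:/workspaces/myapp:Z \
  mcr.microsoft.com/devcontainers/base:ubuntu \
  ls -lZ /workspaces/myapp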

@chrmarti chrmarti added containers Issue in vscode-remote containers feature-request Request for new features or functionality upstream Issue identified as 'upstream' component related (exists outside of VS Code Remote) labels Sep 5, 2019
@chrmarti
Contributor

chrmarti commented Oct 2, 2020

There are now also built-in ways of connecting to a Docker volume.

@sclel016

sclel016 commented May 1, 2021

What is the current recommended work around for SELinux? I'm trying to open a host workspace in a container using the dev container infrastructure built into vscode-remote. It seems that on systems with SELinux, this can only be accomplished with a bind mount and z or Z flags.

Short of cloning a repository to a volume, is there a better workflow that still involves vscode-remote?

@PavelSosin-320

PavelSosin-320 commented May 2, 2021

@sclel016 The people who invented SELinux also provided a tool that gives rootless users the same power the root user has without compromising security: FUSE and FUSE mounts (look at the contributor lists of both projects). It is an important part of rootless Docker and Podman and comes to Linux as a dependency. Using the FUSE mount implementation instead of the Linux kernel mount solves the problem.
The configuration that works perfectly for me in Podman:

graphDriverName: overlay
graphOptions:
  overlay.mount_program:
    Executable: /usr/bin/fuse-overlayfs
    Package: fuse-overlayfs-1.5.0-1.fc33.x86_64
    Version: |-
      fusermount3 version: 3.9.3
      fuse-overlayfs: version 1.5
      FUSE library version 3.9.3
      using FUSE kernel interface version 7.31
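If you want to try this, here is a sketch of enabling fuse-overlayfs for rootless Podman; the file location and keys follow containers-storage.conf(5), but back up any existing config first, since this overwrites it:

# Point rootless Podman's overlay driver at fuse-overlayfs
mkdir -p ~/.config/containers
cat > ~/.config/containers/storage.conf <<'EOF'
[storage]
driver = "overlay"

[storage.options.overlay]
mount_program = "/usr/bin/fuse-overlayfs"
EOF

# Verify Podman picked it up
podman info | grep -A2 graphDriverName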

@PavelSosin-320

@sclel016 The Docker overlay graph driver combined with fuse-overlayfs (the fuse-overlayfs storage driver) should work for you on any Linux.

@aallrd

aallrd commented Jul 6, 2021

Hello,

I am using VSCode 1.57.1 and Podman 3.1.2.

I managed to mount my SELinux protected directory using this runArgs configuration:

// Required for an empty mount arg, since we manually add it in the runArgs
"workspaceMount": "",
"runArgs": [
  "--volume=/home/aallrd/work/project:/workspaces/project:Z"
]

However, I am not able to use the ${workspaceFolder} and ${workspaceFolderBasename} variables in the runArgs values for the volume command.

I am not sure if it used to work, but I remember doing something like this previously (where ${workspaceFolder} would be the folder opened with VSCode containing the .devcontainer/devcontainer.json file):

"runArgs": [
  "--volume=${workspaceFolder}:/workspaces/${workspaceFolderBasename}:Z",
]

It fails with this error:

[2021-07-06T16:59:09.847Z] Error: error creating named volume "${workspaceFolder}": error running volume create option: names must match [a-zA-Z0-9][a-zA-Z0-9_.-]*: invalid argument

Is this expected?

@aallrd

aallrd commented Jul 7, 2021

Could it be linked to #5007?

@chrmarti
Contributor

chrmarti commented Jul 9, 2021

@aallrd Thanks for filing #5301. Tracking the missing variables support with "runArgs" there.

@lovasoa

lovasoa commented Aug 12, 2021

Hello!
Has there been any news on this front? Dev containers are currently broken on Fedora.

Is there a problem with --volume=${localWorkspaceFolder}:/workspaces/${localWorkspaceFolderBasename}:Z which prevents it from being enabled by default?

@chrmarti
Contributor

@lovasoa Make sure to clear the "workspaceMount":

	"workspaceMount": "",
	"runArgs": ["--volume=${localWorkspaceFolder}:/workspaces/${localWorkspaceFolderBasename}:Z"],

@PavelSosin-320

@langdon You can try the following:

  • Upgrade to Fedora 34, or 33 at least (new kernel, SELinux, cgroup, systemd, and FUSE standards and implementations).
  • Upgrade Docker to 20.10 (rootless mode).

Neither of these versions promises 100% backward compatibility; check the corresponding sites.

@jibsaramnim

jibsaramnim commented Nov 3, 2021

I ran into this for the first time just now while setting up a new environment using Fedora 35. If you're making use of one of the existing devcontainer configurations, or wrote your own that makes use of a docker-compose.yml file, you can achieve the same as what's mentioned above by setting the :Z flag (an SELinux-specific label, apparently) on a volume defined there, like so:

services:
   app:
      # ...etc
      volumes:
         - ..:/workspace:Z

Hopefully a more official solution can be made available at some point. I'm not sure if an environment specific fix should exist in a repository's configuration file, as colleagues/collaborators might use very different environments, but at least for now this can hopefully help you get back to working on your project :).
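To confirm the compose-level :Z flag took effect, one possible check (assuming Compose v2 and the service/mount names from the snippet above):

# Start the service and inspect SELinux labels inside the container;
# relabeled files should show the container_file_t type
docker compose up -d app
docker compose exec app ls -lZ /workspace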

anas-didi95 added a commit to anas-didi95/springboot-ecommerce-server that referenced this issue Mar 11, 2022
For docker compose volume, need to suffix with :z due to selinux
configuration. Without the suffix, the folder cannot open in container
due no permission.
microsoft/vscode-remote-release#1333
@Aricg

Aricg commented Apr 26, 2022

ha ha of course it was selinux

@bradydean

I'm working on Fedora 37 and getting this. The manual bind mount doesn't work; the files inside the container are owned by root.

@jibsaramnim

jibsaramnim commented Dec 11, 2022

I'm working on Fedora 37 and getting this. The manual bind mount doesn't work; the files inside the container are owned by root.

Could you share (a snippet of) your docker-compose.yml or devcontainer.json file? I'm running Fedora 37 as well and have been able to continue using it as before; maybe we can spot what may be off in your config.

@bradydean

bradydean commented Dec 11, 2022

Hey @jibsaramnim, this is my devcontainer.json

{
	"name": "Existing Dockerfile",
	"build": {
		"context": "..",
		"dockerfile": "../Dockerfile.dev"
	}
}

Dockerfile.dev

FROM node:18.12.1

RUN corepack enable && corepack prepare yarn@stable --activate

USER node

I also tried

{
	"name": "Existing Dockerfile",
	"build": {
		"context": "..",
		"dockerfile": "../Dockerfile.dev"
	},
	"workspaceMount": "",
	"runArgs": ["--volume=${localWorkspaceFolder}:/workspaces/${localWorkspaceFolderBasename}:Z"]
}

but both ways have the same problem: the workspace files are owned by root

node@b8c568a240d2:/workspaces/app$ ls -l
total 100
-rw-r--r-- 1 root root    93 Dec 11 20:42 Dockerfile.dev
-rw-r--r-- 1 root root  1582 Jun 22  1984 README.md
-rw-r--r-- 1 root root   201 Jun 22  1984 next-env.d.ts
-rw-r--r-- 1 root root   137 Jun 22  1984 next.config.js
-rw-r--r-- 1 root root   465 Dec 10 02:36 package.json
drwxr-xr-x 1 root root    40 Dec 10 02:35 pages
drwxr-xr-x 1 root root    42 Dec 10 02:35 public
drwxr-xr-x 1 root root    52 Dec 10 02:35 styles
-rw-r--r-- 1 root root   509 Jun 22  1984 tsconfig.json
-rw-r--r-- 1 root root 22258 Dec 10 02:38 tsconfig.tsbuildinfo
-rw-r--r-- 1 root root 50789 Dec 10 02:36 yarn.lock

EDIT: Using docker 20.10.21 via docker desktop.

@jibsaramnim

Dockerfile.dev

FROM node:18.12.1

RUN corepack enable && corepack prepare yarn@stable --activate

USER node

Correct me if I'm wrong, but are you using a non-vscode container image? There might be a difference in user IDs that causes the issue for you. Alternatively, you could try setting "remoteUser": "node" in your devcontainer.json to see if that resolves it with the container image you're using here.

Could you perhaps try starting with one of VSCode's container presets? In my case, with the exact same runArgs command, I have it working just fine. The same goes for projects where I have a docker-compose.yml file; setting the right flag there makes it work perfectly under Fedora 37.

@bradydean

FWIW adding "remoteUser": "node" w/ node:18.12.1 did not work.

Using the node+typescript preset + runArgs does not work either, files are still owned by root.

{
	"name": "Node.js & TypeScript",
	"image": "mcr.microsoft.com/devcontainers/typescript-node:0-18",
	"workspaceMount": "",
	"runArgs": ["--volume=${localWorkspaceFolder}:/workspaces/${localWorkspaceFolderBasename}:Z"]
}

@bradydean

I played around with :Z volumes on a dummy container, and it doesn't appear Docker is changing the SELinux labels at all. Should I expect a difference in ls -Z on a file before/after it has been mounted with :Z?
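(With a working SELinux-enabled engine, yes: the :Z option relabels the host files to the container_file_t type. A minimal before/after test, with a hypothetical file name:)

# Before: a file under $HOME typically carries the user_home_t type
touch testfile && ls -Z testfile

# Mount it with :Z; the engine should relabel it for the container
docker run --rm -v "$PWD/testfile:/testfile:Z" alpine true

# After: the label should now show container_file_t; if it is unchanged,
# the engine was built or configured without SELinux support
ls -Z testfile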

@jibsaramnim

Using the node+typescript preset + runArgs does not work either, files are still owned by root.

There might be something (permission related, perhaps?) going on on your particular system -- who owns the files you are trying to edit?

I just tried it with the same node+typescript preset you mentioned in a test directory, just modifying devcontainer.json to add the workspaceMount and runArgs lines exactly as you wrote them out, and it's looking fine on my end:

{
  "name": "Node.js & TypeScript",
  "image": "mcr.microsoft.com/devcontainers/typescript-node:0-18",

  "workspaceMount": "",
  "runArgs": [
    "--volume=${localWorkspaceFolder}:/workspaces/${localWorkspaceFolderBasename}:Z"
  ]
}
node ➜ /workspaces/temp $ ls -Z
system_u:object_r:container_file_t:s0:c390,c979 readme.md  system_u:object_r:container_file_t:s0:c390,c979 test.js
node ➜ /workspaces/temp $ ls -la
total 0
drwxr-xr-x. 1 node node 58 Dec 12 08:30 .
drwxr-xr-x. 1 root root  8 Dec 12 08:32 ..
drwxr-xr-x. 1 node node 34 Dec 12 08:30 .devcontainer
-rw-r--r--. 1 node node  0 Dec 12 08:30 readme.md
-rw-r--r--. 1 node node  0 Dec 12 08:30 test.js

Are you running podman, moby-engine or docker's own set of packages?

@bradydean

bradydean commented Dec 12, 2022

Files are owned by my user account. I'm using docker-desktop via the rpm package.

node ➜ /workspaces/temp $ ls -Z
system_u:object_r:container_file_t:s0:c390,c979 readme.md  system_u:object_r:container_file_t:s0:c390,c979 test.js

This is what I get inside the container

node ➜ /workspaces/next-app $ ls -lZ
total 144
drwxr-xr-x 2 root root ?   4096 Dec  7 14:28 app
-rw-r--r-- 1 root root ?    177 Dec  7 14:08 next.config.js
-rw-r--r-- 1 root root ?    201 Jun 22  1984 next-env.d.ts
-rw-r--r-- 1 root root ?    530 Dec  7 21:59 package.json
drwxr-xr-x 3 root root ?   4096 Dec  7 14:18 pages
drwxr-xr-x 2 root root ?   4096 Dec  7 13:47 public
-rw-r--r-- 1 root root ?   1582 Jun 22  1984 README.md
drwxr-xr-x 2 root root ?   4096 Dec  7 14:22 styles
-rw-r--r-- 1 root root ?    647 Dec  7 14:20 tsconfig.json
-rw-r--r-- 1 root root ? 107296 Dec  7 14:37 yarn.lock

This is outside the container

[brady@fedora next-app]$ ls -lZ
total 144
drwxr-xr-x. 2 brady brady unconfined_u:object_r:user_home_t:s0   4096 Dec  7 09:28 app
-rw-r--r--. 1 brady brady unconfined_u:object_r:user_home_t:s0    177 Dec  7 09:08 next.config.js
-rw-r--r--. 1 brady brady unconfined_u:object_r:user_home_t:s0    201 Jun 22  1984 next-env.d.ts
-rw-r--r--. 1 brady brady unconfined_u:object_r:user_home_t:s0    530 Dec  7 16:59 package.json
drwxr-xr-x. 3 brady brady unconfined_u:object_r:user_home_t:s0   4096 Dec  7 09:18 pages
drwxr-xr-x. 2 brady brady unconfined_u:object_r:user_home_t:s0   4096 Dec  7 08:47 public
-rw-r--r--. 1 brady brady unconfined_u:object_r:user_home_t:s0   1582 Jun 22  1984 README.md
drwxr-xr-x. 2 brady brady unconfined_u:object_r:user_home_t:s0   4096 Dec  7 09:22 styles
-rw-r--r--. 1 brady brady unconfined_u:object_r:user_home_t:s0    647 Dec  7 09:20 tsconfig.json
-rw-r--r--. 1 brady brady unconfined_u:object_r:user_home_t:s0 107296 Dec  7 09:37 yarn.lock

@jibsaramnim Are your SELinux labels the same inside and outside the container? This is what I meant when I said I don't think Docker is changing the labels correctly.

@bradydean

@jibsaramnim What is the output of docker info | grep Security -A3 for you?

@jibsaramnim

Are your SELinux labels the same inside and outside the container? This is what I meant when I said I don't think Docker is changing the labels correctly.

They are yes:

node ➜ /workspaces/temp $ ls -lZ
total 0
-rw-r--r--. 1 node node system_u:object_r:container_file_t:s0:c390,c979 0 Dec 12 08:30 readme.md
-rw-r--r--. 1 node node system_u:object_r:container_file_t:s0:c390,c979 0 Dec 12 08:30 test.js

Outside the container:

~/P/temp ❯❯❯ ls -lZ
total 0
-rw-r--r--. 1 davejansen davejansen system_u:object_r:container_file_t:s0:c390,c979 0 12월 12일  17:30 readme.md
-rw-r--r--. 1 davejansen davejansen system_u:object_r:container_file_t:s0:c390,c979 0 12월 12일  17:30 test.js

What is the output of docker info | grep Security -A3 for you?

docker info | grep Security -A3
 Security Options:
  seccomp
   Profile: default
  selinux

In case it helps: I am running Fedora Silverblue 37 with moby-engine and docker-compose layered. My Docker setup is as stock as can be, other than having added my own user to the docker user group.

@bradydean

bradydean commented Dec 13, 2022

Cool, that's what I expected. I don't have selinux in my docker info. Seems to be an issue with Docker Desktop, even when I add the config option to enable SELinux support. I made an issue for it here: docker/desktop-linux#104

I temporarily switched to podman and its selinux support works.

@langdon
Author

langdon commented Dec 19, 2022

@bradydean have you considered Podman Desktop? (shameless plug)

@bradydean

@langdon oh nice, I didn't even know that existed. I'll play around with it.

@bradydean

Well, I'm not really sure what happened, but my files inside the container are owned by root again, even using podman...

[brady@fedora foo]$ podman run --rm --user node -v $PWD/file:/file:Z mcr.microsoft.com/devcontainers/typescript-node:0-18 ls -l /
total 76
drwxr-xr-x.   1 root   root    4096 Dec 19 14:07 bin
drwxr-xr-x.   2 root   root    4096 Sep  3 12:10 boot
drwxr-xr-x.   5 root   root     340 Dec 20 23:56 dev
drwxr-xr-x.   1 root   root    4096 Dec 20 23:56 etc
-rw-r--r--.   1 root   root       6 Dec 20 23:47 file
drwxr-xr-x.   1 root   root    4096 Dec  6 09:02 home
drwxr-xr-x.   1 root   root    4096 Dec  6 02:14 lib
drwxr-xr-x.   2 root   root    4096 Dec  5 00:00 lib64
drwxr-xr-x.   2 root   root    4096 Dec  5 00:00 media
drwxr-xr-x.   2 root   root    4096 Dec  5 00:00 mnt
drwxr-xr-x.   1 root   root    4096 Dec  6 09:05 opt
dr-xr-xr-x. 472 nobody nogroup    0 Dec 20 23:56 proc
drwx------.   1 root   root    4096 Dec 19 14:07 root
drwxr-xr-x.   1 root   root    4096 Dec 20 23:56 run
drwxr-xr-x.   1 root   root    4096 Dec 19 14:07 sbin
drwxr-xr-x.   2 root   root    4096 Dec  5 00:00 srv
dr-xr-xr-x.  13 nobody nogroup    0 Dec 20 13:36 sys
drwxrwxrwt.   1 root   root    4096 Dec 19 21:05 tmp
drwxr-xr-x.   1 root   root    4096 Dec  5 00:00 usr
drwxr-xr-x.   1 root   root    4096 Dec  5 00:00 var

@bradydean

bradydean commented Dec 21, 2022

Anyway, podman unshare chown 1000:1000 file fixed that, and it reminded me of Docker Desktop's file sharing options. I already had /home in there, but I removed it, then added /home/brady, and Docker Desktop is working now.

EDIT: It worked once and only once.
EDIT2: Did some more playing around; apparently podman unshare will correct the perms for docker-desktop.
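For context, a minimal sketch of the rootless-Podman ownership fix mentioned above (UID/GID 1000 is an assumption matching the container's node user):

# Rootless Podman maps host UIDs into a user namespace; chown inside that
# namespace so the bind-mounted file is owned by UID 1000 in the container
podman unshare chown 1000:1000 file

# Check what the container user now sees (image as used earlier in the thread)
podman run --rm --user node -v "$PWD/file:/file:Z" \
  mcr.microsoft.com/devcontainers/typescript-node:0-18 ls -l /file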

@ctron

ctron commented Mar 24, 2023

Is there a real solution to this now? The workarounds I saw all seem to require patching the devcontainer configuration, which may work for one setup but not for another. So, as the original reporter mentioned, I would expect some out-of-the-box support for this.

@theonlyfoxy

theonlyfoxy commented Apr 28, 2023

As a workaround you could set remoteUser to root.

Example devcontainer.json:

{
	"remoteUser": "root",
	"containerUser": "vscode",
	"workspaceMount": "",
	"runArgs": ["--volume=${localWorkspaceFolder}:/workspaces/${localWorkspaceFolderBasename}:Z"]
}

also see.

@TommyTran732

TommyTran732 commented Jun 8, 2023

I ran into this issue with the Docker package from the official Fedora repository. However, when I switched to the package from the upstream Docker repo, the problem went away; there's no need to manually set the :z or :Z flag. I am not sure what has changed, though.

@ctron

ctron commented Jun 20, 2023

To my understanding, that simply drops the SELinux support and runs everything as root, which might not be everyone's cup of tea.

@sanmai-NL

[quoting @theonlyfoxy's remoteUser: "root" workaround above]

This does not work when your image has tooling installed and configured specifically for the unprivileged user (PATH, standard directories, etc.).

@geoffreysmith

No, I have a feeling about why only containerd + Docker are used in the k8s lightweight-VM + containerd setup. There are a few other container runtimes that are allowed. Basically, from what I gather, the eventual workaround is to assume containers are run under a hypervisor (gVisor now), ignored by the daemon, and to have SELinux ignore all non-objects and containers.

I believe there's a system call made in Docker that ignores binds/volumes, and as the hypervisor intercepts all Linux calls, it makes more sense to patch it there than to break all of Docker.

Can someone direct me to a GitHub repo they tested this on? I can trace the syscall and see if disabling apparmor/seccomp in containers fixes this.

This feels like a historical Docker issue, where it is easier to rewrite a hypervisor than to change Docker.

@Malix-Labs

Malix-Labs commented Jun 2, 2024

So what should be the default minimal addition to the .devcontainer/devcontainer.json file to make Podman work on SELinux?

So far I've seen three versions (excluding the :Z variant, which is basically cheating), but I don't know which is best, what some of the options really do, or why they work (see the sketch after this list):

  1. vscode docs

    "runArgs": [
    	"--userns=keep-id"
    ],
    "containerEnv": {
    	"HOME": "/home/node"
    }
  2. universal blue - devcontainer setup

    "runArgs": [
    	"--userns=keep-id:uid=1000,gid=1000"
    ],
    "containerUser": "vscode",
    "updateRemoteUserUID": true,
    "containerEnv": {
    	"HOME": "/home/vscode"
    },
  3. universal blue - podman support

    "runArgs": [
    	"--userns=keep-id",
    	"--security-opt=label=disable"
    ],
    "containerEnv": {
    	"HOME": "/home/vscode"
    },
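A quick way to see what the two recurring Podman flags actually change (a sketch, assuming rootless Podman; the alpine image and UID 1000 are illustrative):

# Rootless default: your user appears as root inside the container
podman run --rm alpine id
# uid=0(root) gid=0(root) ...

# --userns=keep-id preserves your host UID/GID inside the container,
# so bind-mounted files keep sane ownership
podman run --rm --userns=keep-id alpine id
# uid=1000(...) gid=1000(...)

# --security-opt=label=disable turns off SELinux label separation for the
# container, which is why it sidesteps the :Z relabeling problem entirely
podman run --rm --security-opt=label=disable -v "$PWD:/src" alpine ls /src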

@geoffreysmith

Install gVisor, run containers in a lightweight hypervisor, and set SELinux to ignore objects and just binaries. Call it a day. If the containers need to talk to each other, there's tompr or something. Dev containers are made for old-school Docker running as root, not anything OCI-compliant.

@Malix-Labs

Malix-Labs commented Jun 2, 2024

  • Comment from @geoffreysmith:

    Install gVisor, run containers in a lightweight hypervisor, and set SELinux to ignore objects and just binaries. Call it a day.

    gVisor would indeed be great, but I want every contributor to my repo to have working Podman dev containers without installing another package.

    For that, I apparently have to append one of the three options above.

@JaneSmith

JaneSmith commented Sep 12, 2024

I have had a nightmare getting VS Code working with dev containers on Fedora Silverblue. I finally got it working with the following in devcontainer.json:

"workspaceFolder": "/workspaces/${localWorkspaceFolderBasename}",
"runArgs": [
	"--userns=keep-id",
	"--security-opt=label=disable"
],
"containerEnv": {
	"HOME": "/home/node"
},

None of the other solutions worked for me. I don't know why they worked for others and not for me.

"runArgs": [
	"--userns=keep-id",
],
"containerEnv": {
	"HOME": "/home/node"
},

The above, without setting --security-opt, let me build and enter the container. However, post-commands and anything that I actually run in the container terminal would fail. For example, ls would give me a permission error.
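(That failure pattern is consistent with SELinux denials on the bind-mounted files; one way to confirm on the host, assuming the audit tooling is installed, is to reproduce the failing ls and then check for AVC records:)

# Look for recent SELinux AVC denials (requires auditd/ausearch)
sudo ausearch -m avc -ts recent
# or, without auditd, scan the kernel log
sudo dmesg | grep -i avc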

"runArgs": [
	"--userns=keep-id:uid=1000,gid=1000"
],
"containerUser": "vscode",
"updateRemoteUserUID": true,
"containerEnv": {
	"HOME": "/home/vscode"
},

The above wouldn't even let me build the container. I'd just get a "command failed" error with not much useful information.

Why is this still not working out of the box in 2024? This issue was opened in 2019! I thought the whole point of dev containers was that the project environment "just works", i.e. a contributor can pull a code repository and VS Code will automatically set up the environment so the contributor can get started. Obviously that doesn't work. And I find it quite strange that I have to resort to machine-specific hacks in the devcontainer.json file — I thought that file was supposed to be used for per-project settings, e.g. setting up dependencies needed for the project, not for per-machine settings (as this is a shared file for all contributors?). All very strange. Dev containers were advertised to me as a way to make things easier, but instead it's been the total opposite.

If anyone is interested in configuration for VS Code Flatpak app, I've also made the following changes successfully using Flatseal:

  • Disabled host filesystem access for better security.
  • Added read/write access to /tmp/ (necessary for dev containers to work; not sure if there's an alternative solution).
  • Added read/write access to /var/home/myuser/Projects (necessary for dev containers to work; not happy about this, but it doesn't seem to work when folders are opened via Flatpak portals).
  • Added read access to /var/home/myuser/.local/bin/podman-host:ro , where podman-host is this executable file provided by Distrobox to provide Podman access to the Flatpak app.

All seems to work now, finally... Until I run into the next problem. But I really am not happy with machine-specific tweaks to the devcontainer.json file. That might be fine for my own projects, but what about when I want to contribute to someone else's project?

@theonlyfoxy

[quoting @JaneSmith's full comment above]

Check this for a clean workaround that is compatible with other people's projects.

@JaneSmith

Check this for a clean workaround that is compatible with others project.

Thanks! Moving the security-opt setting to the Podman wrapper script seems like a good solution.
