integration tests fail to setup overlay within container launch #10
I see you want to run podman within unprivileged podman, and that is rather unrealistic with GitLab right now. Among other options, GitLab is capable of running dind, or the second container as a service. Both are GitLab/Docker hacks™, so they should work.
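For illustration, a minimal `.gitlab-ci.yml` sketch of the dind-as-service variant mentioned above (the job name, image tags, and script are assumptions, not taken from this project's pipeline):

```yaml
# Sketch: docker:dind run as a GitLab CI service; versions are assumptions.
test:
  image: docker:24
  services:
    - docker:24-dind
  variables:
    DOCKER_HOST: tcp://docker:2376
    DOCKER_TLS_CERTDIR: "/certs"
  script:
    - docker info   # sanity check that the daemon in the service is reachable
```

The runner itself must still allow privileged containers for dind to work, which is part of why these are "hacks".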
For reference:
As I understood from https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=e9be9d5e76e34872f0c37d72e25bc27fe9e2c54c , without `lowerdir` the whole overlayfs does not make sense?
What we need is
I don't think this is correct.

```rust
Overlay::writable(
    iter::once(overlay.toolchain_dir.as_path()),
    upper_dir,
    work_dir,
    &target_dir,
).mount()
```

has one lower dir, represented by the iterator, as required by the API description.
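To make the lowerdir/upperdir/workdir relationship concrete, here is a small self-contained sketch (not part of the sccache codebase; `overlay_options` is a hypothetical helper) that builds the option string a writable overlayfs mount requires. A mount without `lowerdir` is rejected by the kernel, which is what the commit linked above implies:

```rust
use std::path::Path;

/// Build the option string passed to `mount -t overlay` for a writable
/// overlay: one or more read-only lower dirs, plus upperdir and workdir.
/// Hypothetical helper for illustration only.
fn overlay_options(lower_dirs: &[&Path], upper: &Path, work: &Path) -> String {
    // Multiple lower dirs are colon-separated, topmost first.
    let lower = lower_dirs
        .iter()
        .map(|p| p.display().to_string())
        .collect::<Vec<_>>()
        .join(":");
    format!(
        "lowerdir={},upperdir={},workdir={}",
        lower,
        upper.display(),
        work.display()
    )
}

fn main() {
    let opts = overlay_options(
        &[Path::new("/toolchain")],
        Path::new("/tmp/upper"),
        Path::new("/tmp/work"),
    );
    // → lowerdir=/toolchain,upperdir=/tmp/upper,workdir=/tmp/work
    println!("{}", opts);
}
```

The `iter::once(...)` argument in the snippet above plays exactly the role of the single-element `lower_dirs` slice here.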
Demo to showcase the issue for faster test cycles https://github.com/drahnr/overlay-fs-gen |
Even the test binary fails as non-root (even on the host!), so it has to be a privileged container that is run as root or via the docker daemon.
Since this is about CI, it should be linked with #1. I've been thinking about our options; the most obvious is
Other options to run Docker in the pipeline include running a shell executor or binding a socket, but they won't be optimal for us. The remaining choice that I can think of right now:
I think this is how it was done where I worked in the past, now that you put the links up.
Not sure how we would test networking right now. That would need some additional machinery.
Yikes, that seems to be a bit of overkill? My experience with Kubernetes is limited though. Idea: Docker can run arbitrary executors, and it might make sense to look into using
I guess you mean, e.g., another process in the container acting as a proxy, through which those parts would talk, so we could monitor/analyse the network at that proxy? (Eventually instrumenting network calls via LD_PRELOAD :P)
Yes, but we should discuss that in a separate issue, and it's also a topic for a $(distant future) release.
With Kubernetes we will be able to scale the setup and do everything we will ever need, like testing networking, distributed multi-source caching, etc. I also don't have that much experience with setting up k8s; luckily we know who to ask for help.
I just realized that we are blocked here.
During the implementation of #9 I ran into issues trying to create an overlay mount from within another container, which is part of the unit test harness.
This piece of code: https://github.com/paritytech/sccache/blob/bernhard-podman/src/bin/sccache-dist/build.rs#L273-L307 errors out with the following error (UUIDs shortened to `d9629`, newlines added for readability):

To reproduce: in branch `bernhard-podman`.

Context
The outer container is a rootless `podman` container. `podman` has configurable storage backends: `overlay`, `vfs`, `btrfs`; the first and last were attempted without any effect. Adding `--privileged` or `--cap-add CAP_SYS_ADMIN` was also attempted for either backend, without effect.

Relevant code: https://github.com/paritytech/sccache/blob/bernhard-podman/tests/harness/mod.rs#L354-L387