Running tests locally #148
I thought I was able to run the tests using the provided Vagrantfile because they at least started running. But now that they've completed, I see some (but not all) of the failed tests you are seeing as well. They also started happening in CI (https://github.com/hashicorp/nomad-driver-podman/runs/4740622562?check_suite_focus=true), so I wonder if it's related to some change in Podman itself 🤔 In general, though, I would suggest using the Vagrant box for running the tests since it keeps the environment consistent; it would be hard for us to support multiple environments at this point. This PR has the changes that I had to make to get the tests running in the Vagrant box.
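For anyone following this suggestion, a rough sketch of the Vagrant-based flow, assuming the repository is mounted at the default synced-folder path inside the box (that path and the exact test invocation are assumptions, not taken from the project docs):

```bash
vagrant up                                      # provision the box from the provided Vagrantfile
vagrant ssh                                     # enter the box
cd /vagrant                                     # assumed mount point of the repository checkout
sudo env "PATH=$PATH" CI=1 go test -v ./...     # rootful run; later comments note the suite assumes rootful podman
```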
Understood, thanks. I will see if I can get the vagrant tests running on Monday in our environment.
CI uses podman from the build containers. The current version is 3.4.2, published November 18th 2021, and is failing. The last commit on master was run on 2nd November against podman 3.3.1 and passed. The kubic repository does not keep old versions of podman available. Given that failures appear across a wide range of podman versions, I'm not sure they are caused by just a change in podman version.
A quick check on a dev machine with a recent Fedora and podman 3.4.0 was fine, "works on my machine". We have also been running the plugin in prod for a while, currently with podman 3.4.2 on Ubuntu and a few thousand containers, so I'm relatively sure this is not a bigger problem/change in podman. I took a glance at the failed GH Action runs. My assumption is that GH might be imposing some new restrictions on its machines. I can see that most of the failing tests are network-, device-, or mountpoint-related. I also believe it's pointless to try with 3.2.x or even older versions. @optiz0r, can you maybe post some details regarding the failed tests with kubic 3.4.2 on Ubuntu 20.04?
Failed tests from the Ubuntu 20.04 vagrant box
And since you say a recent Fedora works for you, @towe75, this is what I get on Fedora 35, go 1.17.6, podman 3.4.4 (SELinux enabled, which I mention because I saw an AVC denial on `ls /dev/net/tun` while the test suite was running):
You're running the tests as uid=1000. The driver is not yet fully adapted to rootless operation, so the tests also assume they run against rootful podman. Please try rerunning as root or with sudo.
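A minimal sketch of what a rootful rerun might look like, assuming a systemd-based box and a repository checkout path of your own; none of these exact commands come from this thread:

```bash
sudo -i                                  # become root so the suite talks to rootful podman
systemctl enable --now podman.socket     # make sure the rootful podman API socket is up
cd /root/nomad-driver-podman             # example checkout path
CI=1 go test -v ./...
```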
I've had to jump through a few more environmental hoops to get the vagrant box running tests in rootful mode:
I still see similar-looking failures, however:
Perhaps it would be best to skip the tests that require rootful podman when running rootless, rather than have the test suite fail, in the same way the cgroupv2 tests are skipped?
Can you reproduce the failures if you run the failed tests one at a time? I have not had time to check the code yet. Any chance that all those tests boil down to reading the result from stdout and/or stderr? Maybe stdout handling is broken, misconfigured, or has changed somehow.
Sorry for the delay in responding; I have been looking at other things this week. Running the failing tests standalone does not make them pass (still in the standard Ubuntu 20.04 vagrant VM). Picking a couple of examples:
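The examples themselves are collapsed, but for reference, re-running a single test in isolation looks roughly like this; the test name below is a placeholder, not one of the actual failing tests:

```bash
# substitute the name of one of the failing tests for the placeholder
sudo env "PATH=$PATH" CI=1 go test -v -run 'TestNameOfFailingTest' .
```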
@optiz0r I took a minute to test on a completely new system. I can reproduce the issue now, and the culprit seems to be that podman now defaults to the "journald" log driver. We configure a filename for stdout in some tests but don't switch the driver to "json-file". Coming back to my previous statements:
I tried this and it fixes the issue:
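The change itself is collapsed here, but a sketch of the idea (forcing podman back to the json-file log driver instead of the new journald default) could look like the following; the host-wide containers.conf path is the usual rootful location and is an assumption on my part:

```bash
# per-container: override the log driver explicitly
podman run --log-driver=json-file --name logtest docker.io/library/busybox:musl echo hello
podman logs logtest    # file-backed logs are readable again

# or host-wide for rootful podman via containers.conf(5)
cat >> /etc/containers/containers.conf <<'EOF'
[containers]
log_driver = "json-file"
EOF
```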
I'll have to create a branch/commit later. Sorry for the inconvenience. A periodic GH Action run might have caught this automatically. @lgfa29: WDYT?
This looks better. I'm down to just a single failure with the above local modification
At first I thought this might have been because the vagrant box running under VirtualBox doesn't have any swap configured by default. I added 200MB of swap via a loop mount, but that didn't change the behaviour:

fallocate -l 200M /root/swap.img
losetup /dev/loop3 /root/swap.img
mkswap /dev/loop3
swapon /dev/loop3
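Not part of the original comment, but a quick way to confirm the loop-backed swap actually came online before re-running the failing test:

```bash
swapon --show    # should list /dev/loop3 with a size of about 200M
free -m          # the Swap row should no longer be all zeros
```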
Restores the previous default logging driver to json-file following upstream podman change to journald. Fixes #148
Hi,
I'm thinking about writing some PRs against this project, so I wanted to run the tests locally to confirm everything was clean before starting, and that any changes I make still pass the tests. I'm having a difficult time getting the tests to run at all, and I'm seeing lots of failures that don't appear to happen when running under GitHub Actions. Are there some undocumented environmental steps required to make the tests run?
- `go test` fails with `t.Parallel called multiple times`. Running the test suite with `env CI=1` appears to fix this: Stack trace
- Some tests fail unless `podman system service` is running. Running `podman system service --time=0 unix:///run/user/1093000004/podman/podman.sock &` before starting the test suite appears to fix this error: `dial unix /run/user/1093000004/podman/podman.sock connect: no such file or directory`
- On my corporate workstation (which requires a web proxy for internet access), the pull test appears to fail, despite having the appropriate env vars set up. Manually pulling `busybox:musl` before running the test suite appears to work around the error. (See the combined sketch at the bottom of this issue.)

On my corp system (CentOS 8, podman 3.0.1, selinux disabled), I'm getting three failed tests:
Failed tests
Warnings
Is there a magic incantation or prerequisite setup required to make these all pass outside of GitHub Actions?
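Pulling the workarounds above together, a rootless run on the reporter's machine would look roughly like this; the UID-specific socket path is taken from the report, everything else is an assumption:

```bash
# pre-pull the test image so the web proxy does not interfere with the pull test
podman pull docker.io/library/busybox:musl

# keep the podman API socket available for the duration of the run
podman system service --time=0 unix:///run/user/1093000004/podman/podman.sock &

# CI=1 works around the t.Parallel failure noted above
CI=1 go test -v ./...
```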